AWS Glue Architecture

We define jobs in AWS Glue to accomplish the work required to extract, transform, and load data from a data source to a data target. The workflow has three steps. First, we define a crawler to populate our AWS Glue Data Catalog with metadata and table definitions: we point the crawler at a data store, and the crawler creates table definitions in the Data Catalog. In addition to table definitions, the Data Catalog contains other metadata that is required to define ETL jobs. Second, we use this metadata when we define a job to transform our data; AWS Glue can generate the transformation script, or we can provide our own script in the AWS Glue console. Third, we run the job on demand or set it up to start when a specified trigger occurs; the trigger can be a time-based schedule or an event. When the job runs, a script extracts data from the data source, transforms it, and loads it into the target. The script runs in an Apache Spark environment in AWS Glue.
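The same three steps can be wired up programmatically. Below is a minimal sketch using boto3 (the AWS SDK for Python); the crawler name, job name, S3 paths, and IAM role ARN are hypothetical placeholders, not values from this article.

```python
# Sketch: crawler -> Data Catalog -> job -> trigger, via boto3.
# All names, S3 paths, and the role ARN are hypothetical placeholders.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Step 1: a crawler that scans an S3 prefix and writes table
# definitions into a Data Catalog database.
glue.create_crawler(
    Name="sales-crawler",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",  # hypothetical role
    DatabaseName="sales_db",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/sales/"}]},
)
glue.start_crawler(Name="sales-crawler")

# Step 2: a job whose transformation script (generated or hand-written)
# lives in S3.
glue.create_job(
    Name="sales-etl-job",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://example-bucket/scripts/sales_etl.py",
        "PythonVersion": "3",
    },
    GlueVersion="4.0",
)

# Step 3: a time-based trigger; an EVENT or ON_DEMAND trigger is created
# the same way with a different Type.
glue.create_trigger(
    Name="nightly-sales-trigger",
    Type="SCHEDULED",
    Schedule="cron(0 2 * * ? *)",  # every day at 02:00 UTC
    Actions=[{"JobName": "sales-etl-job"}],
    StartOnCreation=True,
)
```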

  • Data Catalog: It is the persistent metadata store in AWS Glue. It contains table definitions, job definitions, and other control information. AWS Glue has one Data Catalog per AWS Region.
  • Database: It is a set of associated Data Catalog table definitions organized into a logical group in AWS Glue.
  • Crawler: It is a program that connects to our data store (a source or a target), progresses through a prioritized list of classifiers to determine the schema of our data, and then creates metadata tables in the Data Catalog.
  • Connection: It is a Data Catalog object that holds the information needed to connect to a particular data store.
  • Classifier: It determines the schema of our data. AWS Glue provides classifiers for common file types such as CSV and JSON, as well as for common relational database management systems accessed through a JDBC connection.
  • Data Store: It is a repository for persistently storing our data. Examples include Amazon S3 buckets and relational databases.
  • Data Source: It is a data store that is used as input to a process or transform.
  • Data Target: It is a data store to which the transformed data is written.
  • Development Endpoint: It is an environment where we can develop and test our AWS Glue ETL scripts.
  • Job: It is the business logic required to perform the ETL work. It is composed of a transformation script, data sources, and data targets. Job runs can be initiated by triggers that are scheduled or fired by events.
  • Trigger: It initiates an ETL job. We can define triggers based on a scheduled time or an event.
  • Notebook Server: It is a web-based environment that we can use to run our PySpark statements (PySpark is a Python dialect used for ETL programming).
  • Script: It contains the code that extracts data from sources, transforms it, and loads it into targets (a minimal script sketch follows this list).
  • Table: It contains the names of columns, data types, definitions, and other metadata about a base dataset.
  • Transform: It is the code logic used to manipulate our data into a different format.
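To make the Job, Script, and Transform concepts concrete, here is a minimal sketch of a Glue ETL script in PySpark. The database, table, column, and bucket names are hypothetical placeholders (for example, a table such as a crawler might have registered).

```python
# Sketch of a minimal AWS Glue ETL script (PySpark). Names are
# hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Data source: a table a crawler registered in the Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_sales"
)

# Transform: rename/cast columns with the ApplyMapping transform.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Data target: write the result to S3 as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/sales/"},
    format="parquet",
)

job.commit()
```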

Introduction To AWS Glue ETL

The Extract, Transform, Load (ETL) process is designed specifically for transferring data from a source database to a data warehouse. However, the challenges and complexity of ETL can make it hard to implement successfully for all of our enterprise data. For this reason, Amazon introduced AWS Glue.

AWS Glue is a fully managed ETL (Extract, Transform, and Load) service that makes it simple and cost-effective to categorize our data, clean it, enrich it, and move it reliably between various data stores. It consists of a central metadata repository known as the AWS Glue Data Catalog, an ETL engine that automatically generates Python code, and a flexible scheduler that handles dependency resolution and job monitoring. AWS Glue is serverless, which means that there is no infrastructure to set up or manage.
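As a small illustration of the Data Catalog acting as that central metadata repository, the sketch below lists its databases and table definitions with boto3; the database name is a hypothetical placeholder.

```python
# Sketch: browsing the Data Catalog with boto3. "sales_db" is a
# hypothetical placeholder.
import boto3

glue = boto3.client("glue")

# List every database in this Region's Data Catalog.
for db in glue.get_databases()["DatabaseList"]:
    print("database:", db["Name"])

# Inspect the table definitions registered in one database.
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName="sales_db"):
    for table in page["TableList"]:
        cols = table.get("StorageDescriptor", {}).get("Columns", [])
        print(table["Name"], [c["Name"] for c in cols])
```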

Future of AWS Glue ETL

  • Enhanced Machine Learning Integration: AWS Glue can integrate with other AWS services such as Amazon SageMaker and automate data preparation and feature engineering for machine learning models.
  • Real-Time Data Processing: AWS Glue can expand its real-time data processing to serve applications that require immediate insights from data streams.
  • Serverless Architecture Expansion: The serverless architecture of AWS Glue will keep growing, offering even more precise control over resource allocation and cost reduction. This will enable users to scale their ETL processes to exact requirements and ensure efficient resource utilization.
  • Advanced Data Transformation: AWS Glue may introduce features such as data cleansing, enrichment, and analysis to support increasingly complex ETL requirements.

Use Cases of AWS Glue

  • To Build a Data Warehouse to Organize, Cleanse, Validate, and Format Data: We can transform and move AWS Cloud data into our data store, and load data from different sources into our data warehouse for regular reporting and analysis. Storing it in the warehouse integrates information from different parts of the business and provides a common source of data for decision-making.
  • To Run Serverless Queries Against our Amazon S3 Data Lake: AWS Glue can catalog our Amazon Simple Storage Service (Amazon S3) data, making it available for querying with Amazon Athena and Amazon Redshift Spectrum. With crawlers, our metadata stays in sync with the underlying data, and Athena and Redshift Spectrum can access and analyze it through one unified interface without loading it into multiple data silos.
  • To Create Event-Driven ETL Pipelines: We can run our ETL jobs as soon as new data becomes available in Amazon S3 by invoking them from an AWS Lambda function (see the sketch after this list). We can also register this new data in the AWS Glue Data Catalog as part of our ETL jobs.
  • To Understand our Data Assets: We can store our data using various AWS services and still maintain a unified view of it through the AWS Glue Data Catalog. The Data Catalog lets us quickly search and discover the datasets we own and keeps the relevant metadata in one central location.
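As a sketch of the event-driven pattern above: an AWS Lambda function subscribed to S3 object-created notifications can start a Glue job run for each new object. The job name and the --input_path argument below are hypothetical placeholders.

```python
# Sketch: a Lambda handler for an event-driven ETL pipeline. It starts a
# Glue job whenever S3 delivers a new-object event to the function.
# The job name and argument name are hypothetical placeholders.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    for record in event["Records"]:  # S3 event notification records
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Kick off the ETL job, passing the new object as a job argument.
        run = glue.start_job_run(
            JobName="sales-etl-job",
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
        print("started job run:", run["JobRunId"])
```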

Benefits of AWS Glue

  • Less Hassle: AWS Glue is integrated across a wide range of AWS services. It natively supports data stored in Amazon Aurora and other Amazon Relational Database Service (RDS) engines, Amazon Redshift, and Amazon S3, along with common database engines and databases in our Virtual Private Cloud running on Amazon EC2.
  • Cost-Effective: AWS Glue is serverless: there is no infrastructure to provision or manage, and AWS Glue handles the provisioning, configuration, and scaling of the resources required to run our ETL jobs. We pay only for the resources used while our jobs are running.
  • More Power: AWS Glue automates much of the effort of building, maintaining, and running ETL jobs. It identifies data formats, suggests schemas and transformations, and automatically generates the code to execute our data transformations and loading processes.

Disadvantages of AWS Glue

  • Amount of Work Involved: AWS Glue is not a full-fledged ETL service, so customizing it to our requirements calls for experienced and skillful engineers, and a substantial amount of work is still involved.
  • Platform Compatibility: AWS Glue is built specifically for AWS and its services, so it is not compatible with other platforms.
  • Limited Data Sources: It supports only a limited set of data sources, such as Amazon S3 and JDBC-accessible databases.
  • High Skillset Requirement: AWS Glue is serverless and still a relatively new technology, so the skillset required to implement and operate it is high.
