How To Create an AWS Data Lake: A Step-By-Step Guide

We are going to create an AWS Data Lake using a combination of AWS services. The AWS services we will be using are:

  • AWS Glue: for performing ETL and processing jobs and for cataloging the data.
  • Lake Formation: to provide access control over the data.
  • Amazon Athena: for querying and analyzing the data in the Amazon S3 bucket.
  • Amazon S3: to store our data.

Step 1: Create IAM User

  • First, we need to create an IAM user for controlled access to AWS services. We are going to create an IAM user named “Amazon-sales-user” for our dataset.
  • Search for IAM (Identity and Access Management) in the AWS Console search bar and navigate to IAM.

  • Click on β€œUsers” option from the menu and click on β€œcreate User” button.

  • Enter a user name in the user name box and click “Next”.

  • Now we have to give permissions to the user. Select the “Attach policies directly” option to set the permissions.
  • Search for and select the following permission policies:
    • AmazonS3FullAccess
    • AmazonAthenaFullAccess
    • AWSCloudFormationReadOnlyAccess
    • AWSGlueConsoleFullAccess
    • CloudWatchLogsReadOnlyAccess
  • After selecting the permission policies, click “Next”, review the user details, and hit the “Create user” button.

  • The following screenshot shows the successful creation of the user. (A scripted equivalent is sketched below.)
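
If you prefer to script this step, the following is a minimal sketch of the same user creation using boto3, the AWS SDK for Python. The user name and policy names match the ones selected in the console above; run it with credentials that are allowed to manage IAM.

import boto3

iam = boto3.client("iam")

# Create the IAM user for our dataset.
iam.create_user(UserName="Amazon-sales-user")

# Attach the same managed policies we selected in the console.
policies = [
    "AmazonS3FullAccess",
    "AmazonAthenaFullAccess",
    "AWSCloudFormationReadOnlyAccess",
    "AWSGlueConsoleFullAccess",
    "CloudWatchLogsReadOnlyAccess",
]
for name in policies:
    iam.attach_user_policy(
        UserName="Amazon-sales-user",
        PolicyArn=f"arn:aws:iam::aws:policy/{name}",
    )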

Step 2: Create IAM Role

  • After creating the IAM user, we now have to create an IAM role to catalog the data stored in the Amazon S3 bucket for our data lake.
  • Navigate to the IAM console again, click the “Roles” option in the menu on the left-hand side, then click the “Create role” button.

  • Next, select the “AWS service” option, type “Glue” as the AWS service in the use case or service box, and click the “Next” button.

  • Now we have to add permissions. Search for the “PowerUserAccess” policy, select it, and click the “Next” button.

  • On the next screen, enter a role name of your choice, scroll down, and click the “Create role” button.

  • And our IAM role is successfully created. (A scripted equivalent is sketched below.)
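
The same role can be created programmatically. The sketch below assumes a placeholder role name, “gfg-data-lake-role”; the trust policy allows AWS Glue to assume the role, matching the “Glue” use case chosen above.

import json
import boto3

iam = boto3.client("iam")

# Trust policy letting AWS Glue assume this role (the "Glue" use case).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "glue.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# "gfg-data-lake-role" is a placeholder; use any role name you like.
iam.create_role(
    RoleName="gfg-data-lake-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="gfg-data-lake-role",
    PolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
)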

Step 3: Create S3 Bucket to Store the Data

  • We have successfully created our IAM user and IAM role for our AWS Data Lake. Now, to store our data, we need to create an Amazon S3 bucket. In this demonstration we are uploading the data into S3 manually.
  • Search for Amazon S3 in the AWS Management Console search bar and navigate to the S3 console.

  • Click on β€œCreate Bucket” button and create a bucket with a name of your choice, after entering bucket name click on β€œCreate Bucketβ€œ.

  • For default encryption, choose server-side encryption, and leave the bucket key disabled.

  • The following screenshot illustrates that we successfully created the bucket.

  • Our bucket is now created. Select your bucket to open it, then click the “Upload” button to upload our data file into the created bucket: click the “Add files” tab, choose your data file, and click “Upload”.

  • Upload the files by clicking the upload option as shown in the figure, and our data is ready! (A scripted equivalent is sketched below.)
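
For reference, here is a minimal boto3 sketch of the same bucket creation and upload. The bucket name “gfg-data-lake-bucket” matches the one queried later in this guide; the local file name “amazon_sales.csv” and the region are assumptions for illustration.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# In us-east-1 no LocationConstraint is needed; in other regions pass
# CreateBucketConfiguration={"LocationConstraint": "<region>"}.
s3.create_bucket(Bucket="gfg-data-lake-bucket")

# Upload the dataset manually, as we do in the console above.
# "amazon_sales.csv" is a placeholder for your own data file.
s3.upload_file("amazon_sales.csv", "gfg-data-lake-bucket", "amazon_sales.csv")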

Step 4: Data Lake Set Up using AWS Lake Formation

  • Our data is ready to ingest into the data lake, so now we will begin to set up our data lake, in which we will create a database. Search for AWS Lake Formation and navigate to its console.
  • Add an administrator to perform the administrative tasks of the data lake. Click the “Add administrators” button to add administrators for your data lake (the “Add administrators” window pops up only if you are working with AWS Lake Formation for the first time).
  • The administrator is added; now it’s time to create a database. You will find the option to create a database in the left-hand side menu: click “Databases” and, under Databases, click the “Create database” button.

  • Enter a database name of your choice. After that, browse and provide the path of the S3 bucket in which your data is stored in the “Location” box.

  • Also make sure to uncheck the “Use only IAM access control for new tables in this database” checkbox, then click the “Create database” button. And there you go: your database is created in no time.

  • The database is created; now we have to register our S3 bucket as storage for our data lake. For that, find and click the “Data lake locations” option in the left-hand side menu, click “Register location”, then browse and enter the S3 bucket path where the data is stored. After giving the S3 path, keep the default IAM role “AWSServiceRoleForLakeFormationDataAccess” and click “Register location”. (A scripted equivalent is sketched below.)
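
The database creation and location registration can also be scripted. This is a sketch under the naming assumptions used elsewhere in this guide; passing UseServiceLinkedRole=True to register_resource uses the default AWSServiceRoleForLakeFormationDataAccess role mentioned above.

import boto3

# Lake Formation databases live in the Glue Data Catalog.
glue = boto3.client("glue")
glue.create_database(
    DatabaseInput={
        "Name": "gfg-data-lake-db",
        "LocationUri": "s3://gfg-data-lake-bucket/",
    }
)

# Register the S3 bucket as a data lake location using the
# service-linked role (AWSServiceRoleForLakeFormationDataAccess).
lakeformation = boto3.client("lakeformation")
lakeformation.register_resource(
    ResourceArn="arn:aws:s3:::gfg-data-lake-bucket",
    UseServiceLinkedRole=True,
)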

Step 5: Data Cataloging using AWS Glue Crawlers

  • While building the data lake, it is essential that the data in the data lake is cataloged. Using AWS Glue, the process of data cataloging becomes easy.
  • AWS Glue provides an ETL (Extract, Transform, Load) service, meaning AWS Glue first transforms, cleanses, and organizes data coming from multiple data sources before loading it into the data lake. AWS Glue makes the data preparation process efficient by automating ETL jobs.
  • AWS Glue offers crawlers, which automate the data catalog process for better discovery, search, and querying of big data.
  • To create a data catalog in the database, the AWS Glue crawler will use the IAM role we created in a previous step.
  • Go back to the AWS Lake Formation console, click the “Databases” option, and you will see your previously created database. Select your database and you will see an “Actions” button; under the Actions dropdown menu, click the “Grant” option.

  • On the next window, choose your previously created IAM role under “IAM users and roles”. Scroll down to the Database permissions field, check the boxes for only the “Create table” and “Alter” permissions, and click the “Grant” button.

  • After that, navigate to the AWS Glue console. In the left-hand side menu you will see the “Data catalog” option; under it you will find the “Crawlers” option. Click on it, then click the “Create crawler” button. Enter a name of your choice for your crawler (you can also add a description if you want), and then click “Next”.

  • Set the crawler properties as shown in the below screenshot.

  • Clicking on β€œNext”, Choose data sources and classifiers window will open, we have to choose the data source of data to be crawled. for S3 path, browse and provide S3 bucket path in which our data exist and click on β€œAdd an S3 Data Source” your data source is now added now click on β€œNext”.

  • Add the data source and the location of the S3 data as shown in the below screenshot.

  • On the next screen, we need to add the IAM role. Choose the previously created IAM role from the drop-down list and click “Next”.

  • For “Set output and scheduling”, choose our created database, select “On demand” as the crawler schedule frequency, and click “Next”.

  • Finally, review the whole AWS Glue crawler configuration and click the “Create crawler” button to save and create the crawler. The crawler is now ready! It may take a few seconds to finish crawling the S3 bucket; after that you will see tables created successfully and automatically by the crawler in the database.
  • Navigate to the AWS Lake Formation console and click “Tables” in the menu; you can check here as well that the table is created. (A scripted sketch of the grant and crawler steps follows below.)
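
The permission grant and the crawler can be created with boto3 as well. This sketch reuses the placeholder role and database names from earlier steps and assumes a placeholder crawler name; replace 123456789012 with your own AWS account ID.

import boto3

# Grant the crawler's IAM role CREATE_TABLE and ALTER on the database.
lakeformation = boto3.client("lakeformation")
lakeformation.grant_permissions(
    Principal={
        # Replace 123456789012 with your AWS account ID.
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/gfg-data-lake-role"
    },
    Resource={"Database": {"Name": "gfg-data-lake-db"}},
    Permissions=["CREATE_TABLE", "ALTER"],
)

# Create an on-demand crawler over the S3 data and run it once.
glue = boto3.client("glue")
glue.create_crawler(
    Name="gfg-data-lake-crawler",   # placeholder crawler name
    Role="gfg-data-lake-role",
    DatabaseName="gfg-data-lake-db",
    Targets={"S3Targets": [{"Path": "s3://gfg-data-lake-bucket/"}]},
)
glue.start_crawler(Name="gfg-data-lake-crawler")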

Step 6: Data Query with Amazon Athena

  • Amazon Athena is a query service offered by AWS that allows us to efficiently analyze data stored in an Amazon S3 bucket using standard SQL.
  • When we are working with a large amount of data, we need some sort of querying tool for analyzing it, and this is where Amazon Athena comes into play: Amazon Athena makes it easy to analyze the data present in the Amazon S3 bucket.
  • When using Amazon Athena, we don’t need to learn a new query language: Athena supports standard SQL (Structured Query Language) out of the box, so data analysts, data scientists, and organizations are able to perform analytics and derive valuable insights from the data.
  • Amazon Athena allows users to query data stored in Amazon S3 in its original format. Navigate to the Amazon Athena console.

  • Click on β€œQuery Editor”, select Database which we have created in the earlier steps, but before executing any query we need to provide β€œQuery Result Location” which is Amazon S3 Bucket.
  • Amazon Athena stores Query Output and Metadata for each Query which executes in β€œQuery Result Locationβ€œ.
  • we have to create S3 bucket to store our Query results in this bucket, click on β€œSet up a query result location in Amazon S3β€³ tab and provide S3 bucket’s path and hit the β€œSave” button.
  • We have added the β€œQuery Result Location”, Now we can Run our Queries in Amazon Athena Query Editor.
  • Run the following MySQL Query and click on β€œRun” button.
SELECT * FROM "gfg-data-lake-db"."gfg-data-lake-bucket" LIMIT 10;

  • The output of the above query is illustrated by the following screenshot. (A programmatic equivalent is sketched below.)
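
The same query can also be run programmatically. The sketch below assumes a placeholder results bucket, “gfg-athena-results” (any bucket you own will do); Athena queries are asynchronous, so we poll until the query finishes before reading the results.

import time
import boto3

athena = boto3.client("athena")

# "s3://gfg-athena-results/" is a placeholder query result location.
response = athena.start_query_execution(
    QueryString='SELECT * FROM "gfg-data-lake-db"."gfg-data-lake-bucket" LIMIT 10;',
    QueryExecutionContext={"Database": "gfg-data-lake-db"},
    ResultConfiguration={"OutputLocation": "s3://gfg-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

# Print each row (the first row returned is the column header row).
if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)[
            "ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])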

Step 7: Clean Up

  • After following these steps, we have successfully created our AWS Data Lake with a combination of different AWS services. Now it’s time to clean up all the created resources to avoid any unnecessary bills.
  • Delete all the created AWS resources (a scripted sketch follows this list), including:
    • Amazon S3 buckets
    • IAM users and roles
    • The AWS Glue crawler
    • The database created in AWS Lake Formation
    • The registered data lake locations
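
Most of this cleanup can be scripted. This is a minimal sketch using the same placeholder names as the earlier sketches; note that an S3 bucket must be emptied before it can be deleted, and IAM policies must be detached before the user and role can be deleted.

import boto3

# Delete the Glue crawler and the Lake Formation / Glue database.
glue = boto3.client("glue")
glue.delete_crawler(Name="gfg-data-lake-crawler")
glue.delete_database(Name="gfg-data-lake-db")

# Deregister the data lake location.
lakeformation = boto3.client("lakeformation")
lakeformation.deregister_resource(
    ResourceArn="arn:aws:s3:::gfg-data-lake-bucket"
)

# Empty and delete the S3 bucket (buckets must be empty to delete).
bucket = boto3.resource("s3").Bucket("gfg-data-lake-bucket")
bucket.objects.all().delete()
bucket.delete()

# Detach policies, then delete the IAM user and role. (If the user has
# a console password or access keys, delete those first as well.)
iam = boto3.client("iam")
for policy in iam.list_attached_user_policies(
        UserName="Amazon-sales-user")["AttachedPolicies"]:
    iam.detach_user_policy(
        UserName="Amazon-sales-user", PolicyArn=policy["PolicyArn"])
iam.delete_user(UserName="Amazon-sales-user")
for policy in iam.list_attached_role_policies(
        RoleName="gfg-data-lake-role")["AttachedPolicies"]:
    iam.detach_role_policy(
        RoleName="gfg-data-lake-role", PolicyArn=policy["PolicyArn"])
iam.delete_role(RoleName="gfg-data-lake-role")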
