Step 3 – Creating Lambda function

Go to the AWS Lambda console and navigate to the Functions section. Click Create Function and name the function “ImageProcessing”. Select “Node.js 16.x” as the runtime and “x86_64” as the architecture. Leave all other settings at their defaults and create the function.
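
If you prefer to script this step instead of using the console, roughly the same function can be created with the AWS SDK for JavaScript (v2). This is only a minimal sketch: the execution role ARN and the deployment package path are placeholders you must supply yourself.

JavaScript

const fs = require("fs");
const AWS = require("aws-sdk");

const lambda = new AWS.Lambda({ region: "ap-south-1" });

lambda
    .createFunction({
        FunctionName: "ImageProcessing",
        Runtime: "nodejs16.x",
        Architectures: ["x86_64"],
        Handler: "index.handler",
        // Placeholder: an existing IAM execution role for the function.
        Role: "arn:aws:iam::123456789012:role/your-lambda-execution-role",
        // Placeholder: a zip file containing index.js with the handler below.
        Code: { ZipFile: fs.readFileSync("function.zip") },
    })
    .promise()
    .then((res) => console.log("Function created:", res.FunctionArn))
    .catch(console.error);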

In the code editor on the Lambda function page, paste the following code. This function is executed whenever an image is uploaded to our source S3 bucket; it creates two images (a 300×300 thumbnail and an 800×800 cover photo) and stores them in the destination S3 bucket. (Note: the value of processedImageBucket in the code should be set to the name of the destination bucket.)

JavaScript

const sharp = require("sharp");
const path = require("path");
const AWS = require("aws-sdk");
  
// Set the REGION
AWS.config.update({
    region: "ap-south-1",
});
const s3 = new AWS.S3();
const processedImageBucket = "serverless-bucket-processed-images";
  
// This Lambda function is attached to an S3 bucket. When any object is added to the
// bucket this handler is called. For each image file added to the bucket, the function
// creates a square thumbnail of 300px x 300px and a cover photo of 800px x 800px,
// and stores both in another S3 bucket (processedImageBucket), named after the
// original file with a "_thumbnail" / "_coverphoto" suffix.
exports.handler = async (event) => {
    console.log("An object was added to S3 bucket", JSON.stringify(event));
    let records = event.Records;
    // Each record represents one object added to the S3 bucket. A single event
    // can carry multiple records, so process every one of them.
    for (let index = 0; index < records.length; index++) {
        let record = records[index];
        console.log("Record: ", record);
        // S3 URL-encodes object keys in event records (e.g. spaces become "+"),
        // so decode the key before using it.
        let key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
        // Extract the file name, path and extension
        let fileName = path.parse(key).name;
        let filePath = path.parse(key).dir;
        let fileExt = path.parse(key).ext;

        console.log("filePath:" + filePath + ", fileName:" + fileName + ", fileExt:" + fileExt);
  
        // Read the image object that was added to the S3 bucket
        let imageObjectParam = {
            Bucket: record.s3.bucket.name,
            Key: key,
        };
  
        let imageObject = await s3.getObject(imageObjectParam).promise();
        // Use sharp to create a 300px x 300px thumbnail
        // withMetadata() keeps the header info so rendering engine can read
        // orientation properly.
        let resized_thumbnail = await sharp(imageObject.Body)
            .resize({
                width: 300,
                height: 300,
                fit: sharp.fit.cover,
            })
            .withMetadata()
            .toBuffer();
        console.log("thumbnail image created");
  
        // Use sharp to create a 800px x 800px coverphoto
        let resized_coverphoto = await sharp(imageObject.Body)
            .resize({
                width: 800,
                height: 800,
                fit: sharp.fit.cover,
            })
            .withMetadata()
            .toBuffer();
        console.log("coverphoto image created");
  
        // The processed images are written to the destination bucket (processedImageBucket).
        let thumbnailImageParam = {
            Body: resized_thumbnail,
            Bucket: processedImageBucket,
            Key: fileName + "_thumbnail" + fileExt,
            CacheControl: "max-age=3600",
            ContentType: "image/" + fileExt.substring(1),
        };
        let result1 = await s3.putObject(thumbnailImageParam).promise();
        console.log("thumbnail image uploaded:" + JSON.stringify(result1));
  
        let coverphotoImageParam = {
            Body: resized_coverphoto,
            Bucket: processedImageBucket,
            Key: fileName + "_coverphoto" + fileExt,
            CacheControl: "max-age=3600",
            ContentType: "image/" + fileExt.substring(1),
        };
        let result2 = await s3.putObject(coverphotoImageParam).promise();
        console.log("coverphoto image uploaded:" + JSON.stringify(result2));
    }
};


Save the code and click Deploy to deploy the changes.
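
Before wiring up the S3 trigger, you can sanity-check the handler with a hand-made event. The sketch below assumes a local setup outside the tutorial: index.js exports the handler above, the sharp dependency is installed locally, and your AWS credentials can access the two buckets. The bucket and key names are placeholders for your own.

JavaScript

// local-test.js - invoke the handler locally with a minimal S3-style event.
const { handler } = require("./index");

const testEvent = {
    Records: [
        {
            s3: {
                bucket: { name: "serverless-bucket-uploaded-images" }, // placeholder
                object: { key: "photo.jpg" }, // placeholder: an image already in the bucket
            },
        },
    ],
};

handler(testEvent)
    .then(() => console.log("done"))
    .catch(console.error);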

Go to the Configuration tab and edit the General configuration. Set the timeout to 1 minute (the timeout is the maximum time a Lambda function is allowed to run, after which it is stopped; the default is only 3 seconds). We increase it because processing an image can take longer than the default allows. Click Save changes.
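
The same timeout change can also be made programmatically. A minimal sketch using the AWS SDK for JavaScript (v2), assuming your credentials have the lambda:UpdateFunctionConfiguration permission:

JavaScript

const AWS = require("aws-sdk");

const lambda = new AWS.Lambda({ region: "ap-south-1" });

// Raise the function timeout to 60 seconds (the console equivalent of the step above).
lambda
    .updateFunctionConfiguration({
        FunctionName: "ImageProcessing",
        Timeout: 60, // seconds
    })
    .promise()
    .then((res) => console.log("Timeout is now", res.Timeout, "seconds"))
    .catch(console.error);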
