Step 3 – Creating the Lambda function
Go to the AWS Lambda console and navigate to the Functions section. Click Create function and name the function “ImageProcessing”. Select “Node.js 16.x” as the runtime and “x86_64” as the architecture, leave all other settings at their defaults, and create the function.
In the code editor on the Lambda function page, paste the following code. This function is executed whenever an image is uploaded to our source S3 bucket; it creates two resized images (a 300×300 thumbnail and an 800×800 cover photo) and stores them in the destination S3 bucket. (Note: set the value of processedImageBucket in the code to the name of the destination bucket.)
```javascript
const sharp = require("sharp");
const path = require("path");
const AWS = require("aws-sdk");

// Set the REGION
AWS.config.update({ region: "ap-south-1" });

const s3 = new AWS.S3();
const processedImageBucket = "serverless-bucket-processed-images";

// This Lambda function is attached to an S3 bucket. When any object is added
// to the S3 bucket this handler is called. When an image file is added, the
// function creates a square thumbnail of 300px x 300px and a cover photo of
// 800px x 800px, and stores both in another S3 bucket.
exports.handler = async (event, context, callback) => {
  console.log("An object was added to S3 bucket", JSON.stringify(event));
  // Each record represents one object in S3. Multiple objects can be added
  // to our bucket at a time, so there can be multiple records.
  let records = event.Records;
  let size = records.length;

  for (let index = 0; index < size; index++) {
    let record = records[index];
    console.log("Record: ", record);

    // Extract the file name, path and extension
    let fileName = path.parse(record.s3.object.key).name;
    let filePath = path.parse(record.s3.object.key).dir;
    let fileExt = path.parse(record.s3.object.key).ext;
    console.log("filePath:" + filePath + ", fileName:" + fileName + ", fileExt:" + fileExt);

    // Read the image object that was added to the S3 bucket
    let imageObjectParam = {
      Bucket: record.s3.bucket.name,
      Key: record.s3.object.key,
    };
    let imageObject = await s3.getObject(imageObjectParam).promise();

    // Use sharp to create a 300px x 300px thumbnail.
    // withMetadata() keeps the header info so the rendering engine can read
    // the orientation properly.
    let resized_thumbnail = await sharp(imageObject.Body)
      .resize({ width: 300, height: 300, fit: sharp.fit.cover })
      .withMetadata()
      .toBuffer();
    console.log("thumbnail image created");

    // Use sharp to create a 800px x 800px cover photo
    let resized_coverphoto = await sharp(imageObject.Body)
      .resize({ width: 800, height: 800, fit: sharp.fit.cover })
      .withMetadata()
      .toBuffer();
    console.log("coverphoto image created");

    // The processed images are written to the destination bucket.
    let thumbnailImageParam = {
      Body: resized_thumbnail,
      Bucket: processedImageBucket,
      Key: fileName + "_thumbnail" + fileExt,
      CacheControl: "max-age=3600",
      ContentType: "image/" + fileExt.substring(1),
    };
    let result1 = await s3.putObject(thumbnailImageParam).promise();
    console.log("thumbnail image uploaded:" + JSON.stringify(result1));

    let coverphotoImageParam = {
      Body: resized_coverphoto,
      Bucket: processedImageBucket,
      Key: fileName + "_coverphoto" + fileExt,
      CacheControl: "max-age=3600",
      ContentType: "image/" + fileExt.substring(1),
    };
    let result2 = await s3.putObject(coverphotoImageParam).promise();
    console.log("coverphoto image uploaded:" + JSON.stringify(result2));
  }
};
```
Save the code and click Deploy to deploy the changes.
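After deploying, you can also invoke the function from the Lambda console's Test tab before wiring up the S3 trigger. A minimal S3 put event carrying only the fields this handler reads might look like the sketch below (the bucket name and key are placeholders; a real event from S3 contains additional fields):

```json
{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "your-source-bucket-name" },
        "object": { "key": "photos/cat.jpg" }
      }
    }
  ]
}
```

The test will still fail at the getObject step unless an object with that key actually exists in the source bucket, but it is a quick way to check that the function parses the event correctly.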
Go to the Configuration tab and edit the General configuration. Set the timeout to 1 minute (the timeout is the maximum time a Lambda function is allowed to run, after which it is stopped). We need to increase it because processing an image can take longer than the 3-second default. Click Save changes.
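The same timeout change can be made programmatically with the AWS SDK for JavaScript (v2) via Lambda.updateFunctionConfiguration. The parameter shape is sketched below; the actual call needs AWS credentials with the appropriate permission, so it is left as a comment:

```javascript
// Parameters for raising the function timeout to 60 seconds.
const params = {
  FunctionName: "ImageProcessing",
  Timeout: 60, // seconds; the default of 3 s is too short for image resizing
};

// With aws-sdk v2 (requires lambda:UpdateFunctionConfiguration permission):
// const AWS = require("aws-sdk");
// new AWS.Lambda({ region: "ap-south-1" }).updateFunctionConfiguration(params).promise();

console.log(params);
```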
Serverless Image Processing with AWS Lambda and S3
AWS S3 (Simple Storage Service) is a cloud data storage service and one of the most popular services of AWS. It is highly scalable, available, secure, and cost-effective, and it offers different storage tiers depending on the use case. Some common use cases of AWS S3 are:
- Storage: It can be used for storing large amounts of data.
- Backup and Archive: S3 has different storage tiers based on how frequently data is accessed, which can be used to back up critical data at low cost.
- Static website: S3 can host static websites directly from HTML files stored in a bucket.
- Data lakes and big data analytics: Companies can use AWS S3 as a data lake and run analytics on it to gain business insights and make critical decisions.