API Gateway invoking Lambda function with Terraform - Lambda Container
In this post, we'll set up an API Gateway that invokes a Lambda function that takes an input. We'll do that via Terraform.
This post is similar to AWS API Gateway invoking Lambda function with Terraform, but this time we'll use an ECR Docker image rather than a zip file in S3.
We'll be following the guidelines from:
- Resource: aws_api_gateway_resource.
- Serverless Applications with AWS Lambda and API Gateway
- TERRAFORM – DEPLOY PYTHON LAMBDA (CONTAINER IMAGE)
To create a lambda using a Docker image, we need the following files:
├── apps
│   ├── Dockerfile
│   ├── app.py
│   └── requirements.txt
├── lambda.tf
└── variables.tf
Later, we'll add an API Gateway manifest file.
For now, let's see what the files look like.
apps/app.py:
def handler(event, context):
    h = float(event['hour'])
    print(h)
    return { 'past hours': h }
As we can see, the app does a very simple thing: it takes a value from the query string and just echoes it back.
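Before packaging anything, we can sanity-check the handler locally (a minimal sketch, not part of the post's files; the sys.path tweak assumes we run it from the project root):

# Quick local test of the handler; no AWS involved.
import sys
sys.path.append('apps')

from app import handler

# Simulate the event that API Gateway will eventually pass in.
print(handler({'hour': '999'}, None))   # -> {'past hours': 999.0}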
apps/Dockerfile:
FROM public.ecr.aws/lambda/python:3.8

COPY requirements.txt ./
RUN yum update -y && \
    pip install -r requirements.txt

COPY app.py .

CMD ["app.handler"]
apps/requirements.txt:
requests==2.25.1
The "requests" module is not needed right now with our current code, but it shows how we install a package but just modifying the "requiremets.txt" rather than write a new Dockerfile. Actually, we'll later modify the app.py which makes some requests to a given url and then we need the "requests" module.
Now, let's move on to Terraform. Here is the lambda manifest file, lambda.tf:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = var.region
}

data "aws_caller_identity" "current" {}

locals {
  prefix              = "bogo"
  app_dir             = "apps"
  account_id          = data.aws_caller_identity.current.account_id
  ecr_repository_name = "${local.prefix}-demo-lambda-container"
  ecr_image_tag       = "latest"
}

resource "aws_ecr_repository" "repo" {
  name = local.ecr_repository_name
}

# The null_resource resource implements the standard resource lifecycle
# but takes no further action.
# The triggers argument allows specifying an arbitrary set of values that,
# when changed, will cause the resource to be replaced.
resource "null_resource" "ecr_image" {
  triggers = {
    python_file = md5(file("${path.module}/${local.app_dir}/app.py"))
    docker_file = md5(file("${path.module}/${local.app_dir}/Dockerfile"))
  }

  # The local-exec provisioner invokes a local executable after a resource is created.
  # This invokes a process on the machine running Terraform, not on the resource.
  # path.module: the filesystem path of the module where the expression is placed.
  provisioner "local-exec" {
    command = <<EOF
aws ecr get-login-password --region ${var.region} | docker login --username AWS --password-stdin ${local.account_id}.dkr.ecr.${var.region}.amazonaws.com
cd ${path.module}/${local.app_dir}
docker build -t ${aws_ecr_repository.repo.repository_url}:${local.ecr_image_tag} .
docker push ${aws_ecr_repository.repo.repository_url}:${local.ecr_image_tag}
EOF
  }
}

data "aws_ecr_image" "lambda_image" {
  depends_on = [
    null_resource.ecr_image
  ]
  repository_name = local.ecr_repository_name
  image_tag       = local.ecr_image_tag
}

resource "aws_iam_role" "lambda" {
  name = "${local.prefix}-lambda-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
EOF
}

data "aws_iam_policy_document" "lambda" {
  statement {
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]
    effect    = "Allow"
    resources = ["*"]
    sid       = "CreateCloudWatchLogs"
  }

  statement {
    actions = [
      "codecommit:GitPull",
      "codecommit:GitPush",
      "codecommit:GitBranch",
      "codecommit:ListBranches",
      "codecommit:CreateCommit",
      "codecommit:GetCommit",
      "codecommit:GetCommitHistory",
      "codecommit:GetDifferences",
      "codecommit:GetReferences",
      "codecommit:BatchGetCommits",
      "codecommit:GetTree",
      "codecommit:GetObjectIdentifier",
      "codecommit:GetMergeCommit"
    ]
    effect    = "Allow"
    resources = ["*"]
    sid       = "CodeCommit"
  }
}

resource "aws_iam_policy" "lambda" {
  name   = "${local.prefix}-lambda-policy"
  path   = "/"
  policy = data.aws_iam_policy_document.lambda.json
}

# Attach the policy to the role; without this the policy above is never used.
resource "aws_iam_role_policy_attachment" "lambda" {
  role       = aws_iam_role.lambda.name
  policy_arn = aws_iam_policy.lambda.arn
}

resource "aws_lambda_function" "bogo-lambda-function" {
  depends_on = [
    null_resource.ecr_image
  ]
  function_name = "${local.prefix}-lambda"
  role          = aws_iam_role.lambda.arn
  timeout       = 300
  image_uri     = "${aws_ecr_repository.repo.repository_url}@${data.aws_ecr_image.lambda_image.id}"
  package_type  = "Image"
}

output "lambda_name" {
  value = aws_lambda_function.bogo-lambda-function.id
}
The file will create an ECR repository, build a Docker image, and then push the image to the repo.
It also creates a lambda function from the container image.
Along the way, it creates the policy and role needed for those tasks and attaches the policy to the role.
The variables used in the file are defined in variables.tf:
variable "region" {
  default = "us-east-1"
}

variable "app_version" {
  default = "1.0.0"
}

variable "stage" {
  default = "dev"
}

variable "resource_name" {
  default = "number"
}
Now, we are ready to deploy our lambda:
$ terraform init
$ terraform apply --auto-approve

Outputs:

lambda_name = "bogo-lambda"
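Before moving on, we can confirm that the image actually landed in ECR, for example with boto3 (a quick sketch, not part of the post's code; assumes default AWS credentials):

# List the images in the repository Terraform just created.
# "bogo-demo-lambda-container" is the repository name from locals in lambda.tf.
import boto3

ecr = boto3.client('ecr', region_name='us-east-1')
for image in ecr.describe_images(repositoryName='bogo-demo-lambda-container')['imageDetails']:
    print(image.get('imageTags'), image['imageDigest'])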
Our lambda function that uses the Docker image has been created.
We may want to test it.
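One quick way to test it without the console is to invoke the function directly with boto3 (a minimal sketch; assumes default AWS credentials and the function name from the output above):

# Invoke the container-based lambda with the same event shape it expects.
import json
import boto3

client = boto3.client('lambda', region_name='us-east-1')
resp = client.invoke(
    FunctionName='bogo-lambda',
    Payload=json.dumps({'hour': '999'}),
)
print(json.loads(resp['Payload'].read()))   # expected: {'past hours': 999.0}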
Here is our manifest file for AWS API Gateway, api-gateway.tf. It wires a GET method on a "number" resource to the lambda through a non-proxy integration that maps the "hour" query string into the lambda event. A minimal version consistent with the variables and outputs in this post looks like this (resource names other than "bogo-gateway" are assumptions):

# api-gateway.tf - REST API in front of the container-based lambda.

resource "aws_api_gateway_rest_api" "bogo-gateway" {
  name        = "${local.prefix}-gateway"
  description = "API Gateway invoking the container-based lambda"
}

resource "aws_api_gateway_resource" "number" {
  rest_api_id = aws_api_gateway_rest_api.bogo-gateway.id
  parent_id   = aws_api_gateway_rest_api.bogo-gateway.root_resource_id
  path_part   = var.resource_name
}

resource "aws_api_gateway_method" "get" {
  rest_api_id   = aws_api_gateway_rest_api.bogo-gateway.id
  resource_id   = aws_api_gateway_resource.number.id
  http_method   = "GET"
  authorization = "NONE"

  request_parameters = {
    "method.request.querystring.hour" = true
  }
}

# Non-proxy (AWS) integration: map the "hour" query string into the
# lambda event, which is what app.handler reads (event['hour']).
resource "aws_api_gateway_integration" "lambda" {
  rest_api_id             = aws_api_gateway_rest_api.bogo-gateway.id
  resource_id             = aws_api_gateway_resource.number.id
  http_method             = aws_api_gateway_method.get.http_method
  integration_http_method = "POST"
  type                    = "AWS"
  uri                     = aws_lambda_function.bogo-lambda-function.invoke_arn

  request_templates = {
    "application/json" = <<EOF
{
  "hour": "$input.params('hour')"
}
EOF
  }
}

resource "aws_api_gateway_method_response" "response_200" {
  rest_api_id = aws_api_gateway_rest_api.bogo-gateway.id
  resource_id = aws_api_gateway_resource.number.id
  http_method = aws_api_gateway_method.get.http_method
  status_code = "200"
}

resource "aws_api_gateway_integration_response" "lambda" {
  depends_on  = [aws_api_gateway_integration.lambda]
  rest_api_id = aws_api_gateway_rest_api.bogo-gateway.id
  resource_id = aws_api_gateway_resource.number.id
  http_method = aws_api_gateway_method.get.http_method
  status_code = aws_api_gateway_method_response.response_200.status_code
}

resource "aws_api_gateway_deployment" "deployment" {
  depends_on  = [aws_api_gateway_integration_response.lambda]
  rest_api_id = aws_api_gateway_rest_api.bogo-gateway.id
  stage_name  = var.stage
}

output "base_url" {
  value = "${aws_api_gateway_deployment.deployment.invoke_url}/${var.resource_name}"
}
We also need to let the lambda function know that its trigger is the API Gateway, by granting API Gateway permission to invoke it. So, here is our final lambda.tf file:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = var.region
}

data "aws_caller_identity" "current" {}

locals {
  prefix              = "bogo"
  app_dir             = "apps"
  account_id          = data.aws_caller_identity.current.account_id
  ecr_repository_name = "${local.prefix}-demo-lambda-container"
  ecr_image_tag       = "latest"
}

resource "aws_ecr_repository" "repo" {
  name = local.ecr_repository_name
}

# The null_resource resource implements the standard resource lifecycle
# but takes no further action.
# The triggers argument allows specifying an arbitrary set of values that,
# when changed, will cause the resource to be replaced.
resource "null_resource" "ecr_image" {
  triggers = {
    python_file = md5(file("${path.module}/${local.app_dir}/app.py"))
    docker_file = md5(file("${path.module}/${local.app_dir}/Dockerfile"))
  }

  # The local-exec provisioner invokes a local executable after a resource is created.
  # This invokes a process on the machine running Terraform, not on the resource.
  # path.module: the filesystem path of the module where the expression is placed.
  provisioner "local-exec" {
    command = <<EOF
aws ecr get-login-password --region ${var.region} | docker login --username AWS --password-stdin ${local.account_id}.dkr.ecr.${var.region}.amazonaws.com
cd ${path.module}/${local.app_dir}
docker build -t ${aws_ecr_repository.repo.repository_url}:${local.ecr_image_tag} .
docker push ${aws_ecr_repository.repo.repository_url}:${local.ecr_image_tag}
EOF
  }
}

data "aws_ecr_image" "lambda_image" {
  depends_on = [
    null_resource.ecr_image
  ]
  repository_name = local.ecr_repository_name
  image_tag       = local.ecr_image_tag
}

resource "aws_iam_role" "lambda" {
  name = "${local.prefix}-lambda-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
EOF
}

data "aws_iam_policy_document" "lambda" {
  statement {
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]
    effect    = "Allow"
    resources = ["*"]
    sid       = "CreateCloudWatchLogs"
  }

  statement {
    actions = [
      "codecommit:GitPull",
      "codecommit:GitPush",
      "codecommit:GitBranch",
      "codecommit:ListBranches",
      "codecommit:CreateCommit",
      "codecommit:GetCommit",
      "codecommit:GetCommitHistory",
      "codecommit:GetDifferences",
      "codecommit:GetReferences",
      "codecommit:BatchGetCommits",
      "codecommit:GetTree",
      "codecommit:GetObjectIdentifier",
      "codecommit:GetMergeCommit"
    ]
    effect    = "Allow"
    resources = ["*"]
    sid       = "CodeCommit"
  }
}

resource "aws_iam_policy" "lambda" {
  name   = "${local.prefix}-lambda-policy"
  path   = "/"
  policy = data.aws_iam_policy_document.lambda.json
}

# Attach the policy to the role; without this the policy above is never used.
resource "aws_iam_role_policy_attachment" "lambda" {
  role       = aws_iam_role.lambda.name
  policy_arn = aws_iam_policy.lambda.arn
}

resource "aws_lambda_function" "bogo-lambda-function" {
  depends_on = [
    null_resource.ecr_image
  ]
  function_name = "${local.prefix}-lambda"
  role          = aws_iam_role.lambda.arn
  timeout       = 300
  image_uri     = "${aws_ecr_repository.repo.repository_url}@${data.aws_ecr_image.lambda_image.id}"
  package_type  = "Image"
}

resource "aws_lambda_permission" "apigw" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.bogo-lambda-function.function_name
  principal     = "apigateway.amazonaws.com"

  # The "/*/*" portion grants access from any method on any resource
  # within the API Gateway REST API.
  source_arn = "${aws_api_gateway_rest_api.bogo-gateway.execution_arn}/*/*"
}

output "lambda_name" {
  value = aws_lambda_function.bogo-lambda-function.id
}
Here are the final files:
.
├── api-gateway.tf
├── apps
│   ├── Dockerfile
│   ├── app.py
│   └── requirements.txt
├── lambda.tf
└── variables.tf
The files are available from API-Gateway-invoking-Lambda-function-with-Terraform-Lambda-Container.
Now, we are ready to deploy our modified Terraform code:
$ terraform apply --auto-approve
...
Outputs:

base_url = "https://jgyexabhp0.execute-api.us-east-1.amazonaws.com/dev/number"
lambda_name = "bogo-lambda"
To check if our API Gateway triggers our lambda function correctly:
$ curl https://jgyexabhp0.execute-api.us-east-1.amazonaws.com/dev/number?hour=999
{"past hours": 999.0}
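We can run the same check from Python with the "requests" package (the URL is the base_url output from above):

# Hit the deployed endpoint and parse the JSON response.
import requests

resp = requests.get(
    'https://jgyexabhp0.execute-api.us-east-1.amazonaws.com/dev/number',
    params={'hour': 999},
)
print(resp.json())   # expected: {'past hours': 999.0}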
It's working!
In the AWS console, we can see the newly created lambda function with its API Gateway trigger, as well as the API Gateway itself.