CORS Tutorial for API Gateway and Lambda with React

CORS errors with API Gateway can be tricky. I’ll walk through setting up an API Gateway + Lambda backend that we’ll hit from a simple React front end.

Setting up the error

We can’t fix an error that we don’t have, so first let’s use the AWS SAM CLI to set up a simple API Gateway event that triggers a Lambda function. If you’ve never done that before, here’s a great tutorial from Thundra. Once configured, hit your API endpoint to make sure it works. I just used the default ‘Hello World!’ template.
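In case it helps, the basic SAM workflow looks something like this (a sketch, not tied to a specific project; the endpoint URL placeholders come from your own stack outputs):

```shell
# scaffold a new SAM app from the built-in Hello World template
sam init

# build and deploy; --guided prompts for stack name, region, and S3 bucket
sam build
sam deploy --guided

# then hit the endpoint printed in the stack outputs
curl "https://<api-id>.execute-api.<region>.amazonaws.com/Prod/hello/"
```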

Now, for a front end.

app.js
import React from 'react';
import axios from 'axios';

class App extends React.Component {
  state = { text: '' };

  componentDidMount() {
    axios.get(`api gateway endpoint`)
      .then(res => {
        const text = res.data.message;
        this.setState({ text });
      });
  }

  render() {
    return (
      <div>
        { this.state.text }
      </div>
    );
  }
}

export default App;

index.js
import React from 'react';
import ReactDOM from 'react-dom';
import App from './components/App';

ReactDOM.render(<App />, document.querySelector("#root"));

I used create-react-app and reduced the code down to the bare bones.

Here’s the error we’ll fix.
[screenshot: the CORS error in the browser console]

What is CORS?

If you’re curious, here’s an in-depth description of what CORS is. In short, I’m at one domain and I want to request resources from another. In this case, I’m requesting from my local domain (localhost) to an API Gateway endpoint.
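Under the hood, the browser’s decision boils down to comparing the page’s origin against the Access-Control-Allow-Origin header on the response. A rough sketch of that check (my own illustration, not actual browser code):

```python
def cors_allows(page_origin, allow_origin_header):
    """Rough sketch of the browser's check of the
    Access-Control-Allow-Origin response header."""
    if allow_origin_header is None:
        # no header at all -> the browser blocks the response
        return False
    # '*' allows any origin; otherwise the header must match exactly
    return allow_origin_header in ("*", page_origin)

# localhost hitting an API Gateway endpoint without the header fails...
print(cors_allows("http://localhost:3000", None))   # False
# ...and succeeds once the endpoint returns Access-Control-Allow-Origin: *
print(cors_allows("http://localhost:3000", "*"))    # True
```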

Enabling CORS

The AWS SAM CLI generates a default template for an API Gateway-triggered Lambda function. We’ll need to edit this template, then rebuild and redeploy, to allow CORS on our API Gateway endpoint. First, let’s start with the default SAM template.

default SAM template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  lambda-apigateway-cors-blog

  Sample SAM Template for lambda-apigateway-cors-blog

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.7
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn

This template defines the HelloWorldFunction in the Resources section along with basic properties for that function. In that section, this template also defines an Event, which is the triggering event for the Lambda function.

We will flesh out the definition for that API Gateway resource and add an AWS::Include transform that references a swagger.yaml file we’ve uploaded to an S3 bucket. A key part of this definition is Cors: "'*'" on the HelloWorldApiGateway resource.

We’ll also add RestApiId: !Ref HelloWorldApiGateway underneath the Properties attribute for the HelloWorld event.

Here’s the complete updated template.yaml file.

updated for CORS SAM template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  lambda-apigateway-cors-blog

  Sample SAM Template for lambda-apigateway-cors-blog

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  HelloWorldApiGateway:
    Type: AWS::Serverless::Api
    Properties:
      Name: Gateway Endpoint HelloWorld
      StageName: Prod
      Cors: "'*'"
      DefinitionBody:
        'Fn::Transform':
          Name: 'AWS::Include'
          Parameters:
            Location: s3://aws-sam-cli-managed-default-samclisourcebucket-kzrkk1luntm4/lambda-apigateway-cors-blog/swagger.yaml
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.7
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            RestApiId: !Ref HelloWorldApiGateway
            Path: /hello
            Method: get

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApiGateway:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${HelloWorldApiGateway}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn

If we run sam build and sam deploy from the command line, our serverless application won’t yet deploy, because our template.yaml references a swagger.yaml file that doesn’t exist. Let’s build that now.

swagger.yaml
swagger: '2.0'
info:
  description: 'This is a test setting up CORS for API Gateway and a Lambda event'
  version: '1.0.0'
  title: AutoGate Admin Service Management Gateway

paths:
  /hello:
    get:
      responses:
        200:
          description: 200 response
          headers:
            Access-Control-Allow-Origin:
              type: string
      x-amazon-apigateway-integration:
        uri:
          Fn::Sub: 'arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${HelloWorldFunction.Arn}/invocations'
        responses:
          default:
            statusCode: 200
            responseParameters:
              method.response.header.Access-Control-Allow-Origin: "'*'"
        passthroughBehavior: when_no_match
        httpMethod: POST
        type: aws_proxy
    options:
      consumes:
        - application/json
      produces:
        - application/json
      responses:
        200:
          description: '200 response'
          schema:
            $ref: '#/definitions/Empty'
          headers:
            Access-Control-Allow-Origin:
              type: string
            Access-Control-Allow-Methods:
              type: string
            Access-Control-Allow-Headers:
              type: string
      security:
        - None: []
      x-amazon-apigateway-integration:
        responses:
          default:
            statusCode: 200
            responseParameters:
              method.response.header.Access-Control-Allow-Methods: "'DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT'"
              method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
              method.response.header.Access-Control-Allow-Origin: "'*'"
        requestTemplates:
          application/json: '{"statusCode": 200}'
        passthroughBehavior: when_no_match
        type: mock
definitions:
  Empty:
    type: object
    title: Empty Schema

We’ll want to take note of this line: Fn::Sub: 'arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${HelloWorldFunction.Arn}/invocations'. Here, make sure you swap in your own function’s logical ID. That’s found under Resources in your template.yaml; it’s whatever you initially named your function.

We’ll upload this swagger file to the bucket the SAM CLI automatically generated when you first deployed the application, under a prefix (already present) that matches your app name. The bucket name should begin with aws-sam-cli-managed-default-samclisourcebucket or something similar.
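Uploading the file is a one-liner with the AWS CLI (bucket and prefix here are the ones from my template.yaml; substitute your own):

```shell
# copy swagger.yaml into the SAM-managed bucket under the app-name prefix
aws s3 cp swagger.yaml \
    s3://aws-sam-cli-managed-default-samclisourcebucket-kzrkk1luntm4/lambda-apigateway-cors-blog/swagger.yaml
```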

This swagger.yaml file defines the headers and values necessary for CORS to function.

Finally, we’ll want to add a headers section to the return value of our Lambda function.

return {
    "statusCode": 200,
    "headers": {
        "Access-Control-Allow-Origin": "*",
        "Content-Type": "application/json"
    },
    "body": json.dumps({
        "message": "hello world",
        # "location": ip.text.replace("\n", "")
    }),
}

If all goes well, when you fire up your React app, you’ll see the much-beloved hello world in the upper-left corner of your browser.

NB: There are options to enable CORS via the AWS Console in API Gateway. I have never successfully enabled CORS via that route. This implementation is more straightforward, requiring only a few changes to the default SAM template, a swagger.yaml file, and a small change to the Lambda function response.

Lambda / API Gateway Internal Server Error

While updating a Lambda function tied to API Gateway, I started getting an error when I hit the raw API endpoint. Previously, the Lambda function had returned a string, and I had just updated the function to return JSON instead.

When I then hit the endpoint for the updated Lambda, I started getting this error: {"message": "Internal server error"}. To spare you the many wrong roads I went down, the solution was straightforward. My function had previously returned a string; now, returning JSON, it needed to include a proper response code and headers for the response to be rendered from the endpoint.

So, what was essentially:

print("string")
return None

became, instead:

return {
    'statusCode': 200,
    'headers': {'Content-Type': 'application/json'},
    'body': json.dumps(array)
}

Simulate an AWS IoT Device

AWS makes it easy to simulate an IoT device with a script run from the CLI of your local machine. Let’s walk through it. We’ll reference the script pasted below.

simulate_aws_iot_device.py
import json
import time
import random
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

CLIENT_NAME = 'city-temp-device'
TOPIC = 'city-temp-device/1'

BROKER_PATH = 'XXXXXXXXXXXXXXX.iot.region.amazonaws.com'

ROOT_CA_PATH = './certs/AmazonRootCA1.pem'
PRIVATE_KEY_PATH = './certs/urban-temp/XXXXXXXXXXXXXX.pem.key'
CERTIFICATE_PATH = './certs/urban-temp/XXXXXXXXXXX-certificate.pem.crt'

IoTclient = AWSIoTMQTTClient(CLIENT_NAME)
IoTclient.configureEndpoint(BROKER_PATH, 8883)
IoTclient.configureCredentials(
    ROOT_CA_PATH,
    PRIVATE_KEY_PATH,
    CERTIFICATE_PATH
)

# https://s3.amazonaws.com/aws-iot-device-sdk-python-docs/html/index.html#AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTClient.configureOfflinePublishQueueing
# If set to -1, the queue size is set to be infinite.
IoTclient.configureOfflinePublishQueueing(-1)

# https://s3.amazonaws.com/aws-iot-device-sdk-python-docs/html/index.html#AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTClient.configureDrainingFrequency
# Used to configure the draining speed to clear up the queued requests when the connection is back. Should be called before connect.
IoTclient.configureDrainingFrequency(2)

# https://s3.amazonaws.com/aws-iot-device-sdk-python-docs/html/index.html#AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTClient.configureConnectDisconnectTimeout
# Used to configure the time in seconds to wait for a CONNACK or a disconnect to complete. Should be called before connect.
IoTclient.configureConnectDisconnectTimeout(10)

# https://s3.amazonaws.com/aws-iot-device-sdk-python-docs/html/index.html#AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTClient.configureMQTTOperationTimeout
# Used to configure the timeout in seconds for MQTT QoS 1 publish, subscribe and unsubscribe. Should be called before connect.
IoTclient.configureMQTTOperationTimeout(5)

IoTclient.connect()

IoTclient.publish(TOPIC, "connection status: ok", 0)

def payload():
    locations = ["El Paso", "Dallas", "Los Angeles", "Seattle", "NYC", "Lincoln", "Omaha", "Uvalde"]
    return json.dumps({"location": random.choice(locations), "temperature": round(random.uniform(0, 100), 2)})

while True:
    IoTclient.publish(TOPIC, payload(), 0)
    time.sleep(1)  # throttle publishes so the loop doesn't flood the broker

First, open the AWS console and search for “IoT Core” service. Next, click “Get Started,” then, “Manage” >> “Things.” Then, “Create” >> “Create a single thing.”

Create a thing
  1. name your thing
  2. create or assign a thing type
  3. add to a group / create group

Click “Next”

Certificates

Your “device” will need three certificates to authenticate itself with AWS.

Click “Create certificate” and you’ll see “Certificate created!” on the next page along with three certificates: “A certificate for this thing”, “A public key” and “A private key.” Download these three certificates.

Underneath where it says “You also need to download a root CA for AWS IoT:“, click the “Download” button which will open a new tab. On this new page, download the “RSA 2048 bit key: Amazon Root CA 1.” Save that new page into a file named “AmazonRootCA1.pem.”

As AWS states in the documentation: “Server certificates allow your devices to verify that they’re communicating with AWS IoT and not another server impersonating AWS IoT. Service certificates must be copied onto your device and referenced when devices connect to AWS IoT.”

Return to the first certificate page and click “Activate.” We don’t have any policies yet (that’s next), so just click “Done.” You’ll return to the ‘Things’ page, where you’ll see your newly created thing.

Policies

From the main IoT page, under ‘Secure’, select ‘Policies’ and then ‘Create’ a new policy. Give your policy a name and an ‘Action.’

The ‘Action’ section will look similar to IAM and/or S3 bucket policies, and the principle is the same. For this example, we’ll give our policy the action statement iot:*, which allows all possible IoT actions. This policy is obviously too permissive for real-world applications; you can explore more restrictive and specific actions from those suggested in the dropdown.

We use the ‘Resource ARN’ section to allow or restrict access to specific topics on specific resources or accounts. For our purposes, we’ll just use “*” with an “Effect” of “Allow.” Again, this policy, allowing any action on any topic, falls under “Do Not Do This At Home/Prod,” but it makes our lives easier for right now. Click ‘Create.’
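For reference, the resulting policy document, with the wide-open action and resource described above, should look something like this (demo-only permissiveness, again):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:*",
      "Resource": "*"
    }
  ]
}
```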

Once the policy is created, click on ‘Certificates’ again, find the certificate that you just created. Select it, then, under “Actions,” attach the just created policy.

Now, on the main AWS IoT console page, under Settings in the lower left, copy the “Endpoint” value into the BROKER_PATH variable in the above script. You’ll also see that the script references the certificates we downloaded earlier. Set those paths appropriately and make sure the certificates are available at those locations.

Test

Now we can test our IoT ‘device.’ On the lower left, select “Test” and, under “Subscription topic,” insert the value of the TOPIC variable from the above script; the two must match exactly. Subscribe to the topic, then, on the next screen, “Publish to topic” and verify this message:

{
    "message": "Hello from AWS IoT console"
}

Good.

At this point, run the above script from your command line. You should see your “devices” from around the globe dumping their JSON temperature data. You’re not saving this data yet, however. In a later post, I’ll cover how to use Kinesis streams to ingest this data and then pipe it to S3, Glacier, Redshift or DynamoDB for durable storage.

Python Palindrome Test

Quick ditty using Python to tell if a string is a palindrome.

is_palindrome.py
#!/usr/bin/python
import sys

def is_palindrome(string):
    t_f = False
    string2 = ''
    arr_tmp = []
    rev_str = ''

    # can use the for loop with slice notation
    # or reversed() to reverse the string
    for item in list(string)[::-1]:
        arr_tmp.append(item)
    # arr_tmp = list(reversed(string))

    rev_str = string2.join(arr_tmp)

    if string == rev_str:
        t_f = True

    print(t_f)

is_palindrome(sys.argv[1])
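For what it’s worth, slice notation lets the whole check collapse into a single comparison; a terser version of the same idea:

```python
def is_palindrome(s: str) -> bool:
    # a string is a palindrome when it equals its own reverse
    return s == s[::-1]

print(is_palindrome("racecar"))  # True
print(is_palindrome("lambda"))   # False
```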

Shell script to update Lambda function

After you’ve updated the code for your Lambda function, here’s a shell script to repackage the Lambda and redeploy it to AWS. AWS’s official documentation on aws lambda update-function-code is here. There are much better ways to structure and manage your Lambda functions, but, in case you’re rolling old school, this shell script is handy. I’ve also included below the tree view of a basic directory structure for a Lambda function that this script would serve.

update_function.sh
#!/bin/zsh
lambda_name="LambdaName"
zip_file="${lambda_name}.zip"

files="lambda_function.py"
chmod -R 755 ${files}
zip -r "${zip_file}" $files

aws lambda update-function-code --region "us-east-1" --function-name "${lambda_name}" --zip-file "fileb://${zip_file}"

/psycopg2/
/test_function.sh
/dist
/dist/lambda_function.py
/deployment_bundle.zip
/event.json
/output.json
/lambda_function.py
/create_function.sh
/update_function.sh

Connect PostgreSQL RDS instance and Python AWS Lambda function

I recently had a need to write from a Lambda function into a PostgreSQL RDS instance. Normally, I would just copy all my Python dependencies from my virtual env into a “dist” folder, zip that folder up with the lambda_function.py file and deploy that to S3, then Lambda.

For reasons well beyond the scope of this post, that method doesn’t work with the psycopg2 library, however. Lucky for us, this repo contains precompiled versions of psycopg2 for both Python2.* and Python3.6. The instructions in the repo are clear, so we’ll follow them. We’ll make a directory and copy the appropriate psycopg2 library for use with Python 3.6. (Don’t forget to rename it from psycopg2-3.6 to psycopg2.)

Next, we’ll fire up an RDS instance of PostgreSQL.

aws rds create-db-instance \
    --db-subnet-group-name default \
    --db-instance-identifier LambdaPGConnect \
    --db-instance-class db.t2.micro \
    --engine postgres \
    --allocated-storage 5 \
    --publicly-accessible \
    --db-name LambdaPGConnectDB \
    --master-username lambdapgconnect \
    --master-user-password lambdapgconnect1234 \
    --backup-retention-period 3 \
    --vpc-security-group-ids sg-########

I chose a subnet group and security group in my default VPC, which is fine for this tutorial but not recommended for production work. Remember the security group ID; we’ll need it in a few minutes. I set the --publicly-accessible flag for this instance so that I can access it via my SQL client. (It can take a bit of time for AWS to create your DB.)

After the RDS instance is created, we’ll make some edits to the ‘Security Group’ inbound rules, then log-in to the instance via our PostgreSQL client of choice in order to make a table and populate it.

From the AWS console, go to RDS > Databases, then click on the database you just created. Under ‘Connectivity’, look under Security > VPC Security Groups and click on that security group. Under the Inbound tab, click ‘Edit’ and add a ‘PostgreSQL’ rule. The ‘Protocol’ and ‘Port Range’ will self-populate; under ‘Source’, select ‘My IP’. This allows you to access resources in that VPC from your current IP address.

[screenshot: editing the security group’s inbound rules in the console]

Then, add another rule of type ‘PostgreSQL’. This time the ‘Source’ will be ‘Custom’, and we’ll add the Security Group itself, i.e. we’re telling the Security Group it can access resources from within itself. This rule is necessary because the Lambda function and the RDS instance are in the same group and their relationship must be made explicit.

From the command line, run aws rds describe-db-instances; from that JSON we’ll need Endpoint.Address and Endpoint.Port, as well as the master-username and master-user-password from the create statement. We should be able to log in to the instance from our SQL client now. Create the table and populate it with the SQL statements below.
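If you’d rather pull those fields programmatically, the describe-db-instances JSON is easy to pick apart with a few lines of Python (the sample payload below is abbreviated and hypothetical):

```python
import json

# abbreviated, hypothetical describe-db-instances output
sample = json.loads("""
{
  "DBInstances": [
    {
      "DBInstanceIdentifier": "lambdapgconnect",
      "Endpoint": {"Address": "lambdapgconnect.example.us-east-1.rds.amazonaws.com", "Port": 5432}
    }
  ]
}
""")

# pull the endpoint address and port for the connection string
endpoint = sample["DBInstances"][0]["Endpoint"]
print(endpoint["Address"], endpoint["Port"])
```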

7
CREATE TABLE "public"."estabs_tbl" (
    "name" varchar(255),
    "estab_id" varchar(255),
    "address" varchar(255),
    "latitude" float4,
    "longitude" float4
);
INSERT INTO estabs_tbl (address, estab_id, latitude, longitude, name) VALUES
('1222 24TH ST NW San Antonio, TX 78207','1021','29.4418','-98.5409','CASA DOS LAREDOS'),
('1150 AUSTIN HWY San Antonio, TX 78209','1023','29.487','-98.4476','BUN & BARREL'),
('1306 BASSE RD San Antonio, TX 78212','10235','29.4879','-98.5047','SHOP N SHOP');

Let’s create the Lambda package. We’ll just write a single file function.

lambda_function.py
#!/usr/bin/python
import psycopg2

db_host = "lambdapgconnect.ctxoyiwkbtuq.us-east-1.rds.amazonaws.com"
db_port = 5432
db_name = "LambdaPGConnectDB"
db_user = "lambdapgconnect"
db_pass = "lambdapgconnect1234"
db_table = "estabs_tbl"

def create_conn():
    conn = None
    try:
        conn = psycopg2.connect("dbname={} user={} host={} password={}".format(db_name, db_user, db_host, db_pass))
    except psycopg2.Error:
        print("Cannot connect.")
    return conn

def fetch(conn, query):
    result = []
    print("Now executing: {}".format(query))
    cursor = conn.cursor()
    cursor.execute(query)

    raw = cursor.fetchall()
    for line in raw:
        result.append(line)

    return result

def lambda_handler(event, context):
    query_cmd = "select count(*) from estabs_tbl"
    print(query_cmd)

    # get a connection; if a connection cannot be made, create_conn returns None
    conn = create_conn()

    result = fetch(conn, query_cmd)
    conn.close()

    return result

Next, we’ll create the appropriate role to assign to the Lambda function.

create_role.sh
role_name="lambda-vpc-execution-role"
role_policy_arn="arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"

aws iam create-role --role-name "${role_name}" --assume-role-policy-document file://role_policy.txt
aws iam attach-role-policy --role-name "${role_name}" --policy-arn "${role_policy_arn}"

This role allows the Lambda function to operate within the VPC. We also have to supply a trust policy, role_policy.txt, which lets the Lambda service assume the role.

role_policy.txt
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

I’ve already generated this role for previous Lambda work, so I’ll just need to create the Lambda function. The role_arn is returned when you generate the role. I’m using the command-line JSON processor jq to pull the subnet_ids and sec_group_id from the output of the AWS CLI ec2 commands. The function creation itself is straightforward; all the communication and permissions between the RDS instance and the Lambda function were covered when we edited the security group inbound rules and attached the appropriate role and policy.

create_function.sh
#!/bin/zsh
lambda_name="LambdaPGConnectDB"
zip_file="${lambda_name}.zip"
role_arn="arn:aws:iam::nnnnnnnnnnnn:role/lambda-vpc-execution-role"
subnet_ids=`aws ec2 describe-subnets |\
    jq -r '.Subnets|map(.SubnetId)|join(",")'`
sec_group_id=`aws ec2 describe-security-groups --group-names "default" |\
    jq -r '.SecurityGroups[].GroupId'`

files="lambda_function.py"
chmod -R 755 ${files}
zip -r "${zip_file}" psycopg2 $files

aws lambda create-function \
    --region "us-east-1" \
    --function-name "${lambda_name}" \
    --zip-file "fileb://${zip_file}" \
    --role "${role_arn}" \
    --handler "lambda_function.lambda_handler" \
    --runtime python3.6 \
    --timeout 60 \
    --vpc-config SubnetIds="${subnet_ids}",SecurityGroupIds="${sec_group_id}"

If you find you need to edit your function, it will probably be small enough to allow you to edit via the Lambda console, but that’s not a best practice. Instead, you can use a small shell script like this to update your function, rezip and push to AWS.

update_function.sh
#!/bin/zsh
lambda_name="LambdaPGConnectDB"
zip_file="${lambda_name}.zip"

files="lambda_function.py"
chmod -R 755 ${files}
zip -r "${zip_file}" psycopg2 $files

aws lambda update-function-code --region "us-east-1" --function-name "${lambda_name}" --zip-file "fileb://${zip_file}"
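With the function deployed, it can be invoked straight from the CLI, writing the response payload to output.txt:

```shell
# invoke the function and capture its return value in output.txt
aws lambda invoke \
    --region "us-east-1" \
    --function-name "LambdaPGConnectDB" \
    output.txt
```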

A successful run will yield the following JSON response and a cheerful [[3]] in the output.txt file.

success response
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}