set up a microservice in 5 minutes with Serverless
tl;dr: write serverless infrastructure as code
cost: $0
build time: 5 minutes (MVP)
overview
Serverless, confusingly, means two things.
There's serverless the concept - your infra provider dynamically spins up compute at invocation time and spins it down after. This means no more worrying about managing (or scaling) compute - it's all done for you. The major serverless products are AWS Lambda, Azure Functions, and GCP Cloud Functions.
And there's serverless the infrastructure framework, which I'll refer to from now on as serverless.com. It lets you generate all the necessary DevOps components with code (specifically, the serverless.yml). Serverless.com can support anything from a ten-minute pet project to a production microservice architecture (though there are some limitations; see Gotchas)
Serverless is a Node.js CLI, so you'll need a small amount of JavaScript tooling (Node and npm) to set it up. The compute environments themselves support many languages (e.g. Lambda supports Node, Python, Ruby, Go, Java, PowerShell, and C#)
Below, I'll cover:
- setting up the local environment
- connecting AWS
- creating your first microservice
- invoking and passing data to a Lambda
- helpful commands
- bells and whistles
(here's the GitHub repo of the code covered below)
#1 - setting up the environment
# 1.0 - create a directory
mkdir serverless && cd serverless
# 1.1 - (optional) set up a virtual environment if you want
virtualenv venv && source venv/bin/activate
# 1.2 - install serverless (execute each line by itself)
npm init
sudo npm install -g serverless
# if you want to stand up a localhost server to test, add
npm install serverless-offline --save-dev
# if you want to bundle dependencies in your deploy object, add
# I generally recommend using Lambda Layers instead
npm install serverless-python-requirements --save
# 1.3 - Check it all worked out (sls is the short serverless.com CLI command)
sls version
#2 - connect to AWS
serverless.yml infra-as-code can be easily ported between the major cloud platforms. That said, I'm going to cover AWS here as it is the default for most use cases.
# 2.1 - if you haven't yet, install the AWS CLI
pip3 install awscli --upgrade
# 2.2 - if you haven't yet, configure AWS CLI. I use us-west-2
aws configure
# 2.3 - configure serverless.com to use AWS creds
sls config credentials --provider aws --key YOUR_KEY_HERE --secret YOUR_SECRET_HERE
(alternatively, you can configure via environment variables)
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
# 2.4 - make sure your account has the IAM permissions it needs
serverless.com recommends this set of permissions for the account your key/secret are attached to. I find them a bit extensive; you can probably get away with admin on Lambda, API Gateway, CloudFormation, and CloudWatch. You will define the (much more specific) IAM role each Lambda will have in the serverless.yml
#3 - creating your first microservice function
The command below scaffolds a new Serverless project locally; running sls deploy later is what creates the Lambda function and a CloudFormation stack in your AWS account.
# 3.1 - create a new serverless project from template (note: no underscores in --name)
sls create \
--template aws-python3 \
--name lambda-test \
--path serverless-test \
&& cd serverless-test
This will generate a few files in a new child directory for you:
- handler.py - the Python code that your Lambda will execute
- serverless.yml - the YAML code where you declare what infra you want built
- a .gitignore
# 3.2 - the handler.py
Let's open up that handler.py and take a look:
import json

def hello(event, context):
    body = {
        "message": "Go Serverless v1.0!",
        "input": event
    }

    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }

    return response
Some important parts:
- hello() - the main function, invoked when the Lambda is called. Change this to the standard lambda_handler()
- event - all variables passed when the Lambda is invoked. These vary by invoke method (e.g. CLI local invoke vs API Gateway)
- context - functions that let you introspect the Lambda instance (e.g. printing its runtime)
- response - Lambdas should return API Gateway-compatible objects with JSON-serialized data, even when the calling source is not an API Gateway
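Putting those pieces together, here's a minimal rewrite of the template handler, renamed to the standard lambda_handler and echoing back any query-string parameters from an API Gateway GET request (the message text and parameter handling are just illustrative, not part of the generated template):

```python
import json

def lambda_handler(event, context):
    # API Gateway GET requests deliver query-string args here; default to {}
    params = event.get("queryStringParameters") or {}

    body = {
        "message": "Hello from Lambda",
        "params": params,
    }

    # Return an API Gateway-compatible object with a JSON-serialized body
    return {
        "statusCode": 200,
        "body": json.dumps(body),
    }
```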
# 3.3 - the serverless.yml
The sls create command will give you 114 lines of (mostly commented-out) YAML so you can see different ways to use it. Let's narrow it down to what we're going to use:
service: test-stack

provider:
  name: aws
  runtime: python3.7
  stage: prod # The default stage for sls deploy
  region: us-west-2
  logRetentionInDays: 120 # Personal preference
  memorySize: 1024 # Default for Lambdas in this stack
  timeout: 10 # Default for Lambdas in this stack; AWS default is 3s
  logs:
    restApi: true # Log API Gateway calls
  apiKeys:
    - admin_key # Generate an API key called 'admin_key'
  tags:
    deployment: serverless.com
  iamRoleStatements:
    - Effect: Allow # Allows Lambdas to invoke each other
      Action:
        - lambda:InvokeFunction
        - lambda:InvokeAsync
      Resource:
        - "*"

functions:
  test-lambda: # Name of the Lambda function
    handler: handler.lambda_handler # filename.function_name
    events: # What can invoke this Lambda
      - http: # API Gateway event
          path: /test_lambda # API endpoint
          method: get # Supported HTTP verb
          private: true # Requires an API key to access
# 3.4 - let's see it in action
To invoke the function locally (note: invoke takes the function name from serverless.yml, not the filename):
sls invoke local -f test-lambda
To deploy the stack to your AWS account:
sls deploy
To invoke the now-deployed, AWS-hosted Lambda:
sls invoke -f test-lambda
That's all you need to get started. You can stop here if you'd like.
#4 - invoking and passing data to a Lambda
Functions need data to be useful. There are 18 ways at present to invoke a Lambda and pass data to it. Here are a few of the more common ones:
local invoke: pass data as a JSON string from the CLI
sls invoke local -f test-lambda -d '{"key1": "value1", "key2": 2}'
cloud invoke: pass data from a .json file
sls invoke -f test-lambda -p data_file.json
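For the file-based invoke above, a data_file.json could be as simple as (reusing the example keys from the local invoke):

```json
{
  "key1": "value1",
  "key2": 2
}
```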
API Gateway invoke:
curl -X GET \
'https://Ay4s04xFYq.execute-api.us-west-2.amazonaws.com/prod/test-lambda?key1=value1&key2=2' \
-H 'x-api-key: THEAPIKEYGENERATEDBYYOURDEPLOY'
# 4.1 - how to implement those invocation methods
synchronous API Gateway invocation (for asynchronous, add async: true)
functions:
  function-name:
    events:
      - http:
          async: true
For async functions, API Gateway will not wait for the Lambda to execute, and instead will return status code 202 immediately.
cron invocation - add schedule: cron or schedule: rate (Docs)
functions:
  function-name:
    events:
      - schedule: cron(0 18 ? * SUN *) # Runs weekly on Sundays at 18:00 UTC (11 am PDT)
      - schedule: rate(7 days) # Runs weekly; you don't pick when
s3 invocation - when something is added or modified in a bucket (Docs)
functions:
  function-name:
    events:
      - s3: bucket-name
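Inside the function, S3 delivers one or more Records describing the changed objects. A sketch of a handler that pulls out the bucket and key (no actual S3 API calls here; the event shape is the standard S3 notification format) might look like:

```python
import json
import urllib.parse

def lambda_handler(event, context):
    changed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded (e.g. spaces as '+')
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        changed.append({"bucket": bucket, "key": key})

    return {
        "statusCode": 200,
        "body": json.dumps(changed),
    }
```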
SQS invocation - reading from a queue
functions:
  function-name:
    events:
      - sqs:
          arn: arn:aws:sqs:us-west-2:${env:AWS_ACCOUNT_ID}:queueName
          batchSize: 10
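On the consuming side, each invocation receives up to batchSize messages in event["Records"]. A minimal handler that parses each message body as JSON (assuming the producer sends JSON; the "processed" counter is just illustrative) could look like:

```python
import json

def lambda_handler(event, context):
    processed = 0
    for record in event.get("Records", []):
        # SQS delivers the message body as a string, exactly as the producer sent it
        message = json.loads(record["body"])
        # ... do something with message ...
        processed += 1

    return {
        "statusCode": 200,
        "body": json.dumps({"processed": processed}),
    }
```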
#5 - helpful commands
basic deploy
sls deploy -s prod --conceal
if you rename your serverless.yml (for example, for a separate stack to manage your DynamoDB tables), add --config
sls deploy --config "serverless-dynamo.yml"
you can deploy just one lambda at a time (it is slightly faster) if you want
sls deploy function -f test-lambda
destroy the CloudFormation stack and associated resources
sls remove
check service details, including API keys, without redeploying
sls info -s <stage_name> -v
want verbose debug logging?
export SLS_DEBUG=*
#6 - bells and whistles
#6.1 - environment variables
You can pass environment variables to the cloud-hosted functions one of two ways:
for every Lambda in the stack
provider:
  # ... truncated ...
  environment:
    GLOBAL_KEY: ${env:GLOBAL_KEY}
for a specific Lambda function
functions:
  function-name:
    environment:
      SPECIFIC_KEY: ${env:SPECIFIC_KEY}
then be sure your local virtual environment has the keys you specified when deploying:
export GLOBAL_KEY=1234567890
export SPECIFIC_KEY=0987654321
#6.2 - offline 'server' testing
Want to test your sls stack on a localhost 'server'? Add these two lines to the bottom of your serverless.yml
plugins:
- serverless-offline
then execute
sls offline
You can test it by going to localhost:3000/endpoint_specific_path
#6.3 - bundling dependencies
I recommend you use Lambda Layers to host cached versions of the libraries you use in AWS. I built a quick tool for creating your own here.
If you have a large number of small, non-standard libraries, you can instead bundle them along with the Lambda code in your sls deploys. This will make deployments take longer.
I prefer to use a pipenv virtual environment and freeze only the requirements I want in the cloud to requirements.txt; this prevents accidentally bundling large dependencies from your local env (e.g. boto3, which is already available in the Lambda runtime) that you needn't include.
Add to your serverless.yml:
plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux # Old templates may have true. Use non-linux
    zip: true # Makes deploys faster
    usePipenv: false # Bundle reqs from requirements.txt rather than Pipfile
A few example OSS repos I've built with Lambda on serverless.com:
- Google Sheets as a DB
- Twilio SMS to Email
- Gumroad Webhook to Dynamo/Google Analytics
- Send emails with SES
Thanks for reading. Questions or comments? 👉🏻 alec@contextify.io