If you’ve read the blog posts on CloudJourney.io before, you’ve likely come across the term “Continuous Verification”. If not, no worries. There’s a solid article from Dan Illson and Bill Shetti on The New Stack that explains it in detail. The short version: Continuous Verification means putting as many automated checks as possible into your CI/CD pipelines. More checks, fewer manual tasks, more data to smooth out and improve your development and deployment process.
In part one of this series, we covered the tools and technologies behind the ACME Serverless Fitness Shop. Now it’s time to look at the CI/CD side of things.
What is the ACME Serverless Fitness Shop? #
Quick recap: the ACME Serverless Fitness Shop combines two of my favorite things — serverless and fitness. It has seven distinct domains, each with one or more serverless functions. Some are event-driven, others have an HTTP API, and all of them are written in Go.
Continuous Anything #
“Continuous Anything” isn’t just a catchy title. It captures the idea that all the practices starting with “Continuous” should work together in a single run from code to production: integration (building and testing), deployment (getting builds to staging and production), and verification (making sure the deployment is the right thing to do).
There are plenty of CI/CD tools to choose from — Jenkins, Travis CI, CircleCI, and others. For the ACME Serverless Fitness Shop, I had a few requirements. Serverless means not running your own servers, so the CI/CD tool needs to be a managed service. There are multiple service domains with their own repositories, but some variables should be shared across all of them from a single place.
Jenkins is well-known, but almost everything requires a plugin installed on the server. You need to make sure build tools are available on the machine running the builds. And as far as I know, there’s no managed Jenkins service.
Travis CI runs builds in a container or VM with minimal tooling beyond the language you’ve chosen. You also can’t share environment variables across multiple projects.
That brings me to CircleCI. It’s a managed service with a generous free tier. CircleCI offers orbs — reusable pieces of YAML that can be configuration, commands, or entire CI jobs. It’s the plugin concept without requiring anyone to install them on a server. The ACME Serverless Fitness Shop has seven service domains, each with its own GitHub repository. CircleCI’s “Build Contexts” let me share environment variables across builds. I only need to update my Sentry DSN in one place and it’s available to all builds.
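A context is attached to a job in the workflow section of the config. Here's a minimal sketch — the workflow name is hypothetical, and the variables inside the ACMEServerless context are defined in the CircleCI UI, not in the YAML:

```yaml
workflows:
  version: 2
  deploy:
    jobs:
      - build:
          # All environment variables defined in the ACMEServerless context,
          # such as the shared Sentry DSN, become available to this job.
          context: ACMEServerless
```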
Continuous Integration #
Back in 1991, Grady Booch coined the term Continuous Integration as “the practice of merging all developers’ working copies to a shared mainline several times a day”. The idea still holds: the longer a developer works on a separate copy of a codebase, the higher the chance of conflicts. These can range from simple library version mismatches to multiple developers editing the same method in the same file.
A typical Continuous Integration workflow runs builds and tests, then gives feedback when something fails. In CircleCI, that takes about 10 lines of YAML. The image used is the next-gen convenience image from CircleCI, which comes with useful Go tools pre-installed.
```yaml
# The version of the CircleCI configuration language to use (2.1 is needed for most orbs)
version: 2.1
jobs:
  build:
    docker:
      # The ACME source code is in Go, so we'll rely on the image provided by the CircleCI team (1.14 is the latest version at the time of writing)
      - image: cimg/go:1.14
    steps:
      # Get the sources from the repository
      - checkout
      # Get all dependencies, including test dependencies (-t), and download them without installing (-d)
      - run: go get -t -d ./...
      # Run gotestsum for all packages, which is a great tool to run tests and see human friendly output
      - run: gotestsum --format standard-quiet
      # Compile and build executables and store them in the .bin folder
      - run: GOBIN=$(pwd)/.bin go install ./...
```

Continuous Delivery #
Finding a clean definition of Continuous Delivery is harder than you’d think. After a lot of back-and-forth with Matty Stratton, Laura Santamaria, and Aaron Aldrich, the consensus is: Continuous Delivery means being able to test the entire codebase and prepare it for production, with the final push to production requiring manual approval.
For the ACME Serverless Fitness Shop, I use Pulumi for this. I want a tool without a custom DSL — I’m not a YAML expert and I’d rather write Go. The Pulumi team built an orb that makes the CircleCI integration straightforward, handling installation and boilerplate. In a future post, we’ll look at how the Pulumi scripts are structured.
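To give a taste of what "infrastructure in Go" looks like, here's a minimal, hypothetical Pulumi program. It is a sketch, not the actual ACME scripts: the function name, zip path, and role ARN are made up, and running it requires the Pulumi CLI plus the pulumi and pulumi-aws Go SDKs.

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/go/aws/lambda"
	"github.com/pulumi/pulumi/sdk/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Deploy one of the compiled executables from the .bin folder as a
		// Lambda function (all names and paths here are hypothetical).
		fn, err := lambda.NewFunction(ctx, "acme-cart", &lambda.FunctionArgs{
			Runtime: pulumi.String("go1.x"),
			Handler: pulumi.String("acme-cart"),
			Code:    pulumi.NewFileArchive("./.bin/acme-cart.zip"),
			Role:    pulumi.String("arn:aws:iam::123456789012:role/acme-lambda-role"),
		})
		if err != nil {
			return err
		}
		ctx.Export("functionName", fn.Name)
		return nil
	})
}
```

The appeal is that resources are ordinary Go values: error handling, loops, and shared helper functions work exactly as they do in the application code.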
```yaml
version: 2.1
# Register the Pulumi orb
orbs:
  pulumi: pulumi/pulumi@1.2.0
jobs:
  build:
    docker:
      - image: circleci/golang:1.14
    steps:
      - checkout
      - run: go get -t -d ./...
      - run: gotestsum --format standard-quiet
      - run: GOBIN=$(pwd)/.bin go install ./...
      # Log in to Pulumi using a specific version of the CLI
      - pulumi/login:
          version: 1.12.1
      - pulumi/preview:
          stack: retgits/dev
          working_directory: ~/project/pulumi
workflows:
  version: 2
  deploy:
    jobs:
      - build:
          # The context gives the ability to set environment variables that are shared across pipelines.
          # In this case the ACMEServerless context has the Pulumi Token that is needed by the Pulumi orb
          context: ACMEServerless
```

At 23 lines, this pipeline already builds and tests both the app code and the infrastructure. It also sets the context (shared environment variables) so all builds have the same configuration.
Continuous Verification #
As a reminder, Continuous Verification is “A process of querying external system(s) and using information from the response to make decision(s) to improve the development and deployment process.”
There are many things you can verify, but here are four that matter most for this project:
- Security: Are the Go modules safe? Are the IAM settings correct?
- Performance: Is the function execution time acceptable?
- Utilization: How much memory are the functions actually using?
- Cost: What will running these components cost?
Starting with security — it's a shared responsibility to build safe software. I've written about Snyk before, so I won't repeat the details here. Scanning Go modules takes two additional lines of YAML: add the orb (`snyk: snyk/snyk@0.0.10`) and add the scan step (`snyk/scan`).
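In the config, those two lines land in the orbs section and the steps list respectively. A fragment (the orb reads its Snyk API token from the environment, which makes the token another good candidate for the shared context):

```yaml
orbs:
  snyk: snyk/snyk@0.0.10
jobs:
  build:
    steps:
      # Scan the dependencies in go.mod for known vulnerabilities
      - snyk/scan
```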
To check performance, you can invoke the Lambda function a few times and then use the AWS CLI to pull CloudWatch metrics. That requires four steps:
- Add the AWS CLI orb: `aws-cli: circleci/aws-cli@0.1.22`
- Add the setup step: `aws-cli/setup` (use a context to avoid putting credentials in your YAML)
- Add a run step to get statistics:

```shell
export FUNCTION=AllCarts
export ENDDATE=`date -u '+%Y-%m-%dT%TZ'`
export STARTDATE=`date -u -d "1 day ago" '+%Y-%m-%dT%TZ'`
# jq sums the hourly averages; floor turns the result into an integer so bash can compare it
export DURATION=`aws cloudwatch get-metric-statistics --metric-name Duration --start-time $STARTDATE --end-time $ENDDATE --period 3600 --namespace AWS/Lambda --statistics Average --dimensions Name=FunctionName,Value=$FUNCTION | jq '.Datapoints | map(.Average) | add | floor'`
if (($DURATION > 3000)); then echo "Alert" && exit 1; else echo "Within range. Continuing"; fi
```

- Set an appropriate threshold for `$DURATION`. In this example, it's 3000 milliseconds.
In a future post on tracing, we’ll look at using VMware Tanzu Observability by Wavefront to check function costs and memory utilization.
Here’s the full pipeline with all the steps and orbs:
```yaml
version: 2.1
orbs:
  pulumi: pulumi/pulumi@1.2.0
  snyk: snyk/snyk@0.0.10
  aws-cli: circleci/aws-cli@0.1.22
jobs:
  build:
    docker:
      - image: circleci/golang:1.14
    steps:
      - checkout
      - run: go get -t -d ./...
      - run: gotestsum --format standard-quiet
      - run: GOBIN=$(pwd)/.bin go install ./...
      - snyk/scan
      - pulumi/login:
          version: 1.12.1
      # We're updating the stack instead of only showing the preview
      - pulumi/update:
          stack: retgits/dev
          working_directory: ~/project/pulumi
          skip-preview: true
      - aws-cli/setup
      - run: export FUNCTION=AllCarts && export ENDDATE=`date -u '+%Y-%m-%dT%TZ'` && export STARTDATE=`date -u -d "1 day ago" '+%Y-%m-%dT%TZ'` && export DURATION=`aws cloudwatch get-metric-statistics --metric-name Duration --start-time $STARTDATE --end-time $ENDDATE --period 3600 --namespace AWS/Lambda --statistics Average --dimensions Name=FunctionName,Value=$FUNCTION | jq '.Datapoints | map(.Average) | add | floor'` && if (($DURATION > 3000)); then echo "Alert" && exit 1; else echo "Within range. Continuing"; fi
workflows:
  version: 2
  deploy:
    jobs:
      - build:
          context: ACMEServerless
```

29 lines of YAML that:
- Build and test the Go code for the Lambda functions
- Scan Go modules for security vulnerabilities
- Validate the infrastructure deployment
- Update the development environment
- Run a performance check
The best part is that all builds use the exact same steps, and I don’t have to share any credentials thanks to the CircleCI context.
What’s next? #
Next up, we’ll look at Infrastructure as Code with Pulumi in more detail.
Cover photo by Magda Ehlers from Pexels