Data Driven Decisions in DevOps @ MyDevSecOps
Unlock the full potential of DevOps with Continuous Verification! Elevate your CI/CD pipeline by integrating security, performance, and cost checks for smarter deployment decisions.
With everything going on in DevOps, I think we can safely say that building pipelines is the way to deploy your applications to production. But knowing what you deploy to production, and whether it is actually okay to do so, requires more data: security checks, performance checks, and budget checks. We've come up with a process for that, which we call Continuous Verification: "a process of querying external systems and using information from the response to make decisions to improve the development and deployment process." In this session, we'll look at extending an existing CI/CD pipeline with checks for security, performance, and cost to decide whether we want to deploy our app or not.
The talk
At VMware we define Continuous Verification as:
"A process of querying external systems and using information from the response to make decisions to improve the development and deployment process."
Continuous Verification is, by definition, an extension of the existing development and deployment processes that companies already have. That extension focuses on optimizing both the development and deployment experience by looking at security, performance, and cost. At most companies, some of these checks are done manually or with ad-hoc scripts, but they are hardly ever a real part of the deployment pipeline.
And that is exactly how we can make sure we build software better, faster, and more securely!
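Every one of those checks follows the same pattern: call an external system, interpret the answer, and let the exit code decide whether the pipeline moves on. Purely as a minimal sketch of that pattern (the endpoint, token variable, and latency threshold below are invented for illustration and are not part of the talk materials), a performance gate could look like this:

#!/usr/bin/env python3
"""Minimal sketch of a Continuous Verification gate, not code from the talk:
query an external system and use its response to decide whether the
pipeline should continue. Endpoint, token, and threshold are invented."""

import json
import os
import sys
import urllib.request

METRICS_URL = os.environ.get("METRICS_URL", "https://metrics.example.com/api/p99-latency")  # hypothetical endpoint
API_TOKEN = os.environ.get("METRICS_TOKEN", "")  # hypothetical API token
MAX_P99_MS = float(os.environ.get("MAX_P99_MS", "250"))  # latency budget in milliseconds


def p99_latency_ms() -> float:
    """Query the external system for the current p99 latency."""
    request = urllib.request.Request(METRICS_URL, headers={"Authorization": f"Bearer {API_TOKEN}"})
    with urllib.request.urlopen(request) as response:
        return float(json.load(response)["p99_ms"])


if __name__ == "__main__":
    latency = p99_latency_ms()
    if latency > MAX_P99_MS:
        print(f"p99 latency {latency}ms exceeds the {MAX_P99_MS}ms budget, blocking the deployment")
        sys.exit(1)  # a non-zero exit code fails the CI job
    print(f"p99 latency {latency}ms is within budget, continuing")

A CI job only has to run a script like this: as soon as it exits with a non-zero code, GitLab marks the job as failed and stops the pipeline, which is exactly the gate behavior the jobs below rely on.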
Slides
Talk materials
- Continuous Verification: The Missing Link to Fully Automate Your Pipeline
- Prowler: AWS Security Best Practices Assessment, Auditing, Hardening and Forensics Readiness Tool
- VMware Secure State
- ACME Serverless Fitness Shop - Payment Service
- Tanzu Observability powered by Wavefront
- The ACME Fitness Shop
- Gotling
- Snyk.io
DevOps Pipeline
## Set the default image for the CI workflow
image: docker:19.03.8
## Global variables available to the workflow
variables:
  ## The host for the Docker daemon, set to docker:2375 to work with Docker-in-Docker (DinD)
  DOCKER_HOST: tcp://docker:2375
  ## Skip verification of TLS certificates for DinD
  DOCKER_TLS_CERTDIR: ""
## Specify which GitLab templates should be included
include:
  - template: Container-Scanning.gitlab-ci.yml
## Specify the stages in this pipeline and the order in which they need to run
stages:
  - scan_code
  - build
  - container_scanning
  - governance
  - deploy_staging
  - performance
  - deploy_production
## Stage scan_code performs a vulnerability analysis of the code using Snyk.io
scan_code:
  stage: scan_code
  image: golang:1.14
  script:
    ## Download the latest version of the Snyk CLI for Linux
    - curl -o /bin/snyk -L https://github.com/snyk/snyk/releases/latest/download/snyk-linux
    - chmod +x /bin/snyk
    ## Authenticate using a Snyk API token
    - snyk auth $SNYK_TOKEN
    ## Run Snyk to test for vulnerabilities in the dependencies
    - snyk test
## Build the container tagged with the commit revision for which the project is built
build:
  stage: build
  image: docker:19.03.8
  services:
    - docker:19.03.8-dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  before_script:
    - docker info
    - docker login -u "$DOCKER_USER" -p "$DOCKER_PASSWORD"
  script:
    - docker info
    - docker build --file $CI_PROJECT_DIR/cmd/cloudrun-payment-http/Dockerfile . -t $DOCKER_USER/payment:$CI_BUILD_REF
    - docker push $DOCKER_USER/payment:$CI_BUILD_REF
## Scan the containers built in this pipeline, using the Container-Scanning template included above
container_scanning:
  stage: container_scanning
## Validate whether the project is still within budget
budget:
  stage: governance
  image: vtimd/alpine-python-kubectl
  script:
    - chmod +x ./governance/budget.py
    - ./governance/budget.py $GITLAB_TOKEN
    - if [ "$OVERAGE" = "OVER" ]; then exit 1; else echo "Within Budget. Continuing!"; fi
## Validate whether the project follows the best practices set by the security team
security:
  stage: governance
  image: vtimd/alpine-python-kubectl
  script:
    - chmod +x ./governance/security.py
    - ./governance/security.py
    - if [ "$VSS_VIOLATION_FOUND" = "True" ]; then exit 1; else echo "Violation Check Passed. Continuing!"; fi
## Deploy the service to staging
deploy_staging:
  stage: deploy_staging
  image: google/cloud-sdk:alpine
  script:
    # Authenticate using the service account
    - echo $GCLOUD_SERVICE_KEY > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    # Deploy the image built earlier in this pipeline
    - gcloud run deploy payment --namespace=default --image=$DOCKER_USER/payment:$CI_BUILD_REF --platform=gke --cluster=$CLUSTER --cluster-location=$CLUSTER_LOCATION --connectivity=external --set-env-vars=SENTRY_DSN=$SENTRY_DSN,VERSION=$VERSION,STAGE=dev,WAVEFRONT_TOKEN=$WAVEFRONT_TOKEN,WAVEFRONT_URL=$WAVEFRONT_URL,MONGO_USERNAME=$MONGO_USERNAME,MONGO_PASSWORD=$MONGO_PASSWORD,MONGO_HOSTNAME=$MONGO_HOSTNAME
## Start traffic generation
traffic:
  stage: performance
  image: alpine:latest
  script:
    ## Download Gotling v0.3-alpha
    - apk add curl
    - curl -o /bin/gotling -L https://github.com/retgits/gotling/releases/download/v0.3-alpha/gotling
    - chmod +x /bin/gotling
    ## Run the performance test
    - gotling governance/trafficgen.yaml
## Check performance against Wavefront
perf_stats:
  stage: performance
  image:
    name: retgits/wavefront-pod-inspector:serverless
    entrypoint: [""]
  script:
    - /bin/entrypoint.sh
    - if [ "$abc" = "failed" ]; then echo "Alert" && exit 1; else echo "Within range. Continuing!"; fi
## Deploy the service to production
deploy_production:
  stage: deploy_production
  image: google/cloud-sdk:alpine
  script:
    # Authenticate using the service account
    - echo $GCLOUD_SERVICE_KEY > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    # Deploy the image built earlier in this pipeline
    - gcloud run deploy payment --namespace=default --image=$DOCKER_USER/payment:$CI_BUILD_REF --platform=gke --cluster=$CLUSTER --cluster-location=$CLUSTER_LOCATION --connectivity=external --set-env-vars=SENTRY_DSN=$SENTRY_DSN,VERSION=$VERSION,STAGE=prod,WAVEFRONT_TOKEN=$WAVEFRONT_TOKEN,WAVEFRONT_URL=$WAVEFRONT_URL,MONGO_USERNAME=$MONGO_USERNAME,MONGO_PASSWORD=$MONGO_PASSWORD,MONGO_HOSTNAME=$MONGO_HOSTNAME
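The governance and performance gates above call helper scripts (governance/budget.py, governance/security.py, and the Wavefront inspector's entrypoint) that ship with the talk materials but aren't listed here. Purely as a hypothetical illustration of the shape such a script could take, and not the actual budget.py from the repository, a budget check might query a cost API and print a verdict that the job captures with something like OVERAGE=$(./governance/budget.py $GITLAB_TOKEN):

#!/usr/bin/env python3
"""Hypothetical sketch of a budget gate in the spirit of governance/budget.py.
The cost endpoint, response field, and default limit are invented for illustration."""

import json
import os
import sys
import urllib.request

COST_URL = os.environ.get("COST_URL", "https://costs.example.com/api/month-to-date")  # hypothetical endpoint
BUDGET_LIMIT = float(os.environ.get("BUDGET_LIMIT", "500"))  # monthly budget in dollars


def month_to_date_spend(token: str) -> float:
    """Ask the (hypothetical) cost endpoint for the current month-to-date spend."""
    request = urllib.request.Request(COST_URL, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(request) as response:
        return float(json.load(response)["spend"])


if __name__ == "__main__":
    # The pipeline passes the token as the first argument: ./governance/budget.py $GITLAB_TOKEN
    token = sys.argv[1] if len(sys.argv) > 1 else ""
    verdict = "OVER" if month_to_date_spend(token) > BUDGET_LIMIT else "UNDER"
    # Print only the verdict so the CI job can capture it in $OVERAGE
    print(verdict)

Whatever the real scripts query (the talk materials point at VMware Secure State for the security check and Tanzu Observability for the performance check), the only contract that matters to the pipeline is the verdict they hand back.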