A dozen reasons why Cloud Run complies with the Twelve-Factor App methodology



One codebase tracked in revision control, many deploys

Each service you intend to deploy on Cloud Run should live in its own repository, whatever your choice of source control software. When you want to deploy your service, you need to build the container image, then deploy it.

For building your container image, you can use a third-party build system, or Cloud Build, GCP’s own build system.

You can even supercharge your deployment story by integrating Build Triggers, so that any time you, say, merge to master, your service is built, pushed, and deployed to production.
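A trigger’s cloudbuild.yaml for that flow might look something like this (a sketch: the image and service names are placeholders, and the deploy step assumes the gcr.io/cloud-builders images and the beta run commands current at the time of writing):

```yaml
# Build, push, then deploy to Cloud Run on every trigger run.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/YOUR_IMAGE', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/YOUR_IMAGE']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['beta', 'run', 'deploy', 'YOUR_SERVICE',
           '--image', 'gcr.io/$PROJECT_ID/YOUR_IMAGE',
           '--region', 'us-central1']
images: ['gcr.io/$PROJECT_ID/YOUR_IMAGE']
```

$PROJECT_ID is a standard Cloud Build substitution, so the same config works unchanged across projects.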

You can also deploy an existing container image, as long as it listens on the port defined by the PORT environment variable, or find one of the many projects sporting a shiny Deploy on Cloud Run button.


Explicitly declare and isolate dependencies

Since Cloud Run is a Bring-Your-Own container environment, you can declare whatever you want in this container, and the container encapsulates the entire environment. Nothing escapes, so two containers won’t conflict with each other.

Where your service depends on external resources, such as a database, those connections can be captured in environment variables, keeping your service stateless.
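For example, a Python service might declare and pin its dependencies inside the image like so (a sketch: the file names follow common convention rather than anything Cloud Run mandates):

```dockerfile
FROM python:3.7-slim
WORKDIR /app

# Explicitly declared dependencies, installed into (and isolated within)
# this image only.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
# app.py is a hypothetical entrypoint for this example.
CMD ["python", "app.py"]
```

Because everything the service needs is baked into the image, two services with conflicting dependency versions can run side by side without issue.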

It is important to note that there are some limitations on what you can put into a Cloud Run container due to the environment sandboxing, and on which ports can be used (more on ports when we get to port binding, below).


Store config in the environment

Yes, Cloud Run supports configuration stored in the environment by default, and one variable is even mandatory: you must listen for requests on the port defined by PORT, otherwise your service will fail to start.

To be truly stateless, your code goes in your container, and configurations are decoupled by way of environment variables. These can be declared when you create the service, in the Optional Settings.

Don’t worry if you miss this setting when you declare your service. You can always edit it again by clicking “+ Deploy New Revision” when viewing your service, or by using the --update-env-vars flag in gcloud beta run deploy.

Each revision you deploy is immutable, which makes revisions reproducible: the configuration is frozen. To make changes, you deploy a new revision.
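Inside the container, that configuration then arrives as plain environment variables. A sketch of how a service entrypoint might read them (variable names other than PORT are hypothetical):

```shell
#!/bin/sh
# PORT is the one variable Cloud Run always sets; default it for local runs.
PORT="${PORT:-8080}"

# Any other configuration, such as this hypothetical backing-service
# address, is read the same way, keeping the image environment-agnostic.
DB_HOST="${DB_HOST:-localhost}"

echo "starting on port ${PORT}, database at ${DB_HOST}"
```

The same image then runs unmodified in development, staging, and production, with only the environment differing.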

For bonus points, consider using berglas, which leverages Cloud KMS and Cloud Storage to secure your environment variables. It works out of the box with Cloud Run (and the repo even comes with multiple language examples).


Treat backing services as attached resources

Much like you would connect to any external database in a containerized environment, you can connect to a plethora of different hosts in the GCP universe.

And since your service cannot keep any internal state, any state it needs must live in a backing service.


Strictly separate build and run stages

Having separate build and run stages is simply how you deploy in Cloud Run land! If you set up continuous deployment with Build Triggers back in the first factor, then you’ve already automated that step.

If you haven’t, building a new version of your Cloud Run service is as easy as submitting your container image to Cloud Build:

gcloud builds submit --tag gcr.io/YOUR_PROJECT/YOUR_IMAGE .

and then deploying the built container image:

gcloud beta run deploy YOUR_SERVICE --image gcr.io/YOUR_PROJECT/YOUR_IMAGE

Cloud Run creates a new revision of the service, ensures the container starts, and then routes traffic to this new revision for you. If for any reason the new container fails to start, the service keeps serving the old revision, and no downtime occurs.

You can also set up continuous deployment with Cloud Build triggers, further streamlining your build, release, and run process.


Execute the app as one or more stateless processes

Each Cloud Run service runs its own container, and each container should have one process. If you need multiple concurrent processes, separate those out into different services, and use a stateful backing service (see the backing services factor above) to communicate between them.


Export services via port binding

Cloud Run follows modern architectural best practice: each service must expose itself on a port number, specified by the PORT environment variable.

This is the fundamental design of Cloud Run: any container you want, as long as it listens on the port it’s given (currently always 8080).
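In Dockerfile terms, the whole contract can be satisfied with something as small as this (a sketch, assuming a Python 3 base image; Cloud Run itself only cares that the process binds to $PORT):

```dockerfile
FROM python:3.7-slim
# The shell form of CMD lets us expand $PORT, which Cloud Run injects
# at runtime, falling back to 8080 when run locally.
CMD python3 -m http.server ${PORT:-8080}
```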

Cloud Run does support outbound gRPC and WebSockets, but does not currently accept these protocols inbound.


Scale out via the process model

Concurrency is a first-class factor in Cloud Run. You declare the maximum number of concurrent requests each container instance can handle. If incoming concurrent requests exceed this number, Cloud Run automatically scales out by adding container instances until all incoming requests are handled.
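As a back-of-the-envelope illustration (the numbers here are examples, not Cloud Run defaults), the instance count works out as a ceiling division:

```shell
# If each instance handles at most CONCURRENCY requests at once, serving
# IN_FLIGHT concurrent requests needs ceil(IN_FLIGHT / CONCURRENCY) instances.
CONCURRENCY=80   # declared per-instance maximum (example value)
IN_FLIGHT=400    # concurrent requests arriving (example value)
INSTANCES=$(( (IN_FLIGHT + CONCURRENCY - 1) / CONCURRENCY ))
echo "${INSTANCES} instances needed"
```

The limit itself can be set per service with the --concurrency flag when you deploy.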


Maximize robustness with fast startup and graceful shutdown

Since Cloud Run handles scaling for you, it’s in your best interest to make your services as efficient as they can be. The faster they start up, the more seamless scaling will be.

There are a number of tips around how to write effective services, so be sure to consider the size of your containers, the time they take to startup, and how gracefully they handle errors without terminating.


Keep development, staging, and production as similar as possible

A container-based development workflow means that your local machine can be the development environment, and Cloud Run can be your production environment! Even if you’re developing on a non-Linux machine, a local Docker container should behave the same way as the same container running elsewhere.

It’s always a good idea to test your container locally when developing; it makes for a faster, more iterative development loop.

To get the same port-binding behaviour as Cloud Run in production, make sure you run with a port flag:

PORT=8080 && docker run -p 8080:${PORT} -e PORT=${PORT} gcr.io/[PROJECT_ID]/[IMAGE]

When testing locally, consider whether you’re using any external GCP services, and ensure you point Docker at your authentication credentials.
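One common approach is to mount a service-account key into the container and point the standard GOOGLE_APPLICATION_CREDENTIALS variable at it (a sketch: the key file paths are placeholders):

```shell
# Mount a local service-account key (read-only) and tell the Google
# client libraries where to find it inside the container.
PORT=8080 && docker run \
  -v "$HOME/service-account.json":/tmp/keys/key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/key.json \
  -p 8080:${PORT} -e PORT=${PORT} \
  gcr.io/[PROJECT_ID]/[IMAGE]
```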

Once you’ve confirmed your service is sound, you can deploy the same container to a staging environment, and after confirming it’s working as intended there, to a production environment.

A GCP project can host many services, so it’s recommended that staging and production (or green and blue, or whatever you call your isolated environments) live in separate projects. This also ensures your databases are isolated across environments.


Treat logs as event streams

Cloud Run uses Stackdriver Logging out of the box. The “Logs” tab on your Cloud Run service view will show you what’s going on under the covers, including log aggregation across all dynamically created instances. Stackdriver Logging automatically captures stdout and stderr, and there may also be a native client for Logging in your preferred programming language.

In addition, since logs are captured in Stackdriver Logging, you can use its tooling to work with your logs further; for example, exporting to BigQuery.
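Because anything written to stdout becomes a log entry, you can also emit structured logs by printing one JSON object per line; Stackdriver will generally parse the line as a structured payload and recognise fields like severity (the other field names here are illustrative):

```shell
# One JSON object per line on stdout; Stackdriver parses it as a
# structured payload and promotes recognised fields such as "severity".
LOG_LINE='{"severity": "ERROR", "message": "upstream timeout", "revision": "my-service-00042"}'
echo "${LOG_LINE}"
```

Structured entries like this make filtering and exporting far easier than grepping free-form text.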


Run admin/management tasks as one-off processes

Administration tasks are outside the scope of Cloud Run. If you need to do any project configuration, database administration, or other management changes, you can perform these tasks using the GCP Console, gcloud CLI, or Cloud Shell.

A near-perfect score, as a matter of fact(or)

With the exception of one factor being out of scope, Cloud Run maps almost perfectly onto Twelve-Factor, which means it maps well to scalable, manageable infrastructure for your next serverless deployment. To learn more about Cloud Run, check out this quickstart.
