Deploying a Docker Container on a VM in Google Cloud Platform (Part 14)

In this blog post, you’ll learn how to deploy a Docker container on a Google
Compute Engine Virtual Machine (VM).

This post is part of the Dockerized Django Back-end API with Angular Front-end Tutorial. Check out all the parts of the tutorial there.

In the last part of the tutorial, we learned how to connect Django to a PostgreSQL database hosted on Google Cloud SQL. In this blog post, we'll use our Docker image to deploy a container running on a Google Compute Engine VM.

There are no repository changes made in this part of the tutorial. Nevertheless, to get the code to where we left off in the last blog post, use:

$ git checkout v1.19

Deploying the Container on a VM

After you’ve built your Docker image, the next step is to deploy the container on a Google Compute Engine VM.

The nice thing about Compute Engine is that it will automatically supply a Container-Optimized OS (COS) image with Docker installed. When the VM starts up, it will immediately launch your container.

There are two steps for deploying a container on Compute Engine:

  1. Bundle your application into a Docker image and publish it to Container Registry.
  2. Specify a Docker image name and the docker run configuration when creating a VM instance.

We’ve already done step 1 in the previous posts, where we published our Docker image to Container Registry using the images field in the build config file.
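If you ever need to rebuild and republish the image, the build can be re-run with Cloud Build. This is a sketch that assumes the build config file from the earlier posts is named cloudbuild.yaml — adjust the name if yours differs:

```shell
# Re-run the Cloud Build pipeline from the repository root; the images
# field in the config file publishes the built image to Container Registry.
# NOTE: cloudbuild.yaml is an assumed filename for the build config.
$ gcloud builds submit --config cloudbuild.yaml .
```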

We’ll now proceed to create a new VM instance.

Creating a New VM Instance Running a Container

The new VM instance can be created from the Google Cloud Console or with the gcloud command-line tool. I recommend the former, as the GUI makes the container options easier to discover and configure than command-line flags.

  1. Go to the VM instances page.

  2. Click the Create instance button to create a new instance.

  3. Give the instance a name such as todo-vm and set a Region and Zone of your choice.

  4. Under the Container section, check Deploy a container image to this VM instance.

  5. Specify a container image under Container image and click Advanced container options to see extra options.

Here, you should use the image you’ve published to Container Registry in the previous posts. Note that yourproject in the image path should be the GCP project ID you’ve used in Cloud Build.

Next, tick the Allocate a buffer for STDIN and Allocate a pseudo-TTY checkboxes. We’ll need these later to open an interactive shell inside the container.

  6. Set environment variables.

In the Environment variables section we’ll generally have sensitive variables that we don’t want to include in the repository for security reasons (e.g. passwords, secret keys, etc.).

These variables are looked up by the settings in the django/todoproj directory.

Add the following environment variables to your container:

  • DJANGO_SECRET_KEY – this should be set to Django’s production secret key. Review part 9 of this tutorial to remember why it wasn’t included in the repo.
  • DB_NAME_DJANGO – the production Django database on the Cloud SQL instance you’ve created in the previous part of the tutorial.
  • DB_USER_DJANGO – the Django user on the Cloud SQL instance.
  • DB_PASSWORD_DJANGO – the Django user’s password on the Cloud SQL instance.
  • CLOUD_SQL_INSTANCE_IP – the IP address of the Cloud SQL instance.
  7. Under the Firewalls section, tick Allow HTTP traffic and Allow HTTPS traffic since we are indeed serving HTTP requests.

  8. Redirect HTTP traffic from port 80 to port 8080.

Remember how we’re running uWSGI as non-root on port 8080 in production?

We do this because a non-root user can’t bind to privileged ports below 1024, such as port 80.

Therefore, we need to forward incoming traffic on port 80 to port 8080 using the following iptables rule:

iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080

Add this iptables rule to the startup-script field of the Custom metadata section in the Advanced container options.
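The startup-script value would then look something like this — a sketch, where the iptables rule is the one shown above wrapped in a standard startup script:

```shell
#! /bin/bash
# Runs on every VM boot. Forward incoming traffic on port 80 to port 8080,
# where the uWSGI server inside the container is listening.
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
```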

Good to Know

Since the container currently shares its network ports with the host, the port to which you’re redirecting traffic must not be used by another process on the OS.

Later, while connected to the VM, you can check the listening ports and their owning processes with the netstat command:

$ netstat -tulpn | grep LISTEN

  9. Create the VM by clicking the Create button.
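For reference, the GUI steps above roughly correspond to a single gcloud invocation. This is a sketch with placeholder values — substitute your own image path, zone and secrets; the flags are those of gcloud compute instances create-with-container:

```shell
# Create a Container-Optimized OS VM that runs the container on boot.
# [ZONE], [IMAGE] and the environment variable values are placeholders.
$ gcloud compute instances create-with-container todo-vm \
    --zone=[ZONE] \
    --container-image=[IMAGE] \
    --container-stdin --container-tty \
    --container-env=DJANGO_SECRET_KEY=[SECRET],DB_NAME_DJANGO=[DB_NAME],DB_USER_DJANGO=[DB_USER],DB_PASSWORD_DJANGO=[DB_PASSWORD],CLOUD_SQL_INSTANCE_IP=[DB_IP] \
    --tags=http-server,https-server \
    --metadata=startup-script='iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080'
```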

One-time Setup for the Django Container

Now that our VM is running, there is one more task we need to perform: setting up the database tables for our Django API.

First, connect to the VM using the command:

$ gcloud compute ssh [INSTANCE_NAME]

Replace [INSTANCE_NAME] with the name you’ve given to your Compute Engine instance.

Next, list the running containers to find out the name of the Django container:

$ docker container ps

You should then see output listing the running containers.

In that output, the Django container has an autogenerated name such as klt-todo-vm-tsgu.

Let’s connect to the container using the command:

$ docker exec -it YOUR_DJANGO_CONTAINER_NAME bash

Now that we’re in the container, we can simply apply the database migrations using:

$ python manage.py migrate

As said at the start of the blog post, there are no code changes in the repository for this part of the tutorial.

At this point, you should have the Django REST API live on a production VM on GCP.

Well done!


In this part of the tutorial, we’ve learned how to deploy a Docker container on a Google Compute Engine VM. We’ve covered the most important container settings and how to forward packets to the port where the uWSGI Django server is listening for requests.

We now have a live REST API, but how can we serve the Angular front-end?

In the next (and last) part of the tutorial, we’ll see how to serve our Angular app’s static files from Google Cloud Storage.

About the Author Dragos Stanciu
