Building and Deploying Docker Images to AWS ECR: A Step-by-Step Guide

As a full-stack developer, I frequently work with Docker to package up applications and their dependencies into portable, reproducible units called containers. Hosting these Docker images in a remote registry makes it easy to share them across teams and deploy them to various environments.

In this tutorial, I'll walk you through the process of building a Docker image for a sample Node.js app and pushing it to Amazon's Elastic Container Registry (ECR) service. Whether you're new to containerization or looking to deepen your understanding, this guide will equip you with the practical knowledge and commands to start deploying your own applications with Docker and ECR.

What is Docker?

Before diving in, let's make sure we're on the same page about what Docker is and why it's so popular. At its core, Docker is a platform that allows you to automate the deployment and running of applications inside isolated containers.

You can think of a container as a lightweight, stand-alone executable package that contains everything needed to run a piece of software – the code, runtime, libraries, environment variables, and config files. By encapsulating the application and its dependencies in this self-contained unit, a container guarantees that the software runs the same way regardless of the environment it is deployed to.

This decoupling of the application from the underlying infrastructure is what makes containers so powerful – you can build an image once and then run it as a container on any machine running Docker, without worrying about dependency conflicts or environment inconsistencies. This portability and consistency are a huge boon for development and operations.

[Diagram: how Docker containers encapsulate an application and its dependencies]

Some key benefits of using Docker containers:

  • Accelerate developer onboarding and eliminate "works on my machine" issues by ensuring consistent environments
  • Simplify and speed up application testing, deployment and scaling
  • Maximize hardware efficiency and reduce cloud computing costs by enabling higher density of applications per server compared to virtual machines
  • Enable a microservices architecture where large apps can be broken down into smaller, loosely coupled services in separate containers

While Docker started out focused on Linux, it now runs on Windows and macOS as well. It has seen massive adoption and is used across the spectrum from solo developers to large enterprises.

Installing and Configuring Docker

The first step to start working with Docker is to install it on your local machine. Docker provides installers for all major operating systems on its website. I'll walk through the setup for macOS.

  1. Download the installer .dmg file from the official Docker site: https://docs.docker.com/docker-for-mac/install/

  2. Double-click the .dmg file to open it and then drag the Docker icon to your Applications folder. This will install Docker as well as Docker Compose, a tool for defining and running multi-container applications.

[Screenshot: dragging Docker to the Applications folder on a Mac]

  3. Launch Docker the same way you launch any other application on your Mac. When prompted, enter your system credentials to authorize Docker to run with root privileges.

  4. Once the whale icon in the top status bar stays steady, Docker is up and running! Click the icon to view options and the status of Docker.

[Screenshot: Docker whale icon in the Mac menu bar]

  5. To confirm Docker is properly installed, open a terminal window and run:
docker --version

You should see output similar to:

Docker version 20.10.17, build 100c701
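
For an end-to-end check, you can also run Docker's hello-world test image, which pulls a tiny image from Docker Hub and runs it as a container:

docker run hello-world

If everything is wired up correctly, it prints a greeting that walks through the steps Docker just performed.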

With Docker running locally, you're now ready to start working with images and containers. But first, let's understand what a Dockerfile is.

What is a Dockerfile?

A Dockerfile is simply a text file that contains the instructions to assemble an image. It specifies the base image to start from, the commands to configure that base image, and what processes to run when launching a container from this image.

Here's an example of a basic Dockerfile:

FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Let's break this down line by line:

  • FROM node:14 specifies the base image to start from, in this case the official Node.js LTS image.
  • WORKDIR /app sets the working directory for any subsequent instructions.
  • COPY package*.json ./ copies the package.json and package-lock.json files from your current directory on the host machine into the /app directory in the image.
  • RUN npm install installs the app dependencies listed in package.json.
  • COPY . . copies the rest of the application code into the image (see the .dockerignore tip just after this list).
  • EXPOSE 3000 documents that the container listens on port 3000 at runtime.
  • CMD ["node", "server.js"] specifies the command to run when the image is launched as a container, in this case starting the Node.js server.
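
One caveat worth flagging with COPY . .: it copies everything in the build context into the image, including a local node_modules folder if one exists. Placing a .dockerignore file next to the Dockerfile excludes such paths from the build context, keeping the image lean and builds fast. A minimal example (these entries are a common starting point, not something this particular app requires):

node_modules
npm-debug.log
.git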

Once you have a Dockerfile defined, you're ready to build the actual image.

Building a Docker Image

To demonstrate a concrete example, let's say we have a simple Express.js app with the following project structure:

.
├── Dockerfile
├── package.json
└── server.js
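
The tutorial doesn't show the contents of server.js, so here is a minimal sketch of what it might look like – a bare-bones Express server listening on port 3000. The route and response text are hypothetical placeholders; the only real requirement is that the server listens on the port the Dockerfile EXPOSEs. This assumes express is declared as a dependency in package.json.

// server.js – a minimal Express app (illustrative sketch)
const express = require("express");

const app = express();

// Respond to requests at the root path
app.get("/", (req, res) => {
  res.send("Hello from Docker!");
});

// Listen on the port the Dockerfile EXPOSEs and docker run will map
app.listen(3000, () => {
  console.log("Server listening on port 3000");
});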

Assume the Dockerfile contents are what we saw in the previous section. To build an image from this, run the following command from the directory containing the Dockerfile:

docker build -t myapp:v1 .

Here's what those options mean:

  • -t myapp:v1 tags the image with a name (myapp) and version (v1)
  • . specifies the build context as the current directory

You'll see Docker step through each instruction in the Dockerfile, building up the image layer by layer. Once complete, the image is stored locally in Docker's image cache.
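
If you want to see how those layers map back to the Dockerfile instructions, Docker can list them for you:

docker history myapp:v1

Each row is a layer, showing the instruction that created it and the layer's size.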

To view all images on your machine, run:

docker image ls

You should see your newly built image listed:

REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
myapp               v1                  8e5d7bd6b0d2        About a minute ago   945MB

You can now launch a container from this image with:

docker run -p 3000:3000 myapp:v1 

This starts a container from the myapp:v1 image, mapping port 3000 in the container to port 3000 on your machine. You can hit http://localhost:3000 to verify the app is running.
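
To verify from the terminal instead of a browser:

curl http://localhost:3000

If you'd rather not tie up your terminal session, pass the -d flag to run the container detached in the background:

docker run -d -p 3000:3000 myapp:v1

You can then use docker ps to list running containers and docker stop <container-id> to shut it down.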

Introduction to Amazon ECR

While you can run containers directly from images on your local machine, to take full advantage of Docker you'll want to store your images in a remote registry that can be accessed by others. This is where Amazon Elastic Container Registry comes in.

Amazon ECR is a fully managed Docker registry service that allows you to store, manage, and deploy Docker images in a scalable and secure manner. It integrates deeply with other AWS services like ECS and EKS for running containers in production.

Some key benefits of using ECR:

  • Secure – ECR transfers container images over HTTPS and automatically encrypts images at rest
  • Highly available – ECR is built to be highly available and resilient across multi-AZ infrastructure
  • Scalable – ECR allows you to store and distribute a large number of container images reliably
  • Integrated – ECR is tightly integrated with IAM for access control and CloudTrail for logging actions taken on repositories

To work with ECR, you'll first need to make sure you have the AWS CLI set up locally.

Configuring the AWS CLI

  1. Install the AWS CLI by following the instructions for your OS here: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html

  2. Once installed, configure the CLI with your AWS access key and secret:

aws configure

You'll be prompted to enter four pieces of information:

  • AWS Access Key ID
  • AWS Secret Access Key
  • Default region name (e.g. us-east-2)
  • Default output format (e.g. json)

Your AWS access keys can be generated from the IAM console – just be sure to store them securely!

[Screenshot: configuring the AWS CLI with access keys]
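
To double-check that the CLI is picking up your credentials, you can ask AWS who you're authenticated as:

aws sts get-caller-identity

This returns the account ID, user ID, and ARN tied to your access keys.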

With the AWS CLI ready to go, let's create an ECR repository to store our Docker images.

Creating an ECR Repository

You can create an ECR repository either from the AWS web console or using the AWS CLI. Let's use the CLI approach.

To create a repository named myapp, run:

aws ecr create-repository --repository-name myapp --region us-east-2

Be sure to specify the AWS region you want the repository created in.

You'll get back a JSON response containing metadata about the new repository, most importantly the repository URI, which we'll need in order to push images:

{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-east-2:123456789012:repository/myapp",
        "registryId": "123456789012",
        "repositoryName": "myapp",
        "repositoryUri": "123456789012.dkr.ecr.us-east-2.amazonaws.com/myapp",
        "createdAt": 1666832054.0,
        ...
    }
}

Make note of the repositoryUri value, as we'll be using it shortly.
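
If you lose track of the URI, you can always list your repositories and their metadata again:

aws ecr describe-repositories --region us-east-2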

Pushing a Docker Image to ECR

To push our previously built Docker image to our new ECR repository, we need to tag it with the repositoryUri and then push it using the Docker CLI.

First, tag the local image:

docker tag myapp:v1 123456789012.dkr.ecr.us-east-2.amazonaws.com/myapp:v1

Next, authenticate the Docker CLI to your ECR registry by retrieving a temporary authentication token and piping it to docker login:

aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-2.amazonaws.com

If successful, you'll see a "Login Succeeded" message. Note that the authorization token is valid for 12 hours, after which you'll need to authenticate again.

Finally, push the tagged image to ECR:

docker push 123456789012.dkr.ecr.us-east-2.amazonaws.com/myapp:v1

You'll see the upload progress, and once complete, the image will be available in your ECR repository! You can verify this in the AWS web console:

[Screenshot: ECR repository showing the pushed Docker image]
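
You can also confirm from the command line by listing the image tags in the repository:

aws ecr list-images --repository-name myapp --region us-east-2

And since the image now lives in a remote registry, any machine with Docker and credentials for your AWS account can pull it after authenticating the same way we did above:

docker pull 123456789012.dkr.ecr.us-east-2.amazonaws.com/myapp:v1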

And there you have it – you've successfully built a Docker image, created an ECR repository, and pushed the image to it. You're now on your way to deploying containerized applications in a scalable, reproducible way.

Conclusion and Next Steps

In this article, we covered the fundamentals of building Docker images locally and pushing them to a remote registry using Amazon ECR. These are essential skills for any developer or DevOps engineer working with containers.

While we focused on a simple Node.js example, the same principles apply to any application or tech stack you can containerize with Docker. I encourage you to try this process out with your own applications.

There are many advanced topics we didn't cover here, such as multi-stage builds, image vulnerability scanning, and setting up CI/CD pipelines to automatically build and deploy containers. How you configure these will depend on your team's specific needs and workflows.

Some great next steps to level up your container skills:

  • Dive deeper into the Docker command line and Dockerfile options
  • Learn how to use ECS or Kubernetes to orchestrate and run containers in production
  • Set up automated builds that trigger on code pushes to a repository
  • Implement a multi-stage CI/CD pipeline that builds, tests, and deploys containers
  • Explore serverless container options like AWS Fargate or Google Cloud Run

I hope this guide has been helpful in your journey working with Docker and AWS. The container ecosystem is constantly evolving, but mastering these foundational skills will serve you well in deploying all types of modern applications. Happy containerizing!
