Allowing Long Idle Timeouts when using AWS ElasticBeanstalk and Docker

A client I work with had a requirement for a 60-second-plus HTTP connection timeout when running Docker on ElasticBeanstalk. Specifically, one of the engineers noticed that any HTTP request taking 60 seconds or more to complete was not being returned by the ElasticBeanstalk application.

Identifying the Cause of the 60 Second Dropped Connections:

The 60 second timeout is actually set in two locations, described below:

  1. The Amazon Elastic Load Balancer, which uses a default “Idle Timeout” value of 60 seconds. The “Idle Timeout” of a given Elastic Load Balancer can be changed easily.

    ElasticBeanstalk ELB configured with 600 second Idle Timeout.
  2. The nginx Application that acts as a proxy server in front of the Docker container also has a default timeout. The nginx default timeout is not exposed for configuration – you’ll need to modify the nginx configuration through the use of an .ebextensions file or another method. This will also be described within this blog post.
ElasticBeanstalk – HTTP Request Flow
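The ELB “Idle Timeout” can also be raised in code rather than through the console – below is a minimal .ebextensions sketch, assuming a classic Elastic Load Balancer. The file name is a placeholder; the namespace and option name come from Elastic Beanstalk’s aws:elb:policies configuration namespace:

```yaml
# .ebextensions/elb-timeout.config (placeholder file name)
# Raises the classic ELB Idle Timeout from the 60 second default to 600 seconds.
option_settings:
  - namespace: aws:elb:policies
    option_name: ConnectionSettingIdleTimeout
    value: 600
```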

Setting the nginx Timeouts:

The method I used for setting the nginx timeouts can be described, at a high level, as:

  1. creating an “ebextension” file that modifies the default nginx configuration used by ElasticBeanstalk. ebextension files are used by Amazon to modify the configuration of ElasticBeanstalk instances.
  2. creating a ZIP format “package” containing your application file as well as the .ebextension file used to modify the ElasticBeanstalk configuration.

The details are below:

  • Create an “ebextension” file within the root of your project – the file should be at the path .ebextensions/nginx-timeout.config.
  • The content of the file is described below:
    files:
      "/etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf":
        mode: "000644"
        owner: root
        group: root
        content: |
          proxy_connect_timeout       600;
          proxy_send_timeout          600;
          proxy_read_timeout          600;
          send_timeout                600;

    commands:
      enable-nginx-timeout-conf:
        command: "if [[ ! -h /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ]] ; then ln -s /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ; fi"
  • Create an application package by running the following from the root of your application’s directory:
    • zip -r ../application-name-$ .ebextensions
    • the command above packages your application file as well as the contents of the .ebextensions directory
  • Upload the resulting application-name-$ to AWS ElasticBeanstalk to deploy your application with the nginx timeouts.
  • Note that I’ll continue testing to find the ideal values for proxy_connect_timeout, proxy_send_timeout, proxy_read_timeout and send_timeout.

An Introduction to the AWS EC2 Container Service


Below is a quick introduction to the AWS EC2 Container Service. The most valuable part of this post may be the link to the CloudFormation stack that will build an EC2 Container Service Cluster and all associated resources. Creating this CloudFormation stack was the moment when I really understood ECS at a nuts and bolts level.

How I came to use ECS at all:

On November 13, 2014, Amazon released the “Amazon EC2 Container Service” – the news didn’t excite me much. Containerization, specifically Docker, was a hot technology, but my frustration with the industry’s limited understanding of Docker’s benefits and costs, coupled with the quality and thought behind the Docker implementations I’d seen, meant I had little interest in the EC2 Container Service. My own clients were happy with Docker and didn’t require container packing or Docker links – so there was no push to move to ECS or another container management system. I put ECS on the shelf to be revisited. The day I revisited ECS came in mid-June, when one of our developers needed to expose a port through an ELB “fronting” an Elastic Beanstalk Environment and also needed to publish multiple ports on each EC2 instance making up the Elastic Beanstalk Environment. Stated simply, the EC2 instances needed to expose port 8080 to a Load Balancer, and the EC2 instances also needed to communicate with each other across an array of ports (specifically, we were running a tomcat7-hosted Java application that utilized Apache Ignite with a Replicated Cache).

Setting Up ECS:

My initial work with ECS was a challenge because of the ECS lexicon – “task definitions, tasks, services, container definitions” – and the platform’s lack of maturity – AWS actually added CloudFormation support during my ECS prototyping. In any case, I’ll describe each of the resources that make up ECS, how they are used and how they relate to each other:

Clusters and Container Instances:

Clusters and Container Instances will make up the basis of your ECS Infrastructure. A quick diagram demonstrates the relationship between Clusters and Container Instances.

EC2 Container Service - Cluster and Container Instances

Clusters:
  • A Cluster is a group of Container Instances.
Container Instances:
  • Container instances run one or more containers
  • Container instances must run the “ECS Container Agent” which will register with an ECS Cluster
    • the “Container Agent” is really well thought out – for instance, you can set or pass values to the ECS Container Agent during install so that EC2 instances within an Auto Scaling Group are automatically registered with an ECS cluster

Task Definitions, Tasks and Services:

You’ll run your application by creating a Task Definition and then running the Task Definition across one or more Container Instances. I’ll explain the Task Definitions, Tasks and Services resources below.

Task Definition:

A task definition contains three parts:

  • Container Definitions – this is a list of one or more containers that make up a task. An example would be an nginx container running front-end code and an nginx container running back-end code. An example of two different “Task Definitions” is given below – one task definition utilizes only one container, while a second, different task definition requires two different containers.
    EC2 Container Service - Task Definition
  • Family – an arbitrary name for the task definition – if you iteratively modify your Task Definitions you can utilize the “Family” to keep these different versions.
  • Volumes (optional) – a list of volumes that will be made available to the containers running within a given Container Instance.
    • I have not needed to utilize Volumes as I’ve been able to clearly communicate or utilize other services to avoid the requirement for Block Storage.
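To make these parts concrete, a minimal task definition looks roughly like the following – note that “my-task-family” and the stock nginx image are illustrative placeholders, not taken from the setup above:

```json
{
  "family": "my-task-family",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx:latest",
      "cpu": 256,
      "memory": 128,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}
```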

A task is created when a “Task Definition” is run on a container instance. You can use one “Task Definition” to instantiate containers on multiple hosts – for instance, when you click on “Run new task” and select a task definition you’ll be asked how many “tasks” you want to run. If you have a task definition that contains two “Container Definitions” and you want to run 2 tasks, Amazon will place two containers on one of the Container Instances and two containers on the other Container Instance.
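The “Run new task” flow above has a CLI equivalent – a sketch with placeholder cluster and task definition names, printed with echo rather than executed since the real call requires AWS credentials and an existing cluster:

```shell
#!/bin/sh
# Sketch only: "my-cluster" and "my-task-family" are placeholders.
# Echoed rather than executed - the real call needs AWS credentials.
echo aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-task-family:1 \
  --count 2
```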


A service is a “superset” of a Task Definition. An example of a “Service” is shown below – note the addition of an Elastic Load Balancer.

EC2 Container Service - Services

When you create a service you define the following:

  • the “Task Definition” used to determine which Tasks/Containers will be run
  • a desired number of tasks
  • a Load Balancer

In return for the bit of extra work, the service will:

  • “run and maintain a specified number of tasks.” If you ask for 4 tasks when you instantiate a service, ECS will ensure you always have 4 tasks running.
  • utilize a Load Balancer to route traffic to the Container Instances that are running a given Container.
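Creating such a service with the AWS CLI can be sketched as follows – every name below (cluster, service, task definition, load balancer, IAM role) is a placeholder, and the command is printed rather than executed since it requires AWS credentials:

```shell
#!/bin/sh
# Sketch only: all resource names are placeholders; echoed, not executed.
echo aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-task-family:1 \
  --desired-count 4 \
  --load-balancers loadBalancerName=my-elb,containerName=nginx,containerPort=80 \
  --role ecsServiceRole
```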
Services versus Tasks:

Tasks:

  • if a task fails, ECS does not return the task to service
  • when defining a “Task Definition” you are not allowed to define a Load Balancer

Services:

  • if a task fails, ECS will return the task to service
    • an example would be the loss of a Container Instance that drops the number of running tasks below the desired number of tasks – when a new Container Instance is brought into the cluster, a new task will be started
  • when defining a “Service” you are optionally allowed to define a Load Balancer

Auto Scaling with ECS:

  • Regarding “Container Instances” as part of an Auto Scaling Group, here is a real-world example of this benefit:
    • terminate a Container Instance that is a member of an ECS Cluster and an Auto Scaling Group
    • the Auto Scaling Group will bring a new EC2 Instance into service
    • the new EC2 Instance’s user data contains instructions for configuring and installing the ECS Container Agent
    • the ECS Container Agent registers the new EC2 Instance with the cluster
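The user data step above can be sketched as follows – “my-cluster” is a placeholder cluster name. On a real Container Instance the agent reads this setting from /etc/ecs/ecs.config; the sketch writes to a local file so it can run anywhere:

```shell
#!/bin/sh
# User data sketch: register an instance with a placeholder cluster name.
# On a real Container Instance the target file is /etc/ecs/ecs.config;
# a local file is used here so the sketch can run outside of EC2.
echo "ECS_CLUSTER=my-cluster" >> ./ecs.config
cat ./ecs.config
```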

Running ECS:

For those who prefer to “learn by doing” – my “Snippets” repository on GitHub contains a CloudFormation file that will create a working ECS Cluster running nginx and all other required resources. The link is here:

Future Post:

I’ll be covering some of the following in a future post:

  • How to best utilize Security Groups when working with ECS?
    • Note: it would seem obvious that Security Groups aren’t sufficient for securing Docker containers within ECS – maybe AWS is working on something?
  • Best practices for building clusters – do you build one large cluster for all environments, do you build one cluster per environment?
  • Does AWS / ECS distribute services/tasks across Availability Zones correctly?
  • Can I utilize ECS to have an unequal number of containers – for instance, can I have a 2 to 1 relationship of front-end to back-end instances?
  • Tuning resource usage of Containers.
  • How are “tasks” within a service considered healthy?
  • How to delete task definitions from AWS?

Using Elastic Beanstalk with a Docker Registry


Combining DockerHub (using either public or private repositories) or your own Docker registry and Amazon’s Elastic Beanstalk creates a very simple method of delivering a versioned application into a production environment.

The Benefit of Elastic Beanstalk with DockerHub:

For smaller engineering organizations, for organizations running lean on Operations, or for those staying within the walled garden of AWS, the Elastic Beanstalk <-> DockerHub/Registry combination offers low operational overhead, versioned application rollout/rollback and a reliable PaaS. Assuming you are already building Docker images, the entire setup can be completed within half a day.

Overview of Elastic Beanstalk and Docker Deployment Flow:

  1. Build source code and dependencies into a Docker image.
  2. Push the image to DockerHub (use “docker push”).
  3. Push an “Application Version” to Elastic Beanstalk (use a Dockerrun file).
  4. Deploy the application version to a “QA”/“Staging”/“Pre-production” environment, if required.
  5. Deploy the application version to Production.

The diagram below is a visual representation of the deployment flow, from start (creating an image) to finish (deployment into Production).

ElasticBeanstalk Deployment from DockerHub

Initial Setup:

If you wish to utilize a private DockerHub repository or your own registry/repository combination that requires authentication, you’ll need to do a bit of preparation. This preparation is described below.

1. You will need to create and upload a “.dockercfg” file to S3. This file provides authentication information to Amazon’s Elastic Beanstalk.


You can create a .dockercfg file by running docker login and entering your username and password at the login prompt. This will create a .dockercfg file in the format required by Elastic Beanstalk.
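For reference, the legacy .dockercfg format produced by docker login looks roughly like the following – all values are placeholders, and the auth value is a base64-encoded “username:password” pair:

```json
{
  "https://index.docker.io/v1/": {
    "auth": "<base64 of username:password>",
    "email": "user@example.com"
  }
}
```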

2. If you do use a dockercfg file, the Elastic Beanstalk hosts will need access to it through an AWS IAM instance profile with permission to read the file from S3.

Performing a Deployment:

Upload an Image to DockerHub

Upload Docker image(s) to DockerHub. This makes images available for Elastic Beanstalk or any other host capable of running Docker. A typical Docker push can be done through the command line:

docker push my_organization/my_repository:version (example: cloudavail/test_tomcat_app:0.1.0)

or as a target when using a build tool. For example, I’ve pushed Docker images using Transmode’s gradle-docker plugin.

Create a Dockerrun file for the given “Application Version”

The Dockerrun file is used to describe an application to the Elastic Beanstalk platform. Practically speaking, a Dockerrun file creates an “Application Version” and instructs Elastic Beanstalk how to do both of the following:

  • “get” an image from DockerHub
  • configure a Docker container.

A Dockerrun file looks something akin to the following:

{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my_bucket",
    "Key": "my_dockercfg_file"
  },
  "Image": {
    "Name": "cloudavail/my_tomcat_app:0.1.0",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}

The “Authentication” section is required if authentication is needed to access images stored in a private repository: “my_bucket” is the name of the bucket where the authentication file is stored and “Key” is the path to the given authentication file. In my example, I’d use something akin to the following:

"Bucket": "cloudavail_releases",
"Key": "docker/dockercfg"

The “Image” section contains the path to a given image and tag.
The “Ports” section contains the port that should be open – which, by default, is mapped to port 80.

Upload the “Application Version” to Elastic Beanstalk

You’ll need to submit the previously created “Dockerrun” file to Amazon. You can do this in one of two ways:

  1. Upload through the UI.
  2. Upload the Dockerrun file to S3 and then issue an API call using Amazon’s CLI tools or custom tooling of your choice.
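The second option can be sketched with the AWS CLI as follows – bucket, key, application and environment names are all placeholders, and the commands are printed rather than executed since they require AWS credentials:

```shell
#!/bin/sh
# Sketch only: all names are placeholders; echoed rather than executed.
echo aws s3 cp Dockerrun.aws.json s3://my-bucket/releases/0.1.2/Dockerrun.aws.json
echo aws elasticbeanstalk create-application-version \
  --application-name my-app \
  --version-label 0.1.2 \
  --source-bundle S3Bucket=my-bucket,S3Key=releases/0.1.2/Dockerrun.aws.json
echo aws elasticbeanstalk update-environment \
  --environment-name my-app-qa \
  --version-label 0.1.2
```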

One cool thing to note – Amazon retains old application versions to allow versioned rollout and rollback. For instance, you can select an application version “0.1.2”, deploy this version to QA, test and then deploy the exact same version to Production. If you needed to rollback, you can select the application version 0.1.1 and deploy this version to Production. The screenshot below demonstrates this:

ElasticBeanstalk - Application Versions

Deploy the newly created Application Version

  1. Deploy through the UI.
  2. Deploy through an API call using Amazon’s CLI tools or custom tooling of your choice.