ELB Behavior when Utilizing Backend Instances with Self-Signed Certs

There are AWS questions that I don’t know the answer to – and sometimes these questions need answers. About a week ago, the following question was posed:

How does an Amazon Web Services ELB function when the backend instances utilize a self-signed certificate?

I was lucky enough to have the time to investigate. If you are just looking for the answer, see “short answer” below. For instructions (and a CloudFormation file) that allow you to duplicate my work, see “long answer” further below.

Short answer:

Yes. The AWS ELB will work with backend instances that utilize a self-signed certificate.

Long answer:

If you’d like to duplicate the test, I utilized a CloudFormation file that builds this very infrastructure (an ELB, an Auto Scaling Group, and Ubuntu instances running Apache accepting HTTPS connections on port 443). You can get this file from my “Snippets” repository at https://github.com/colinbjohnson/snippets/tree/master/aws/elb/elb_backend_selfsigned_cert. A diagram describing the configuration is below:
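For reference, a self-signed certificate like the one used by the backend Apache instances can be generated with openssl – a minimal sketch, where the file paths and the Common Name are placeholders, not the values from the CloudFormation template:

```shell
# generate a 2048-bit key and a self-signed certificate in one step
# (paths and CN are placeholders - adjust for your Apache configuration)
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout /tmp/backend-selfsigned.key \
  -out /tmp/backend-selfsigned.crt \
  -days 365 \
  -subj "/CN=backend.example.internal"

# inspect the result - issuer and subject are identical for a self-signed cert
openssl x509 -in /tmp/backend-selfsigned.crt -noout -subject -issuer
```

Point Apache’s SSLCertificateFile and SSLCertificateKeyFile directives at the generated files and the instance will accept HTTPS connections on port 443.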



After performing tests to ensure that the backend instances were healthy and serving requests, I wanted to dig a bit deeper to confirm that the data was, in fact, encrypted. I went ahead and ran some requests against the backend web servers and utilized tcpdump on a backend instance, capturing data on port 443. Please note that the Security Group utilized in testing only allows port 443 inbound, so I could have run this test without filtering on “port”. A screen capture of the data captured by tcpdump is shown below:

Backend Instance – tcpdump Capture

I ran another tcpdump capture and loaded the data into Wireshark for visualization – the result is shown below:

Backend Instance – Encrypted Data

Notice that the capture has no filters applied – and, specifically, that the payload is shown as “Encrypted Application Data.” If this were an HTTP connection, I would be able to view the data being sent. After the packet capture my curiosity was satisfied – the body of HTTP requests was being encrypted in transit.


If you require “end to end” encryption of data in transit, you can utilize backend instances with self-signed certificates.
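You can reproduce the core of this result locally, without an ELB, using openssl’s built-in test server – a sketch, where the port number, file paths, and CN are throwaway assumptions:

```shell
# create a throwaway self-signed certificate (placeholder paths and CN)
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 -subj "/CN=localhost"

# start a TLS server using the self-signed cert, in the background
openssl s_server -key /tmp/demo.key -cert /tmp/demo.crt -accept 4433 -www &
SERVER_PID=$!
sleep 1

# connect as a client - the handshake completes and the channel is
# encrypted even though no CA signed the certificate; the verify
# result will report a self-signed certificate in the chain
echo "Q" | openssl s_client -connect localhost:4433 > /tmp/handshake.txt 2>&1

kill $SERVER_PID
grep "subject" /tmp/handshake.txt
```

The handshake output shows the certificate subject and a verification warning – which is exactly the ELB’s position: it does not validate the backend certificate chain, but the session is still encrypted.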


Allowing Long Idle Timeouts when using AWS ElasticBeanstalk and Docker

A client I work with had a requirement for HTTP connections remaining open for 60 seconds or more when running Docker on ElasticBeanstalk. Specifically, one of the Engineers noticed that any HTTP requests taking 60 seconds or more to complete were not being returned by the ElasticBeanstalk application.

Identifying the Cause of the 60 Second Dropped Connections:

The 60 second timeout is actually set in two locations, described below:

  1. The Amazon Elastic Load Balancer, which uses a default “Idle Timeout” value of 60 seconds. The “Idle Timeout” of a given Elastic Load Balancer can be changed easily (http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-idle-timeout.html).

    ElasticBeanstalk ELB configured with a 600 second Idle Timeout.
  2. The nginx Application that acts as a proxy server in front of the Docker container also has a default timeout. The nginx default timeout is not exposed for configuration – you’ll need to modify the nginx configuration through the use of an .ebextensions file or another method. This will also be described within this blog post.
ElasticBeanstalk - HTTP Request Flow
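The first timeout can also be raised from within the application bundle itself, rather than through the console – a sketch of an .ebextensions option_settings fragment, assuming the aws:elb:policies namespace applies to your environment’s load balancer:

```yaml
# .ebextensions/elb-timeout.config - raise the ELB Idle Timeout to 600 seconds
option_settings:
  aws:elb:policies:
    ConnectionSettingIdleTimeout: 600
```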

Setting the nginx Timeouts:

The method I used for setting the nginx timeouts can be described, at a high level, as:

  1. creating an “ebextension” file that modifies the default nginx configuration used by ElasticBeanstalk. ebextension files are used by Amazon to modify the configuration of ElasticBeanstalk instances.
  2. creating a ZIP format “package” containing a Dockerrun.aws.json file as well as the .ebextension file used to modify the ElasticBeanstalk configuration.

The details are below:

  • Create an “ebextension” file within the root of your project – the file should be at the path .ebextensions/nginx-timeout.config.
  • The content of the file is described below:
  "/etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf":     mode: "000644"
    owner: root
    group: root
    content: |
      proxy_connect_timeout       600;
      proxy_send_timeout          600;
      proxy_read_timeout          600;
      send_timeout                600;
    command: "if [[ ! -h /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ]] ; then ln -s /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ; fi"
  • Create an application package by running the following from the root of your application’s directory:
    • zip -r ../application-name-$version.zip .ebextensions Dockerrun.aws.json
    • the command above will package the “Dockerrun.aws.json” file as well as the contents of the .ebextensions directory
  • Upload the resulting application-name-$version.zip to AWS ElasticBeanstalk to deploy your application with the nginx timeouts.
  • Note that I’ll be continuing to do testing around the ideal values for the proxy_connect_timeout, proxy_send_timeout, proxy_read_timeout and send_timeout values.

Inner Workings of the AWS ELB, Part 1


Ever wonder what an AWS ELB is made of? This blog post will:

  1. Provide more transparency into AWS ELB for those who rely on AWS technology.
  2. Let ELB users know what they can expect from an Amazon ELB.
  3. Let ELB users know what they can ask Amazon for.

What is an ELB?

Simply stated: an ELB is made up of one (or more) EC2 instances per availability zone, with these instances routing traffic to back-end EC2 instances. While AWS does not publicly document ELB architecture, the following should be convincing:

Behavior of ELB when Availability Zones are Added and Removed:
  1. Create an ELB with only one availability zone (example: us-east-1a)
  2. Run “host” with the ELB’s hostname – notice that only one IP address is returned.
  3. Add a second availability zone to your ELB.
  4. Run host with the ELB’s hostname – notice that two IP addresses are returned.
  5. Add an additional availability zone – you’ll see a third IP address when performing a DNS query of the ELB’s hostname.
Reverse DNS Lookup of ELB IP Addresses:

Run “dig -x” (a reverse DNS lookup) against the IP addresses returned for the ELB’s hostname.

Example: dig -x 50.19.115.50 +short

These queries return the same values as EC2 instance public DNS names.

Example return value: ec2-50-19-115-50.compute-1.amazonaws.com.


Amazon’s own Documentation:

Amazon’s own documentation hints at the construction of the ELB – the ELB Concepts page notes “When you register your EC2 instances, Elastic Load Balancing provisions load balancer nodes in all the Availability Zones that has the registered instances.”

How does this help me?

If Amazon Web Services’ ELBs are made up of EC2 instances in an Auto Scaling Group, then you have a good idea of what you can ask the AWS ELB support group for. Examples below:

You can “pre-warm” an ELB to prepare for a flood of traffic:

Let’s say you had a client that was a tech-centered advertising network and you knew that an upcoming Apple announcement was going to generate a large amount of traffic on their servers – knowing this, you could do one of the following:

  1. Call Amazon and request that the ELB be scaled up before this occurred. Note that Amazon documents your ability to request “pre-warming” in their Best Practices in Evaluating Elastic Load Balancing document.
  2. Pre-warm the ELB yourself by gradually ramping up automatically generated traffic against it.

You could change the EC2 Instance Types that make up an ELB:

Let’s say you knew that your traffic was naturally bursty – enough to overwhelm a small EC2 instance type. You could call Amazon support and ask them to change the instance type that comprises your ELB to one that offers greater network performance.

You can change the Auto Scaling Policy supporting the ELB:

Again, assuming a situation where you have bursty traffic, or where you wish to have multiple ELB EC2 instances in each availability zone, you could call Amazon and request that the ELB bring additional EC2 nodes into service immediately when traffic starts, or that it always maintain two nodes per AZ. Regarding the “scale up” time period, Amazon’s documentation states that “the time required for Elastic Load Balancing to scale can range from 1 to 7 minutes, depending on the changes in the traffic profile” – particular customers might wish for this time period to be more predictable, or for their ELB to have over-provisioned capacity from the start.


If you use or are implementing Amazon’s ELB, I’d suggest you:

  1. Implement the ELB and back-end instances in the most vanilla manner that meets your needs. The ELB doesn’t provide many tunables, and remaining “vanilla” ensures that you won’t need them.
  2. Engage your Amazon Account Manager or support group. In particular, the Account Managers I have worked with have been valuable in ensuring a predictable ELB implementation.
  3. If you need ELB modifications, ask. The ELB has a very limited API, but a number of parameters can be tuned by placing a call to Amazon support.