AWS ELB pre-open Connection Exposé, Part 1

or “what are all these ..reading.. connections in Apache when I use an AWS ELB?”

A colleague of mine reported a problem where their Apache servers were reporting a number of connections in “..reading..” status. They suspected that Amazon’s Elastic Load Balancer was causing the additional connections.

Screenshot: the server-status page of an EC2 instance behind an AWS ELB, showing the resulting ..reading.. requests.

Determining the source of the ..reading.. Requests

I devised a simple method to identify the source of the ..reading.. requests: compare Apache’s “Server Status” page on an EC2 instance serving requests directly against the same page on an EC2 instance located behind an Amazon ELB.

Configuration was as follows:

  • AMI: ami-a73264ce
  • OS: Ubuntu 12.04.3 LTS
  • Apache Version: Apache/2.2.22, prefork
  • Apache Configuration: KeepAlive Off
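
For reference, a minimal sketch of the relevant Apache 2.2 configuration (assuming Ubuntu’s apache2 layout with mod_status enabled; the Location block is the stock server-status handler, and the access rules are illustrative):

```apache
# Sketch: relevant apache2 settings (file locations vary by distro)
KeepAlive Off            # each request uses a fresh connection
ExtendedStatus On        # richer /server-status output

<Location /server-status>
    SetHandler server-status
    # Apache 2.2-style access control; restrict as appropriate
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
```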

To test, I first visited the server-status page on the instance directly; it showed no ..reading.. connections. Next, I added an ELB in front of this instance. Visiting the server-status page again returned one ..reading.. request. By adding additional listening ports to the ELB (for instance, adding a port 443 listener), I could determine that each new listener opened one additional ..reading.. request.
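
A quick way to count ..reading.. workers without eyeballing the HTML page is mod_status’s machine-readable ?auto report, whose Scoreboard line uses one character per worker slot (“R” means reading). A small sketch, assuming mod_status is enabled; the hostname in the usage comment is a placeholder:

```shell
# Count workers in the "R" (reading) state from a mod_status
# "Scoreboard:" line, e.g. "Scoreboard: _RR.W_R"
count_reading() {
    # $1: the raw "Scoreboard: ..." line from /server-status?auto
    printf '%s' "$1" | sed 's/^Scoreboard: //' | tr -cd 'R' | wc -c | tr -d ' '
}

# Typical usage against a live instance (hostname is a placeholder):
#   board=$(curl -s http://my-instance.example.com/server-status?auto | grep '^Scoreboard:')
#   count_reading "$board"
```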

Predicting ..reading.. behavior

I was curious about the number of ..reading.. requests – at this point, I’d only seen one ..reading.. request per listener. This did not match my colleague’s report of seeing many ..reading.. requests. I suspected that the number of ..reading.. requests was related to the amount of traffic directed through an ELB, so I created a test: send a number of sequential requests through the ELB to an EC2 instance. The results are below.

Example test (the ELB hostname below is a placeholder):

  # send 5 sequential requests through the ELB
  for i in {1..5}; do curl -s -o /dev/null http://my-elb.example.com/; done

  # then count the reading requests on the instance's server-status page
  # result:
  #   12 requests currently being processed, 0 idle workers

In Apache’s scoreboard, an “R” marks a worker with an active ..reading.. request. To reiterate the above: I placed 5 sequential requests through an ELB and, in return, 11 “Reading” requests were opened on the instance. Further tests demonstrated that anywhere from 1 to 10 sequential requests resulted in an additional 5-7 ..reading.. requests. Typical results from testing are below:

  • 0 http requests = 1 reading request
  • 1 http request = 7 reading requests
  • 2 http requests = 7 reading requests
  • 5 http requests = 11 reading requests
  • 8 http requests = 14 reading requests
  • 10 http requests = 16 reading requests
  • 15 http requests = 17 reading requests

Conclusion, part 1:

  1. Confirmation that Amazon’s ELB opens connections to an EC2 instance, which Apache reports as ..reading.. requests.
  2. Confirmation that the number of additional open connections to an EC2 instance is greater than one, and that it grows with traffic.

Next Steps?

Part 2 of the AWS ELB “pre-open” optimization blog post will detail the investigation into the purpose behind the Amazon ELB opening a number of ..reading.. requests, some discussion with Amazon regarding this undocumented behavior and, lastly, the potential problems this behavior could cause.

Amazon Adds Auto Scaling Support to AWS Management Console

… and … two of my favorite Auto Scaling tricks …

I hope that exposing Auto Scaling in the console will increase adoption of Amazon’s Auto Scaling – using Amazon’s EC2 service without Auto Scaling is failing to leverage Amazon’s strongest offering. Here are two of my favorite Auto Scaling Group uses:

  1. use auto-scaling groups for everything – even single instances. Need to revert a machine to a pristine state? If your machine is part of a scaling group, you can use ec2-terminate-instances to terminate the dirty instance and then wait while Amazon provisions a pristine instance for your use.
  2. use auto-scaling groups to “go dark” – for instance, if your entire QA environment will be unused for a period of time (end of December, for example), simply set min-size, max-size and desired-capacity to 0 for each group. When you return in January, simply reset the groups to their original sizes. This is also an excellent way to test whether your application can withstand an outage – if your application scales back up in January as if nothing happened, you’re using Amazon right.
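
The “go dark” trick above can be sketched with the AWS CLI (the group name and the restored sizes are placeholders; the older standalone as-update-auto-scaling-group tool accepts the same size flags):

```
# "Go dark": scale a QA Auto Scaling group down to zero instances.
# qa-web-asg is a placeholder group name.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name qa-web-asg \
    --min-size 0 --max-size 0 --desired-capacity 0

# When you return, restore the group's original sizes:
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name qa-web-asg \
    --min-size 2 --max-size 6 --desired-capacity 2
```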

Here’s the link to Amazon’s announcement: