
Using AWS CloudFormation’s Transform Function

Why use CloudFormation’s Transform Function?

There are two good reasons for using CloudFormation’s “Transform” function to include files:

  1. Consistency. By including a snippet in each and every CloudFormation template, you ensure that the included code is the same, stack to stack.
  2. Code reuse. You won’t need to update code across multiple stacks when you need to make changes. You will still need to update stacks to pick up changes made to the included files – but you won’t have to update the actual code in each stack.

How to do this?

Creating a CloudFormation File that uses an Include.

You need to include an Fn::Transform statement where the given file is to be included. An example is below:

Fn::Transform:
  Name: AWS::Include
  Parameters:
    Location: s3://187376578462-fn-transform-include/ubuntu_ami.yaml

An example of an include in the “Mappings” section of a CloudFormation template would look like:

Mappings:
  Fn::Transform:
    Name: AWS::Include
    Parameters:
      Location: s3://187376578462-fn-transform-include/ubuntu_ami.yaml

Lastly, here is a screenshot of a CloudFormation file that uses an include – see line 29.

Screenshot: CloudFormation template that utilizes a Transform function to include a file.
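
If the screenshot doesn’t render here, a minimal sketch of a template that uses the include in its “Mappings” section looks like the following (the instance resource is illustrative; the template deployed later in this post passes the S3 location in as a parameter rather than hardcoding it):

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example of including an AMI mapping from S3.
Mappings:
  Fn::Transform:
    Name: AWS::Include
    Parameters:
      Location: s3://187376578462-fn-transform-include/ubuntu_ami.yaml
Resources:
  ExampleInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      # Look up the AMI for the current region from the included AWSRegionArch2AMI mapping.
      ImageId: !FindInMap [ AWSRegionArch2AMI, !Ref 'AWS::Region', '64' ]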

Creating the Included File

You will need to create a file that will be included in a given CloudFormation stack. This file is going to be inserted where the Fn::Transform statement is – this is akin to “import” or “include” in a programming language or “catting” two files together in a *nix Operating System.

The included file should look similar to the following:

AWSRegionArch2AMI:
  us-east-1:
    '64': ami-ddf13fb0
  us-west-1:
    '64': ami-b20542d2
  us-west-2:
    '64': ami-b9ff39d9
Screenshot: File to be included in a CloudFormation template.

Uploading the Included File

The file that will be included needs to be uploaded to S3. You can do this using the “aws s3” command – see below:

aws s3 cp ubuntu_ami.yaml s3://$ubuntu_ami_file_s3_path --region us-west-2
Screenshot: AWS S3 command uploading a file to be included in a CloudFormation template.

Creating the CloudFormation Stack with an Include

You’ll need to use the “aws cloudformation deploy” command to deploy or update the given template. An example is below:

aws cloudformation deploy --stack-name FunctionTransformInclude --template-file autoscaling_with_yaml_userdata.yaml --parameter-overrides ubuntuAMISMapping3Location=s3://$ubuntu_ami_file_s3_path --region us-west-2
Screenshot: AWS CloudFormation “deploy” command creating a CloudFormation stack.

Summary

I’m planning on using this for AMI mappings in particular, as well as for including sections of CloudFormation templates that might be better generated using code (for instance, user data might be a consideration). I’ve yet to consider the use of “Fn::Transform / Include” to improve the security of stacks by removing passwords.

If you have questions or comments – reach me at colin@cloudavail.com.

ELB Behavior when Utilizing Backend Instances with Self-Signed Certs

There are AWS questions that I don’t know the answer to – and sometimes these questions need answers. In a case about a week ago, the following question was posed:

How does an Amazon Web Services ELB function when the backend instances utilize a self-signed certificate?

I was lucky enough to have the time to investigate. If you are just looking for the answer, see “short answer” below. For instructions (and a CloudFormation file) that allow you to duplicate my work, see “long answer” further below.

Short answer:

Yes. The AWS ELB will work with backend instances that utilize a self-signed certificate.

Long answer:

If you’d like to duplicate the test, I utilized a CloudFormation file that builds this very infrastructure (an ELB, an Auto Scaling Group and Ubuntu instances running Apache accepting HTTPS connections on port 443). You can get this file from my “Snippets” repository at https://github.com/colinbjohnson/snippets/tree/master/aws/elb/elb_backend_selfsigned_cert. A diagram describing the configuration is below:

Diagram: ELB to backend instances over HTTPS.

After performing tests to ensure that the backend instances were healthy and serving requests, I wanted to dig a bit deeper to confirm that the data was, in fact, encrypted. I went ahead and ran some requests against the backend web servers and utilized tcpdump on a backend instance, capturing data on port 443. Please note that the Security Group utilized in testing only allows port 443 inbound, so I could have run this test without filtering on “port”. A screen capture of the data captured by tcpdump is shown below:

Screenshot: tcpdump capture on a backend instance.
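
For reference, a capture along the lines of the following produces similar output (the interface name is an assumption and may differ on your instance):

# Print packets to and from port 443 so any plaintext payload would be visible.
sudo tcpdump -i eth0 -nn -A 'tcp port 443'
# Alternatively, write the capture to a file for later analysis in Wireshark.
sudo tcpdump -i eth0 -nn -w https_capture.pcap 'tcp port 443'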

I ran another tcpdump capture and loaded the data into Wireshark for visualization – the result is shown below:

Screenshot: Wireshark view of the captured (encrypted) data.

Notice that the capture has no filters applied – and specifically – that the payload is shown as “Encrypted Application Data.” If this were an HTTP connection I would be able to view the data that was being sent. After the packet capture my curiosity was satisfied – the body of HTTP requests was being encrypted in transit.

Conclusion:

If you require “end to end” encryption of data in transit you can utilize backend instances with self-signed certificates.

 

Creating VPCs and Subnets across Regions with a Single CloudFormation File

I’ve often encountered clients who want to utilize a single CloudFormation file to build VPCs and Subnets across different AWS Regions and different AWS Accounts. This blog post will describe exactly how to do this – as well as some of the pain points encountered when trying to utilize a single CloudFormation file to build VPCs and subnets in different regions and accounts. The post is divided into two parts – part one describes the solutions (and provides links to CloudFormation files stored in GitHub) and part two describes the solutions in more depth.

Part 1: a Single CloudFormation file for building VPC and Subnets in any Region or Account

The solution for building an any-region/any-account CloudFormation file containing a VPC and subnets will differ depending on whether you need a CloudFormation file that is multi-region only or one that is both multi-region and multi-account. As a result, this post is divided into “Part 1-A”, which covers multi-region only, and “Part 1-B”, which covers any-region/any-account.

Part 1-A: a Single CloudFormation file for building VPC and Subnets in any Region

If you don’t have a requirement that you build VPCs and subnets across multiple accounts, the process is relatively straightforward:

First, you’ll create a mapping that maps each Region to the Availability Zones in which subnets can be created. Be careful here: in my own personal AWS account I cannot create a subnet in “us-east-1a”. The end result looks something like below:

"AWSRegion2AZ" : {
  "us-east-1" : { "1" : "us-east-1b", "2" : "us-east-1c", "3" : "us-east-1d", "4" : "us-east-1e" },
  "us-west-1" : { "1" : "us-west-1b", "2" : "us-west-1c" },
  "us-west-2" : { "1" : "us-west-2a", "2" : "us-west-2b", "3" : "us-west-2c" }
}

Second, each subnet resource needs to look up its Availability Zone from this map using “Fn::FindInMap”. An example is below:

"PublicSubnet1" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::FindInMap" : [ "AWSRegion2AZ", { "Ref" : "AWS::Region" }, "1" ] }
    "CidrBlock" : "10.0.0.0/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},

A link to a CloudFormation template that will create a VPC and subnets in any AWS region: https://github.com/colinbjohnson/snippets/tree/master/aws/cloudformation/multi_region_vpc_cloudformation

Below I’ve used the CloudFormation file to create VPCs and subnets in three AWS regions: us-west-2, us-east-1 and us-west-1.
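
If you’d like to reproduce this from the command line, a loop along the following lines creates the stack in each region (the stack name and template file name are illustrative – use the file from the repository above):

# Create the same VPC and subnet stack in three different regions.
for region in us-west-2 us-east-1 us-west-1; do
  aws cloudformation create-stack \
    --stack-name MultiRegionVPC \
    --template-body file://multi_region_vpc.json \
    --region "$region"
done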

 

Part 1-B: a Single CloudFormation file for building VPC and Subnets in any Region or any Account

A CloudFormation file that builds a VPC and subnets in any Region or Account is going to be similar to the above (using a Map with defined Availability Zones) with one exception – each account will have different Availability Zones where subnets can be built. An example: my own account allows VPC subnets in the us-east-1b, us-east-1c, us-east-1d and us-east-1e Availability Zones (notice: no subnets can be built in us-east-1a) whereas a different account might allow subnets in us-east-1a, us-east-1b and us-east-1c Availability Zones. To account for this difference you’ll need a map that provides a VPC subnet to Availability Zone mapping for both Region and Account. The solution is shown below:

First, create a Map that accepts “Regions” and “Accounts” and returns a list of Availability Zones where VPC Subnets can be built.

"RegionAndAccount2AZ": {
  "us-east-1" : { 
    "Production" : [ "us-east-1b", "us-east-1c", "us-east-1d" ] ,
    "Development" : [ "us-east-1b", "us-east-1c", "us-east-1d" ]
  },
  "us-west-2" : { 
    "Production" : [ "us-west-2a", "us-west-2b", "us-west-2c" ] ,
    "Development" : [ "us-west-2a", "us-west-2b", "us-west-2c" ]
  }
},

Second, for each resource that needs to be built in a specific Availability Zone, you’ll need to select an item from the RegionAndAccount2AZ list:

"PublicSubnet1" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "0", { "Fn::FindInMap" : [ "RegionAndAccount2AZ", { "Ref" : "AWS::Region"}, { "Ref" : "Account" } ] } ] },
    "CidrBlock" : "10.0.0.0/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},
"PublicSubnet2" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "1", { "Fn::FindInMap" : [ "RegionAndAccount2AZ", { "Ref" : "AWS::Region"}, { "Ref" : "Account" } ] } ] },
    "CidrBlock" : "10.0.0.128/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},
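
The “Account” value referenced above comes from a template parameter. A minimal sketch of such a parameter is below (the name and allowed values are illustrative and may differ from the linked template):

"Parameters" : {
  "Account" : {
    "Type" : "String",
    "Description" : "The account whose Availability Zones should be used when creating subnets.",
    "AllowedValues" : [ "Production", "Development" ],
    "Default" : "Development"
  }
},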

And … here is a link to the CloudFormation template that will create a VPC and subnets in different AWS regions and in different AWS accounts: https://github.com/colinbjohnson/snippets/tree/master/aws/cloudformation/multi_region_and_account_vpc_cloudformation

Part 2: Why is this all required?

Any time you have complexity it is important to keep focus on what actually needs to be done and why. I’ll describe the reasons why the additional complexity is required below:

  1. We need to ensure that, when creating subnets using CloudFormation, the subnets are created in different Availability Zones. AWS doesn’t provide a facility for doing this automatically. Result: we must define Availability Zones when creating subnet resources.
  2. AWS provides no mechanism for getting the Availability Zones in which subnets can be created. Result: we must manually provide a list of Availability Zones where subnets can be created. We do this using a map.
  3. If multiple accounts are used we run into a problem where the manually provided list of Availability Zones where subnets may be created is potentially different in each account. Result: we need a map that allows CloudFormation to select Availability Zones where subnets can be built and that takes the “account” into account.

I’ve described the solutions to each problem above in more detail below.

Choosing Subnets Yourself

If you simply define subnets without specifying an “Availability Zone” property for each subnet there is a good chance that Amazon will create these subnets in the same Availability Zone. An example of defining subnets without an Availability Zone property is below:

"PublicSubnet1" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "CidrBlock" : "10.0.0.0/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},
"PublicSubnet2" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "CidrBlock" : "10.0.0.128/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},

There is a pretty good chance that this will cause two problems:

  1. If you are creating resources that use these subnets – such as an ELB – the ELB resource creation will fail due to the fact that an ELB can only have one subnet per AZ. In the case above, if PublicSubnet1 and PublicSubnet2 are both in us-east-1b – ELB creation will fail.
  2. You may end up with an availability problem as a result of resources being created in the same Availability Zone. For example, if PublicSubnet1 and PublicSubnet2 are both in us-east-1b and you create an Auto Scaling Group that utilizes both PublicSubnet1 and PublicSubnet2 – your instances will still all be brought up in us-east-1b.

The solution would be to use “Fn::GetAZs” but…

“Fn::GetAZs” Returns AZs Where Subnets Can’t Be Placed

To solve the problem of placing subnets in the same Availability Zone, you’d think that you want to use Amazon’s “Fn::GetAZs”. For example, you’d call “{ "Fn::GetAZs" : { "Ref" : "AWS::Region" } }” (this returns a list of Availability Zones) and then you’d build PublicSubnet1 in the first Availability Zone, PublicSubnet2 in the second Availability Zone and so on. An example is below:

"PublicSubnet1" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "0", { "Fn::GetAZs" : { "Ref" : "AWS::Region" } } ] },
    "CidrBlock" : "10.0.0.0/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},
"PublicSubnet2" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "1", { "Fn::GetAZs" : { "Ref" : "AWS::Region" } } ] },
    "CidrBlock" : "10.0.0.128/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},

However, if you use Amazon’s “Fn::GetAZs” – you’ll get a list of all Availability Zones – not just those Availability Zones in which a subnet can be created. As an example, if I call “Fn::GetAZs” using my own account in the us-east-1 region, the return values are [ “us-east-1a”, “us-east-1b”, “us-east-1c”, “us-east-1d”, “us-east-1e” ]. A problem arises because the “us-east-1a” Availability Zone isn’t available to me for subnet creation, so CloudFormation stack creation fails. Here’s a screenshot of that behavior:

Screenshot: Fn::GetAZs returns Availability Zones where subnets can’t be built.

“Mapping Method” to the Rescue

Using a Map solves this mess. The solution isn’t ideal as it requires one-time creation of a map containing a list of Availability Zones where subnets can be created. This map does allow you to:

  1. ensure subnets are built in different AZs.
  2. provide support for multiple regions.

Availability Zones that Support VPC Subnets are Different Per Account

If you require VPCs built in different accounts you’ll be required to take one additional step – specifically, you’ll need to provide an Availability Zone to Subnet map per account, because the Availability Zones available for subnet creation may differ between accounts. An example of this mapping is below:

"Mappings" : {
  "RegionAndAccount2AZ": {
    "us-east-1" : { 
      "Production" : [ "us-east-1a", "us-east-1b", "us-east-1c" ] ,
      "Development" : [ "us-east-1b", "us-east-1c", "us-east-1d" ]
    },
    "us-west-2" : { 
      "Production" : [ "us-west-2a", "us-west-2b", "us-west-2c" ] ,
      "Development" : [ "us-west-2a", "us-west-2b", "us-west-2c" ]
    }
  }
},

And an example of using this mapping to place a subnet in the correct Availability Zone:

"PublicSubnet1" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "0", { "Fn::FindInMap" : [ "RegionAndAccount2AZ", { "Ref" : "AWS::Region"}, { "Ref" : "Account" } ] } ] },
    "CidrBlock" : "10.0.0.0/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},
"PublicSubnet2" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "1", { "Fn::FindInMap" : [ "RegionAndAccount2AZ", { "Ref" : "AWS::Region"}, { "Ref" : "Account" } ] } ] },
    "CidrBlock" : "10.0.0.128/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},

And… in Conclusion:

  1. The situation of building VPCs and subnets across regions and accounts using CloudFormation will likely improve. Examples of potential improvements might include a version of “Fn::GetAZs” that returns only Availability Zones where subnets can be built, or loops that can build 1 to “x” subnets.
  2. The techniques described in this blog post can likely be improved by using conditionals or lambda. If anyone does this – let me know and I’ll update the post.
  3. Other tools that support “shelling out” or running arbitrary commands may provide better mechanisms that allow a single file to create VPCs and Subnets – although using a tool outside of CloudFormation may not be an option you are open to considering.

Hope that you have found this post useful – if you have questions or comments please feel free to send me an email: colin@cloudavail.com.

Allowing Long Idle Timeouts when using AWS ElasticBeanstalk and Docker

A client I work with had a requirement for a 60 second plus HTTP connection timeout when running Docker on ElasticBeanstalk. Specifically, one of the Engineers was noticing that any HTTP requests taking 60 seconds or more to complete were not being returned by the ElasticBeanstalk application.

Identifying the Cause of the 60 Second Dropped Connections:

The 60 second timeout is actually set in two locations, described below:

  1. The Amazon Elastic Load Balancer, which uses a default “Idle Timeout” value of 60 seconds. The “Idle Timeout” of the given Elastic Load Balancer can be changed easily. (http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-idle-timeout.html).

    Screenshot: ElasticBeanstalk ELB configured with a 600 second Idle Timeout.
  2. The nginx Application that acts as a proxy server in front of the Docker container also has a default timeout. The nginx default timeout is not exposed for configuration – you’ll need to modify the nginx configuration through the use of an .ebextensions file or another method. This will also be described within this blog post.
Diagram: ElasticBeanstalk – HTTP Request Flow.
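
To change the ELB “Idle Timeout” described in item 1 from the command line, something along these lines works for a classic ELB (the load balancer name is illustrative):

aws elb modify-load-balancer-attributes \
  --load-balancer-name my-elasticbeanstalk-elb \
  --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":600}}"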

Setting the nginx Timeouts:

The method I used for setting the nginx timeouts can be described, at a high level, as:

  1. creating an “ebextension” file that modifies the default nginx configuration used by ElasticBeanstalk. ebextension files are used by Amazon to modify the configuration of ElasticBeanstalk instances.
  2. creating a ZIP format “package” containing a Dockerrun.aws.json file as well as the .ebextension file used to modify the ElasticBeanstalk configuration.

The details are below:

  • Create an “ebextension” file within the root of your project – the file should be at the path .ebextensions/nginx-timeout.config.
  • The content of the file is described below:
files:
  "/etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf":     mode: "000644"
    owner: root
    group: root
    content: |
      proxy_connect_timeout       600;
      proxy_send_timeout          600;
      proxy_read_timeout          600;
      send_timeout                600;
commands:
  "00nginx-create-proxy-timeout":
    command: "if [[ ! -h /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ]] ; then ln -s /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ; fi"
  • Create an application package by running the following from the root of your application’s directory:
    • zip -r ../application-name-$version.zip .ebextensions Dockerrun.aws.json
    • the command above will package the “Dockerrun.aws.json” file as well as the contents of the .ebextensions directory
  • Upload the resulting application-name-$version.zip to AWS ElasticBeanstalk to deploy your application with the nginx timeouts.
  • Note that I’ll be continuing to do testing around the ideal values for the proxy_connect_timeout, proxy_send_timeout, proxy_read_timeout and send_timeout values.
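
For reference, uploading and deploying the package from the command line looks roughly like the following (the bucket, application and environment names are placeholders):

# Upload the package, register it as an application version and deploy it to an environment.
aws s3 cp application-name-1.0.0.zip s3://my-deployment-bucket/application-name-1.0.0.zip
aws elasticbeanstalk create-application-version \
  --application-name my-application \
  --version-label 1.0.0 \
  --source-bundle S3Bucket=my-deployment-bucket,S3Key=application-name-1.0.0.zip
aws elasticbeanstalk update-environment \
  --environment-name my-environment \
  --version-label 1.0.0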

Creating Expiring IAM Users

A common question I get from folks is “how do I create a temporary AWS IAM user” or “how do I grant access to x service for only a period of time”. Typically, you’d want to use the IAM “Condition” element, which I’ll demonstrate in the remainder of this blog post, using the example of an IAM “Power Users” policy that expires on January 31st of 2016.

Understanding IAM “Power Users” Policy:

The reason I like the IAM “Power Users” policy is that many organizations are moving to the model of fully empowered Engineering staff (meaning organizations where all Engineering staff have Administrator access). This model works well for many organizations, with one exception – allowing all Engineering staff to reset passwords means that when an employee leaves an organization you can’t be certain you have removed their access completely. Consider the case where an ex-employee has just created a new user or performed a password reset – they may know that account’s username and password even after their own account has been disabled or removed. The other reason I like “Power Users”: despite the “deny” of actions on IAM resources, IAM users can still reset their own passwords (see: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_user-change-own.html#ManagingUserPwdSelf-Console).

Creating the Expiring IAM Power User Policy:

To create a “Power User” IAM policy, do the following:

  1. Login to the IAM Console within the AWS Console.
  2. Select “Policies” from the left-hand navigation and click “Create Policy”
    1. Select “Copy an AWS Managed Policy”
    2. In “Step 2: Set Permissions”, select the “Power User” policy.
    3. In “Step 3: Review Policy”, add the following:
      "Condition": {
        "DateLessThan": {
          "aws:CurrentTime": "2016-01-31T12:00:00Z"
        }
      }
      The outcome will look like the image below:
      IAM Policy - Adding Condition
  3. Click “Create Policy”

The magic here is that the Statement within the IAM Policy will only be allowed when the condition is true.
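
For reference, a sketch of what the complete policy document might look like is below – this is based on the older form of the AWS managed “PowerUserAccess” policy with the condition added, and the current managed policy may differ:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "NotAction": "iam:*",
      "Resource": "*",
      "Condition": {
        "DateLessThan": {
          "aws:CurrentTime": "2016-01-31T12:00:00Z"
        }
      }
    }
  ]
}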

Attach the Policy to a User or Group:

  1. Login to IAM Console within the AWS Console.
  2. Select “Policies” from the left-hand navigation and select the “Power User” policy you had created previously. See image below:
    IAM Policy - Filtered and Power User Selected
  3. Scroll down to the “Attached Entities” section, click “Attach Entity” and add the Users (or Groups) to which you wish to attach this policy. See image below:
    IAM Policy - Attached Entities

Notes:

  1. After “Policy Expiration” the IAM user will still be able to login to the AWS Console. They won’t have permissions to issue any API commands, however.

References:

  1. IAM “Conditions” Reference is available here: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Condition.

An Introduction to the AWS EC2 Container Service

Below is a quick introduction to the AWS EC2 Container Service. The most valuable part of this post may be the link to the CloudFormation stack that will build an EC2 Container Service Cluster and all associated resources. Creating this CloudFormation stack was the moment when I really understood ECS at a nuts and bolts level.

How I came to use ECS at all:

On November 13, 2014, Amazon released the “Amazon EC2 Container Service” – the news didn’t excite me much. Containerization, specifically Docker, was a hot technology, but my own frustration with the industry’s lack of understanding of Docker’s benefits and costs, coupled with the quality and thought behind the Docker implementations I’d seen, meant I had little interest in the EC2 Container Service. My own clients were happy with Docker and didn’t require container packing or Docker links – so there was no push to move to ECS or another container management software. I put ECS on the shelf to be revisited. The day when I would revisit ECS came about mid-June, when one of our developers needed to expose a port through an ELB “fronting” an Elastic Beanstalk Environment and also needed to publish multiple ports on each EC2 instance making up the Elastic Beanstalk Environment. Stated simply, the EC2 instances needed to expose port 8080 to a Load Balancer and the EC2 instances also needed to communicate with each other across an array of ports (specifically, we were running a tomcat7 hosted Java application that utilized Apache Ignite with a Replicated Cache).

Setting Up ECS:

My initial work with ECS was a challenge because of the ECS lexicon – “task definitions, tasks, services, container definitions” – and the lack of maturity: AWS actually added CloudFormation support during my ECS prototyping. In any case, I’ll describe each of the resources that make up ECS, how they are used and how they relate to each other:

Clusters and Container Instances:

Clusters and Container Instances will make up the basis of your ECS Infrastructure. A quick diagram demonstrates the relationship between Clusters and Container Instances.

Diagram: EC2 Container Service – Cluster and Container Instances.

Clusters:
  • Clusters are a group of Container Instances.
Container Instances:
  • Container instances run one or more containers
  • Container instances must run the “ECS Container Agent” which will register with an ECS Cluster
    • the “Container Agent” is really well thought out – for instance, you can set or pass values to the ECS Container Agent during install so that EC2 instances within an Auto Scaling Group are automatically registered with an ECS cluster

Task Definitions, Tasks and Services:

You’ll run your application by creating a Task Definition and then running the Task Definition across one or more Container Instances. I’ll explain the Task Definitions, Tasks and Services resources below.

Task Definition:

A task definition contains three parts:

  • Container Definitions – this is a list of one or more containers that make up a task. An example would be an nginx container running front-end code and an nginx container running back-end code. An example of two different “Task Definitions” is given below – one of these task definitions utilizes only one container, while a second, different task definition requires two different containers.
    Diagram: EC2 Container Service – Task Definition.
  • Family – an arbitrary name for the task definition – if you iteratively modify your Task Definitions you can utilize the “Family” to keep these different versions.
  • Volumes (optional) – a list of volumes that will be made available to the containers running within a given Container Instance.
    • I have not needed to utilize Volumes as I’ve been able to clearly communicate or utilize other services to avoid the requirement for Block Storage.
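
Putting these pieces together, a minimal task definition sketch is below (the family, container name, image and resource values are illustrative):

{
  "family": "web-application",
  "containerDefinitions": [
    {
      "name": "nginx-front-end",
      "image": "nginx:latest",
      "cpu": 256,
      "memory": 256,
      "essential": true,
      "portMappings": [ { "containerPort": 80, "hostPort": 80 } ]
    }
  ],
  "volumes": []
}

A task definition like this can be registered with “aws ecs register-task-definition --cli-input-json file://web-application.json”.
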
Tasks:

A task is created when a “Task Definition” is run on a container instance. You can use one “Task Definition” to instantiate containers on multiple hosts – for instance, when you click on “Run new task” and select a task definition you’ll be asked how many “tasks” you want to run. If you have a task definition that contains two “Container Definitions” and you want to run 2 tasks, Amazon will place two containers on one of the Container Instances and two containers on the other Container Instance.

Services:

A service is a “superset” of a Task Definition. An example of a “Service” is shown below – note the addition of an Elastic Load Balancer.

Diagram: EC2 Container Service – Services.

When you create a service you define the following:

  • the “Task Definition” used to determine which Tasks/Containers will be run
  • a desired number of tasks
  • a Load Balancer

In return for the bit of extra work, the service will:

  • “run and maintain a specified number of tasks.” If you ask for 4 tasks when you instantiate a service, ECS will ensure you always have 4 tasks running.
  • utilize a Load Balancer to route traffic to Container Instances that are running a given Container.
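
A sketch of creating such a service from the command line is below (the cluster, service, task definition, container and load balancer names are illustrative; the role name assumes a pre-existing ECS service role):

aws ecs create-service \
  --cluster my-ecs-cluster \
  --service-name my-web-service \
  --task-definition web-application \
  --desired-count 4 \
  --load-balancers loadBalancerName=my-elb,containerName=nginx-front-end,containerPort=80 \
  --role ecsServiceRole
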
Services versus Tasks:

Task:

  • if a task fails, ECS does not return the task to service
  • when defining a “Task Definition” you are not allowed to define a Load Balancer

Service:

  • if a task fails, ECS will return the task to service
    • an example would include the loss of a Container Instance that drops the number of running tasks below the desired number – when a new Container Instance is brought into the cluster a new task will be started
  • when defining a “Service” you are optionally allowed to define a Load Balancer

Auto Scaling with ECS:

  • In regards to running “Container Instances” as part of an Auto Scaling Group, here is a real world example of this benefit:
    • terminate a Cluster Instance that is a member of an ECS Cluster and an Auto Scaling Group
    • the Auto Scaling Group will bring a new EC2 Instance in service
    • the new EC2 Instance’s user data contains instructions for configuring and installing the ECS Container Agent
    • the ECS Container Agent registers the new EC2 Instance with the cluster
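
A minimal user data sketch for that last step is below (the cluster name is illustrative; this assumes the ECS-optimized AMI, which ships with the Container Agent):

#!/bin/bash
# Point the ECS Container Agent at the cluster this instance should join.
echo "ECS_CLUSTER=my-ecs-cluster" >> /etc/ecs/ecs.config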

Running ECS:

For those who prefer to “learn by doing” – if you navigate to my “Snippets” repository on GitHub you’ll find a CloudFormation file that will create a working ECS Cluster running nginx and all other required resources. The link is here: https://github.com/colinbjohnson/snippets/tree/master/aws/ecs/ecs_introduction.

Future Post:

I’ll be covering some of the following in a future post:

  • How to best utilize Security Groups when working with ECS?
    • Note: it would seem obvious that Security Groups aren’t sufficient for securing Docker containers within ECS – maybe AWS is working on something?
  • Best practices for building clusters – do you build one large cluster for all environments, do you build one cluster per environment?
  • Does AWS / ECS distribute services/tasks across Availability Zones correctly?
  • Can I utilize ECS to have an unequal number of containers – for instance, can I have a 2 to 1 relationship of front-end to back-end instances?
  • Tuning resource usage of Containers.
  • How are “tasks” within a service considered healthy?
  • How to delete task definitions from AWS?