An Introduction to the AWS EC2 Container Service

Below is a quick introduction to the AWS EC2 Container Service. The most valuable part of this post may be the link to the CloudFormation stack that will build an EC2 Container Service Cluster and all associated resources. Creating this CloudFormation stack was the moment when I really understood ECS at a nuts and bolts level.

How I came to use ECS at all:

On November 13, 2014, Amazon released the “Amazon EC2 Container Service” – the news didn’t excite me much. Containerization, specifically Docker, was a hot technology, but my frustration with the industry’s lack of understanding of Docker’s benefits and costs, coupled with the poor quality of the Docker implementations I’d seen, meant I had little interest in the EC2 Container Service. My own clients were happy with Docker and didn’t require container packing or docker links – so there was no push to move to ECS or another container management product. I put ECS on the shelf to be revisited. The day I revisited ECS came in mid-June, when one of our developers needed to expose a port through an ELB “fronting” an Elastic Beanstalk Environment and also needed to publish multiple ports on each EC2 instance making up the Elastic Beanstalk Environment. Stated simply, the EC2 instances needed to expose port 8080 to a Load Balancer, and the EC2 instances also needed to communicate with each other across an array of ports (specifically, we were running a tomcat7-hosted Java application that utilized Apache Ignite with a Replicated Cache).

Setting Up ECS:

My initial work with ECS was a challenge because of the ECS lexicon – “task definitions, tasks, services, container definitions” – and the service’s lack of maturity – AWS actually added CloudFormation support during my ECS prototyping. In any case, I’ll describe each of the resources that make up ECS, how they are used and how they relate to each other:

Clusters and Container Instances:

Clusters and Container Instances will make up the basis of your ECS Infrastructure. A quick diagram demonstrates the relationship between Clusters and Container Instances.

EC2 Container Service - Cluster and Container Instances

Clusters:
  • A Cluster is a group of Container Instances.
Container Instances:
  • Container instances run one or more containers
  • Container instances must run the “ECS Container Agent,” which registers the instance with an ECS Cluster
    • the “Container Agent” is really well thought out – for instance, you can set or pass values to the ECS Container Agent during install so that EC2 instances within an Auto Scaling Group are automatically registered with an ECS cluster; a minimal user data sketch is shown below
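
As a hedged illustration, a minimal user data script for an instance running the Amazon ECS-optimized AMI might look like the following – the cluster name “my-ecs-cluster” is a placeholder, not a value from this post:

  #!/bin/bash
  # Register this instance with an existing ECS cluster by writing
  # the cluster name to the ECS Container Agent's configuration file.
  # "my-ecs-cluster" is a placeholder - substitute your own cluster name.
  echo "ECS_CLUSTER=my-ecs-cluster" >> /etc/ecs/ecs.config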

Task Definitions, Tasks and Services:

You’ll run your application by creating a Task Definition and then running the Task Definition across one or more Container Instances. I’ll explain the Task Definitions, Tasks and Services resources below.

Task Definition:

A task definition contains three parts, described below; a minimal CLI sketch for registering a task definition follows the list:

  • Container Definitions – a list of one or more containers that make up a task. An example would be one nginx container running front-end code and another nginx container running back-end code. Two different “Task Definitions” are shown below – one task definition utilizes only one container, while the second, different task definition requires two different containers.
    EC2 Container Service - Task Definition
  • Family – an arbitrary name for the task definition – if you iteratively modify your Task Definitions, you can utilize the “Family” to track these different versions.
  • Volumes (optional) – a list of volumes that will be made available to the containers running within a given Container Instance.
    • I have not needed to utilize Volumes, as I’ve been able to utilize other services to avoid the requirement for Block Storage.
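
Below is a hedged CLI sketch of registering a task definition – the family name “nginx-demo” and all container values are illustrative assumptions, not values from this post:

  # Register a task definition containing a Family name and a single
  # Container Definition. All names and sizes below are placeholders.
  aws ecs register-task-definition \
    --family nginx-demo \
    --container-definitions '[
      {
        "name": "nginx",
        "image": "nginx",
        "cpu": 256,
        "memory": 256,
        "essential": true,
        "portMappings": [ { "containerPort": 80, "hostPort": 80 } ]
      }
    ]'
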
Tasks:

A task is created when a “Task Definition” is run on a container instance. You can use one “Task Definition” to instantiate containers on multiple hosts – for instance, when you click “Run new task” and select a task definition, you’ll be asked how many “tasks” you want to run. If you have a task definition that contains two “Container Definitions” and you run 2 tasks, Amazon will place two containers on one Container Instance and two containers on another Container Instance. A minimal run-task sketch is shown below.
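
As a hedged sketch of the above – the cluster and family names are placeholder assumptions – running two copies of a task definition from the CLI might look like:

  # Run 2 tasks from the "nginx-demo" task definition on the
  # "my-ecs-cluster" cluster; ECS decides task placement across
  # the cluster's Container Instances.
  aws ecs run-task \
    --cluster my-ecs-cluster \
    --task-definition nginx-demo \
    --count 2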

Services:

A service is a “superset” of a Task Definition. An example of a “Service” is shown below – note the addition of an Elastic Load Balancer.

EC2 Container Service - Services

When you create a service you define the following:

  • the “Task Definition” used to determine which Tasks/Containers will be run
  • a desired number of tasks
  • a Load Balancer

In return for the bit of extra work, the service will:

  • “run and maintain a specified number of tasks.” If you ask for 4 tasks when you instantiate a service, ECS will ensure that 4 tasks are always running.
  • utilize a Load Balancer for routing traffic to the Container Instances that are running a given Container. A minimal create-service sketch is shown below.
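
Below is a hedged create-service sketch – the service name, ELB name and IAM role are illustrative assumptions, not values from this post:

  # Create a service that maintains 4 tasks and routes ELB traffic to
  # Container Instances running the "nginx" container. All names below
  # are placeholders.
  aws ecs create-service \
    --cluster my-ecs-cluster \
    --service-name nginx-service \
    --task-definition nginx-demo \
    --desired-count 4 \
    --load-balancers loadBalancerName=my-elb,containerName=nginx,containerPort=80 \
    --role ecsServiceRole
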
Services versus Tasks:

Task:

  • if a task fails, ECS does not return the task to service
  • when defining a “Task Definition” you are not allowed to define a Load Balancer

Service:

  • if a task fails, ECS will return the task to service
    • an example would include the loss of a Container Instance that drops the number of running tasks below the desired number – when a new Container Instance is brought into the cluster, a new task will be started
  • when defining a “Service” you are optionally allowed to define a Load Balancer

Auto Scaling with ECS:

  • Regarding running “Container Instances” as part of an Auto Scaling Group, here is a real-world example of this benefit (a launch configuration sketch follows this list):
    • terminate a Cluster Instance that is a member of an ECS Cluster and an Auto Scaling Group
    • the Auto Scaling Group will bring a new EC2 Instance in service
    • the new EC2 Instance’s user data contains instructions for configuring and installing the ECS Container Agent
    • the ECS Container Agent registers the new EC2 Instance with the cluster
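
A hedged sketch of wiring this together is below – the AMI ID, instance type, IAM instance profile and file name are placeholders; the user data file would contain the ECS_CLUSTER line shown earlier:

  # Create a Launch Configuration whose user data registers new EC2
  # instances with the ECS cluster; the Auto Scaling Group will then
  # replace lost Container Instances automatically. The AMI ID,
  # instance type, instance profile and file name are placeholders.
  aws autoscaling create-launch-configuration \
    --launch-configuration-name ecs-cluster-lc \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --iam-instance-profile ecsInstanceRole \
    --user-data file://ecs-user-data.sh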

Running ECS:

For those who prefer to “learn by doing” – if you navigate to my “Snippets” repository on GitHub, you’ll find a file that will create a working ECS Cluster running nginx and all other required resources. The link is here: https://github.com/colinbjohnson/snippets/tree/master/aws/ecs/ecs_introduction.

Future Post:

I’ll be covering some of the following in a future post:

  • How to best utilize Security Groups when working with ECS?
    • Note: it would seem obvious that Security Groups aren’t sufficient for securing Docker containers within ECS – maybe AWS is working on something?
  • Best practices for building clusters – do you build one large cluster for all environments, do you build one cluster per environment?
  • Does AWS / ECS distribute services/tasks across Availability Zones correctly?
  • Can I utilize ECS to have an unequal number of containers – for instance, can I have a 2 to 1 relationship of front-end to back-end instances?
  • Tuning resource usage of Containers.
  • How are “tasks” within a service considered healthy?
  • How to delete task definitions from AWS?

Reducing AWS Cost using Scheduled Scaling

One of the ways to reduce AWS cost is to utilize Auto Scaling Groups and the “Scheduled Scaling” feature to scale down EC2 resources during non-business hours. In the example below, I’ll walk you through the use of scheduled scaling to “Scale Down” an api-qa01 Auto Scaling Group to 0 instances after the close of business (assumed to be 7:00 PM Pacific) and “Scale Up” the api-qa01 Auto Scaling Group to two instances at the start of business (assumed to be 8:00 AM Pacific).

Setup Scheduled Scaling:

To setup scheduled scaling, you’ll need:

  1. Infrastructure that uses Auto Scaling Groups (you could do something similar with EC2 start/stop instances, but I don’t plan to cover this case in the blog). It would be ideal if you’ve configured your Auto Scaling Group to publish Auto Scaling Group Metrics – this will let you track the capacity of an Auto Scaling Group. To do so:
    1. Check “Enable CloudWatch detailed monitoring” when creating the Launch Configuration in the AWS Console, or use --instance-monitoring when creating a Launch Configuration using the AWS Command Line Interface.
    2. Use “aws autoscaling enable-metrics-collection” (an example follows this list) or check “Enable CloudWatch detailed monitoring” when creating the Auto Scaling Group from the AWS Console.
  2. A computer with the “AWS Command Line Interface” tools installed (scheduled scaling is not yet available in the AWS Console)
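
For an existing group, enabling Auto Scaling Group Metrics from the CLI might look like the following – the group name matches the api-qa01 example used in this post:

  # Enable collection of all Auto Scaling Group metrics at 1-minute
  # granularity (the only granularity currently supported).
  aws autoscaling enable-metrics-collection \
    --auto-scaling-group-name api-qa01 \
    --granularity "1Minute"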

With the above prerequisites set, configuring scheduled scaling is as simple as doing the following:

  1. Identifying the name of the group for which you wish to enable scheduled scaling.
  2. Creating a “Scale Up” scheduled event, example below:
    1. aws autoscaling put-scheduled-update-group-action --scheduled-action-name scaleup-api-qa01 --auto-scaling-group-name api-qa01 --recurrence "0 15 * * *" --desired-capacity 2
  3. Creating a “Scale Down” scheduled event:
    1. aws autoscaling put-scheduled-update-group-action --scheduled-action-name scaledown-api-qa01 --auto-scaling-group-name api-qa01 --recurrence "0 2 * * *" --desired-capacity 0
  4. Once you’ve finished creating “Scale Up” and “Scale Down” events, you’ll want to ensure that you’ve set up the scheduled scaling actions correctly – the command to do this is below:
    1. aws autoscaling describe-scheduled-actions --auto-scaling-group-name api-qa01 --query 'ScheduledUpdateGroupActions[*].{Name:ScheduledActionName,DesiredCapacity:DesiredCapacity,Recurrence:Recurrence}' --output table
    2. The result should look something akin to the following: Scheduled Scaling - Described Scheduled Actions
Confirming Scaling:

If you’ve enabled Auto Scaling Group Metrics you should be able to identify the changes in Desired Capacity and in InService Capacity. An example of the Auto Scaling Group api-qa01 is shown below:

Scheduled Scaling Example

Notice that the “GroupDesiredCapacity” and “GroupTotalInstances” increase to 2 Instances daily at 15:00 UTC and then return to 0 Instances daily at 2:00 UTC.

If You Don’t Use Auto Scaling:

Those not using Auto Scaling might be able to utilize a cron job that Starts and Stops EC2 instances based on a Tag. For instance, you could create Tags with the Keys “StartAfter” and “StopAfter” and then utilize the Values of these tags to inform a cron job when to Start/Stop EC2 instances – a minimal sketch is shown below.
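
Below is a hedged sketch of the tag-driven approach – the tag name, time format and hour comparison are illustrative assumptions; a production job would likely parse the tag values more robustly:

  #!/bin/bash
  # Run from cron; stops any running instance whose "StopAfter" tag
  # matches the current UTC hour. The tag name and time format are
  # illustrative placeholders.
  current_hour=$(date -u +%H:00)
  instance_ids=$(aws ec2 describe-instances \
    --filters "Name=tag:StopAfter,Values=${current_hour}" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].InstanceId' \
    --output text)
  if [ -n "${instance_ids}" ]; then
    aws ec2 stop-instances --instance-ids ${instance_ids}
  fi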

In Summary:
  1. You should have everything you need in order to schedule Start/Stop of instances based on time of day.
  2. Use of scheduled Scale Up / Scale Down is a great way to ensure that your staff is familiar with Auto Scaling (QA or Developers might need to Scale Up infrastructure outside of business hours) and that your AWS Infrastructure is capable of withstanding outages or periods of time when no EC2 instances are running.
  3. If you’ve got questions, please feel free to comment below or send me an email.

VPC Introduction – Part 2

This is the second part of a 4 part introduction to Amazon’s VPC. Part 1 examined the VPC resource itself, as well as the subnet, Route Table and Network ACL resources. Part 2 examines the Internet Gateway resource, the EC2-VPC Security Group resource and Auto Scaling Groups when used in VPC.

Internet Gateway Resource

An Internet Gateway provides connectivity to the Internet. Simply creating an Internet Gateway resource is not enough to provide access to the Internet, however; you’ll also need to do the following:

  1. Create or modify a Route Table to include a route to the Internet. An Internet route is typically defined as follows: Destination: 0.0.0.0/0, Target: <Internet Gateway Resource Number>
  2. Provide a Network ACL that allows outbound and inbound traffic from the Internet.
  3. Associate any subnet that requires Internet access to the previously created/modified Route Table and Network ACL.
  4. Provide each instance that requires Internet access with a Public IP address – the Internet Gateway cannot provide Internet access to instances without Public IP addresses because the Internet Gateway does not function as a NAT router. A CLI sketch follows this list.
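
A hedged CLI sketch of creating the Internet Gateway and the route from step 1 is below – all resource IDs are placeholders:

  # Create an Internet Gateway, attach it to a VPC and add a default
  # route to the Internet. The vpc-, igw- and rtb- IDs are placeholders.
  aws ec2 create-internet-gateway
  aws ec2 attach-internet-gateway \
    --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
  aws ec2 create-route --route-table-id rtb-xxxxxxxx \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx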

Note that using an Internet Gateway and Public IP addresses for instances is only one way to provide Internet connectivity to EC2 instances – part 3 will cover this in greater depth.

EC2-VPC Security Group Resource

EC2-VPC security groups consist of inbound and outbound rules and are associated with EC2 instances and other resources such as RDS Security Groups or ElastiCache. Inbound and outbound rules filter based on IP address or security group and on port, and both default to “Deny” traffic not explicitly allowed by a rule. I’ve described the inbound and outbound rules below:

1. Inbound Rules. Inbound rules filter based on a packet’s source IP address or security group and source port. Amazon provides a number of rule templates for you (for ssh and HTTP, for example). Custom rules can also be created – a rule allowing port 81 in from the Internet would look like the following (a matching CLI call follows the list):

  • Type: “Custom TCP Rule”
  • Protocol: TCP
  • Port Range: 81
  • Source: 0.0.0.0/0
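
The same rule expressed as a hedged CLI call – sg-xxxxxxxx is a placeholder Security Group ID:

  # Allow inbound TCP port 81 from anywhere; mirrors the rule above.
  aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp \
    --port 81 \
    --cidr 0.0.0.0/0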

2. Outbound Rules. Outbound rules filter traffic based on a destination packet’s IP address or security group and destination port. An example outbound rule that allows unfettered tcp access to the Internet is below:

  • Type: All TCP Rule
  • Protocol: TCP
  • Port Range: 0 – 65535
  • Destination: Anywhere: 0.0.0.0/0

An example outbound rule that allows only access to HTTP resources on the Internet is below:

  • Type: HTTP
  • Protocol: TCP
  • Port Range: 80
  • Destination: Anywhere: 0.0.0.0/0

Notice that we allow port 80 as the destination port but no other ports.
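
As with the inbound example, here is a hedged CLI call for the HTTP-only outbound rule – sg-xxxxxxxx is again a placeholder:

  # Allow outbound TCP port 80 to anywhere; mirrors the HTTP-only
  # outbound rule above.
  aws ec2 authorize-security-group-egress \
    --group-id sg-xxxxxxxx \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0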

If you are familiar with EC2-Classic, the differences from EC2-Classic Security Groups are in the VPC Security Groups User Guide under “VPC Security Group Differences.”

Auto Scaling Groups and Launch Configurations

Auto Scaling Groups and Launch Configurations in VPC differ only slightly from Auto Scaling Groups and Launch Configurations in EC2-Classic. The two important differences are described below:

  • An Auto Scaling Group must have one or more associated subnets in order to launch instances.
  • A Launch Configuration includes an “IP Address Type” – this allows instances to be automatically given a public IP address (a CLI sketch of both differences follows this list).
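
A hedged sketch of both differences is below – the names, AMI ID and subnet IDs are placeholders:

  # "IP Address Type" maps to --associate-public-ip-address on the
  # Launch Configuration; the Auto Scaling Group's subnets are supplied
  # via --vpc-zone-identifier. All IDs and names are placeholders.
  aws autoscaling create-launch-configuration \
    --launch-configuration-name vpc-demo-lc \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --associate-public-ip-address
  aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name vpc-demo-asg \
    --launch-configuration-name vpc-demo-lc \
    --min-size 2 --max-size 2 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-xxxxxxxx,subnet-yyyyyyyy"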

The image below depicts an environment that provides Internet access to instances in two subnets. The environment is comprised of a VPC, an Internet Gateway, a Route Table, a Network ACL, two subnets, an EC2-VPC Security Group, an Auto Scaling Group, a Launch Configuration and the instances that make up the Auto Scaling Group.

VPC - Internet Gateway and SG and ASG

Amazon Adds Auto Scaling Support to AWS Management Console

… and … two of my favorite Auto Scaling tricks …

I hope that exposing Auto Scaling in the console will increase adoption of Amazon’s Auto Scaling – using Amazon’s EC2 service without Auto Scaling is failing to leverage Amazon’s strongest offering. Here are two of my favorite Auto Scaling Group uses:

  1. use auto-scaling groups for everything – even single instances. Need to revert a machine to a pristine state? If your machine is part of a scaling group, you can use ec2-terminate-instances to terminate the dirty instance and then wait while Amazon provisions a pristine instance for your use.
  2. use auto-scaling groups to “go dark” – for instance, if your entire QA environment is going to be unused for a period of time (end of December, for example) simply set min-size, max-size and desired-capacity to 0 for each group (a sketch is shown below). When you return in January, simply reset the groups to their original sizes. This is also an excellent way to test if your application can withstand an outage – if your application scales back after January as if nothing happened, you’re using Amazon right.
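
A hedged sketch of “going dark” for a single group – the group name qa-app is a placeholder:

  # Scale a group to zero for the holidays; reset all three values
  # to their original sizes afterwards.
  aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name qa-app \
    --min-size 0 --max-size 0 --desired-capacity 0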

Here’s the link to Amazon’s announcement: http://aws.typepad.com/aws/2013/12/aws-management-console-auto-scaling-support.html