Building ElasticBeanstalk using CloudFormation

The purpose of this blog post is to provide the reader with an understanding of how to build an ElasticBeanstalk application utilizing a CloudFormation Stack. In particular, the post describes:

  • the resources required in a CloudFormation stack, including the
    • AWS::ElasticBeanstalk::Application Resource
    • AWS::ElasticBeanstalk::Environment Resource
  • the relationship between these resources

After reading this blog post you should be able to build an ElasticBeanstalk application with multiple environments using a CloudFormation file. Subsequent posts will describe methods of deploying code to the ElasticBeanstalk application.

Resources required to support an ElasticBeanstalk Application

Prerequisite Resources

An ElasticBeanstalk application will require some underlying infrastructure, such as a VPC and subnets and, typically, an Internet Gateway, NAT Gateway, Route Tables and Route Table Associations.
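A minimal sketch of these prerequisites is below – the resource names and CIDR ranges are illustrative, and the routing resources are omitted for brevity:

VPC:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: 10.0.0.0/16
PublicSubnet01:
  Type: AWS::EC2::Subnet
  Properties:
    CidrBlock: 10.0.0.0/24
    VpcId: !Ref VPC
PublicSubnet02:
  Type: AWS::EC2::Subnet
  Properties:
    CidrBlock: 10.0.1.0/24
    VpcId: !Ref VPC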

Required Resources

AWS::ElasticBeanstalk::Application

The actual ElasticBeanstalk application. An Application serves as a container for Environments and Application Versions.

ElasticBeanstalkApplication:
  Type: AWS::ElasticBeanstalk::Application
  Properties:
    ApplicationName: !Ref AWS::StackName

The screenshot below shows the ElasticBeanstalk Application created by an AWS::ElasticBeanstalk::Application resource.

ElasticBeanstalk - MultipleEnvs - Application.png

AWS::ElasticBeanstalk::Environment

An “Environment” is a subset of an ElasticBeanstalk application. “Environments” are shown in the AWS Console as parts of an application. For each “Environment” AWS will launch a CloudFormation stack containing the components (typically an Auto Scaling Group and its supporting resources) required to run your application.

ElasticBeanstalkEnvironment:
  Type: AWS::ElasticBeanstalk::Environment
  Properties:
    ApplicationName: !Ref ElasticBeanstalkApplication
    TemplateName: !Ref ElasticBeanstalkConfigurationTemplate

For a given application you will likely have a “Prod” environment and a “QA” environment. In the image below, a CloudFormation file containing two “AWS::ElasticBeanstalk::Environment” resources is used to construct “QA” and “Prod” environments – each of these Environments can have a unique configuration and utilize a different version of a codebase.

ElasticBeanstalk - MultipleEnvs - Environments Highlight.png
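As a sketch, the two Environments shown above can be built from two AWS::ElasticBeanstalk::Environment resources that share an Application but reference separate Configuration Templates – the resource and environment names here are illustrative, and the Configuration Template resource itself is covered in the next section:

ElasticBeanstalkQAEnvironment:
  Type: AWS::ElasticBeanstalk::Environment
  Properties:
    ApplicationName: !Ref ElasticBeanstalkApplication
    EnvironmentName: app-qa
    TemplateName: !Ref ElasticBeanstalkQAConfigurationTemplate
ElasticBeanstalkProdEnvironment:
  Type: AWS::ElasticBeanstalk::Environment
  Properties:
    ApplicationName: !Ref ElasticBeanstalkApplication
    EnvironmentName: app-prod
    TemplateName: !Ref ElasticBeanstalkProdConfigurationTemplate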

AWS::ElasticBeanstalk::ConfigurationTemplate

A “Configuration Template” is used to specify the resources required to build an Environment as well as the configuration of these resources. These configuration options include things such as:

  • the “Solution Stack” (the Solution Stack determines the type of AMI used to run a given application – for instance, an AMI that runs PHP, Ruby, Python or Docker)
  • whether the Application will utilize an Elastic Load Balancer
  • the Min Size and Max Size of the Auto Scaling Group supporting the ElasticBeanstalk application, if the environment utilizes Auto Scaling
  • the VPC in which the ElasticBeanstalk application should reside, if the application resides in a VPC

The full list of “OptionSettings” is available here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html. An example Configuration Template is below:

ElasticBeanstalkProdConfigurationTemplate:
  Type: AWS::ElasticBeanstalk::ConfigurationTemplate
  Properties:
    ApplicationName: !Ref ElasticBeanstalkApplication
    SolutionStackName: 64bit Amazon Linux 2018.03 v2.8.4 running PHP 5.6
    OptionSettings:
      - Namespace: aws:autoscaling:asg
        OptionName: MinSize
        Value: '2'
      - Namespace: aws:autoscaling:asg
        OptionName: MaxSize
        Value: '2'
      - Namespace: aws:autoscaling:launchconfiguration
        OptionName: InstanceType
        Value: t2.micro
      - Namespace: aws:elasticbeanstalk:environment
        OptionName: EnvironmentType
        Value: LoadBalanced
      - Namespace: aws:ec2:vpc
        OptionName: VPCId
        Value: !Ref VPC
      - Namespace: aws:ec2:vpc
        OptionName: Subnets
        # !Join turns the individual subnet references into a comma-delimited string
        Value: !Join [ ",", [ !Ref PublicSubnet01, !Ref PublicSubnet02 ] ]
      - Namespace: aws:ec2:vpc
        OptionName: AssociatePublicIpAddress
        Value: 'true'

Link to CloudFormation File

The stack that I used to aid in understanding the use of CloudFormation to create AWS ElasticBeanstalk applications is available here: https://github.com/cloudavail/snippets/tree/master/aws/elasticbeanstalk/elasticbeanstalk_with_multiple_envs

Conclusion

If you have any questions about this particular blog post please feel free to post a question below or email blog@cloudavail.com.

Using AWS CloudFormation’s Transform Function

Why use CloudFormation’s Transform Function?

There are two good reasons for using CloudFormation’s “Transform” function to include files:

  1. Consistency. By including a snippet in each and every CloudFormation template, you ensure that the included code is the same, stack to stack.
  2. Code reuse. You won’t need to update code across multiple stacks when you need to make changes. You will still need to update each stack to pick up changes made to the included files – but you won’t have to update the actual code in each stack.

How to do this?

Creating a CloudFormation File that uses an Include

You need to include an Fn::Transform statement where the given file is to be included. An example include is below:

Fn::Transform:
  Name: AWS::Include
  Parameters:
    Location: s3://187376578462-fn-transform-include/ubuntu_ami.yaml

An example of an include in the “Mappings” section of a CloudFormation template would look like:

Mappings:
  Fn::Transform:
    Name: AWS::Include
    Parameters:
      Location: s3://187376578462-fn-transform-include/ubuntu_ami.yaml

Lastly, here is a screenshot of a CloudFormation file that uses an include – see line 29.

CloudFormation - Fn Transform
CloudFormation template that utilizes a Transform function to include a file.

Creating the Included File

You will need to create a file that will be included in a given CloudFormation stack. This file is going to be inserted where the Fn::Transform statement is – this is akin to “import” or “include” in a programming language or “catting” two files together in a *nix Operating System.

The included file should look akin to the following:

AWSRegionArch2AMI:
  us-east-1:
    '64': ami-ddf13fb0
  us-west-1:
    '64': ami-b20542d2
  us-west-2:
    '64': ami-b9ff39d9
CloudFormation - File to be Included
File to be included in a CloudFormation template.
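Once merged in, the mapping behaves like any mapping defined inline in the template; a sketch of a resource consuming it is below (the resource name and instance type are illustrative):

Instance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !FindInMap [ AWSRegionArch2AMI, !Ref 'AWS::Region', '64' ]
    InstanceType: t2.micro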

Uploading the Included File

The file that will be included needs to be uploaded to S3. You can do this using the aws s3 command – see below:

aws s3 cp ubuntu_ami.yaml s3://$ubuntu_ami_file_s3_path --region us-west-2
CloudFormation - Included File Upload
AWS S3 command uploading a file to be included in a CloudFormation template.
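For completeness, a sketch of the upload including the $ubuntu_ami_file_s3_path variable used above and in the deploy step below – the bucket name mirrors the earlier example and should be replaced with your own:

# illustrative bucket name and key; the deploy step reuses this variable
ubuntu_ami_file_s3_path="187376578462-fn-transform-include/ubuntu_ami.yaml"
aws s3 cp ubuntu_ami.yaml "s3://$ubuntu_ami_file_s3_path" --region us-west-2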

Creating the CloudFormation Stack with an Include

You’ll need to use the “aws cloudformation deploy” command to deploy or update the given template. An example is below:

aws cloudformation deploy --stack-name FunctionTransformInclude --template-file autoscaling_with_yaml_userdata.yaml --parameter-overrides ubuntuAMISMapping3Location=s3://$ubuntu_ami_file_s3_path --region us-west-2
CloudFormation - Fn Transform Launch Stack
AWS CloudFormation “Deploy” command creating a CloudFormation stack

Summary

I’m planning on using this for AMI mappings in particular, as well as for including sections of CloudFormation that might be better generated using code (for instance, user-data might be a consideration). I’ve yet to consider the use of “Fn::Transform / Include” to improve the security of stacks by removing passwords from templates.

If you have questions or comments – reach me at colin@cloudavail.com.

ELB Behavior when Utilizing Backend Instances with Self-Signed Certs

There are AWS questions that I don’t know the answer to – and sometimes these questions need answers. In a case about a week ago, the following question was posed:

How does an Amazon Web Services ELB function when the backend instances utilize a self-signed certificate?

I was lucky enough to have the time to investigate. If you are just looking for the answer, see “short answer” below. For instructions (and a CloudFormation file) allowing you to duplicate my work, see “long answer” further below.

Short answer:

Yes. The AWS ELB will work with backend instances that utilize a self-signed certificate.

Long answer:

If you’d like to duplicate the test, I utilized a CloudFormation file that builds this very infrastructure (an ELB, an Auto Scaling Group and Ubuntu instances running Apache, accepting HTTPS connections on port 443). You can get this file from my “Snippets” repository at https://github.com/colinbjohnson/snippets/tree/master/aws/elb/elb_backend_selfsigned_cert. A diagram describing the configuration is below:

ELB to BE HTTPS.png


After performing tests to ensure that the backend instances were healthy and serving requests, I wanted to dig a bit deeper to confirm that the data was, in fact, encrypted. I went ahead and ran some requests against the backend web servers and utilized tcpdump on a backend instance, capturing data on port 443. Please note that the Security Group utilized in testing only allows port 443 inbound, so I could have run this test without filtering on “port”. A screen capture of the data captured by tcpdump is shown below:

Backend Instance - tcpdump Capture.png
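For anyone reproducing the capture, the tcpdump invocation was along the lines of the sketch below (the interface name and output file name are assumptions):

# capture traffic to/from port 443 on the primary interface, writing to a pcap file
sudo tcpdump -i eth0 -n port 443 -w https_capture.pcap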

I ran another tcpdump capture and loaded the data into Wireshark for visualization – the result is shown below:

Backend Instance - Encrypted Data.png

Notice that the capture has no filters applied – and, specifically, that the payload is shown as “Encrypted Application Data.” If this were an HTTP connection I would be able to view the data being sent. After the packet capture my curiosity was satisfied – the body of HTTP requests was being encrypted in transit.

Conclusion:

If you require “end to end” encryption of data in transit you can utilize backend instances with self-signed certificates.


Creating VPCs and Subnets across Regions with a Single CloudFormation File

I’ve often encountered clients who want to utilize a single CloudFormation file to build VPCs and Subnets across different AWS Regions and different AWS Accounts. In this blog post I will describe exactly how to do this – as well as some of the pain points encountered when trying to utilize a single CloudFormation file to build VPCs and subnets in different regions and accounts. The post is divided into two parts – part one describes the solutions (and provides links to CloudFormation files stored in GitHub) and part two describes the solutions in more depth.

Part 1: a Single CloudFormation file for building VPC and Subnets in any Region or Account

The solution for building an any-region/any-account CloudFormation file containing a VPC and subnets differs depending on whether you need a CloudFormation file that is multi-region only or one that is both multi-region and multi-account. As a result, the blog post is divided into “Part 1-A,” which covers multi-region only, and “Part 1-B,” which covers any-region/any-account.

Part 1-A: a Single CloudFormation file for building VPC and Subnets in any Region

If you don’t have a requirement that you build VPCs and subnets across multiple accounts, the process is relatively straightforward:

First, you’ll create a mapping that maps each Region to the Availability Zones in which subnets can be created. Be careful here: in my own personal AWS account, I cannot create a subnet in “us-east-1a”. The end result looks something like the below:

"AWSRegion2AZ" : {
  "us-east-1" : { "1" : "us-east-1b", "2" : "us-east-1c", "3" : "us-east-1d", "4" : "us-east-1e" },
  "us-west-1" : { "1" : "us-west-1b", "2" : "us-west-1c" },
  "us-west-2" : { "1" : "us-west-2a", "2" : "us-west-2b", "3" : "us-west-2c" }
}

Second, for each subnet resource, you’ll look up its Availability Zone from the map using “Fn::FindInMap”. An example is below:

"PublicSubnet1" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::FindInMap" : [ "AWSRegion2AZ", { "Ref" : "AWS::Region" }, "1" ] }
    "CidrBlock" : "10.0.0.0/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},

A link to a CloudFormation template that will create a VPC and subnets in any AWS region: https://github.com/colinbjohnson/snippets/tree/master/aws/cloudformation/multi_region_vpc_cloudformation

Below I’ve used the CloudFormation file to create VPCs and subnets in 3 AWS regions: us-west-2, us-east-1 and us-west-1.

Part 1-B: a Single CloudFormation file for building VPC and Subnets in any Region or any Account

A CloudFormation file that builds a VPC and subnets in any Region or Account is going to be similar to the above (using a Map with defined Availability Zones) with one exception – each account may have different Availability Zones where subnets can be built. An example: my own account allows VPC subnets in the us-east-1b, us-east-1c, us-east-1d and us-east-1e Availability Zones (notice: no subnets can be built in us-east-1a), whereas a different account might allow subnets in the us-east-1a, us-east-1b and us-east-1c Availability Zones. To account for this difference, you’ll need a map that provides a VPC subnet to Availability Zone mapping for both Region and Account. The solution is shown below:

First, create a Map that accepts “Regions” and “Accounts” and returns a list of Availability Zones where VPC Subnets can be built.

"RegionAndAccount2AZ": {
  "us-east-1" : { 
    "Production" : [ "us-east-1b", "us-east-1c", "us-east-1d" ] ,
    "Development" : [ "us-east-1b", "us-east-1c", "us-east-1d" ]
  },
  "us-west-2" : { 
    "Production" : [ "us-west-2a", "us-west-2b", "us-west-2c" ] ,
    "Development" : [ "us-west-2a", "us-west-2b", "us-west-2c" ]
  }
},
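Note that the { "Ref" : "Account" } lookups used below depend on an “Account” parameter being defined in the template; a sketch of that parameter, with allowed values mirroring the map keys above, is:

"Parameters" : {
  "Account" : {
    "Type" : "String",
    "AllowedValues" : [ "Production", "Development" ]
  }
},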

Second, for each resource that needs to be built in a specific Availability Zone, you’ll need to select an item from the RegionAndAccount2AZ list:

"PublicSubnet1" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "0", { "Fn::FindInMap" : [ "RegionAndAccount2AZ", { "Ref" : "AWS::Region"}, { "Ref" : "Account" } ] } ] },
    "CidrBlock" : "10.0.0.0/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},
"PublicSubnet2" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "1", { "Fn::FindInMap" : [ "RegionAndAccount2AZ", { "Ref" : "AWS::Region"}, { "Ref" : "Account" } ] } ] },
    "CidrBlock" : "10.0.0.128/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},

And … here is a link to the CloudFormation template that will create a VPC and subnets in different AWS regions and in different AWS accounts: https://github.com/colinbjohnson/snippets/tree/master/aws/cloudformation/multi_region_and_account_vpc_cloudformation

Part 2: Why is this all required?

Any time you have complexity, it is important to keep your focus on what actually needs to be done and why. I’ll describe the reasons why the additional complexity is required below:

  1. We need to ensure that subnets created using CloudFormation are placed in different Availability Zones. AWS doesn’t provide a facility for doing this automatically. Result: we must define Availability Zones when creating subnet resources.
  2. AWS provides no mechanism for getting the Availability Zones in which subnets can be created. Result: we must manually provide a list of Availability Zones where subnets can be created. We do this using a map.
  3. If multiple accounts are used, we run into a problem: the manually provided list of Availability Zones where subnets may be created is potentially different in each account. Result: we need a map that allows CloudFormation to select Availability Zones where subnets can be built and that takes the “account” into account.

I’ve described the solutions to each problem above in more detail below.

Choosing Subnets Yourself

If you simply define subnets without specifying an “AvailabilityZone” property for each subnet, there is a good chance that Amazon will create these subnets in the same Availability Zone. An example of defining subnets without an AvailabilityZone property is below:

"PublicSubnet1" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "CidrBlock" : "10.0.0.0/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},
"PublicSubnet2" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "CidrBlock" : "10.0.0.128/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},

There is a pretty good chance that this will cause two problems:

  1. If you are creating resources that use these subnets – such as an ELB – the ELB resource creation will fail due to the fact that an ELB can have only one subnet per AZ. In the case above, if PublicSubnet1 and PublicSubnet2 are both in us-east-1b, ELB creation will fail.
  2. You may end up with an availability problem as a result of resources being created in the same Availability Zone. For example, if PublicSubnet1 and PublicSubnet2 are both in us-east-1b and you create an Auto Scaling Group that utilizes both PublicSubnet1 and PublicSubnet2 – your instances will still all be brought up in us-east-1b.

The solution would be to use “Fn::GetAZs” but…

“Fn::GetAZs” Returns AZs Where Subnets Can’t Be Placed

To solve the problem of placing subnets in the same Availability Zone, you’d think that you want to use Amazon’s “Fn::GetAZs”. For example, you’d call { "Fn::GetAZs" : { "Ref" : "AWS::Region" } } (this returns a list of Availability Zones) and then you’d build PublicSubnet1 in the first Availability Zone, PublicSubnet2 in the second Availability Zone and so on. An example is below:

"PublicSubnet1" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "0", { "Fn::GetAZs" : { "Ref" : "AWS::Region" } } ] },
    "CidrBlock" : "10.0.0.0/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},
"PublicSubnet2" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "1", { "Fn::GetAZs" : { "Ref" : "AWS::Region" } } ] },
    "CidrBlock" : "10.0.0.128/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},

However, if you use Amazon’s “Fn::GetAZs” – you’ll get a list of all Availability Zones – not just those Availability Zones in which a subnet can be created. As an example, if I call “Fn::GetAZs” using my own account in the us-east-1 region, the return values are [ “us-east-1a”, “us-east-1b”, “us-east-1c”, “us-east-1d”, “us-east-1e” ]. A problem arises because the “us-east-1a” Availability Zone isn’t available to me for subnet creation, so CloudFormation stack creation fails. Here’s a screenshot of that behavior:

FnGetAZs Returns AZs Where Subnets Cant Be Built.png

“Mapping Method” to the Rescue

Using a Map solves this mess. The solution isn’t ideal, as it requires one-time creation of a map containing a list of Availability Zones where subnets can be created. This map does allow you to:

  1. ensure subnets are built in different AZs.
  2. provide support for multiple regions.

Availability Zones that Support VPC Subnets are Different Per Account

If you require VPCs built in different accounts you’ll be required to take one additional step – specifically, you’ll need to provide an Availability Zone map per account, because each account may have different Availability Zones available for subnet creation. An example of this mapping is below:

"Mappings" : {
  "RegionAndAccount2AZ": {
    "us-east-1" : { 
      "Production" : [ "us-east-1a", "us-east-1b", "us-east-1c" ] ,
      "Development" : [ "us-east-1b", "us-east-1c", "us-east-1d" ]
    },
    "us-west-2" : { 
      "Production" : [ "us-west-2a", "us-west-2b", "us-west-2c" ] ,
      "Development" : [ "us-west-2a", "us-west-2b", "us-west-2c" ]
    }
  }
},

And an example of using this mapping to place a subnet in the correct Availability Zone:

"PublicSubnet1" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "0", { "Fn::FindInMap" : [ "RegionAndAccount2AZ", { "Ref" : "AWS::Region"}, { "Ref" : "Account" } ] } ] },
    "CidrBlock" : "10.0.0.0/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},
"PublicSubnet2" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "AvailabilityZone" : { "Fn::Select" : [ "1", { "Fn::FindInMap" : [ "RegionAndAccount2AZ", { "Ref" : "AWS::Region"}, { "Ref" : "Account" } ] } ] },
    "CidrBlock" : "10.0.0.128/25",
    "VpcId" : { "Ref" : "VPC" }
  }
},

And… in Conclusion:

  1. The situation of building VPCs and subnets across regions and accounts using CloudFormation will likely improve. Examples of potential improvements might include a version of “Fn::GetAZs” that returns only Availability Zones where subnets can be built, or for loops that can build 1 to “x” subnets.
  2. The techniques described in this blog post can likely be improved by using conditionals or lambda. If anyone does this – let me know and I’ll update the post.
  3. Other tools that support “shelling out” or running arbitrary commands may provide better mechanisms that allow a single file to create VPCs and Subnets – although using a tool outside of CloudFormation may not be an option you are open to considering.

Hope that you have found this post useful – if you have questions or comments please feel free to send me an email: colin@cloudavail.com.

Allowing Long Idle Timeouts when using AWS ElasticBeanstalk and Docker

A client I work with had a requirement for HTTP connection timeouts of 60 seconds or more when running Docker on ElasticBeanstalk. Specifically, one of the Engineers noticed that any HTTP requests taking 60 seconds or more to complete were not being returned by the ElasticBeanstalk application.

Identifying the Cause of the 60 Second Dropped Connections:

The 60 second timeout is actually set in two locations, described below:

  1. The Amazon Elastic Load Balancer, which uses a default “Idle Timeout” value of 60 seconds. The “Idle Timeout” of a given Elastic Load Balancer can be changed easily (http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-idle-timeout.html) – a sketch of doing so from within ElasticBeanstalk follows this list.

    ElasticBeanstalk - ELB - 600 Second Timeout
    ElasticBeanstalk ELB configured with 600 second Idle Timeout.
  2. The nginx application that acts as a proxy server in front of the Docker container also has a default timeout. The nginx default timeout is not exposed for configuration – you’ll need to modify the nginx configuration through the use of an .ebextensions file or another method. This is also described within this blog post.
ElasticBeanstalk - HTTP Request Flow
ElasticBeanstalk – HTTP Request Flow
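As referenced above, a sketch of raising the ELB “Idle Timeout” from within ElasticBeanstalk itself, using an option setting in an .ebextensions file – the 600 second value mirrors the screenshot above (I believe the relevant namespace and option name are as shown, but confirm against the general options documentation):

option_settings:
  aws:elb:policies:
    ConnectionSettingIdleTimeout: 600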

Setting the nginx Timeouts:

The method I used for setting the nginx timeouts can be described, at a high level, as:

  1. creating an “ebextension” file that modifies the default nginx configuration used by ElasticBeanstalk. ebextension files are used by Amazon to modify the configuration of ElasticBeanstalk instances.
  2. creating a ZIP format “package” containing a Dockerrun.aws.json file as well as the .ebextension file used to modify the ElasticBeanstalk configuration.

The details are below:

  • Create an “ebextension” file within the root of your project – the file should be at the path .ebextensions/nginx-timeout.config.
  • The content of the file is described below:
files:
  "/etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      proxy_connect_timeout       600;
      proxy_send_timeout          600;
      proxy_read_timeout          600;
      send_timeout                600;
commands:
  "00nginx-create-proxy-timeout":
    command: "if [[ ! -h /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ]] ; then ln -s /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ; fi"
  • Create an application package by running the following from the root of your application’s directory:
    • zip -r ../application-name-$version.zip .ebextensions Dockerrun.aws.json
    • the command above will package the “Dockerrun.aws.json” file as well as the contents of the .ebextensions directory – a minimal example of a Dockerrun.aws.json file is shown after this list
  • Upload the resulting application-name-$version.zip to AWS ElasticBeanstalk to deploy your application with the nginx timeouts.
  • Note that I’ll be continuing to do testing around the ideal values for the proxy_connect_timeout, proxy_send_timeout, proxy_read_timeout and send_timeout values.
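For reference, a minimal single-container Dockerrun.aws.json might look like the following sketch (the image name and port are illustrative, not taken from the client’s application):

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "example/application"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}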

Creating Expiring IAM Users

A common question I get from folks is “how do I create a temporary AWS IAM user” or “how do I grant access to x service for only a period of time.” Typically, you’d want to use the IAM “Condition” element, and I’ll demonstrate how to do this in the remainder of this blog post, using the example of creating an IAM “Power Users” policy that expires on January 31st of 2016.

Understanding IAM “Power Users” Policy:

The reason I like the IAM “Power Users” policy is that many organizations are moving to the model of fully empowered Engineering staff (meaning organizations where all Engineering staff have Administrator access). This model works well for many organizations, with one exception – allowing all Engineering staff to reset passwords means that when an employee leaves an organization you can’t be certain you have removed their access completely. Consider the case where an ex-employee has just created a new user or performed a password reset – they may know that account’s username and password even after their own account has been disabled or removed. The other reason I like “Power Users” is that – despite the “deny” of actions on IAM resources – IAM users can still reset their own passwords (see: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_user-change-own.html#ManagingUserPwdSelf-Console).

Creating the Expiring IAM Power User Policy:

To create an expiring “Power User” IAM policy, do the following:

  1. Login to the IAM Console within the AWS Console.
  2. Select “Policies” from the left-hand navigation and click “Create Policy”
    1. Select “Copy an AWS Managed Policy”
    2. In “Step 2: Set Permissions”, select the “Power User” policy.
    3. In “Step 3: Review Policy”, add the following Condition to the policy’s Statement:
      "Condition": {
        "DateLessThan": {
          "aws:CurrentTime": "2016-01-31T12:00:00Z"
        }
      }
      The outcome will look like the image below:
      IAM Policy - Adding Condition
  3. Click “Create Policy”

The magic here is that the Statement within the IAM Policy will only be allowed when the condition is true.
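Put together, the resulting policy document should look roughly like the sketch below. This assumes the “Power User” managed policy of the time, which allowed everything except IAM actions via “NotAction” – verify against the actual policy you copied:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "NotAction": "iam:*",
      "Resource": "*",
      "Condition": {
        "DateLessThan": {
          "aws:CurrentTime": "2016-01-31T12:00:00Z"
        }
      }
    }
  ]
}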

Attach the Policy to a User or Group:

  1. Login to IAM Console within the AWS Console.
  2. Select “Policies” from the left-hand navigation and select the “Power User” policy you had created previously. See image below:
    IAM Policy - Filtered and Power User Selected
  3. Scroll down to the “Attached Entities” section, click “Attach Entity” and add the Users (or Groups) to which you wish to attach this policy. See image below:
    IAM Policy - Attached Entities

Notes:

  1. After “Policy Expiration” the IAM user will still be able to log in to the AWS Console. They won’t have permissions to issue any API commands, however.

References:

  1. IAM “Conditions” Reference is available here: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Condition.