TippingPoint Digital Vaccine Laboratories
DID YOU KNOW... In December of 2007, Microsoft released seven security bulletins which fixed 11 new security vulnerabilities. TippingPoint and ZDI were credited with discovering a total of four of those vulnerabilities.

Cloud Security: Amazon's EC2 serves up 'certified pre-owned' server images

Background behind the problem

Cloud computing has quickly evolved from a hot industry buzzword into a multi-billion dollar market, with all the big names striving to grab a piece of the pie. Amazon, with its Elastic Compute Cloud (EC2), is arguably the dominant player in the cloud services market. Even the video streaming giant Netflix moved its operations into Amazon's EC2 rather than building out its own data centers. With such a high-growth technology sector, it's no wonder we are starting to see more and more malicious activity spreading through the cloud ecosystem.

Let us quickly go over a typical usage scenario from the viewpoint of an EC2 user, in order to better explain how this attack works. Users sign up for Amazon Web Services because they want to host something in the cloud, be that a web site, a web service, or just a data backup. More often than not they want to host a server in the cloud to offset the cost of purchasing and maintaining their own hardware. So how does this process work? When a user wants to create a virtual server, referred to as an instance, they usually have two choices: they can build their own software image to install from, or they can use one of the pre-existing software images available on EC2. These images are called AMIs, or Amazon Machine Images. They are software stacks created to help users deploy servers quickly. For example, a typical AMI might consist of a SuSE Linux image with the Apache web server and a MySQL database already installed and configured. Other images may come with WordPress, Joomla, Drupal, or any number of other content management systems pre-installed. You get the picture. AMIs let you deploy a server, for whatever purpose, rather quickly: a user just selects the type of instance they want (number of CPU cores plus memory) and then picks the image to deploy with. The whole process takes less than five minutes.

There are two major types of AMIs to work with. There are images built by Amazon itself, of which there are 170 at the time of this writing. Then there are public images, created by other members of the EC2 community. There are about 7,390 of these, all built by other users of Amazon's cloud services and shared with the rest of the community to help others. Or so you would assume ...

Certified Pre-owned AMI

On April 8th, 2011, Amazon sent an email to its Elastic Compute Cloud customers acknowledging the presence of compromised images in its community, notifying the members using the compromised AMIs of the danger they faced and the necessary course of action to remediate the threat.

The email identifies the affected AMI and warns against any continued use of it. It suggests, rightfully so, that any server instance running the infected image should for all intents and purposes be considered compromised, and that all services running on those instances should be migrated to new (clean) images. This is the right recommendation to make, as nobody can guarantee that the affected servers have not already been compromised. Naturally, Amazon does not disclose how many instances were launched from this AMI, but judging from the sheer size of EC2 and the number of servers hosted there, we can't help but assume the number is not small. The infected image consists of an Ubuntu 10.04 server running Apache, MySQL, and PHP. This is a typical LAMP server setup, so we assume a lot of users opted for it, especially those hosting a web site. To make things worse, the image appears to have been published in October of 2010, six months before we heard anything about the problem.

So what exactly happened here? An EC2 user who goes by the name of guru created this image with the software stack he uses most often and then published it to the Amazon AMI community. This would all be fine and dandy if it wasn't for one simple fact: the image was published with his SSH key still on it. This means that the image publisher, in this case guru, could log into any server instance running his image as the root user. The keys were left in /root/.ssh/authorized_keys and /home/ubuntu/.ssh/authorized_keys. We refer to the resulting image as 'certified pre-owned'. The publisher claims this was purely an accident, a mere result of his inexperience. Whether or not that is true, the incident exposes a major security hole within the EC2 community.
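A leftover publisher key of this kind is easy to check for: list every public key in the usual authorized_keys locations and compare them against the keys you actually provisioned. The sketch below demonstrates the idea on throwaway files (the key strings and file names are illustrative, not taken from Amazon's advisory); on a real instance you would scan /root/.ssh/authorized_keys and every /home/*/.ssh/authorized_keys.

```shell
#!/bin/sh
# Sketch: flag SSH public keys that we did not provision ourselves.
# Demonstrated against throwaway files in a temp directory.

tmp=$(mktemp -d)

# The keys we actually provisioned (illustrative values, not real keys).
cat > "$tmp/known_keys" <<'EOF'
ssh-rsa AAAAB3ourkey1 admin@ourcompany
EOF

# An authorized_keys file as it might look on a 'pre-owned' image:
# our own key plus a leftover publisher key.
cat > "$tmp/authorized_keys" <<'EOF'
ssh-rsa AAAAB3ourkey1 admin@ourcompany
ssh-rsa AAAAB3gurukey guru@publisher
EOF

# Any key not present in the known-good list gets flagged.
while IFS= read -r key; do
    case "$key" in ''|'#'*) continue ;; esac   # skip blanks and comments
    grep -qF -- "$key" "$tmp/known_keys" \
        || echo "ROGUE KEY: $key"
done < "$tmp/authorized_keys"
```

The same loop, pointed at the real paths, takes seconds to run and would have caught this particular image immediately.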

While the ability to publish and use community-created AMIs is a nice addition to EC2, it also exposes the community to a wide range of security vulnerabilities. A foreign SSH key injected into ~/.ssh/authorized_keys, like the one described in Amazon's email, is something most users would probably never notice. The vast majority of users will simply select the AMI they want, pick their instance type, and off they go. They will not take the time to do a security audit of the system; they might not even know how. Most businesses are rushing to get their products to market, and the cloud is just another shortcut to getting their servers up and running without having to worry about machine maintenance. The less time they spend configuring systems, the sooner they can get their product out the door, so it makes perfect sense for them to select a community-released AMI if it meets their software requirements. This is exactly why we think this type of attack on the EC2 community can prove to be particularly effective.

Tip of the iceberg

Whether this was an attack or not is uncertain. I suspect it was just an honest mistake, but that is beside the point. What we do know is that if something this transparent took six months to catch, then the cloud community is due for a rude awakening. This exposes a real security vulnerability within the community that needs to be addressed as soon as possible. Currently there is no true way for EC2 customers to trust any of the publicly published AMIs. How could they? These images are created by unknown third parties, and there is no real way of knowing how they were built. A truly malicious user could easily plant a well-hidden backdoor that might never get discovered. They could recompile the SSH daemon with a backdoor that grants their user access to the system; such a backdoor would be near impossible to detect. An attacker could also put a backdoor into the kernel itself, install a rootkit, create a trojan that phones home, and so on. The possibilities are endless, and the security measures are unfortunately too few.

The point here is that any AMI could contain a backdoor so well hidden that the community would probably never detect it. This sort of vulnerability allows for blanket attacks. Just as the AMI mentioned earlier in this article was built from widely used software, so could a truly malicious AMI be. Malicious entities could flood the EC2 community with hundreds of infected AMIs carrying the most popular software stacks, such as LAMP servers, CMS servers, or video/audio streaming servers. These AMIs would then be consumed by unsuspecting cloud users, and their machines compromised.
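One of the few defenses against a recompiled sshd or similar binary tampering is checksum verification against a trusted baseline: a rebuilt binary will not match the checksum recorded for the distribution's package (on Ubuntu, `debsums -c` automates this against the package database). The idea can be sketched with plain sha256sum, demonstrated here on stand-in files rather than real system binaries:

```shell
#!/bin/sh
# Sketch: detect a tampered binary by comparing checksums against a
# baseline recorded from a known-clean image. Stand-in files are used
# in place of real system binaries.

tmp=$(mktemp -d)

# Stand-ins for binaries on a known-clean image.
printf 'clean sshd build\n' > "$tmp/sshd"
printf 'clean bash build\n' > "$tmp/bash"

# Record the baseline once, while the image is known to be clean.
( cd "$tmp" && sha256sum sshd bash > baseline.sha256 )

# Later, an attacker swaps in a backdoored sshd ...
printf 'backdoored sshd build\n' > "$tmp/sshd"

# ... and verification catches the mismatch.
( cd "$tmp" && sha256sum -c baseline.sha256 2>/dev/null ) \
    || echo "tampering detected"
```

The catch, of course, is that the baseline must come from somewhere trustworthy; for a community AMI of unknown provenance, there is no such baseline, which is precisely the problem.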

Cloud security is no small issue, and the problem of compromised AMIs is not one to take lightly. We hope this incident proves to be a tipping point (no pun intended) for the way EC2 manages AMIs, and that the Amazon EC2 Security Team finds a way to deal with these challenges. One possible solution would be for Amazon to take a more active role in creating the AMIs the community demands, perhaps via a form where customers can request certain types of images that Amazon employees then build. Another would be to provide an easy way to create images from scratch, perhaps through automated install scripts that build software stacks based on customer needs. Yet another would be an open community that peer-reviews published AMIs, although that could prove to be an insurmountable task. As it currently stands, the EC2 community AMIs are unsafe to use. We suggest that all EC2 users carefully check their running instances for problems such as the ones we described and, ideally, create and install their own AMIs.

Tags: backdoor,cloud,vulnerability
Published On: 2011-04-11 17:00:00

Comments

  1. Simon commented on 2011-04-12 @ 15:57

    Well written and informative article. Hopefully articles and incidents like this will make people sit up and think about their cloud/virtualization security....

  2. haroon commented on 2011-04-12 @ 17:04

    We demo'd this in 2009 with our "clobbering the cloud" presentation: http://www.sensepost.com/blog/3797.html

    We uploaded an Evil AMI, and promoted it to the front page to get more click action..

  3. @somic commented on 2011-04-12 @ 19:39

    You did not mention security groups in your post. I can't recall default setup for new accounts in EC2, but one can easily start instances in EC2 with SSH only allowed from one IP address. Additionally, such access can be controlled on demand - allow access, log in, do some work, log out, disable access.

    Why AWS did not mention security groups (their own feature!) as a way to mitigate this attack vector in their email is beyond me.