How Hackers Can Attack Your Amazon Web Services (AWS) Resources

AWS Resources: Attacked Directly Through the Network

In our previous post, Leveraging Amazon Web Services Securely, we focused primarily on Identity and Access Management (IAM).

Stolen credentials remain the most common attack action in cybersecurity breaches. However, attackers still have other routes to your systems and data.

Once you secure your AWS IAM, where do you focus next? In this piece, we will look at securing AWS resources that can be attacked directly through the network.

What Do We Mean by AWS Resources?

Specifically, we’ll cover Elastic Compute Cloud (EC2) instances, Simple Storage Service (S3), and Relational Database Service (RDS). These are the primary services that allow you to operate your applications and store your data.

The good news is that Amazon controls access inside the AWS environment via IAM and Virtual Private Clouds (VPCs). However, these resources must often be exposed to the public Internet to be useful, and this is where your configuration choices matter.

Elastic Compute Cloud (EC2) Security

Let’s start with EC2. AWS EC2 instances run your workloads and can be kept private or exposed to the public Internet.

When exposed, additional configuration is required to secure not only those instances but also the other instances and data resources in your environment. The good news is that AWS provides multiple layers to control access.

Virtual Private Clouds (VPCs) allow you to group your instances together logically, where access outside of the ‘cloud’ requires specific defined gateways.

This is now the default mode for AWS, meaning any new instance is automatically placed in your account’s default VPC for that region.

We recommend leaving the default VPC empty and instead creating specific VPCs for applications and services. Within each VPC, you can also specify multiple subnets, and we recommend grouping similar types of instances together on the same subnet.

For example, Internet-facing web servers could sit in one subnet, application servers in another subnet, and NoSQL servers in yet another subnet.
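To make that layout concrete, here is a minimal boto3 sketch; the region and CIDR ranges are illustrative assumptions rather than recommendations:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create a dedicated VPC for this application rather than using the default VPC.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# One subnet per tier: web (Internet facing), application, and NoSQL.
web_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
app_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")
nosql_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.3.0/24")
```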

The advantage of this approach is that you can control the network traffic that flows in and out of each subnet with Network Access Control Lists (NACLs). Think of a NACL as a traditional firewall where you can control allowed sources, destinations, and ports for both inbound and outbound network traffic.

In our previous example, the subnet with the Internet-facing web servers can have a NACL that allows access from the Internet but limits traffic to web ports and high ephemeral ports for client connections. The subnet containing the application servers can be limited to communicating only with the web server subnet and the NoSQL subnet, not with the Internet. A similar NACL can be set up for the NoSQL subnet, limiting even internal traffic to the appropriate ports.


Figure 1. Example inbound NACL rules for the Internet-facing web server subnet.
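As a sketch of what rules like those in Figure 1 look like in code, the web subnet’s inbound entries could be created with boto3; the NACL ID and rule numbers here are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

NETWORK_ACL_ID = "acl-0123456789abcdef0"  # hypothetical placeholder

# Allow inbound HTTPS from anywhere (Protocol "6" = TCP).
ec2.create_network_acl_entry(
    NetworkAclId=NETWORK_ACL_ID,
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)

# Allow inbound high ephemeral ports for return traffic (NACLs are stateless,
# so responses to outbound connections need their own inbound rule).
ec2.create_network_acl_entry(
    NetworkAclId=NETWORK_ACL_ID,
    RuleNumber=110,
    Protocol="6",
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)
```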

Another facility you have likely already encountered is the Security Group (SG). SGs are applied directly to an EC2 instance, and if you don’t specify an SG at instance launch, a new one will be created and assigned for you. SGs are like a local firewall controlling traffic in and out of the instance. You can assign an SG to multiple instances, giving you another way to limit which instances can communicate with each other and on which ports. You can even specify which SGs can communicate with each other if desired.
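To illustrate that SG-to-SG referencing, the following boto3 sketch creates an application-tier SG that accepts traffic only from the web tier’s SG; the VPC ID, SG ID, and port are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"   # hypothetical placeholder
WEB_SG_ID = "sg-0aaaaaaaaaaaaaaaa"  # hypothetical web-tier SG

# Security group for the application tier.
app_sg = ec2.create_security_group(
    GroupName="app-tier",
    Description="Application servers",
    VpcId=VPC_ID,
)

# Allow the app tier to receive traffic on port 8080 only from the web tier,
# referencing the web tier's SG itself rather than an IP range.
ec2.authorize_security_group_ingress(
    GroupId=app_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": WEB_SG_ID}],
    }],
)
```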

Often there is confusion about when to use NACLs and when to use SGs. We recommend using NACLs to control the allowed sources and destinations, limiting by port only when there are a few consistent rules that can be applied, as in our web server subnet example. SGs are then used to define more granular rules over ports.

One word about SGs, though: they seem to multiply almost like Tribbles. Because a new SG can be created every time an instance is launched, we recommend creating at least one default SG and reusing one of those defaults every time. Too many unused, unlabeled SGs lead to extra work and confusion when diagnosing connectivity problems, and they can mask security risks when an inappropriately open SG is used on an instance.

PROTIP: Remote Access

It’s probably obvious that, through these same facilities, you can allow your operations team to maintain services through SSH, VNC, RDP, etc. However, in more complex environments with multiple subnets, instances, and SGs, we recommend the Bastion node approach for maintenance access.

For example, if you have mostly Linux instances in your environment and use SSH to connect remotely, instead of opening port 22 through all NACLs and SGs from your operations environment (e.g., the office), you would allow access only to a single Bastion instance or a small group of them. SSH would then be allowed through the environment only from the internal Bastion subnet.

The advantage is that you only need one NACL and/or SG configured to allow SSH from the outside, and, of course, you only need to maintain public keys on the Bastion node(s). By eliminating redundant configuration across all remotely accessed instances, you ensure better consistency and security.
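A minimal boto3 sketch of that internal rule, with hypothetical placeholder SG IDs, might look like this:

```python
import boto3

ec2 = boto3.client("ec2")

INTERNAL_SG_ID = "sg-0bbbbbbbbbbbbbbbb"  # hypothetical SG on internal instances
BASTION_SG_ID = "sg-0cccccccccccccccc"   # hypothetical SG on the Bastion node(s)

# Permit SSH (port 22) to internal instances only when the source
# is an instance carrying the Bastion's security group.
ec2.authorize_security_group_ingress(
    GroupId=INTERNAL_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": BASTION_SG_ID}],
    }],
)
```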

Simple Storage Service (S3) Security

Reports circulating over the past year have brought to light the potential exposure of data via publicly accessible S3 buckets.

The latest data suggests that as many as 7% of all S3 buckets may be publicly accessible.

That’s a lot of data that bucket owners may not want open to everyone. Of course, preventing this type of public access is as easy as a) disabling public access entirely or b) at least disabling anonymous access. For data that should only be accessed internally, whether by program or human, this makes sense. But what about data that needs to be accessible more globally?

As an example, a common use case for S3 buckets is serving static content for a web site. Doing so means objects in your bucket must be exposed globally, but how do you prevent everyone from having direct access to your bucket? Fortunately, the consistent and ever-present AWS policies can help here. By applying a policy directly to the bucket, you can ensure that anonymous read access is only available via your web site by specifying a ‘referer’ condition.

Figure 2. An example policy limiting access to an example bucket from www.versprite.com.
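A sketch of applying such a policy with boto3 follows; the bucket name is a hypothetical placeholder, and the referer pattern mirrors the example in Figure 2:

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-static-content"  # hypothetical placeholder

# Allow anonymous GetObject only when the request's Referer header
# indicates the object was requested via the web site.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetFromSiteOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {
            "StringLike": {"aws:Referer": ["https://www.versprite.com/*"]}
        },
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Keep in mind that the Referer header is supplied by the client and can be spoofed, so treat this as a deterrent against casual hotlinking rather than strong access control.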

This is just one example of a bucket policy for controlling access; the use cases and examples available in the AWS documentation are extensive.

Data stored in S3 that is sensitive or of high business or operational value will not intentionally be exposed publicly; however, extra steps should be taken to secure that data. For this, we can encrypt S3 data both ‘at rest’ (when stored) and ‘in transit’ (when transmitted). Encrypting at rest is easy and should almost always be used: S3 provides AES-256 encryption when you simply enable that control.

Alternatively, you can use your own keys, managed through AWS Key Management Service (KMS), to encrypt your buckets. By default, S3 traffic through the S3 REST API is encrypted via SSL; however, non-SSL access is still allowed. To ensure all operations (specifically getting and putting objects) are encrypted, use the aws:SecureTransport condition in your bucket policy.
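A minimal boto3 sketch of both controls, using a hypothetical bucket name (note that put_bucket_policy replaces any existing bucket policy, so merge statements in practice):

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-static-content"  # hypothetical placeholder

# Enable default AES-256 (SSE-S3) encryption at rest for all new objects.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)

# Deny any request that arrives over plain HTTP, forcing encryption in transit.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```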

Relational Database Service (RDS) Security

AWS-provided relational databases are a convenient way to offer structured database services for your applications, but they will often contain important and sensitive data, just like S3. Fortunately, RDS instances can have their access secured like EC2 instances, via inclusion in a VPC with protection from NACLs and SGs.

Also, like S3 buckets, the data stored within can be encrypted with built-in AES-256. This encryption at rest also applies to any database snapshots. Again, like S3, there are very few reasons not to encrypt your RDS instances. Finally, depending on the database engine, data in transit can also be encrypted with SSL. Some database engines support their own in-transit encryption, so look for that in the AWS documentation.
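As a sketch, an encrypted, VPC-contained RDS instance might be provisioned like this with boto3; the identifiers, engine, instance class, and SG ID are all hypothetical assumptions:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",          # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.t3.small",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",        # fetch from a secrets store in practice
    StorageEncrypted=True,                  # AES-256 at rest; also covers snapshots
    PubliclyAccessible=False,               # keep the instance inside the VPC
    VpcSecurityGroupIds=["sg-0dddddddddddddddd"],  # hypothetical SG
)
```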

Amazon Web Services allows you to quickly and easily provision services for running applications and storing your data, but without proper configuration, those services could be exposed to the wrong audience.

Fortunately, there’s a rich set of tools and facilities to help you secure those resources while offering the flexibility needed to operate per your business needs.  I hope that you’ll take the time to review your AWS resource setup today and apply the guidelines discussed here as needed.

VerSprite Security Operations

VerSprite’s SecOps services focus on security engineering for Cloud and On-Prem environments (including Managed Hosting and CoLo environments).

VerSprite offers a range of managed security services that address client challenges across vulnerability management, threat analysis, technical remediation, system auditing/hardening, and more.
