The Most Common Security Misconfigurations in Cloud Infrastructure

Written by: David Henderson on Jun 11, 2024

Advancements in cloud hosting have undoubtedly helped fuel the rapid advancement of development technologies. But has the security of these environments been able to keep pace?


Cost efficiency and flexibility have driven the rapid adoption of cloud technology over the past decade. However, like any traditional hosting environment, cloud infrastructure is also subject to security vulnerabilities, particularly through misconfiguration.

In this article, we take a look at the most common security misconfigurations we regularly see through our work in penetration testing.

1. Publicly Exposed Resources

Publicly exposed resources, such as security groups allowing access from 0.0.0.0/0 (the entire internet) and S3 buckets with no access control lists, present a large attack surface against your cloud accounts, where services can be probed, scanned, attacked and accessed. S3 was famously easy to misconfigure, whereas buckets are now locked down by default for a good reason: nobody should see your stuff! Access control across resources should be regularly checked, ensuring that what is made public can safely be public, with everything else protected by other services. For example, an S3 website can sit behind CloudFront, which hides the bucket and serves the traffic securely. EC2 instances acting as web servers can be hidden behind a combination of load balancers and private subnet hosting, which allow access on ports 80/443 as needed and forward that traffic to the instances. By hiding the server within the VPC, it won't be subjected to SSH brute-force attacks or probing.
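Exposure like this can also be audited programmatically. Below is a minimal sketch using boto3 (the bucket name is a placeholder, and credentials are assumed to be configured) that flags security group rules open to 0.0.0.0/0 and enables S3 Block Public Access on a bucket:

```python
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Flag any security group rule that allows ingress from the entire internet.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"OPEN TO WORLD: {sg['GroupId']} ({sg['GroupName']}) "
                      f"ports {rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}")

# Enable Block Public Access on a bucket (bucket name is a placeholder).
s3.put_public_access_block(
    Bucket="my-example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

Running a check like this on a schedule gives you a quick, repeatable view of what is reachable from the internet, rather than relying on a one-off review.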

2. Outdated Credentials/Access Keys

Old, weak and compromised passwords without MFA often lead to account and resource compromise, and the same goes for outdated access keys. Time and again we have encountered root keys actively used for years on end with no MFA: credentials that hand over the keys to the kingdom and allow full account takeover if compromised. Keys and credentials should be rotated every 90 days, root keys disabled entirely, and everything secured with MFA. Good credential hygiene is also key: ensure credentials are never hard-coded or uploaded to repositories like GitHub.
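Finding stale keys is straightforward to automate. Here is a minimal sketch (boto3 assumed) that reports IAM access keys older than the 90-day rotation window, along with when they were last used:

```python
import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

# Walk every IAM user and flag access keys past the rotation window.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            if key["CreateDate"] < cutoff:
                last_used = iam.get_access_key_last_used(
                    AccessKeyId=key["AccessKeyId"]
                )["AccessKeyLastUsed"].get("LastUsedDate", "never")
                print(f"{user['UserName']}: key {key['AccessKeyId']} "
                      f"created {key['CreateDate']:%Y-%m-%d}, last used {last_used}")
```

Keys that come back as "never" used are particularly good candidates for outright deletion rather than rotation.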

3. Misconfigured Policies and Permissions

"Principle of least privilege" is the term I hate typing the most, purely because I have to type it so often. Assigning minimal access to users and services is hard work, but if it is not carried out, it leaves a great amount of space for privilege escalation and data exfiltration attacks in the event of compromise. Google Cloud handles this very nicely: unused permissions are highlighted within the IAM dashboard, telling you exactly what to remove. We should make use of the features we have, such as the GCP IAM permissions overview, AWS IAM Access Analyzer, wildcard policies and condition-based rules. For example, giving a junior developer access to everything production-based is not wise. Environments should be appropriately separated, and the access policies they are given should focus on the required services only: conditional access could allow them to modify only resources tagged with "staging" values, and permitted actions could be restricted to, say, READ-only access to S3 but READ and WRITE to Lambda where appropriate. Wildcard rules within policies can grant access to a wider umbrella of resources, and when reinforced with conditionals they create really strong policies, minimising access and risk.
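To make that concrete, here is a minimal sketch of such a policy for the hypothetical junior developer: read-only S3, plus Lambda changes restricted by a tag condition to staging resources. The policy name and tag key/value are placeholder assumptions:

```python
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {   # READ-only access to S3.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "*",
        },
        {   # READ and WRITE on Lambda, but only functions tagged as staging.
            "Effect": "Allow",
            "Action": ["lambda:GetFunction", "lambda:UpdateFunctionCode"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/environment": "staging"}
            },
        },
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="junior-dev-staging-access",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```

Note how the wildcard `Resource` is reined in by the tag condition: the wide umbrella is there, but the conditional keeps the blast radius to staging.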

4. Lack of Audit Logging

A lack of monitoring, logging or audit logging with services such as CloudTrail is often encountered across tested environments, as extensive logging can be very expensive. Logs not only need to be recorded and stored, but often analysed as well to make life much easier for the humans involved. These services, however, answer a crucial question: "have I been compromised yet?" Broad rule sets can be defined to trigger alerts when they are tripped. CloudTrail within AWS, for example, records all user activity within the account, such as a user changing security groups or S3 ACLs; if this activity is recorded and monitored, alerts can be raised to the administrators. Such alerts help ensure that compromised accounts or services are caught rapidly, for instance when they try to expose security groups to additional inbound addresses and their IAM access policies aren't restrictive enough to stop them.
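That security group example can be wired up as an alert with an EventBridge rule over CloudTrail events. A minimal sketch follows (boto3 assumed; CloudTrail must already be enabled in the account, and the SNS topic ARN is a placeholder that must exist):

```python
import json
import boto3

events = boto3.client("events")

# Match CloudTrail events where someone opens up a security group.
events.put_rule(
    Name="alert-on-security-group-changes",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": ["AuthorizeSecurityGroupIngress",
                          "AuthorizeSecurityGroupEgress"],
        },
    }),
)

# Forward matching events to an SNS topic that notifies the administrators.
events.put_targets(
    Rule="alert-on-security-group-changes",
    Targets=[{"Id": "notify-admins",
              "Arn": "arn:aws:sns:eu-west-2:123456789012:security-alerts"}],  # placeholder ARN
)
```

The same pattern extends to S3 ACL changes, IAM policy edits or any other CloudTrail event name you care about.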

5. Lacking Update and Maintenance Procedures

The security of the physical devices and data centres is not a "you" problem, but what runs on the servers you rent is. How you configure the available resources, and whether they are adequately patched, is up to you to maintain. A wide array of services makes this process easier, such as automatic maintenance windows for RDS databases or serverless solutions that require little to no maintenance; however, more traditional deployments using EC2 and VMs still need to be patched and have security updates applied on a regular basis. If you are confident in your stack and the hosted applications, this can be automated with services such as Systems Manager, or good old cron jobs (see the patching sketch after the example below). Each hosted service should be monitored for its health, with regular checks ensuring that available patches are applied within the appropriate window, especially for publicly available services. For example, an Apache or PHP web server exposed to the world on ports 80 and 443 will have a massive target on its back, with attempts to exploit outdated versions carried out daily. The security of components must be approached in layers, to cover your back if something fails or is not kept up to date, for example:

  1. A load balancer is the only component exposed to the world
     - Security groups are limited (as applicable)
  2. Port 80 traffic is forwarded to 443 automatically via a forwarding rule
     - The attached SSL policy enforces TLSv1.2+ and best practices
  3. Additional LB configuration features are enabled, such as dropping invalid headers
  4. Traffic reaching the EC2 instance is handled by the running processes as required
  5. Components are regularly maintained and up to date

This example deployment keeps the EC2 instance out of view, with the LB as the only exposed component. The SSL certificate can be managed via AWS Certificate Manager, which makes life easier compared with hosting it on the EC2 instance. With enforced HTTPS, security headers and good TLS configuration, the security of established connections is achieved and the attack surface is kept to a minimum. These layers help protect the web server against basic attacks; however, if the hosted application itself is exploited, it may lead to on-server vulnerabilities being exploited as well, which is where server-level patching comes in.
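To automate that server-level patching, here is a minimal sketch using Systems Manager's managed AWS-RunPatchBaseline document (the tag key and value are placeholder assumptions, and the instances must already be SSM-managed):

```python
import boto3

ssm = boto3.client("ssm")

# Install available patches on every instance carrying the target tag.
ssm.send_command(
    Targets=[{"Key": "tag:patch-group", "Values": ["web-servers"]}],  # placeholder tag
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
    Comment="Scheduled security patching",
)
```

In practice you would schedule this through an SSM maintenance window rather than running it ad hoc, so that patches land inside an agreed change window.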

Having specialised in cyber security for over five years, witnessing the rapid growth of cloud infrastructure has been nothing short of incredible. However, this growth has also brought an increase in misconfigurations and vulnerabilities. Bad actors are seeking to exploit your cloud services through any means available, so it is critical to regularly test your infrastructure as part of your overall security posture.

Lukasz Mrozowski (Senior Cyber Security Consultant, CSTM AWS)
