Why choose Panoptica?
Four reasons you need the industry’s leading cloud-native security solution.
As organizations accelerate their move to the cloud, it’s no surprise that many CISOs want to stick with what they know when it comes to security. On-premises, traditional best practices dictate that every endpoint or server needs to have its own endpoint protection. This is why antivirus software on machines is usually a top priority. However, lifting and shifting this approach to the cloud just doesn’t work.
Let’s take a deeper dive.
Think about the legacy firewall approach on-premises, where a perimeter-based firewall around the network was seen as sufficient to keep attackers at bay. Today, it's well established that the traditional perimeter is dead, and that hybrid, complex IT environments can no longer be protected with a North-South perimeter approach. After all, most of the traffic is already inside the network, moving East-West. The next logical step is therefore endpoint protection, where each machine or server has its own protection in place. Even if an attacker enters the network (and it's now best practice under a zero-trust model to assume they will), critical assets are kept under lock and key.
However, in hybrid and multi-cloud environments, security solutions need to evolve past a reliance on endpoint protection, too. In the cloud, endpoint protection technologies like Endpoint Detection and Response (EDR) are comparable to workload protection, where Cloud Workload Protection Platforms (CWPP) use signature-based detection and anomaly detection to identify suspicious activity. By design, this is only effective as a security measure once attackers are already inside the network.
The principle of least privilege, a concept built for on-prem, is a huge amount of work to apply to the cloud, for little reward or risk reduction.
The principle of least privilege (PoLP) is a security concept for computer systems where you give users exactly the permissions they need to do their job, and nothing more. It was created for and applied to on-premises security environments, and on-premises at least, it can be extremely effective at reducing risk.
Here’s how it works.
All users have access permissions, which govern what they can interact with in any network, whether that's read-only access to files, read-write access, root privileges, or anything else. Excessive privileged access means that a user has more access to an environment than they need given their designated role.
If a freelance designer is given all-access to their client's network, with virtually unlimited privileges, when all they really need is permission to the single folder where the logos are kept, there's clearly a problem with their access policies. In contrast, if a super-user account can only view read-only files, they obviously don't have enough access, and least privilege has gone awry somewhere, probably with overly coarse policies dictating their access.
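To make the designer example concrete in cloud terms, here's a minimal sketch of a least-privilege policy expressed in Terraform; the bucket and folder names are invented for illustration.

```hcl
# Hypothetical illustration: the designer gets read access to the one
# "logos" folder they actually need, and nothing else.
resource "aws_iam_policy" "designer_logos_read_only" {
  name = "designer-logos-read-only" # invented name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "ReadLogoObjectsOnly"
        Effect   = "Allow"
        Action   = ["s3:GetObject"]
        Resource = "arn:aws:s3:::example-brand-assets/logos/*" # single folder only
      }
    ]
  })
}
```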
In a smart and secure on-premises environment, tools like micro-segmentation and zero-trust policies have used the principle of least privilege to narrow down risk in several ways:
Crown jewel isolation. This is the process of isolating specific "crown jewel" applications so that even if an attacker made it into your environment, they would be unable to reach that data or application. As few people as possible are given credentials that allow this kind of access, following least-privilege access rules. Crown jewel applications could be anything from stores of sensitive customer data to business-critical systems and processes.
Role-based access control (RBAC). RBAC grants specific access to certain data, applications, or parts of the network based on the role a person holds at the company. This goes hand in hand with the principle of least privilege, and means that if credentials are stolen, the attackers are limited to whatever access the compromised employee holds. Because this is based on users, you could also isolate privileged user sessions specifically, keeping them behind an extra layer of protection. Only if an administrator account, or another account with wide access privileges, were stolen would the business be in real trouble.
Micro-segmentation. Here, specific apps, users, data, or any other element of the business is protected from attack with internal, next-gen firewalls. Risk is reduced in a similar way to the examples above: the principle of least privilege is used to grant access only to those who need it, and no one else. In some situations you might need to allow elevated privileges for a short period of time, for example during an emergency. Watch out for privilege creep, where users gain more access over time; it can be hard to track and often goes uncorrected.
More recently, with the rise in cloud-native deployments and data centers, many security professionals have attempted to lift and shift the concept of least privilege to the cloud.
When it comes to securing the cloud, here is why we believe that least privilege is dead.
Least privilege is predicated on the idea of escalating privileges and credentials. An attacker breaches the network and makes lateral moves from one account to another, moving across the network and gaining greater and greater access until they reach the ultimate payload. This is a very "on-prem" concept.
To take full account ownership in the cloud, a user really only needs a couple of permissions, in an environment where there are often tens of thousands. For example, on AWS there are more than 10,000 different IAM actions, distributed across the various services in the cloud. These permissions are segmented between read, write, and management actions. To create a data breach, all an attacker needs is access to a single IAM role or set of user credentials that has s3:GetObject with a wildcard in the resource. In short, any S3 bucket that doesn't have explicit access restrictions is open for business to an attacker. We talk a lot about the risks of S3 buckets in this article on the risks of misconfigured cloud buckets and what you can do about them.
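To illustrate (the policy below is a hypothetical sketch, not taken from any real environment), this is all an attacker needs to find attached to a role or user; the wildcard in the resource is what turns one read action into account-wide exposure.

```hcl
# Illustrative only: a single allowed action, but with a wildcard resource.
# Any identity holding this can read objects from every bucket in the account
# that doesn't explicitly restrict access.
resource "aws_iam_policy" "wildcard_get_object" {
  name = "innocuous-looking-read-policy" # invented name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "GetObjectFromAnyBucket"
        Effect   = "Allow"
        Action   = ["s3:GetObject"]
        Resource = "*" # the wildcard is the whole problem
      }
    ]
  })
}
```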
This is exactly what happened in the Capital One case. Security experts continue to call out that famous data breach as an issue that could have been prevented with least privilege in mind, but this really isn't the case. With Capital One, just two IAM actions, 'List S3 buckets' and 'Get objects' (a sync command), gave the attacker access, prompting a $190M settlement and a data breach affecting more than 100M customers.
With this attack path available from just two permissions, the principle of least privilege simply isn't enough. It's not about restricting privileged access permissions or using role-based user access on cloud platforms like AWS, Azure, or GCP. The truth is you need a completely different approach.
The same is true when you think about privilege escalation threats in the cloud. While least privilege claims to protect against lateral movement and privilege escalation, in the cloud an attacker can achieve both with just a couple of permissions.
Think about AWS, for example. As RhinoSecurity points out in this great blog, an attacker needs just two permissions, ec2:RunInstances and iam:PassRole, to take full account ownership. It's not about gaining specifically privileged credentials at all; the same path works from a minimally privileged account or a superuser account alike.
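As a hypothetical sketch of that scenario (the names are invented; the escalation path is the one described in the RhinoSecurity post), a policy like this looks minimal on paper yet can hand over the whole account:

```hcl
# Illustrative only: just two allowed actions. An attacker can launch an EC2
# instance (ec2:RunInstances) and pass it a more privileged role
# (iam:PassRole), then harvest that role's credentials from the instance.
resource "aws_iam_policy" "two_action_escalation" {
  name = "looks-harmless-but-is-not" # invented name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "RunInstancesAndPassRole"
        Effect   = "Allow"
        Action   = ["ec2:RunInstances", "iam:PassRole"]
        Resource = "*"
      }
    ]
  })
}
```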
The principle of least privilege also relies on the idea that you can manipulate and manage permissions and access rules, fully controlling your environment.
That's also not an idea that can be stretched to fit a hybrid or cloud-native environment.
On both AWS and Azure, there are managed policies and roles that are set out by the Cloud Service Provider that can’t be manually changed. In most cases, users and developers are unaware of the risks of these policies and roles.
Two examples are the AWSLambda_FullAccess and AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy managed policies.
Although these are used by design to enable rapid, agile development cycles out of the box, making DevOps lives easier, both also allow for privilege escalation when exploited.
One good example is the AWSElasticBeanstalkService managed policy, which AWS announced for deprecation and which, as of April 15th, 2021, can no longer be attached to users, groups, or roles. AWS recommends using the new managed policy, AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy, instead.
In this policy, some actions have had specific resources and conditions defined, but privilege escalation is still possible, a huge security gap. Here's a gist Azazar created for more information and a proof of concept.
The Cloud Infrastructure Entitlement Management (CIEM) domain is built around tackling these issues and supporting businesses in achieving the principle of least privilege in the cloud.
However, this causes a huge headache for DevOps teams and cloud security owners when attempting to maintain lean policies under strict resource requirements. In some cases, CIEM tools will claim to offer ways to achieve least-privilege policy creation at the build stage, baked into CI/CD pipelines. In practice, that ROI is impossible, and the attempt causes an endless amount of maintenance and frustration.
Most developers in cloud-native companies are unaware of the risks of managed policies or wildcard policies in the cloud, and that’s fine! It’s not part of their role to gain such an in-depth knowledge of cloud risks. In fact, attempting to get up to speed with all the changing risks of a dynamic environment would slow down the very DevOps pace that they are aiming to achieve by leveraging the cloud.
Consider the time that would be spent every time we modify the policies used by our users or applications, for example when we add a new database or new IAM actions. Trying to enforce least privilege here would be beyond tedious: continuously checking each identity and its specific permissions, asking whether this identity really needs them, and, if we decide to remove one, whether that will break production or cause problems for the user. It's simply not maintainable or scalable as a business grows.
Cloud-native companies are left with just two options: accept the risk of broad, wildcard, and managed permissions, or try to build and maintain true least-privilege policies for every identity by hand.
The second option generates so much maintenance and overhead for security and operations teams that it negates the benefits of the cloud in the first place. You might be working towards a least-privilege model, but you won't have much else!
For system stability, and to restrict access in a smart way, what kind of privilege enforcement does make sense to apply to the cloud?
At Panoptica, our approach is to offer Guardrails to solve this use case. By creating your own managed policy, unique to your environment, that blocks specific attack paths, there's no need to fight with the least-privilege model. You can apply access controls using Guardrails, and your teams can work unimpeded, without any impact on the pace of DevOps.
Instead of getting fine-grained about specific one-to-one connections between the account and a user, or working endlessly on permissions for any given resource (all of which need to be maintained over time), you just set and forget Guardrails that block the potential attack.
As part of our cloud security platform, we offer automated Guardrails that are dynamically built inside your DevOps tools, for example via Terraform or JSON, and can be customized to fit your specific environment's requirements. A misconfiguration can be found and remediated automatically, eliminating the vast majority of the manual work DevOps would otherwise need to do. If an asset has an overly permissive role where it should have just read-only access, the required Guardrail automatically appears with all the necessary denies, shoring up the risk with no impact on production and no need to change the read-only permissions themselves.
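To give a flavor of the idea, here is a hand-written sketch (not Panoptica's actual generated output; the role and policy names are invented) of what a guardrail-style explicit deny can look like in Terraform: the existing read-only permissions stay untouched, while the risky write actions are blocked outright.

```hcl
# Sketch of a guardrail-style deny attached to an over-permissioned role.
# Explicit denies always win over allows, so the role keeps working for reads
# while the write actions it should never use are blocked.
resource "aws_iam_role_policy" "guardrail_deny_s3_writes" {
  name = "guardrail-deny-s3-writes"
  role = "reporting-reader" # hypothetical existing role name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "DenyAllS3Mutations"
        Effect   = "Deny"
        Action   = ["s3:Put*", "s3:Delete*"]
        Resource = "*"
      }
    ]
  })
}
```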
Much smarter than relying on the principle of least privilege for an environment it was never intended for, right?
The idea of keeping your assets secure using private networks is a pervasive myth.
Many people who use the public cloud rely on segmenting assets into private networks inside their cloud environments to keep those assets secure. Networking-wise, their assets are then private, so it's easy to see where this misconception comes from. It's true that if there is no public IP, or if the private network isn't exposed to the internet, these assets can't be accessed externally. However, that doesn't mean there is no access at all, or that the data is safe from manipulation, extraction, or risk.
The truth is that every single asset in AWS, or on any public cloud, is accessible through the cloud service provider's API. It doesn't matter whether assets sit in private or public networks; if the right permissions are granted, they can all be reached, allowing attackers to shut down services or make changes to your environment.
All it takes is a single exposed user credential or access key, something as simple as a compromised developer workstation with sufficient permissions, and the attacker will be able to access any private asset in the cloud.
Let’s look at three real-world examples of how an attack on “private” networks can occur, and how the misconception of data safeguarding in the cloud occurs.
First, let’s take the example of a developer who generates an access key for his AWS account. The user’s access key includes the permissions to deploy and modify existing EC2 instances, as well as their network access rules. Within the environment, the company keeps private workloads that run the customer database. Inside this database there is sensitive financial information, from credit card details to credentials such as usernames and passwords.
Our developer is only human, and as part of his code, he leaves his access key ID and secret access key in a public GitHub repository. Many attackers have automation working around the clock to constantly scan public GitHub repositories, so it's only seconds or minutes before the access key ID and secret access key are in the hands of a nefarious actor. Quickly, they open the existing server to the public internet, which means the customer database (and all the sensitive information held within) is now accessible through a publicly exposed server.
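For illustration, the permissions behind that leaked key might look something like the sketch below (the user and policy names are invented); note that the ability to rewrite security group rules is all the attacker needed to expose the "private" database server.

```hcl
# Hypothetical sketch of the permissions attached to the leaked access key:
# enough to manage instances and, crucially, to change security group rules.
resource "aws_iam_user_policy" "developer_ec2_networking" {
  name = "developer-ec2-and-networking"
  user = "dev-jane" # invented IAM user name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "ManageInstancesAndNetworkRules"
        Effect = "Allow"
        Action = [
          "ec2:RunInstances",
          "ec2:ModifyInstanceAttribute",
          "ec2:AuthorizeSecurityGroupIngress", # opens ports to the world
          "ec2:RevokeSecurityGroupIngress"
        ]
        Resource = "*"
      }
    ]
  })
}
```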
Another common exposure point is your admin workstations, which are likely to have some of the highest permissions and are therefore a regular target. In this scenario, a malicious bot is looking to steal secret credentials to cloud environments from admin workstations, and it accomplishes this using an open-source library embedded in an asset discovery tool. The tool continuously maps often-neglected assets in your cloud environment, such as old database backups, snapshots of old servers, and more.
When the bot uncovers these assets, it automatically attempts to share them with its own cloud account. If successful, the data can then be extracted and scraped to be sold on the Dark Web. As of 2020, some 15 billion stolen login details, stemming from 100,000 breaches, were circulating on the Dark Web.
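As a sketch of why this works (the policy name is invented; the actions are standard AWS ones), these are the kinds of snapshot permissions such a bot hunts for, because once it holds them, sharing a "private" snapshot with an attacker-controlled account is a single API call:

```hcl
# Illustrative only: permissions that let a bot discover forgotten snapshots
# and share them with another AWS account for later extraction.
resource "aws_iam_policy" "snapshot_housekeeping" {
  name = "snapshot-housekeeping" # invented name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "FindAndShareSnapshots"
        Effect = "Allow"
        Action = [
          "ec2:DescribeSnapshots",
          "ec2:ModifySnapshotAttribute",  # shares EBS snapshots cross-account
          "rds:DescribeDBSnapshots",
          "rds:ModifyDBSnapshotAttribute" # shares RDS snapshots cross-account
        ]
        Resource = "*"
      }
    ]
  })
}
```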
If you think it can’t happen to you — you’re wrong.
Our last example involves cross-account access. Imagine your platform engineering manager uses cross-account access as a legitimate part of his job, in this case to integrate with a data analytics platform. The account is used for private data only, and none of the assets are publicly exposed to the web. Secure? Maybe not.
In this case, the data analytics solution itself is breached because of an application vulnerability in the SaaS platform. That may have come from human error on the vendor's part, but there is nothing your business, or your platform engineer, could have done to prevent it. The attacker abuses the vendor's cross-account access to its customers, extracts all your data, and then encrypts it, holding your business to a heavy ransom to release the data and throwing your organization into a media-spotlight nightmare.
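A minimal sketch of the cross-account trust at the heart of this scenario (the role name and the vendor's account ID are placeholders): a role in your account that the analytics vendor is allowed to assume. If the vendor is breached, whoever controls their account inherits this access, no public exposure required.

```hcl
# Sketch of a cross-account trust policy: the vendor's AWS account can assume
# this role in your account, so a breach on their side becomes access on yours.
resource "aws_iam_role" "analytics_vendor_access" {
  name = "analytics-vendor-cross-account" # invented name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowVendorToAssumeRole"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::111122223333:root" } # placeholder vendor account ID
        Action    = "sts:AssumeRole"
      }
    ]
  })
}
```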
Using private networks and then patting yourself on the back that your public cloud environment is secure is not a sound cloud security strategy. It's important to recognize that on the public cloud, nothing is ever really private once a sophisticated attacker (or plain old human error) gets involved. There are several routes to data leaks, ransomware attacks, or public access to your private data. Don't believe us? Check all your cloud subdomains with Panoptica's tool, Recon.Cloud. Simply by scanning your AWS public cloud footprint, we can show you how much of your internal network a potential attacker can actually see.
Don’t ignore private networks – because we can guarantee that the attackers won’t either.