
EKS Authentication: Part 1

Noga Yam Amitai
Wednesday, Feb 16th, 2022

EKS Overview

Kubernetes Background

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications – in other words, a container orchestration platform.

A Kubernetes cluster is built out of a control plane and worker nodes. The control plane manages the worker nodes and pods in the cluster, and the worker nodes are machines that host the pods that are the components of the application workload (for more information - Kubernetes Components). It can be challenging to configure, deploy, and manage a cluster on your own, and with the increasing use of Kubernetes, many cloud providers now offer a managed Kubernetes service. This allows companies to have easy access to a Kubernetes cluster without having to set it up or maintain it by themselves.

In AWS, this service is called EKS (Elastic Kubernetes Service). Each managed Kubernetes service works a bit differently and varies in the amount of “management” for which the provider is responsible. In AWS EKS, you have no access to the master nodes (the nodes that make up the control plane), but you can still access the API server and use tools such as kubectl to manage the workloads on your worker nodes.

Kubectl

Kubectl is a command-line tool that lets you control Kubernetes clusters. It allows you to perform almost any operation on Kubernetes resources, such as creating, deleting, and updating them. Kubectl crafts the HTTP request that matches the command you run and sends it to the Kubernetes API server. For example, if we run the “kubectl get pods” command, Kubectl will send the HTTP request “GET /api/v1/namespaces/default/pods” to the Kubernetes API server (pods belong to the core API group, so the path starts with /api/v1).
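You can watch this happen by raising Kubectl’s verbosity; from level 6 and up, it logs the HTTP requests it sends, including the method and URL:

kubectl get pods -v=8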

Kubectl gets its configuration from the kubeconfig file, usually located at $HOME/.kube/config.
Here is a kubeconfig file example, based on the one in the Kubernetes documentation (https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts):

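(A minimal reconstruction; all values are placeholders.)

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded-ca-certificate>
    server: https://1.2.3.4
  name: development
contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
current-context: dev-frontend
users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file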

The kubeconfig file has three sections: clusters, contexts, users.

  1. Clusters – Information about the clusters we want to connect to: the API server address and the certificate-authority-data field (other fields can also appear).
  2. Contexts – Named combinations of a cluster, a user, and a namespace (the “default” namespace if none is specified). This is how Kubectl knows which cluster to connect to and with which credentials (the user’s).
  3. Users – Information about different users of the clusters and their authentication details. There are multiple user credential types, such as certificates, authentication tokens, and basic authentication with username and password.

To be able to communicate with your cluster, your kubeconfig file must be configured correctly, and this is another point where a managed Kubernetes service can assist, as we will soon see.

EKS Authentication

As previously mentioned, EKS is AWS’s managed Kubernetes service. As such, it takes the responsibility of deploying and managing the Kubernetes cluster off of you. In addition, EKS is integrated with other AWS services, among them Identity and Access Management (IAM), which is the main subject of this research. EKS uses IAM for cluster authentication, but authorization still happens in native Kubernetes using RBAC (Role-Based Access Control).

In this section, we will take a close look at EKS’s authentication, and examine every step of the way, from creating a cluster to using Kubectl to run commands on your cluster, including what happens behind the scenes.

First, if you don’t have an EKS cluster, you can create one using one of AWS’s guides, or Panoptica’s secure EKS deployment guide. Once our cluster is created, we want to be able to connect to it from our computer. In their guide, AWS instructs us to run the following command:

aws eks update-kubeconfig --region REGION --name CLUSTER-NAME
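The command prints the context it added, along these lines (the account ID and path are placeholders):

Added new context arn:aws:eks:us-east-2:111122223333:cluster/eks-demo to /home/user/.kube/config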

The command we ran changed our kubeconfig file automatically and added all of the relevant information about the new cluster so that we can immediately use Kubectl to communicate with our new cluster.

We can test it by running the command:

kubectl get svc
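On a fresh cluster, you should see only the default kubernetes service, something like:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   10m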

Let’s take a closer look at the users’ section in our new kubeconfig file:

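(Reconstructed from the command above; the account ID is a placeholder, and the exec apiVersion varies between AWS CLI versions.)

users:
- name: arn:aws:eks:us-east-2:111122223333:cluster/eks-demo
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - eks-demo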

We can see that the user authentication differs from the usual authentication methods (bearer token, username and password, etc.). This exec section specifies a command to run, and arguments to pass to it, for authenticating the user. The command that is going to be run is:

aws --region us-east-2 eks get-token --cluster-name eks-demo

The command returns an access token. This access token will be added to the HTTP requests that Kubectl sends to the API server.
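The token is delivered inside a JSON ExecCredential object (trimmed example; the token is truncated and the apiVersion varies between AWS CLI versions):

{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "spec": {},
    "status": {
        "expirationTimestamp": "2022-02-01T07:33:49Z",
        "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8_QWN0aW9uPUdldENhbGxlcklkZW50aXR5..."
    }
}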

We can test this by running kubectl get svc and passing the traffic through a proxy; the intercepted request carries the token in its Authorization: Bearer header.


Now, we will run the command below again and swap the authorization token in the intercepted request with the new token that we just generated:

aws --region us-east-2 eks get-token --cluster-name eks-demo

Voila! We got the same answer with our token.
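We can also reproduce what Kubectl does with plain curl. A sketch, assuming API_SERVER holds your cluster endpoint (the server field from the kubeconfig):

TOKEN=$(aws --region us-east-2 eks get-token --cluster-name eks-demo --query 'status.token' --output text)
# -k skips server certificate verification; in real use, pass the cluster CA with --cacert instead
curl -sk -H "Authorization: Bearer $TOKEN" "$API_SERVER/api/v1/namespaces/default/services"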

Token Exploration

Let’s explore this token for a bit.

The token is simply a base64-encoded string (URL-safe base64, to be exact). If we decode it (without the k8s-aws-v1. prefix), we get the resulting string below:

https://sts.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={ACCESS-KEY-ID}%2F20220201%2Fus-east-1%2Fsts%2Faws4_request&X-Amz-Date=20220201T073249Z&X-Amz-Expires=60&X-Amz-SignedHeaders=host%3Bx-k8s-aws-id&X-Amz-Signature=2de2d0807e2c4caa6dccd03e90faef0d65ba3876e0c35af892bad135e8e659d1

Interesting! This is an HTTP request to AWS STS (Security Token Service), and to be specific, it is a “GetCallerIdentity” request (GetCallerIdentity request documentation).
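One way to reproduce the decoding from the shell (a sketch; since the token is URL-safe base64 without padding, we restore the standard alphabet and padding before decoding):

aws --region us-east-2 eks get-token --cluster-name eks-demo --query 'status.token' --output text \
  | cut -d. -f2- \
  | tr '_-' '/+' \
  | awk '{ n = length($0) % 4; print $0 (n ? substr("===", 1, 4 - n) : "") }' \
  | base64 -d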

This request is sent to the EKS cluster encoded as a token used for authentication. The EKS cluster receives the request, extracts the token, decodes it, and sends the same STS “GetCallerIdentity” request to the AWS STS service. The details in the AWS STS response provide our EKS cluster with the exact identity that is trying to perform the action. If the cluster gets a successful response for this request, it knows that AWS authenticated the user, and it gets the user’s identity.

So, what exactly is the response that the cluster gets?

Since the cluster accesses the AWS STS “GetCallerIdentity” API, we can run a similar command from our local CLI:

aws sts get-caller-identity
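The output includes the caller’s user ID, account, and ARN (placeholder values):

{
    "UserId": "AIDAEXAMPLEUSERID123",
    "Account": "111122223333",
    "Arn": "arn:aws:iam::111122223333:user/eks-demo-user"
}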

However, we never passed our credentials (AWS access keys) to the cluster, so how can it get the same response, one that includes the connected user’s identity?

Let’s take the base64-decoded token and try to send it to STS ourselves.

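For example, with the decoded string stored in a hypothetical PRESIGNED_URL variable:

curl -s "$PRESIGNED_URL"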

We got a “SignatureDoesNotMatch” error.

Why is that?

Most of the requests to AWS need to be signed to verify the identity of the requester and to protect the data from being altered in transit. The signing process is complex, and we are not going to examine it in detail in this post (for more information about signing AWS requests - https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html).
There are two main things that we need to understand in this case:

  1. The user’s access key ID and secret access key are part of the signature.
  2. The cluster name, passed in the x-k8s-aws-id header, is also part of the signature (you can see it listed in the X-Amz-SignedHeaders parameter above).

If we want to be able to get a response to the request, we must add the x-k8s-aws-id header with our cluster name as the value.
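Retrying with the header in place (same hypothetical PRESIGNED_URL variable; by default, STS answers with an XML GetCallerIdentityResponse):

curl -s -H "x-k8s-aws-id: eks-demo" "$PRESIGNED_URL"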


Basically, we got the same answer as when we ran sts get-caller-identity from the CLI. Once our cluster gets this response, there is one last step to complete the authentication process: translating the AWS identity into a Kubernetes identity.

“aws-auth” ConfigMap

In general, a ConfigMap in Kubernetes is an API object that lets you store data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume. The main point of integration between Kubernetes and AWS IAM is the “aws-auth” ConfigMap.

This ConfigMap object maps AWS identities to Kubernetes identities. It is not enough that the "GetCallerIdentity" request is successful, because the AWS identity has no meaning in native Kubernetes. You cannot assign access permissions without having a parallel Kubernetes identity that works with RBAC.

Here is an example of an “aws-auth” ConfigMap from AWS's documentation:

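(Reconstructed along the lines of the documentation example; the ARNs are placeholders.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-node-group-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/admin
      username: admin
      groups:
        - system:masters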

There are two main parts in the “aws-auth” ConfigMap:

  1. mapRoles – Allows you to map an IAM role, using its ARN, to a Kubernetes username or group. This way, every IAM identity that can assume that role will have access to your cluster based on the permissions of the group/username it is mapped to.
  2. mapUsers – Allows you to map an IAM user, using its ARN, to a Kubernetes user. This user will then have access to your cluster based on the roles/cluster roles that are bound to its Kubernetes user. You can also map an IAM user into a Kubernetes group so that it receives the group’s permissions.

Users that have permission to edit the “aws-auth” ConfigMap can become cluster admins by mapping themselves to a group that is already bound to cluster-admin (the “system:masters” group, for instance).

Note that if you created your cluster using the AWS CLI, the “aws-auth” ConfigMap is not created automatically. Follow this guide to apply it to your cluster.

You can view the “aws-auth” ConfigMap content in your cluster by running this command:

kubectl get configmap aws-auth -n kube-system -o yaml

If an AWS identity is mapped in your “aws-auth” ConfigMap to a Kubernetes identity, this identity will be able to access your cluster. The scope of access will be determined by the roles/cluster roles that are bound to this identity.
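For example, if an IAM user is mapped in the “aws-auth” ConfigMap to the Kubernetes username dev-user (a hypothetical name), a standard binding like the following would grant it cluster-wide read-only access:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dev-user-view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: dev-user   # must match the username assigned in the aws-auth mapping
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view       # built-in read-only ClusterRole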

Summary

Let’s review a successful flow of running a kubectl command with the help of a schema from AWS’s documentation:

  1. Pass Amazon Web Services Identity – The “aws eks get-token” command runs in the background while using the kubectl tool, and the token is attached to the Kubernetes API request.
  2. Verify Amazon Web Services Identity – The Kubernetes cluster decodes the token and sends the embedded GetCallerIdentity request to AWS STS to retrieve the user’s identity.
  3. Role-Based Access Control – Kubernetes translates the AWS identity into a Kubernetes identity using the “aws-auth” ConfigMap and checks whether this identity is authorized to perform the required action, according to the roles/cluster roles that are bound to it.
  4. Kubernetes action allowed – The action is executed, and the user gets the output.

It is important to understand that managed Kubernetes services, though very fast and easy to use, come at a cost. You will not have complete control over your cluster, and parts of the deployment might be hidden. Before using EKS, or any other managed service, try wrapping your head around what AWS (or other cloud providers) is doing for you. The more knowledge you have on how things work behind the scenes, the more secure your cluster will be, especially when it comes to authentication. In this post, we looked into EKS's authentication process, hoping to help you use it more wisely and with more awareness.

In the next part of our EKS series, we will take a deep dive into the implementation of the aws-iam-authenticator tool, which the cluster uses for AWS IAM credential authentication.
