This guide describes how to manage access to an Elastic Kubernetes Service (EKS) Instance Profile - User Impersonation cluster via the StrongDM Admin UI. This cluster type supports AWS IAM role authentication for EKS resources and gateways running in EC2. EKS clusters are added and managed in both the Admin UI and the AWS Management Console.
Prerequisites
Before you begin, ensure that the EKS endpoint you are connecting to is accessible from one of your StrongDM gateways or relays. See our Nodes guide for more information. We also recommend enabling the Audit and Authenticator logs for the EKS cluster, as these logs can be very helpful when troubleshooting any connection issues you may encounter.
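If these logs are not yet enabled, you can turn them on with the AWS CLI. This is a minimal sketch; the cluster name and region are placeholders, and it assumes the AWS CLI is configured with permission to update the cluster:
aws eks update-cluster-config \
  --name my-eks-cluster \
  --region us-west-2 \
  --logging '{"clusterLogging":[{"types":["audit","authenticator"],"enabled":true}]}'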
Credentials-reading order
During authentication with your AWS resource, the system looks for credentials in the following places in this order:
- Environment variables (if the Enable Environment Variables box is checked)
- Shared credentials file
- EC2 role or ECS profile
As soon as the relay or gateway finds credentials, it stops searching and uses them. Due to this behavior, we recommend that all similar AWS resources with these authentication options use the same method when added to StrongDM.
For example, suppose you use environment variables for the AWS Management Console resource and EC2 role authentication for an EKS cluster. When users attempt to connect to the EKS cluster through the gateway or relay, the environment variables are found first and used in an attempt to authenticate with the EKS cluster, which then fails. We recommend using the same authentication type for all such resources to avoid this problem at the gateway or relay level. Alternatively, you can segment your network by creating subnets with their own relays and sets of resources, so that each relay can be configured to work correctly with just those resources.
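To see which credentials a given gateway or relay host actually resolves, you can check the identity returned by the default AWS credential chain on that host. This is a sketch and assumes the AWS CLI is installed on the gateway or relay instance:
# Prints the IAM identity picked up by the default credential chain
# (environment variables, then shared credentials file, then EC2 role or ECS profile)
aws sts get-caller-identity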
Cluster Setup
You can find information about your cluster in the AWS Management Console on your EKS cluster’s general configuration page.
Manage the IAM Role
- In the AWS Management Console, go to Identity and Access Management (IAM) > Roles.
- Create a role to be used for accessing the cluster, or select an existing role to be used.
- Attach or set the role to what you are using to run your relay (for example, an EC2 instance, ECS task, EKS pod, and so forth); an AWS CLI sketch for the EC2 case follows this list. See AWS documentation for information on how to attach roles to EC2 instances, set the role of an ECS task, and set the role of a pod in EKS.
- Copy the Role ARN (the Amazon Resource Name specifying the role).
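For the common EC2 case, you can attach the role and retrieve its ARN with the AWS CLI. The instance ID, instance profile name, and role name below are placeholders, not values from this guide:
# Attach an instance profile (containing the role) to the relay's EC2 instance
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=sdm-relay-profile
# Print the Role ARN to copy for the aws-auth ConfigMap below
aws iam get-role --role-name sdm-relay-role --query Role.Arn --output text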
Grant the Role the Ability to Interact With the Cluster
- Create a ClusterRole and ClusterRoleBinding with the minimum privileges required for the healthcheck and user impersonation. Create a new YAML file named readOnly.yml with the following contents (you can change the names of the ClusterRole and ClusterRoleBinding to fit your naming conventions):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-read
rules:
- apiGroups:
  - '*'
  resources:
  - deployments
  - pods
  - namespaces
  verbs:
  - get
  - list
- apiGroups:
  - ""
  resources:
  - "users"
  - "groups"
  verbs:
  - impersonate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8s-read
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: k8s-read
- Ensure your kubectl context is set to the intended EKS cluster, then add the ClusterRole/ClusterRoleBinding by running
kubectl apply -f readOnly.yml
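As a quick check, you can confirm that both objects exist and that the group grants the expected read access. The healthcheck-user name below is only a placeholder for the impersonation test, and the test assumes your current credentials are allowed to impersonate:
kubectl get clusterrole k8s-read
kubectl get clusterrolebinding k8s-read
kubectl auth can-i get pods --as=healthcheck-user --as-group=k8s-read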
- While authenticated to the cluster using your existing connection method, run the following command to edit the aws-auth ConfigMap (YAML file) within Kubernetes:
kubectl edit -n kube-system configmap/aws-auth
- Copy the following snippet and paste it into the file under the data: heading, as shown:
data:
  mapRoles: |
    - rolearn: <ARN_OF_INSTANCE_ROLE>
      username: <USERNAME>
      groups:
        - <GROUP>
- In that snippet, do the following:
- Replace <ARN_OF_INSTANCE_ROLE> with the ARN of the instance role.
- Replace <USERNAME> with the Kubernetes username to map to the instance role (sdm-user in the example below).
- Under groups:, replace <GROUP> with the appropriate group for the permissions level you want this StrongDM connection to have (see Kubernetes Roles for more details).
Example:
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/Example
      username: sdm-user
      groups:
        - k8s-read
The name under groups: in the mapRoles block must match the subject name in the desired ClusterRoleBinding, not the name of the ClusterRoleBinding itself. Save the file and exit your text editor.
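To confirm the mapping took effect, you can review the updated ConfigMap and test the mapped group's permissions. These commands assume the example values above (sdm-user and k8s-read) and require that your current credentials are allowed to impersonate:
kubectl get configmap aws-auth -n kube-system -o yaml
kubectl auth can-i get pods --as=sdm-user --as-group=k8s-read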
Add Your EKS Instance Profile - User Impersonation Cluster in the StrongDM Admin UI
- Log in to the Admin UI and go to Infrastructure > Clusters.
- Click the Add cluster button.
- Select Elastic Kubernetes Service (instance profile - User Impersonation) as the Cluster Type and set other resource properties to configure how the StrongDM relay connects.
Elastic Kubernetes Service Instance Profile - User Impersonation Cluster Setup in Admin UI
- Click Create to save the resource.
The Admin UI updates and shows your new cluster in a green or yellow state. Green indicates a successful connection. If it is yellow, click the pencil icon to the right of the cluster to reopen the Connection Details screen. Then click Diagnostics to determine where the connection is failing.
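If the resource stays yellow, one common cause is that the gateway or relay host cannot reach the EKS API endpoint. A quick reachability sketch from that host (replace <ENDPOINT> with your cluster's API server endpoint):
# Any HTTP response (version JSON, 401, or 403) confirms network reachability
curl -sk https://<ENDPOINT>/version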
Resource Properties
Configuration properties are visible when you add a Cluster Type or when you click to view the cluster’s settings. The following table describes the settings available for your EKS Instance Profile cluster.
Property | Requirement | Description |
Display Name | Required | Meaningful name to display the resource throughout StrongDM; exclude special characters like quotes (") or angle brackets (< or >) |
Cluster Type | Required | Select Elastic Kubernetes Service (instance profile - User Impersonation) |
Endpoint | Required | API server endpoint of the EKS cluster in the format <ID>.<REGION>.eks.amazonaws.com, such as A95FBC180B680B58A6468EF360D16E96.yl4.us-west-2.eks.amazonaws.com; relay server should be able to connect to your EKS endpoint |
IP Address | Optional | Shows up when a loopback range is configured for the organization; local IP address used to connect to this resource using the local loopback adapter in the user’s operating system; defaults to 127.0.0.1 |
Port Override | Optional | Automatically generated with a value between 1024 and 59999 as long as that port is not used by another resource; preferred port can be modified later under Settings > Port Overrides; after specifying the port override number, you must also update the kubectl configuration, which you can learn more about in the Port Overrides section |
Secret Store | Optional | Credential store location; defaults to Strong Vault; to learn more, see Secret Store options |
Server CA | Required | Pasted server certificate (plaintext or Base64-encoded), or imported PEM file; you can either generate the server certificate on the API server or get it in Base64 format from your existing Kubernetes configuration (kubeconfig) file |
Cluster Name | Required | Name of the EKS cluster |
Region | Required | Region of the EKS cluster, such as us-west-1 |
Healthcheck Namespace | Optional | If enabled for your organization, the namespace used for the resource healthcheck; defaults to default if empty; supplied credentials must have the rights to perform one of the following kubectl commands in the specified namespace: get pods, get deployments, or describe namespace |
Authentication | Required | Authentication method to access the cluster; select either Leased Credential (default) or Identity Aliases (to use the Identity Aliases of StrongDM users to access the cluster) |
Identity Set | Required | Displays if Authentication is set to Identity Aliases; select an Identity Set name from the list |
Healthcheck Username | Required | If Authentication is set to Identity Aliases, the username that should be used to verify StrongDM's connection to the cluster; the username must already exist on the target cluster |
Assume Role ARN | Optional | Role ARN, such as arn:aws:iam::000000000000:role/RoleName, that allows users accessing this resource to assume a role using AWS AssumeRole functionality |
Assume Role External ID | Optional | External ID to use if leveraging an external ID for users assuming a role from another account; if used, it must be used in conjunction with Assume Role ARN; see the AWS documentation on using external IDs for more information |
Resource Tags | Optional | Resource tags consisting of key-value pairs <KEY>=<VALUE> (for example, env=dev) |
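The Endpoint, Cluster Name, and Region values can be read directly from AWS. This is a sketch; my-eks-cluster and us-west-2 are placeholders, and it assumes the AWS CLI is configured:
# API server endpoint (omit the leading https:// when pasting into the Endpoint field)
aws eks describe-cluster --name my-eks-cluster --region us-west-2 \
  --query cluster.endpoint --output text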
Display name
Some Kubernetes management interfaces, such as Visual Studio Code, do not function properly with cluster names containing spaces. If you run into problems, please choose a Display Name without spaces.
Client credentials
When your users connect to this cluster, they have exactly the rights granted to the instance role by the aws-auth ConfigMap mapping and the associated Kubernetes RBAC bindings. See AWS documentation for more information.
Server CA
How to get the Server CA from your kube config file:
- Open the CLI and type cat ~/.kube/config to view the contents of the file.
- In the file, under - cluster, copy the certificate-authority-data value. That is the server certificate in Base64 encoding.
- cluster:
    certificate-authority-data: ... SERVER CERT BASE64 ...
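Alternatively, the same Base64-encoded certificate can be fetched with the AWS CLI; this is a sketch with a placeholder cluster name and region:
aws eks describe-cluster --name my-eks-cluster --region us-west-2 \
  --query cluster.certificateAuthority.data --output text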
Secret Store options
By default, server credentials are stored in StrongDM. However, these credentials can also be saved in a secrets management tool.
Non-StrongDM options appear in the Secret Store dropdown menu if they are created under Network > Secret Stores. When you select another Secret Store type, its unique properties display. For more details, see Configure Secret Store Integrations.
RBAC Configuration
You can now create RBAC roles to grant permissions based on the StrongDM role(s) assigned to your StrongDM users.
Example of an RBAC configuration that limits the StrongDM role “developer” to the namespace “dev”:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sdm-developer
rules:
- apiGroups:
  - ""
  resources:
  - "*"
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - "*"
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - "*"
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "authorization.k8s.io"
  resources:
  - selfsubjectaccessreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sdm-developer
  namespace: dev
subjects:
- kind: Group
  name: developer # Name of the StrongDM role
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sdm-developer # Name of the ClusterRole you want that StrongDM role to have
After the role is created, you can check that access works as intended. For example:
$ kubectl auth can-i --as=* --as-group=developer get pods -n dev
yes
$ kubectl auth can-i --as=* --as-group=developer get pods
no