Charles Guebels
Handling Secrets and Parameters on AWS EKS
Security best practices require protecting confidential data (e.g. passwords, tokens, API keys). On AWS, this information is usually stored in AWS Secrets Manager or AWS Systems Manager Parameter Store (SSM Parameter Store).
Creating and using secrets within AWS is quite simple, but accessing them from a Kubernetes cluster is not. We usually need to access secrets from a pod to retrieve datastore credentials, API keys, etc.
In this article we'll see how to set up EKS to use secrets and parameters stored in AWS Secrets Manager and AWS SSM Parameter Store.
Table of Contents
- Kubernetes Secrets versus AWS Secrets
- Prerequisites
- Retrieving secrets and parameters
- Handling JSON secrets
- Synchronising secrets
- Conclusion
Kubernetes Secrets versus AWS Secrets
In Kubernetes, secrets can be stored using the Secret kind:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
type: Opaque
data:
  password: U3Bpa2VzZWVk
```
However, using the SecretProviderClass kind offers multiple advantages in a Cloud context:
- Having a single file to configure the secrets allows managing them from a single place.
- Having secrets outside of the cluster eases integration with external tools. It is also very easy to share a secret across several Kubernetes namespaces and to keep it in sync. This externalisation also allows delegating secret creation to an Infrastructure as Code (IaC) tool, for example. This way there is no need to find a solution to access the Kubernetes API (often in private subnets) to create secrets and keep them up to date.
- Using AWS Secrets Manager or AWS SSM Parameter Store from an EKS cluster together with Kubernetes service accounts allows fine-grained control over who can access which secrets. It also makes it easy to define reusable groups of secrets.
Prerequisites
To be able to use AWS Secrets Manager or AWS SSM Parameter Store from Kubernetes a bit of configuration is required.
We need to:
- create an OpenID Connect (OIDC) identity provider to be able to integrate IAM roles with Kubernetes service accounts
- install the Kubernetes Secrets Store CSI Driver
- install the AWS Secrets and Configuration Provider (ASCP)
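These prerequisites can be set up with eksctl and Helm. A sketch under stated assumptions: the cluster name and release names below are examples, not values from this article.

```shell
# Associate an OIDC identity provider with the cluster (cluster name is an example)
eksctl utils associate-iam-oidc-provider --cluster spikeseed-cluster --approve

# Install the Kubernetes Secrets Store CSI Driver
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system

# Install the AWS provider (ASCP)
helm repo add aws-secrets-manager https://aws.github.io/secrets-store-csi-driver-provider-aws
helm install secrets-provider-aws aws-secrets-manager/secrets-store-csi-driver-provider-aws \
  --namespace kube-system
```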
IAM role and policy
Next, we need to create an IAM policy and an IAM role to be used by a service account.
A service account provides an identity for processes that run in a Pod. These processes will have the permissions of the AWS IAM role attached to the service account.
The following policy allows:
- the retrieval of secrets from AWS Secrets Manager and AWS SSM Parameter Store
- the use of a KMS key (required if the secrets are encrypted)
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "secretsmanager:DescribeSecret",
        "secretsmanager:GetSecretValue",
        "ssm:DescribeParameters",
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:GetParametersByPath"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": [
        "kms:DescribeCustomKeyStores",
        "kms:ListKeys",
        "kms:ListAliases"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": [
        "kms:Decrypt",
        "kms:GetKeyRotationStatus",
        "kms:GetKeyPolicy",
        "kms:DescribeKey"
      ],
      "Effect": "Allow",
      "Resource": "<KMS_KEY_ARN>"
    }
  ]
}
```
To follow the principle of least privilege, we create an IAM role with a trust policy which restricts its usage to a specific EKS cluster, namespace and service account.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>"
      },
      "Condition": {
        "StringEquals": {
          "oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>:aud": "sts.amazonaws.com",
          "oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:<K8S_NAMESPACE>:<SERVICE_ACCOUNT_NAME>"
        }
      }
    }
  ]
}
```
In this article we will use spikeseed-blog as the namespace and admin-sa as the service account.
The last step is to attach the policy to the role.
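With the AWS CLI, creating the policy and role and attaching them together looks like the following sketch. The role and policy names and the local file paths are assumptions for illustration, not values mandated by AWS.

```shell
# Create the IAM policy from the policy document above (file name is an example)
aws iam create-policy \
  --policy-name eks-secrets-policy \
  --policy-document file://secrets-policy.json

# Create the IAM role with the trust policy above (file name is an example)
aws iam create-role \
  --role-name eks-secrets-role \
  --assume-role-policy-document file://trust-policy.json

# Attach the policy to the role
aws iam attach-role-policy \
  --role-name eks-secrets-role \
  --policy-arn arn:aws:iam::<AWS_ACCOUNT_ID>:policy/eks-secrets-policy
```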
ServiceAccount kind
Now we can create a ServiceAccount to allow the pods to assume the IAM role.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-sa
  namespace: spikeseed-blog
  annotations:
    eks.amazonaws.com/role-arn: <IAM_SERVICE_ACCOUNT_ROLE_ARN>
```
It is important to note that this service account is only available to the specified namespace.
SecretProviderClass kind
With the SecretProviderClass kind we can define which secrets a pod has access to.
But first we need to create some secrets and parameters:
- A Secrets Manager simple secret (a plain text secret). This secret will be identified by mySimpleSecret in the examples below.
- A Secrets Manager JSON formatted secret (the secret is the whole JSON). This secret will be identified by myJSONSecret in the examples below.
- An SSM Parameter Store parameter. This parameter will be identified by /spikeseed/blog/myparameter in the examples below.
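The three entries above can be created with the AWS CLI, for example as follows. This is a sketch: the secret values are the sample values used later in this article, and the parameter type (String vs SecureString) is an assumption.

```shell
# A plain-text secret in AWS Secrets Manager
aws secretsmanager create-secret \
  --name mySimpleSecret \
  --secret-string 'Arhs Spikeseed is hiring but it is not a secret ;)'

# A JSON-formatted secret in AWS Secrets Manager
aws secretsmanager create-secret \
  --name myJSONSecret \
  --secret-string '{"username": "usernameSecretValue","password": "passwordSecretValue"}'

# A parameter in SSM Parameter Store
aws ssm put-parameter \
  --name /spikeseed/blog/myparameter \
  --value "My parameter" \
  --type String
```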
Then we can create a SecretProviderClass manifest.

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: admin-aws-secrets
  namespace: spikeseed-blog
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "mySimpleSecret"
        objectType: "secretsmanager"
      - objectName: "myJSONSecret"
        objectType: "secretsmanager"
      - objectName: "/spikeseed/blog/myparameter"
        objectType: "ssmparameter"
```
In this example we have two secrets from AWS Secrets Manager (using the secret name) and one from SSM Parameter Store (using the parameter key).
Again, these secrets are only available inside the specified namespace.
Retrieving secrets and parameters
Finally, we are going to check that a Kubernetes pod can use the secrets and parameters we have previously defined. To do so we create a simple Kubernetes Deployment.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  namespace: spikeseed-blog
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: admin-sa
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "admin-aws-secrets"
      containers:
        - name: demo-deployment
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
```
We mount a volume in the pod using the secrets-store.csi.k8s.io driver and the SecretProviderClass we have created earlier.
Important notes:
- namespace must be the same for the SecretProviderClass, ServiceAccount and Deployment.
- serviceAccountName must have the same name as the ServiceAccount created previously.
- secretProviderClass must have the same name as the SecretProviderClass created previously.
- mountPath is the directory path in the pod file system where we will be able to read all the secrets and parameters included in the specified secret class.
- volumes.name and volumeMounts.name can have any value but must be the same.
After the deployment we can connect to the pod and execute the following commands to check that our secrets are now accessible from our Kubernetes pod:
```shell
$ ls -l /mnt/secrets-store/
-rw-r--r-- 1 root root 11 Jan 31 23:10 _spikeseed_blog_myparameter
-rw-r--r-- 1 root root 74 Jan 31 23:10 mySimpleSecret
-rw-r--r-- 1 root root 72 Jan 31 23:10 myJSONSecret

$ cat /mnt/secrets-store/_spikeseed_blog_myparameter
My parameter

$ cat /mnt/secrets-store/mySimpleSecret
Arhs Spikeseed is hiring but it is not a secret ;)

$ cat /mnt/secrets-store/myJSONSecret
{ "username": "usernameSecretValue","password": "passwordSecretValue" }
```
For the JSON secret, to display a single property we need an extra tool like jq:

```shell
$ cat /mnt/secrets-store/myJSONSecret | jq -r .username
usernameSecretValue

$ cat /mnt/secrets-store/myJSONSecret | jq -r .password
passwordSecretValue
```
We will see later a better way to retrieve these values.
Secrets and environment variables
Of course, retrieving secrets from files comes with limitations, and we usually expect to have them in environment variables.
To do so we need to update the SecretProviderClass manifest:
```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: admin-aws-secrets
  namespace: spikeseed-blog
spec:
  provider: aws
  ### Start update
  secretObjects:
    - secretName: my-k8s-secrets
      type: Opaque
      data:
        - objectName: mySimpleSecret
          key: simpleSecret
        - objectName: myJSONSecret
          key: jsonSecret
        - objectName: parameterAlias
          key: myParameter
  ### End update
  parameters:
    objects: |
      - objectName: "mySimpleSecret"
        objectType: "secretsmanager"
      - objectName: "myJSONSecret"
        objectType: "secretsmanager"
      ### Start update
      - objectName: "/spikeseed/blog/myparameter"
        objectType: "ssmparameter"
        objectAlias: parameterAlias
      ### End update
```
We have added a new secretObjects section to create a Kubernetes secret named my-k8s-secrets containing three keys: simpleSecret, jsonSecret and myParameter.
Notes:
- For AWS Secrets Manager secrets, objectName must have the same value in the secretObjects and parameters sections.
- For SSM Parameter Store parameters, we need to use an objectAlias: parameters.objects.objectName is "/spikeseed/blog/myparameter", but secretObjects.data.objectName must match parameters.objects.objectAlias.
Next, we need to update the Deployment manifest.
```yaml
[...]
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "admin-aws-secrets"
      containers:
        - name: demo-deployment
          image: nginx
          ### Start update
          env:
            - name: SIMPLE_SECRET_ENV_VAR
              valueFrom:
                secretKeyRef:
                  name: my-k8s-secrets
                  key: simpleSecret
            - name: JSON_SECRET_ENV_VAR
              valueFrom:
                secretKeyRef:
                  name: my-k8s-secrets
                  key: jsonSecret
            - name: MY_PARAMETER
              valueFrom:
                secretKeyRef:
                  name: my-k8s-secrets
                  key: myParameter
          ### End update
[...]
```
We have added a new env section to have three environment variables: SIMPLE_SECRET_ENV_VAR, JSON_SECRET_ENV_VAR and MY_PARAMETER, where key references secretObjects.data.key from the SecretProviderClass configuration.
After re-deploying both manifests we can connect to the pod and execute the following commands to check that our secrets are now accessible through environment variables:
```shell
$ echo $SIMPLE_SECRET_ENV_VAR
Arhs Spikeseed is hiring but it is not a secret ;)

$ echo $JSON_SECRET_ENV_VAR
{"username": "usernameSecretValue","password": "passwordSecretValue"}

$ echo $MY_PARAMETER
My parameter
```
Handling JSON secrets
Storing JSON in an environment variable is rarely a good idea. But when we have no choice, we need to make sure that the value in AWS Secrets Manager is not stored in pretty-printed format (with newlines, carriage returns, tabs, etc.).
To avoid this issue all formatting characters must be removed, otherwise we may end up with the environment variable JSON_SECRET_ENV_VAR containing only part of the JSON.
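As an illustration, a pretty-printed JSON value can be compacted before it is stored. This sketch relies on jq's -c flag, which emits compact single-line JSON; the sample value reuses the secret from the examples above.

```shell
# Compact a pretty-printed JSON document to a single line with jq
printf '{\n  "username": "usernameSecretValue",\n  "password": "passwordSecretValue"\n}\n' \
  | jq -c .
# → {"username":"usernameSecretValue","password":"passwordSecretValue"}
```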
But we usually want to have the username and password values from the JSON secret in two different environment variables. Fortunately, the jmesPath field allows us to do exactly that (JMESPath stands for JSON Matching Expression Paths and is a query language for JSON).
Once again we need to update the SecretProviderClass manifest:
```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: admin-aws-secrets
  namespace: spikeseed-blog
spec:
  provider: aws
  secretObjects:
    - secretName: my-k8s-secrets
      type: Opaque
      data:
        ### Start update
        - objectName: usernameAlias
          key: username
        - objectName: passwordAlias
          key: password
        ### End update
  parameters:
    objects: |
      - objectName: "myJSONSecret"
        objectType: "secretsmanager"
        ### Start update
        jmesPath:
          - path: username
            objectAlias: usernameAlias
          - path: password
            objectAlias: passwordAlias
        ### End update
```
In the secretObjects section we added two dedicated entries (username and password) whose keys are used as references in the Deployment manifest and whose objectNames are used as references in the parameters section. As for AWS SSM Parameter Store parameters, secretObjects.data.objectName must match parameters.objects.objectAlias.
We update the Deployment manifest one last time to have two environment variables (USERNAME and PASSWORD) as we previously did.
```yaml
apiVersion: apps/v1
kind: Deployment
[...]
          env:
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  name: my-k8s-secrets
                  key: username
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-k8s-secrets
                  key: password
[...]
```
If we connect to the pod:

```shell
$ echo $USERNAME
usernameSecretValue

$ echo $PASSWORD
passwordSecretValue
```
Note: Now in the pod file system we have two new files named username and password containing the individual secret values.
Synchronising secrets
Before we conclude, it is worth mentioning that the real added value of using the AWS Secrets and Configuration Provider is the ability to keep Kubernetes secrets synchronised with the AWS secrets. Without this feature, if an AWS secret is changed, the pod must be recreated to get the new secret value.
To enable this feature, we need to set two properties when installing the Secrets Store CSI Driver:
- syncSecret.enabled: true
- enableSecretRotation: true

Moreover, to change the secrets synchronisation frequency (120s by default) there is the rotationPollInterval property.
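With the Helm chart, these properties can be passed as values. A sketch, assuming the driver was installed as a Helm release named csi-secrets-store in kube-system (the release name is an assumption):

```shell
# Enable secret syncing and rotation on the Secrets Store CSI Driver
helm upgrade csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system \
  --set syncSecret.enabled=true \
  --set enableSecretRotation=true \
  --set rotationPollInterval=30s
```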
But it is very important to keep in mind that this synchronisation doesn't update the environment variables. It only refreshes the secrets contained in the mounted secret volume; in our examples, the files in /mnt/secrets-store/. To update the environment variables linked to the secrets we still have to restart the pod or use an extra tool like Reloader.
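For example, a rolling restart forces the pods to be recreated and thus to pick up the new secret values in their environment (using the deployment and namespace names from the examples above):

```shell
kubectl rollout restart deployment/demo-deployment -n spikeseed-blog
```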
Conclusion
In this article we have seen how to configure an EKS cluster to use secrets and parameters from AWS Secrets Manager and AWS Systems Manager Parameter Store. We also discussed several ways to share these secrets with Kubernetes pods, covering multiple use cases.
As always with Kubernetes' huge ecosystem, nothing is easy at first, but once we understand the nuts and bolts the result is quite neat.