
Dimitri Appriou
I'm a Cloud Engineer. Don't ask what your cloud can do for you. Ask what you can do for your cloud. Well actually no just let the cloud work for you.

Effective secrets management in AWS

Today, secrets are used everywhere. Because they are the keys to our infrastructure and the data it holds, a compromised secret can lead to security breaches of catastrophic proportions. Consequently, securing secrets should be a top priority.

Secrets management covers multiple topics, from generation and access control to expiration. In this article, we will focus on managing the secrets lifecycle when we have an infrastructure deployed on AWS. For the other topics, a good introduction can be found in the Secrets Management Cheat Sheet.

Multiple tools and services can be used to store secrets, and it goes without saying that a VCS (Version Control System) is not one of them. When using AWS, AWS Secrets Manager and AWS SSM Parameter Store should always be the first choice, as they are perfectly integrated with other AWS services and easy to use. Both are versioned key/value stores that optionally encrypt secrets with a KMS key. AWS Secrets Manager is more expensive ($0.40 per secret per month and $0.05 per 10,000 API calls) but provides full secret rotation integration with services like RDS, Redshift and DocumentDB, offers cross-region replication out of the box, and allows storing up to 10KB of data per secret. On the other hand, AWS SSM Parameter Store is free for Standard Parameters ($0.05 per Advanced Parameter per month) and costs $0.05 per 10,000 API calls. Standard Parameters can only store 4KB of data (8KB for Advanced Parameters).

We should choose the service that fits our needs and stick with it, so we have a single source of truth and avoid secrets scattered across multiple services.
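
Both services expose a very similar read path through boto3. Below is a minimal sketch (the secret names and JSON layout are illustrative, not from this article); the helpers take a boto3 client as a parameter so the logic stays testable:

```python
import json


def read_secretsmanager(client, secret_id):
    """Fetch a secret from AWS Secrets Manager and decode its JSON payload."""
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response['SecretString'])


def read_parameter_store(client, name):
    """Fetch a decrypted SecureString parameter from AWS SSM Parameter Store."""
    response = client.get_parameter(Name=name, WithDecryption=True)
    return response['Parameter']['Value']


# Usage (assumes AWS credentials are configured for the given profile):
#   import boto3
#   session = boto3.Session(profile_name='myapp-dev', region_name='us-east-1')
#   read_secretsmanager(session.client('secretsmanager'), 'myapp/database')
#   read_parameter_store(session.client('ssm'), '/myapp/database/password')
```

Either way the consuming code is a single API call, which is why the choice between the two services comes down to price and features rather than ease of use.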

Now that we have the tools, the main question is “How do we create, update and delete secrets in a DevOps way?”

To answer this question, we will discuss different approaches and conclude with a list of pros and cons.

Managed secrets with CloudFormation and Terraform

A secret should be treated like any other resource of our infrastructure; therefore, storing random secrets (secrets not provided by an external source) should be done using Infrastructure as Code tools to avoid any unnecessary manual operation.

With Terraform:

#--------------------------------------
# Variables
#--------------------------------------
variable "kms_key_name" {
  type = string
}

variable "sm_secret_key" {
  type = string
}

variable "ssm_secret_key" {
  type = string
}

#--------------------------------------
# KMS
#--------------------------------------

data "aws_kms_key" "this" {
  key_id = "alias/${var.kms_key_name}"
}

#--------------------------------------
# Generates secret
#--------------------------------------

resource "random_password" "this" {
  length  = 32
  special = false
}

#--------------------------------------
# Creates AWS Secret Manager Resource
#--------------------------------------

resource "aws_secretsmanager_secret" "this" {
  name        = var.sm_secret_key
  description = "My secret description"
  kms_key_id  = data.aws_kms_key.this.arn
}

resource "aws_secretsmanager_secret_version" "this" {
  secret_id = aws_secretsmanager_secret.this.id
  secret_string = jsonencode({
    password = random_password.this.result
  })
}

#--------------------------------------
# Creates AWS SSM Parameter Store Parameter
#--------------------------------------
resource "aws_ssm_parameter" "secret" {
  name        = var.ssm_secret_key
  description = "My secret description"
  type        = "SecureString"
  value       = random_password.this.result
  key_id      = data.aws_kms_key.this.arn
}

It is very important to remember that any secret generated by Terraform is stored in the .tfstate file. Therefore, a remote backend should be used, the S3 bucket should be encrypted with a KMS key, and access to it should follow the least-privilege principle.

With CloudFormation:

Parameters:

  SecretName:
    Type: String
  KmsKeyId:
    Type: String

Resources:

  MySecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: !Ref SecretName
      Description: My secret description
      GenerateSecretString:
        SecretStringTemplate: '{}'
        GenerateStringKey: 'password'
        PasswordLength: 32
      KmsKeyId: !Ref KmsKeyId

CloudFormation comes with multiple limitations:

  • it cannot generate multiple secrets with a single AWS::SecretsManager::Secret resource
  • it cannot create SSM Parameters of type SecureString
  • AWS Secrets Manager and AWS SSM Parameter Store secrets can only be referenced in a CloudFormation template by a handful of services

The deletion of a secret in Secrets Manager with CloudFormation or Terraform is not immediate: the secret is scheduled for deletion with a recovery window of 7 days by default, which prevents the creation of a new secret with the same name during that window. To avoid this issue, we need to use the AWS CLI to delete the secret with the option --force-delete-without-recovery.
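
The same forced deletion can be scripted with boto3, whose delete_secret call accepts a ForceDeleteWithoutRecovery flag. A small sketch (the secret name in the usage comment is illustrative):

```python
def force_delete_secret(client, secret_id):
    """Delete a Secrets Manager secret immediately, skipping the recovery window.

    Equivalent to:
      aws secretsmanager delete-secret --secret-id <id> --force-delete-without-recovery
    """
    response = client.delete_secret(
        SecretId=secret_id,
        ForceDeleteWithoutRecovery=True,
    )
    return response['Name']


# Usage (assumes AWS credentials are configured for the given profile):
#   import boto3
#   client = boto3.Session(profile_name='myapp-dev').client('secretsmanager')
#   force_delete_secret(client, 'myapp/database')
```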

As we can see, IaC tools (especially Terraform) will fit most use cases.

Managing 3rd Party Secrets

When a secret is provided by an external source (like an API key used to access a 3rd party service), the solutions presented in the previous part using random secrets will not work and automation will be limited. However, there are solutions to make our lives easier and avoid having to go to multiple AWS accounts to update the secret manually in the AWS console.

Python script and AWS S3 Bucket

Important notes:

  • The following script stores secrets in AWS SSM Parameter Store, but it would be the same with Secrets Manager.
  • We are using Python, but any programming language with an AWS SDK would work.

This solution is quite easy. First, we create a utility class that handles the calls to SSM, supports a dry-run mode and updates only what is necessary:

import sys
import boto3

class SsmParameterUpdater:

  def __init__(self, profile_name, region_name):
    if len(sys.argv) == 2 and sys.argv[1] == '--nodryrun':
      self._dry_run = False
      print('==================')
      print('Update Mode')
      print('==================')
    else:
      self._dry_run = True
      print('==================')
      print('Dry Run Mode')
      print('==================')
    session = boto3.Session(profile_name=profile_name, region_name=region_name)
    self._ssm_client = session.client('ssm')

  def get_ssm_parameter(self, name: str):
    try:
      return self._ssm_client.get_parameter(Name=name, WithDecryption=True)['Parameter']
    except self._ssm_client.exceptions.ParameterNotFound:
      return None

  def put_ssm_parameter(self, name: str, value: str, value_type: str = 'SecureString'):
    param = self.get_ssm_parameter(name)
    if not param or param['Value'] != value or param['Type'] != value_type:
      if not self._dry_run:

        self._ssm_client.put_parameter(
          Name=name,
          Value=value,
          Type=value_type,
          Overwrite=True
        )

        print(f'[modified] {name}')

      else:
        print(f'DR - [modified] {name}')
    elif not self._dry_run:
      print(f'[no change] {name}')
    else:
      print(f'DR - [no change] {name}')

Then we can use the class SsmParameterUpdater in a new .py file:

from ssm_utils import SsmParameterUpdater

ssm = SsmParameterUpdater('myapp-dev', 'us-east-1')

ssm.put_ssm_parameter('/myapp/database/password/1', 'xxxx')
ssm.put_ssm_parameter('/myapp/database/password/2', 'xxxx')
# [...]

By default, the parameters are created as SecureString. We can create a regular String or StringList by passing an extra argument:

ssm.put_ssm_parameter('/cicd/deployed/color', 'blue', value_type='String')

The script can either be executed in:

  • Dry-run mode: python ssm-labs.py
  • Update mode: python ssm-labs.py --nodryrun

If we have multiple environments, each one will have its own file.

Once we are done, we need to upload these scripts to a Vault Bucket in order to version them and share them with other team members:

#!/bin/sh
aws s3 sync --profile myapp-dev . s3://myapp-vault/ssm-parameters --exclude "*" --include "ssm-*.py"

On the other side we need to be able to download the scripts from the Vault Bucket:

#!/bin/sh
aws s3 sync --profile myapp-dev s3://myapp-vault/ssm-parameters . --exclude "*" --include "ssm-*.py"

These shell scripts should be pushed to a VCS along with the application and infrastructure code, but we must also have a .gitignore file containing the entry ssm-*.py.

This solution is quite simple, but there are major drawbacks:

  • A secured bucket must be created and appropriate access control must be put in place.
  • We must not forget to execute the download script before doing any modification.
  • We must not forget to execute the upload script when we are done.
  • Without checking the AWS console, we cannot be 100% certain of what is actually deployed.
  • Checking value changes is complicated:
    • Even with versioning enabled at the bucket level, downloading the right version and comparing the files is painful.
    • AWS SSM Parameter Store has a history feature in the console, but each parameter must be checked individually and it is difficult to detect changes in a 4KB JSON.
    • For AWS Secrets Manager, the CLI (or an AWS SDK) must be used.

Terraform and AWS S3 Bucket

This solution is comparable to the previous one, but instead of using a Python script, we are using Terraform templates that will be stored in S3.

All the files are stored in a secured_modules folder:

├── project
│   ├── ...
│   ├── secured_modules
│   │   ├── download.sh
│   │   ├── upload.sh
│   │   ├── main.tf
│   │   ├── providers.tf
│   │   ├── versions.tf
└── .gitignore

This time, all Terraform files in the secured_modules folder must be ignored by git (secured_modules/*.tf).

Then, as in the example with the random secrets, we create Secrets Manager resources, but this time the values are hardcoded:

#--------------------------------------
# Variables
#--------------------------------------
variable "kms_key_name" {
  type = string
}

variable "secrets_key" {
  type = string
}

#--------------------------------------
# KMS
#--------------------------------------

data "aws_kms_key" "this" {
  key_id = "alias/${var.kms_key_name}"
}

#--------------------------------------
# Creates AWS Secret Manager Resource
#--------------------------------------

resource "aws_secretsmanager_secret" "this" {
  name        = var.secrets_key
  description = "My secret's description"
  kms_key_id  = data.aws_kms_key.this.arn
}

resource "aws_secretsmanager_secret_version" "this" {
  secret_id = aws_secretsmanager_secret.this.id
  secret_string = jsonencode({
    password = "my_password"
  })
}

Of course, using Terraform instead of a Python script does not remove any of the drawbacks of this solution. This leads us to the only remaining option: building our own secrets manager.

Python script on steroids

As we have already discussed, AWS Secrets Manager and AWS SSM Parameter Store version each secret, and both can retain up to 100 versions. Once a secret reaches 100 versions, each new version pushes the oldest one out of the history to make room.

With that in mind, creating a script with the following features is far from a daunting task:

  • Creating a new secret.
  • Updating a secret.
  • Deleting a secret.
  • Viewing a secret.
  • Listing a secret's versions.
  • Comparing two versions of a secret.
  • Dumping all the secrets in the console or files.
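
As a sketch of the "listing versions" and "comparing versions" features above, SSM's get_parameter_history call returns every retained version of a parameter. The snippet below is illustrative (pagination is omitted for brevity, and the client is injected as a parameter so the logic stays testable):

```python
def list_versions(client, name):
    """Return (version, value) pairs for every retained version of a parameter."""
    history = client.get_parameter_history(Name=name, WithDecryption=True)
    return [(p['Version'], p['Value']) for p in history['Parameters']]


def versions_differ(client, name, v1, v2):
    """True when two versions of the same parameter hold different values."""
    values = dict(list_versions(client, name))
    return values[v1] != values[v2]


# Usage (assumes AWS credentials are configured for the given profile):
#   import boto3
#   client = boto3.Session(profile_name='myapp-dev').client('ssm')
#   list_versions(client, '/myapp/database/password')
```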

The following script index.py is an example of how to get started.

This solution is very simple to use, gives us full control over our secrets and can match any use case, even randomly generated secrets. Below are some examples of how to use the script:

python secrets-mgmt.py list
python secrets-mgmt.py list-versions my_secret
python secrets-mgmt.py get my_secret latest
python secrets-mgmt.py dump my_secret latest
python secrets-mgmt.py create my_new_secret secret.json
python secrets-mgmt.py update my_new_secret secret.json
python secrets-mgmt.py delete my_new_secret

Conclusion

As we have seen, despite numerous alternatives, managing secrets is not an easy task due to the lack of out-of-the-box solutions.

To sum up this article: to choose the best solution, we must first consider whether the secret is provided by a third party. If not, we should use the first solution detailed in this article, as it is very simple and tackles the problem perfectly. Otherwise, we saw three solutions:

  • Python script and AWS S3 Bucket.
  • Terraform and AWS S3 Bucket.
  • Python script on steroids.

These three solutions can be seen as successive steps in a team's thinking, leading to the last one. That solution fits the widest range of situations and would be the best choice for your projects.

Now it’s your turn!

Schedule a 1-on-1 with an ARHS Cloud Expert today!