The Challenges of Secrets Management, from Code to Cloud
Managing secrets, such as passwords, keys and other sensitive information, is critical for protecting data and systems from unauthorized access and breaches. However, doing so across different environments, platforms and teams can be complex and challenging.
Various tools and technologies — such as key management services, secret stores and configuration management tools — can help. However, each tool has its own advantages and limitations, and choosing the right approach depends on the specific needs and context of the organization.
Some of the challenges of secret management include:
- Secure storage. Secrets must be stored in a secure manner to prevent unauthorized access. This can be difficult in cloud environments, where secrets may be stored across multiple servers and locations.
- Access control. It is important to have proper access controls in place to ensure that only authorized individuals or systems have access to secrets.
- Distribution. Secrets must be distributed to the systems and individuals that need them, while still maintaining security.
- Scale. Managing secrets at scale can be challenging, as the number of secrets and the number of systems that need access to them can grow quickly.
- Auditing and monitoring. It is important to be able to track and audit secret usage to detect and prevent misuse.
- Automation. Automating secret management can be difficult, especially in a dynamic, cloud-based environment.
- Compliance. Depending on the industry and location, there may be specific regulations and compliance requirements that must be met when storing and managing secrets.
- Rotation. Secrets need to be rotated regularly to minimize the risk of unauthorized access.
- Integration. Secret management should be integrated with other security measures and systems in the organization, such as identity and access management, and security information and event management (SIEM).
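The rotation challenge above can be made concrete with a small check that compares a secret's age against a rotation policy. This is an illustrative sketch only; the record fields and the 90-day interval are assumptions, not a prescribed schema:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical secret record carrying the kind of metadata discussed
# above (owner, purpose, creation time); field names are illustrative.
secret = {
    "name": "payments-db-password",
    "owner": "team-payments",
    "created_at": datetime(2024, 1, 1, tzinfo=timezone.utc),
}

ROTATION_INTERVAL = timedelta(days=90)  # example rotation policy

def needs_rotation(record, now=None):
    """Return True if the secret is older than the rotation interval."""
    now = now or datetime.now(timezone.utc)
    return now - record["created_at"] > ROTATION_INTERVAL

# A secret created on 2024-01-01 is overdue for rotation by mid-2024.
print(needs_rotation(secret, now=datetime(2024, 6, 1, tzinfo=timezone.utc)))  # True
```

In practice this check would run in an automated job that files a ticket or triggers rotation, rather than being evaluated by hand.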
The Risks of Hard-Coded Secrets
Hard-coding secrets can create a range of challenges and risks for secret management, making it difficult to rotate, manage and secure secrets, track usage and integrate with other security tools.
It is important to use secure secrets management practices, such as storing secrets in a secure, centralized location, and using encryption, access controls, and audit logs to manage access and usage. By using these practices, you can minimize the risk of security breaches, ensure compliance with regulations, and enable more efficient management and scaling of your applications.
Tools to Detect Hard-Coded Secrets
TruffleHog is a powerful open source tool for identifying secrets and sensitive information across an organization’s entire software development life cycle (SDLC). In addition to identifying secrets in code, TruffleHog can also detect insecurely shared secrets in other areas of the SDLC, such as configuration files, build scripts and deployment pipelines.
TruffleHog’s engine is designed to verify over 700 unique credential types against their issuing providers to reduce false positives. This means the tool can identify a wide range of potential security exposures, including API keys, passwords and access tokens.
One of the key benefits of TruffleHog is its ability to shift key rotation left by automating the remediation process with the developer who leaked the key. Developers can quickly identify and fix leaks as part of their normal workflow, without relying on a dedicated security team to manage the process.
Keep in mind that when TruffleHog runs on a repository, it scans only committed files; any untracked files containing secrets will not be detected.
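To illustrate the pattern-matching stage of a scanner like TruffleHog, here is a deliberately minimal sketch that checks text against a single AWS access key ID pattern. Real detectors cover hundreds of credential types and verify candidates against the provider; this one-pattern version is only a conceptual illustration:

```python
import re

# Simplified illustration of pattern-based secret detection. AWS access
# key IDs follow a well-known shape: "AKIA" followed by 16 characters.
AWS_ACCESS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan(text):
    """Return all substrings of `text` that look like AWS access key IDs."""
    return AWS_ACCESS_KEY_ID.findall(text)

source = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # hard-coded, will be flagged'
print(scan(source))  # ['AKIAIOSFODNN7EXAMPLE']
```

(The key shown is AWS's documented example value, not a real credential.)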
Dockle is a container image linter designed to help developers build secure, best-practice Docker images. Its checkpoints include the Center for Internet Security (CIS) Benchmarks, a set of industry-standard best practices for securing IT systems and infrastructure.
Dockle is a valuable tool for any organization that uses Docker images in their software development and deployment processes. By providing automated checks for security issues and vulnerabilities, as well as best practices for image design and configuration, it can help improve the security posture of containerized applications and reduce the risk of data breaches and other security incidents.
CodeQL is an open source security-analysis tool developed by GitHub. It provides a powerful framework for performing automated code analysis and identifying security vulnerabilities in code.
Its standard libraries and queries power GitHub Advanced Security and other application security products that GitHub makes available to its customers worldwide. With CodeQL, developers can scan their codebase for vulnerabilities, identify common coding errors and find potential security flaws before they become a problem.
CodeQL also provides a flexible query language that allows developers to define custom queries for analyzing code. These queries can be used to identify security vulnerabilities, detect code smells and best practices violations, and track down code defects.
Trivy is an open source vulnerability scanner and security tool that can be used to find security issues such as vulnerabilities, misconfigurations, secrets and software bills of materials (SBOMs) in various environments including containers, Kubernetes (K8s), code repositories and clouds.
Trivy uses a database of known vulnerabilities to scan images and detect any potential issues. It can scan multiple types of container images, including Docker and Open Container Initiative (OCI) images, as well as Kubernetes manifests and Helm charts.
In addition, Trivy can scan git repositories for secrets, Amazon Web Services (AWS) S3 buckets for publicly exposed data, and other cloud services for misconfigurations.
Trivy can be used both in development and production environments and can be integrated with CI/CD pipelines to automate security scanning.
What If Detection Fails?
TruffleHog and Dockle can’t detect every possible instance of secrets or vulnerabilities in container images or code. Secrets can be obfuscated in various ways, such as using different casing or encoding techniques, making them harder to find.
Additionally, secrets that have been encrypted or custom-encoded may not be detectable by these tools, especially if the scheme used cannot be easily reverse-engineered.
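A quick sketch of why obfuscation defeats naive pattern matching: the same AWS-style example key that a regex catches in plain text goes unnoticed once it is base64-encoded. This is an illustration of the limitation, not of any specific tool's internals:

```python
import base64
import re

# Pattern for AWS access key IDs ("AKIA" + 16 characters).
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

secret = "AKIAIOSFODNN7EXAMPLE"  # AWS's documented example key
encoded = base64.b64encode(secret.encode()).decode()

print(bool(AWS_KEY.search(secret)))   # True  - plain text is caught
print(bool(AWS_KEY.search(encoded)))  # False - base64 hides it from the pattern
```

More capable scanners mitigate this by also decoding common encodings (base64, hex) before matching, but arbitrary obfuscation can always slip past static patterns.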
However, TruffleHog and Dockle are still valuable tools for identifying many common security issues and vulnerabilities in container images and code. While they may not catch every possible instance of a security issue or vulnerability, they can still help significantly improve an organization’s overall security posture, especially when integrated with other security solutions and processes.
The Problem with Kubernetes ConfigMaps
Kubernetes ConfigMaps can be a potential source of security vulnerabilities if they contain sensitive information and are committed to a git repository. ConfigMaps can be easily accessible by anyone with access to the git repository, making them vulnerable to exploitation.
To address this issue, adopt a security-first mindset when working with ConfigMaps: use Role-Based Access Control (RBAC) or Attribute-Based Access Control (ABAC) to ensure that only authorized users have access to them, encrypt sensitive values before they are stored, and attach additional metadata and labels that help identify the purpose of the data they contain.
Also, avoid hard-coding sensitive information in ConfigMaps; use environment variables or dedicated secret stores, such as Kubernetes Secrets or external key management services, to hold sensitive data. By doing so, you can prevent it from being accidentally exposed or compromised.
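As a minimal sketch of the environment-variable approach, the application can read the credential from its environment and fail fast when it is absent. The DB_PASSWORD variable name is hypothetical; in Kubernetes, such a variable would typically be populated from a Secret via a secretKeyRef rather than from a ConfigMap:

```python
import os

def get_db_password():
    """Read the database password from the environment (injected at
    deploy time) instead of hard-coding it in a ConfigMap or source."""
    password = os.environ.get("DB_PASSWORD")  # hypothetical variable name
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password
```

Failing fast on a missing variable surfaces misconfiguration at startup, rather than at the first database call deep in a request path.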
Overall, ConfigMaps can be a valuable tool for managing configuration data in Kubernetes environments, but it is important to use them securely and to take steps to mitigate potential security vulnerabilities.
Using Kubernetes Secrets
Kubernetes Secrets are a more secure approach to storing sensitive information than ConfigMaps, but note that, by default, Secret values are only base64-encoded; encryption at rest must be explicitly enabled on the cluster, and traffic to and from the API server should be protected with TLS. By following best practices for securing Secrets, and by continuously auditing and monitoring your Kubernetes clusters for vulnerabilities, you can help ensure your sensitive information is properly protected.
Kubernetes Secrets should be properly secured and only accessible by authorized users. This can be achieved by using RBAC or ABAC to control access to Secrets and ensuring they are only accessible to the specific services that require them.
Managing the life cycle of Secrets can be challenging, especially when it comes to key rotation and expiry. Secrets should be clearly identified and labeled with metadata describing their purpose and expiration date, and they should be rotated and updated regularly to prevent unauthorized access. Additionally, Secrets should be properly encrypted, with the encryption keys themselves securely managed.
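The distinction between encoding and encryption is easy to demonstrate: the value stored in a Secret manifest is plain base64, so anyone who can read the manifest can recover the plaintext. This is exactly why cluster-level encryption at rest and tight RBAC on Secrets matter:

```python
import base64

# A Secret manifest stores the value base64-encoded, like this:
stored = base64.b64encode(b"s3cr3t-password").decode()

# Anyone who can read the manifest can trivially decode it:
recovered = base64.b64decode(stored).decode()
print(recovered)  # s3cr3t-password
```

Base64 exists so arbitrary bytes can travel through YAML and JSON safely; it provides zero confidentiality.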
Enforce stringent SecurityContext with admission controllers in Kubernetes to ensure that your Secrets are properly secured. SecurityContext allows you to set various security-related properties on objects in your cluster, including pods, containers and volumes.
By setting and enforcing a strict security context on your pods and containers and using admission controllers, you can prevent lateral movement and limit the scope of potential security breaches. Note that Pod Security Policies (PSPs), which offered similar cluster-wide enforcement, were deprecated and removed in Kubernetes 1.25 in favor of Pod Security admission.
By implementing these measures, you can reduce the attack surface of your cluster and make it more difficult for attackers to compromise your sensitive data.
OpenID Connect (OIDC) is a widely used authentication protocol built on top of the OAuth 2.0 authorization framework. It provides a simple and secure way for users to authenticate and authorize access to web and mobile applications using their existing OIDC credentials, such as those from Google, GitHub or Okta.
One of the popular applications that leverages OIDC is HashiCorp Vault, a secrets management tool used to store and manage sensitive information. By configuring OIDC authentication for Vault, users can authenticate with it using their existing OIDC credentials, thus eliminating the need for creating separate usernames and passwords for Vault.
Configuring OIDC authentication for Vault involves creating an OIDC provider, configuring Vault to use OIDC authentication, and testing the authentication flow using the Vault CLI. This can help simplify the authentication process for users while maintaining strong security measures.
To get started with configuring OIDC authentication for Vault, you can follow the comprehensive guide provided by HashiCorp in its documentation.
When used in combination with Kubernetes, Vault can provide an additional layer of security by leveraging Kubernetes authentication to control access to database credentials. Vault’s Kubernetes authentication method verifies the authenticity of a Kubernetes ServiceAccount token, which can then be used to authorize the creation of dynamic database credentials.
In order to enable this functionality, the Vault agent can be deployed as a sidecar container alongside the application container within a Kubernetes pod. The agent is responsible for handling the creation and management of dynamic database credentials, as well as the authentication and authorization of Kubernetes ServiceAccount tokens.
With this configuration in place, applications can access their dynamically generated database credentials through a specified path in Vault, without ever having to store static credentials in configuration files. Vault will automatically rotate the credentials at a specified interval, further reducing the risk of credential leaks.
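On the application side, this pattern usually reduces to reading the file the agent renders into a shared volume on each use, so rotated credentials are picked up without a restart. The path and JSON field names below are assumptions that depend on your agent template configuration; this is a sketch, not Vault's API:

```python
import json
from pathlib import Path

# Hypothetical path where a Vault agent sidecar renders the credentials;
# the actual path depends on your agent template configuration.
CREDS_PATH = Path("/vault/secrets/db-creds.json")

def load_db_credentials(path=CREDS_PATH):
    """Re-read the rendered file on each call so credentials rotated by
    the agent are picked up without restarting the application."""
    data = json.loads(path.read_text())
    return data["username"], data["password"]
```

Because the agent owns authentication and renewal, the application never holds a Vault token or a static credential of its own.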
To get started with configuring dynamic database credentials with Vault and Kubernetes, you can follow the detailed guide provided by HashiCorp in this blog post.
Vault Authentication and Authorization
The k8s-vault-minkube-start.sh script provides a useful example of how to set up a Kubernetes cluster with Vault authentication and secrets management. The demo covers:
- Authentication and authorization. The demo showcases the use of OIDC integration or any of the other IDP-related auth methods. This allows for a secure and flexible authentication process for users accessing the Vault server.
- Auditability. All events are sent to an ELK stack, and alerts are set up with ElastAlert. Configuration as code, together with the vault policy list and vault policy read commands, is used for spot audits, which enhances the overall auditability of the system.
- Temporality. The demo showcases how credentials can easily be rotated or used temporarily only. Secrets in the KV backend can be rotated easily and are versioned, which adds to the security of the system.
- History. Temporary secrets are revoked in a timely fashion and are not allowed to wander in the wild. This ensures accountability for the system’s history, and any potential security breaches can be identified and acted upon quickly.
Moreover, it is recommended to have separate repositories with your Vault configuration to ensure better organization and security of your secrets.
Note that the script is intended for testing purposes only and should not be used in production environments without proper modification and security hardening.
The Challenges of Running Vault
Running Vault can present several challenges, but with proper planning and implementation, it can provide an effective solution for secrets management. Some of the challenges you may encounter when using Vault include:
- Configuration. Using Vault for multiple solutions can result in a complex HCL repository, as well as additional Kubernetes and Terraform code to provision it. It’s essential to ensure that the configuration is organized and easily maintainable.
- Auth backend integration. Integrating different auth backend methods safely requires careful attention, as mistakes can lead to security vulnerabilities. It’s crucial to properly manage the exposure of credentials, ensure roles are revoked correctly and clean up temporary credentials.
- User training. Not all DevOps consumers may be familiar with Vault and its intricacies, so training is necessary to ensure proper usage.
To effectively use Vault, consider taking the following steps:
- Ensure that enough metadata is stored about the secret, enabling effective usage.
- Have backups in place in case the storage backend behind Vault is damaged.
- If possible, remove Vault root tokens from git to prevent unauthorized access.
- Be prepared to secure the master (unseal) secrets in a secondary secrets management setup.
- Harden the environment where Vault runs to prevent unauthorized access.
- Be prepared for credential-related backend challenges.
Vault, like Kubernetes Secrets, can be a valuable tool when used correctly. However, it’s essential to understand and prepare for the various challenges that may arise to ensure its effective usage.
Secrets Management When Moving to the Cloud
When it comes to storing secrets in the cloud, there are a few challenges to consider. Two common solutions are the AWS SSM ParameterStore and AWS Secrets Manager. The examples we’ll explore here primarily focus on AWS, but it’s worth noting that the OWASP WrongSecrets project is currently working on including examples for Google and Azure.
These examples share similarities with the AWS examples and will be useful for organizations using those cloud providers. You can find more information about these examples on the WrongSecrets project’s website.
Here’s what to keep in mind when storing secrets in the cloud.
First, encryption is crucial; use AWS KMS to properly encrypt values or find alternative encryption options. Secrets should also be properly rotated and versioned. The AWS Secrets Manager and ParameterStore have different ways of exposing secrets and regulating access, so you should monitor access using CloudTrail. And as mentioned previously, store metadata about each secret to avoid forgetting its purpose.
Authentication matters, too. AWS Security Token Service (STS) can be used to authenticate and obtain temporary credentials, such as an assumed role, while identity and access management (IAM) roles and policies define whether entities can use the AWS SSM ParameterStore or AWS Secrets Manager.
Resource policies in Secrets Manager and IAM policies need to be carefully designed to avoid overly broad definitions, which can create powerful entities with access to too much. Use fine-grained policies, and avoid concentrating all permissions in a single role.
When considering access levels, determine at what level an entity is allowed to access the SSM ParameterStore or Secrets Manager. Is access granted at the worker node level, Kubernetes role level, or pod level? The closer the authentication is done to the actual backend service, the more secure it becomes.
Setting up and configuring services like IAM, STS, Secrets Manager and ParameterStore can be achieved via the console, but using Infrastructure as Code (IaC) is a better approach. However, there’s a caveat to this approach: take care to resolve secrets properly, as secrets can be inserted in nefarious ways with various IaC providers.
Additionally, when authenticating, be sure to secure credentials and avoid storing secrets directly in CI/CD tooling. Instead, use a separate secrets management system and grant the necessary access to the CI/CD tooling.
You can leverage CI/CD tooling to rotate secrets or instruct other components to do it for you. For instance, the CI/CD tool can request a secrets management system or another application to rotate the secret.
Alternatively, the CI/CD tool or another component could set up a dynamic secret: the secret is invalidated as soon as its consumer no longer exists. This procedure reduces possible leakage of a secret and allows for easy detection of misuse; if an attacker uses the secret from anywhere other than the consumer’s IP, you can easily detect it.
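The dynamic-secret idea can be sketched as a lease that carries a time-to-live and is revoked when its consumer disappears. This toy model only illustrates the life cycle; a real system such as Vault manages leases server-side and actually deletes the backing credential on revocation:

```python
from datetime import datetime, timedelta, timezone

class Lease:
    """Minimal sketch of a leased (dynamic) credential: it is invalid
    once the lease expires or has been revoked."""

    def __init__(self, secret, ttl_seconds):
        self.secret = secret
        self.expires_at = datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds)
        self.revoked = False

    def value(self):
        if self.revoked or datetime.now(timezone.utc) >= self.expires_at:
            raise PermissionError("lease expired or revoked")
        return self.secret

lease = Lease("temp-db-password", ttl_seconds=300)
print(lease.value())  # temp-db-password
lease.revoked = True  # e.g. the consuming pod was destroyed
# lease.value() would now raise PermissionError
```

Because every consumer gets its own short-lived lease, a leaked value is useless shortly after the consumer is gone, and any use of it stands out in audit logs.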
Alternatively, the CI/CD pipeline can leverage encryption as a service provided by a secrets management system to encrypt secrets before committing them to git. The consuming service can then fetch and decrypt the secrets during deployment.
Not all secrets must be stored in the CI/CD pipeline, and some may be better managed by the deployed services themselves, during deployment, runtime and destruction. Careful consideration should be given to the type of secrets involved and the overall security requirements of the system.
Backup of secrets is an important aspect of secrets management in CI/CD. It is recommended to back up secrets to storage that is separate from what is used for production-critical operations, such as cold storage. This ensures that the secrets are not lost in case of a disaster or system failure.
Encryption keys, in particular, should be backed up in a secure location, as they are critical for encrypting and decrypting sensitive information. Backups should also be regularly tested to ensure that they can be restored when needed.
Storing Secrets in CI/CD: Best Practices
When storing secrets in CI/CD pipelines, it’s important to follow best practices to ensure their security. Here are a few guidelines to help you:
- Use a secrets management system to store secrets. This ensures that secrets are encrypted and stored securely.
- Rotate secrets regularly, especially if you suspect they may have been compromised. Use automation to manage the secret rotation.
- Do not store secrets in plain text. Protect them with encryption (or hashing, for values that only need to be verified, such as passwords).
- Ensure that secrets are not stored in version-control systems like git. Instead, use a CI/CD pipeline to fetch secrets from a secure storage system.
- Use environment variables to store secrets within the pipeline, rather than hardcoding them into scripts or configuration files.
- Use a tool like Vault or AWS Key Management Service (KMS) to encrypt and decrypt secrets on the fly within the pipeline, rather than letting plaintext values persist.
- Limit the number of people who have access to secrets. Only provide access to those who need it to do their job.
- Monitor access to secrets to detect and prevent unauthorized access.
- Consider using multi-factor authentication (MFA) for accessing secrets.
- Encrypt all network traffic between the CI/CD pipeline and the secrets management system.
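As a small sketch of the environment-variable guideline above, a pipeline step can require its secret from the environment (set by the CI system from its secret store), abort when it is missing, and log only a masked form. The DEPLOY_TOKEN name and masking scheme are hypothetical:

```python
import os

def require_secret(name):
    """Fetch a required secret from the environment, aborting the step
    if it is missing and logging only a masked form of the value."""
    value = os.environ.get(name)
    if not value:
        raise SystemExit(f"{name} is not set; aborting pipeline step")
    masked = value[:2] + "*" * (len(value) - 2)
    print(f"{name} loaded ({masked})")
    return value
```

Most CI systems mask registered secrets in logs automatically, but never echoing the raw value in the first place is the safer habit.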
Storing secrets in the cloud requires careful consideration of encryption, rotation and versioning, access control and authentication. Fine-grained policies should be used, and access levels should be determined based on the level of authentication needed.
Finally, IaC can be used to set up and configure services, but care should be taken to ensure proper resolution of secrets, and separate secrets management systems should be used whenever possible.
Overall, effective secrets management requires a proactive and holistic strategy that considers the entire life cycle of secrets, from generation to disposal, and involves all relevant stakeholders, including developers, operations, security and compliance teams.