The most common way to store configuration data for an application is in text files, using formats such as INI, YAML, JSON, or Java Properties. We follow this practice because application frameworks provide simple mechanisms for it. And since these files are plain text, it is also common practice to commit them to the project's version control system (Git, Bazaar, SVN).
Now, imagine this scenario: our server has a security issue, and an unauthorized person is able to log in and read the configuration file. The "intruder" now has access not only to our server but, even worse, to all of our sensitive data: database credentials, API keys, log paths, encryption keys, and so on.
So, at this point you should think…
Where can I safely store my sensitive data?
To answer this question, we need to survey the security of our data: how developers will access it, how testers will access it, and how the application will access it when deployed in different environments, such as production.
To give every user who needs to save and read secrets the right level of access, a good practice is to set up different data sources as destinations for writing and reading your company's secrets. These data sources could include the following:
- Environment variables
- Configuration files (which must be encrypted)
- Third-party services for storing secrets
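As a minimal illustration of the first source, an application can read a secret from an environment variable and fail fast when it is missing. This is only a sketch; the variable name `DB_PASSWORD` is a hypothetical example, not something prescribed by any framework:

```python
import os
from typing import Optional

def get_secret(name: str, default: Optional[str] = None) -> str:
    """Read a secret from an environment variable, with an optional fallback."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"Secret {name!r} is not set")
    return value

# Normally the shell or the deploy tooling would set this, not the app itself.
os.environ["DB_PASSWORD"] = "s3cr3t"
print(get_secret("DB_PASSWORD"))  # -> s3cr3t
```

Failing fast when a secret is absent surfaces misconfiguration at startup instead of at the first database call.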
One safe approach is to let users read secrets, according to their security level, from a chain of the previously mentioned data sources sorted by priority. This allows every "actor" involved to store and read the secrets.
Knowing the data sources, we could create a simple class that reads a secret key from these different sources, assigning each a priority level. For instance, developers could save secrets in environment variables, but only for their local environments. The same concept applies to testing environments. For servers in staging or pre-production environments, the secrets could be stored in encrypted files, so the DevOps team can integrate them with their automation tools. For production environments, where servers are exposed to the outside world and therefore to potential attacks, storing secrets in a third-party service such as Vault (by HashiCorp) or Amazon KMS is highly encouraged.
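The class described above can be sketched as a small chain of callables tried in priority order. The source names, the file path, and the plain-JSON file format are illustrative assumptions; a real implementation would decrypt the file and add a third source backed by the secret service's SDK:

```python
import json
import os
from typing import Callable, List, Optional

class SecretChain:
    """Try each configured source in priority order until one yields the secret."""

    def __init__(self) -> None:
        self.sources: List[Callable[[str], Optional[str]]] = []

    def add_source(self, source: Callable[[str], Optional[str]]) -> None:
        # Sources added earlier have higher priority.
        self.sources.append(source)

    def get(self, key: str) -> str:
        for source in self.sources:
            value = source(key)
            if value is not None:
                return value
        raise KeyError(f"Secret {key!r} not found in any source")

def env_source(key: str) -> Optional[str]:
    return os.environ.get(key)

def file_source(path: str) -> Callable[[str], Optional[str]]:
    # Stand-in for an *encrypted* configuration file; plain JSON here for brevity.
    def read(key: str) -> Optional[str]:
        try:
            with open(path) as fh:
                return json.load(fh).get(key)
        except FileNotFoundError:
            return None
    return read

# Usage: a developer's machine resolves from environment variables first,
# then falls back to a (hypothetical) config file.
chain = SecretChain()
chain.add_source(env_source)
chain.add_source(file_source("/etc/myapp/secrets.json"))
os.environ["API_KEY"] = "local-dev-key"
print(chain.get("API_KEY"))  # -> local-dev-key
```

Because each environment only wires up the sources it is allowed to use, the same lookup code works for developers, testers, and production servers.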
Vault vs. Amazon KMS
Vault and KMS differ in the scope of problems they are trying to solve. The KMS service is focused on securely storing encryption keys and supporting cryptographic operations (encrypt and decrypt) using those keys. It supports access controls and auditing as well.
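To make the encrypt/decrypt workflow concrete without an AWS account, the sketch below uses an in-memory stub in place of boto3's KMS client. The stub only base64-encodes data, which is reversible encoding, not encryption; with real KMS you would call `kms_client.encrypt(KeyId=..., Plaintext=...)` and `kms_client.decrypt(CiphertextBlob=...)` against the AWS API instead:

```python
import base64

class StubKMSClient:
    """In-memory stand-in for a KMS client. NOT real encryption; demo only."""

    def encrypt(self, KeyId: str, Plaintext: bytes) -> dict:
        blob = base64.b64encode(KeyId.encode() + b":" + Plaintext)
        return {"CiphertextBlob": blob}

    def decrypt(self, CiphertextBlob: bytes) -> dict:
        key_id, _, plaintext = base64.b64decode(CiphertextBlob).partition(b":")
        return {"Plaintext": plaintext, "KeyId": key_id.decode()}

kms = StubKMSClient()
ciphertext = kms.encrypt(KeyId="alias/app-config",
                         Plaintext=b"db-password")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)  # -> b'db-password'
```

The key point is the shape of the workflow: the application never handles the master key, only ciphertext blobs and the encrypt/decrypt operations, which is also where KMS applies its access controls and audit logging.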
In contrast, Vault provides a comprehensive secret management solution. Its transit backend provides capabilities similar to the KMS service, allowing encryption keys to be stored and cryptographic operations to be performed.
Flexible secret backends allow Vault to handle any type of secret data, including database credentials, API keys, PKI keys, and encryption keys. Vault also supports dynamic secrets, generating credentials on-demand for fine-grained security controls, auditing, and non-repudiation.
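Dynamic secrets can be illustrated with a toy generator that mints unique, short-lived credentials on demand. This local sketch only models the idea; in a real deployment, Vault's database secrets engine issues and revokes the credentials (e.g. via `vault read database/creds/my-role`):

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Lease:
    username: str
    password: str
    expires_at: float

    def is_valid(self) -> bool:
        return time.time() < self.expires_at

def issue_db_credentials(role: str, ttl_seconds: int = 3600) -> Lease:
    """Mint a unique credential pair per caller with a limited lifetime."""
    return Lease(
        username=f"{role}-{secrets.token_hex(4)}",
        password=secrets.token_urlsafe(16),
        expires_at=time.time() + ttl_seconds,
    )

lease = issue_db_credentials("readonly")
print(lease.username, lease.is_valid())
```

Because every consumer receives its own expiring credential pair, access can be audited per caller and revoked individually, which is the fine-grained control and non-repudiation the paragraph above describes.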
Avoid saving secrets in plain text files, especially if they are unencrypted and stored in source control with revision history. Figure out the security level your application needs and the best data sources for storing its secrets safely, define the priority order for reading these data sources and the policies for fetching secrets from them, and follow the golden rule: keep all your actors involved in this process. Developers and testers will need a way to set their secrets, and the DevOps/SRE team should know which user roles need to exist on your servers to properly access them.