SaltStack sponsored this story.
Not too long ago, software was developed and tested by developers in silos, handed off to the operations team for deployment, and — as an afterthought before going live — given to the security team for a quick assessment.
The request to the security team went something like this:
“Hey security team, can you run a quick test on this before we deploy to production? Oh, by the way, we go live next week — no pressure!”
I’m oversimplifying, but hopefully you get the point. Needless to say, this waterfall process of development didn’t work well; applications often broke down in unexpected ways. And when they broke down, it wasn’t always clear why, because the development and production environments weren’t always the same. Worse, since security was an afterthought, security vulnerabilities, misconfigurations and compliance violations showed up in production.
Shift Left, Not Right for Security
Picture software being developed in stages from left to right:
It’s easy to see why security considerations shifted to the right of the process: they usually slowed down getting functional software into the hands of paying users, so from a business perspective they added limited value. Unless, of course, a security issue one day ends up destroying that value along with the company’s credibility. Just ask Equifax!
Fortunately, the old way of software development is rapidly changing thanks to wide adoption of Agile development practices. These new processes are helping bridge the gap between development (Dev) and operations (Ops) teams in organizations that were previously siloed. But there is another key silo, information security (Sec), that is very often missing from this equation. The cost of that omission is well documented; according to Gartner research: “75 percent of successful attacks occur against previously known vulnerabilities for which a patch or secure configuration standard was already available.”
Just as organizations have begun breaking down the “Dev” and “Ops” silos to form DevOps, security has entered the fray to form DevSecOps. This is possible partly due to the programmable nature of modern infrastructure through the use of APIs, and the ability to express infrastructure as code.
As the latest member of the DevSecOps trifecta, security is now the responsibility of every stakeholder on a DevOps team. But we have a long way to go before everyone acknowledges that responsibility, which must also cover the entire development cycle from the very beginning. In short, security considerations need to shift left, not right, in development.
Injecting security early into the development process can be done in many ways, from training engineers to follow secure coding principles to adding static application security testing (SAST) and dynamic application security testing (DAST) tools as part of the CI/CD pipeline. The end result is that every iteration of the product gets checked for vulnerabilities, misconfigurations and compliance violations. But perhaps the simplest way to make security part of the DevOps process is to build transparency within the process.
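As a sketch of what a SAST step in the pipeline can look like, here is a hypothetical GitLab CI fragment (the stage name, image and project layout are assumptions) that runs Bandit, an open-source SAST tool for Python code, on every commit. Because Bandit exits with a nonzero status when it finds issues, the pipeline fails before insecure code reaches production:

```yaml
# Hypothetical .gitlab-ci.yml fragment; stage names and paths are assumptions
stages:
  - test
  - security

sast_scan:
  stage: security
  image: python:3.11
  script:
    - pip install bandit
    # Recursively scan the source tree; a nonzero exit fails the pipeline
    - bandit -r src/
```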
Infrastructure as Code
One way to add transparency to the development process is by expressing the infrastructure that needs to be deployed as code, ideally version-controlled by either Git or some other version-control system. Any change to the code needs to be reviewed and approved before being deployed into production.
Expressing infrastructure as code has many benefits. First, everyone from development to operations to security can review which applications are being deployed, where they’re being deployed and how they’re being deployed. That assures the operations team that the correct versions of the scripts or applications are being used. And the security team can verify that applications are deployed with secure configurations and don’t leak secrets like credentials or keys. If for some reason they don’t meet the requirements, the security team can quickly propose a change — preferably via a pull request — that can be reviewed, approved and merged into the production infrastructure code base.
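Concretely, the review cycle looks like an ordinary code review. The sketch below is a self-contained demonstration in a throwaway local repository; the file paths and branch names are hypothetical, and a real setup would push the branch to a shared remote and open a pull request for review:

```shell
# A sketch of the review workflow, demonstrated in a throwaway local repo.
# Paths and branch names are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email "ops@example.com"
git config user.name "Ops Team"

# The infrastructure code base, version-controlled like any other code
mkdir -p srv/salt/redis
echo "# Redis deployment state" > srv/salt/redis/init.sls
git add -A
git commit -qm "Add Redis deployment state"
default_branch=$(git rev-parse --abbrev-ref HEAD)

# The security team proposes a hardening change on its own branch,
# which would back a pull request against the default branch
git checkout -qb harden-redis-config
echo "# hardening settings appended here" >> srv/salt/redis/init.sls
git add -A
git commit -qm "Harden the Redis configuration"

# Reviewers inspect exactly what would change before approving the merge
git diff "$default_branch"..harden-redis-config
```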
To see how this works in practice, let’s walk through an example with Salt.
For the sake of this example, let’s assume a custom application needs to be deployed, for which the development team requires a Redis in-memory database, a web server and a user group to be created. They could express this requirement in terms of code with two key files: an orchestration file to describe the web infrastructure and a file to describe Redis deployment.
Salt Orchestration File
Here’s a simple example of a Salt orchestration file that deploys the web infrastructure: it installs Redis and httpd, and creates a web-team user group.
# File : /srv/salt/orch/web-infrastructure.sls
deploy_redis_servers:
  salt.state:
    - tgt: '*redis*'
    - sls:
      - redis

deploy_web_infra_servers:
  salt.state:
    - tgt: '*web*'
    - sls:
      - httpd
      - python
      - python.python-lxml
    - require:
      - salt: deploy_redis_servers

deploy_project_group:
  salt.function:
    - name: group.add
    - tgt: '*'
    - arg:
      - web-team
    - require:
      - salt: deploy_web_infra_servers
This could be deployed as follows:
mehul@saltmaster:~$ sudo salt-run state.orch orch.web-infrastructure
Redis Installation Salt State File
Here’s the Redis installation state file contributed by the dev team to install Redis 4.0.10 (assuming the Redis build dependencies are already installed):
# File : /srv/salt/redis/redis.sls
get-redis:
  file.managed:
    - name: /usr/local/redis-4.0.10.tar.gz
    - source: http://download.redis.io/releases/redis-4.0.10.tar.gz
    - source_hash: d2738d9b93a3220eecc83e89a7c28593b58e4909
  cmd.wait:
    - cwd: /usr/local
    - names:
      - tar -zxvf /usr/local/redis-4.0.10.tar.gz -C /usr/local >/dev/null
    - watch:
      - file: get-redis

make-and-install-redis:
  cmd.wait:
    - cwd: /usr/local/redis-4.0.10
    - names:
      - make >/dev/null 2>&1
      - make install >/dev/null 2>&1
    - watch:
      - cmd: get-redis

install-redis-service:
  cmd.wait:
    - cwd: /usr/local/redis-4.0.10
    - names:
      - echo -n | utils/install_server.sh
    - watch:
      - cmd: get-redis
      - cmd: make-and-install-redis
    - require:
      - cmd: make-and-install-redis
Security as code
Before we go any further in this example, here’s some context on the Redis security model from Redis.io:
“Redis is designed to be accessed by trusted clients inside trusted environments. This means that usually, it is not a good idea to expose the Redis instance directly to the internet or, in general, to an environment where untrusted clients can directly access the Redis TCP port or UNIX socket.”
When a member of the security staff reads the above text, it’s natural for them to raise red flags about this deployment. They can then react in one of two ways: one, raise concerns to leadership and push to disallow Redis altogether; or two, propose pragmatic fixes to the infrastructure code that mitigate the risk, based on publicly available guidelines for securing Redis installations.
The pragmatic fixes could be:
- Secure file permissions on the Redis data directory and configuration file so that unauthorized users can’t make config changes or read Redis data.
- Require a password to authenticate to the Redis database.
- Implement an iptables rule so that only trusted clients can connect.
These fixes could be implemented with the following additions to the Redis Salt state file referenced above:
Additions to Redis Installation Salt State File
secure_redis_conf_permissions:
  file.managed:
    - name: /etc/redis/6379.conf
    - mode: 644
    - require:
      - cmd: install-redis-service

secure_redis_data_permissions:
  file.managed:
    - name: /var/lib/redis
    - mode: 700
    - require:
      - file: secure_redis_conf_permissions

require_redis_password:
  file.append:
    - name: /etc/redis/6379.conf
    - text:
      # This could be further encrypted with Salt Pillars
      - requirepass supersecret
    - require:
      - file: secure_redis_data_permissions
iptables Salt State File
And finally, propose an iptables rule (which also happens to be one of the best defenses) to only allow connections from authorized clients:
# File: /srv/salt/iptables/iptables.sls
# Allow connections from trusted ips
iptables_allow_trusted_ips:
  iptables.append:
    - table: filter
    - chain: INPUT
    - jump: ACCEPT
    - proto: tcp
    - dports:
      - 80
      - 6379
    - source: 10.20.0.0/24
    - save: True

# Deny everything unless defined
enable_reject_policy:
  iptables.set_policy:
    - table: filter
    - chain: INPUT
    - policy: DROP
    - require:
      - iptables: iptables_allow_trusted_ips
To apply these rules, the orchestration file would need to be updated to execute the iptables state file as follows:
# File : /srv/salt/orch/web-infrastructure.sls
apply_iptables_rules:
  salt.state:
    - tgt: '*'
    - sls:
      - iptables
    - require:
      - salt: deploy_web_infra_servers
As you can see, by expressing infrastructure as code, the security team is able to quickly review what is being deployed, propose a few quick changes to it, and as a result, deploy the application with a secure configuration into production.
Satisfying Compliance Requirements as a Byproduct
Another good thing about moving security to the left in the development process is that the compliance requirements get satisfied upfront as a byproduct.
Here are some examples of PCI DSS, Center for Internet Security (CIS) Critical Controls and NIST 800-53 compliance requirements:
PCI DSS requirement 2: “2.2 Develop configuration standards for all system components that address all known security vulnerabilities and are consistent with industry-accepted definitions.”
CIS Critical Control 5: “Secure Configuration for Hardware and Software on Mobile Devices, Laptops, Workstations and Servers”
800-53 IA-5: “Password-based authentication for information system”
As you can see, just by deploying a securely configured infrastructure, a good portion of these compliance requirements are satisfied even before the product is deployed to production.
Operations as Code
Once the application is securely configured and deployed, the next step is to make sure it’s up and running. In some cases, the operations aspect of the process — such as managing and monitoring the application — can also be expressed in terms of code.
Monitoring (Salt Beacons)
Salt Beacons provide the operations team the ability to monitor files, processes, services and a host of other things, and can trigger events when a certain criterion is met (such as failed logins, unauthorized changes to critical files or processes or service termination).
In the above example, the operations team could configure a Beacon to alert if, say, the status of the “redis-server” service changes.
Here is an example:
# File : /etc/salt/minion
beacons:
  service:
    - services:
        redis-server:
          onchangeonly: True
Responding (Salt Reactor)
And finally, alerts are only good if some action can be taken on them. Salt reactors can be configured to react to the above alert and restart the service. Here is an example:
# File : /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/beacon/redis*/service/redis-server':
    - salt://reactor/restart-redis.sls

# File: /srv/salt/reactor/restart-redis.sls
restart_service:
  local.service.restart:
    - tgt: 'redis*'
    - arg:
      - redis-server
Injecting security into the DevOps workflow isn’t as hard as it used to be. As described above, “Dev,” “Sec” and “Ops” teams can come together to form a powerful DevSecOps combination and deliver a product that balances three different needs yet serves a single goal: deploying a securely configured application into production. When it comes to security, it’s time to shift left, not right.
Feature image via Pixabay.