I. Where We’ve Been
I’ve had a variety of experiences in my career, with wildly different expectations at each ability level, from the lowly intern to the principal engineer, as well as differences between small company/small team and large company/large team environments. I am grateful for the opportunity to have started out in a position where I learned how to wear a few different hats. I learned the benefits of CI fairly early — almost 15 years ago! I was trained in Extreme Programming, which brought me into the world of sprint planning, pair programming, and retrospectives, seeing features and bug fixes all the way through from planning to release.
I thrived in this environment, but looking back, I can honestly say that our development pipelines were relatively simple compared to what I see today. At that time, I was never involved in anything that happened after release. And there was nothing security-related that I dealt with prior to that. I assumed this was in the hands of operations or security engineers at the tail end of the pipeline, perhaps even after deployment. If something were to be discovered, we would begin again with the planning stages of fitting an update into our development cycle. Seems a little late in the game, no?
Many developers have seen a lot of changes in the past several years as they move onto DevOps teams, and they should expect more to come. It feels like more and more responsibility is shifting our way. I don’t look at this in the same way as simply more work and higher expectations, but rather more empowerment to make better decisions about the software we develop — working smarter.
Developers are being pressed to break out of their silos. Gone are the days of throwing code changes over the wall and hoping for the best. Although the details of coding and software design will always be understood to be in the realm of our expertise, we also must acknowledge the details of the delivery and deployment process. This includes knowledge of our pipelines and of basic security concepts. With a better understanding of the process our software endures as it hurtles toward deployment, we are better able to efficiently and effectively design the means to get there.
Several years ago, I participated in a security training program for developers. Much of this was rehashing responsible coding, taking charge of the code I wrote and ensuring I wasn’t building any obvious welcome mats for attackers. The training included defensive coding techniques for common attack vectors such as cross-site scripting, SQL injection, and credential leaks. There was some mention of watching out for packages and libraries that included known vulnerabilities, but looking back, this was not emphasized nearly enough.
Then came the Equifax breach of 2017, followed by a string of software supply chain attacks: the SolarWinds hack, Log4Shell, Spring4Shell, and rogue developers corrupting their own open source packages, to name a few!
II. Where We Are
Massive amounts of information have been collected on individuals with the intent of serving the public with more efficient and performant applications. Personal details abound on social media, and logging into your bank account online to get an up-to-date balance is now the minimum expectation of good service.
The amount and detail of this type of information are attractive to the criminal element. As long as there’s a possibility of getting to it, the attention of attackers will not dissipate. Breaches in software are now heavily publicized and an embarrassment to organizations if it’s discovered that preventive measures were not prioritized or were ignored. The consequences to consumers have steadily increased over the last several years. To put it simply, there is now a very personal cost to developers, as we also take advantage of today’s technology and software to further enhance and enjoy our own daily lives.
Security breaches have become more and more common, or at least more frequently announced in the media. It has become apparent that much of our software falls short of the bar when it comes to hardened security practices. And as pointed fingers fly around in search of someone to blame, it’s expected that several are going to land in the direction of the developer.
What can we do? It is no longer enough to lounge in the satisfaction that the software we’ve developed works. We now need to make sure that it works responsibly.
First, let’s understand a few of the reasons we are in this predicament today. Along with the existence of masses of personal information, the following are also contributing factors:
- With our desire to decrease the time it takes to deploy, we automate our pipelines. This is not a negative at all. However, the desire for speed and automation does not and should not replace the effort it takes to test our applications. It seems we have lost some of the art of testing, and paths other than the “happy path” sometimes leave our application exposed.
- With the move to cloud native app development, developers are more involved with packaging applications into images. Tools have been created to make this process easier, but obfuscation of the details leaves some major attack vectors open when the use of open source and other third-party images comes into play.
- Developers are encouraged not to reinvent the wheel. The availability of third-party libraries and the sheer number of them have led to an explosion of dependencies we regularly pull into our applications.
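That explosion of dependencies is easy to underestimate because most of it is transitive. The sketch below uses a small hypothetical dependency graph (the package names are illustrative, not real libraries) to show how a handful of direct dependencies can quietly fan out; real projects can be inspected with tools like `pip list` or `npm ls`.

```python
# Hypothetical dependency graph: package -> packages it pulls in directly.
DEPENDENCY_GRAPH = {
    "my-app": ["web-framework", "http-client", "logging-lib"],
    "web-framework": ["template-engine", "http-parser", "logging-lib"],
    "http-client": ["http-parser", "tls-lib"],
    "template-engine": ["string-utils"],
    "tls-lib": ["crypto-lib"],
    "http-parser": [],
    "logging-lib": [],
    "string-utils": [],
    "crypto-lib": [],
}

def transitive_dependencies(package, graph):
    """Return every package reachable from `package`, direct or transitive."""
    seen = set()
    stack = list(graph.get(package, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

direct = DEPENDENCY_GRAPH["my-app"]
full = transitive_dependencies("my-app", DEPENDENCY_GRAPH)
print(f"{len(direct)} direct dependencies pull in {len(full)} packages total")
```

Even in this toy example, three direct dependencies become eight packages we ship, and every one of them is a potential carrier of a known vulnerability.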
We have learned that paying attention to security defects earlier in our development process makes a huge difference. We might not be able to predict future vulnerabilities, but we can certainly use the knowledge gained from previous attacks to prevent repeated infiltrations due to the same issues. The adage “fool me once, shame on you; fool me twice, shame on me” comes to mind. We have no excuse when the information is available to us.
This does NOT mean the onus is entirely on developers. We rely heavily on our security engineers and on our operations personnel to not only help put safeguards in the appropriate places, but to help collect and curate security information to begin with. DevSecOps, anyone?
My main concern, however, is that as developers become more involved in building cloud native applications and packaging their applications into containers, we are multiplying the possibility of unintentionally packaging existing vulnerabilities. Not only are we accustomed to pulling in the frameworks and related dependencies that we have become comfortable with, but also pulling in parent and base images from public sources as well!
Worse, some of this happens automatically behind the scenes via plugins that intentionally hide these details. The intention is good, mostly an attempt to ease the developer’s workflow, but we really need to be more aware and careful about what we’re doing. My thoughts wander to that random flash drive innocently lying on the sidewalk.
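One small way to regain some of that awareness is to be explicit about where base images come from. A minimal Dockerfile sketch, where the image tag and digest placeholder are purely illustrative:

```dockerfile
# An unpinned tag can silently change underneath you between builds:
# FROM python:latest

# Pinning to an immutable digest ensures the build uses exactly the
# image you reviewed and scanned (substitute the real digest of the
# image you vetted):
FROM python:3.11-slim@sha256:<digest-of-the-image-you-scanned>
```

The same idea applies to parent images in any build tooling: if a plugin chooses the base image for you, find out which one it is and whether it can be pinned.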
III. Where We Need to Be
The security space has evolved and improved dramatically over the last several years. Vulnerability databases continue to grow and provide the information we need — sources like the U.S. government’s National Vulnerability Database (NVD) and Risk-Based Security’s VulnDB, as well as other public security bug and CVE trackers, are invaluable.
Combining these resources with a greater awareness of how our software is built (its dependencies, open source components, and other third-party resources) will go a long way toward improving our protections. A lot of this responsibility is finding its way directly in front of developers. We are in an excellent position to begin the vulnerability filtering and detection process right from our development environment!
Knowledge is power. This is undeniable. But it can also be pretty scary if you don’t know what to do with it. The next step after collecting information is to analyze it, and this is when the decisions that matter are made. The amount of data available to us now is overwhelming. Now it’s time to focus on curating this data and then making reasonable recommendations based on analysis.
When it comes to reviewing a list of vulnerabilities, for example, it is naive to think that we will be able to eliminate them all. It would be an unhealthy exercise to block every check-in or fail every build based on a zero-vulnerability policy. Instead, we need to be able to keep moving forward and make reasonable decisions based on answers to the following questions:
- Is the vulnerability applicable to my software, and what are the consequences of it being exploited?
- Is there a fix available for this vulnerability, and how much effort would it be to implement and/or make the necessary upgrades?
- Is this vulnerability severe enough to halt production?
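Those questions can be encoded into a simple triage policy. Below is a minimal sketch in Python; the CVE identifiers are fake, and the thresholds are hypothetical placeholders for values a security team would actually set and maintain.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_score: float    # CVSS v3 base score, 0.0 - 10.0
    applicable: bool     # does the vulnerable code path apply to our usage?
    fix_available: bool  # is a patched version published?

def triage(finding, block_threshold=9.0, warn_threshold=7.0):
    """Map a vulnerability finding to a build action.

    Thresholds are illustrative; real policy belongs to security specialists.
    """
    if not finding.applicable:
        return "allow"          # not exploitable in our usage; record and move on
    if finding.cvss_score >= block_threshold:
        return "block"          # severe enough to halt the pipeline
    if finding.cvss_score >= warn_threshold and finding.fix_available:
        return "warn-and-fix"   # schedule the upgrade, keep moving
    return "warn"               # track it, but don't stop the build

findings = [
    Finding("CVE-0000-0001", 9.8, applicable=True,  fix_available=True),
    Finding("CVE-0000-0002", 7.5, applicable=True,  fix_available=True),
    Finding("CVE-0000-0003", 8.1, applicable=False, fix_available=False),
]
for f in findings:
    print(f.cve_id, "->", triage(f))
```

The point is not the specific rules but that the policy is explicit, reviewable, and lets builds keep moving rather than failing on a zero-vulnerability ideal.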
I believe that some of these decisions are best made by security specialists rather than developers, and this is where the importance of solid security policies comes into play. What I’m looking forward to as a developer is more guidance on when it is appropriate to sound the alarm. CVSS scores to help us measure severity are a good start, but these are a work in progress (CVSS v2 versus CVSS v3?), and there is much more to be done.
All in all, we are heading in the right direction. I see more and more vulnerability scanning tools that are intended for the furthest left regions of our pipeline — the developer. I’ll be embracing these tools that help me to make wiser decisions when building my software, especially those I can incorporate directly into my existing development environment.
Detecting vulnerabilities transparently and easily is a great first step. But now that I see those red lines warning me of danger… what should I do next?
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker, JFrog.