Security First! Strategies for Building Safer Software

The enterprise needs to execute an intentional cultural shift, which makes security part of everyone’s job. But what does that mean in practice?
Jul 10th, 2023 8:42am

“I honestly think life would be better for everyone if all security professionals were strangled at birth,” the head of software muttered to me as we left a particularly fractious meeting with the security team.

It was the late 1990s, and we’d just invested six months of effort in developing a new browser-based self-service data-querying platform. It would, if we could only get it adopted, save the business a huge amount of money and free up time in the under-resourced management information team, allowing them to focus on far more interesting and strategic work. However, it relied on a Java applet for the client UI, which of course meant allowing Java to run in the browser.

We’d already completed a successful pilot from our London headquarters and were looking to expand it to the New York office, but the security team was point-blank refusing because, they said, Java represented too large a security risk. No amount of explaining Java’s built-in sandboxing model or the additional measures we had put in place on top of it, nor our out-and-out pleading, was going to persuade them otherwise.

Having security involved in the initial stages of a software development process always made sense: as with bug fixing, it is faster and cheaper to address security issues early on. But, particularly in larger enterprises, it was rarely done in practice. By the same token, individual development teams would tend not to invest in security if they saw it as the role of a dedicated security team, and thus somebody else’s problem.

This pushed security to the right, making it one of the things that happened between development and deployment to production, a stage at which security work becomes more difficult and often less effective.

It also led to friction between the development and security teams, since the two groups had conflicting goals: Developers were under pressure to ship more features more quickly, and saw security as a gatekeeper, slowing down or even halting development to allow time to investigate issues. At its most extreme, developers felt, security’s ideal situation would be that nothing would be deployed to production at all — after all, if nothing is running, then nothing can get hacked.

Conversely, the security team was incentivized to keep systems and data safe and was under pressure to keep security risks to an absolute minimum. They would get yelled at by the senior leadership, or even fired, if a breach did occur and wondered why we software developers were such irresponsible cowboys.

Traditionally, security focused on a well-understood application perimeter, usually surrounding a single data center. Modern applications, however, are rarely monolithic; rather, they are composed of microservices running in multiple environments and communicating across multiple networks. They present a complex, broad attack surface that can’t be defended solely with the basics of code scanning and good programming practice, important though those are.

So the enterprise needs to execute an intentional cultural shift, which, as has become something of an adage, makes security part of everyone’s job. But what does that mean in practice?

Small, Medium or Large?

One thing to keep in mind is that it varies by the size of the organization. Small startups are unlikely to be able to afford all the specialists you’d ideally want — database administrators, security people, usability people and so on — so they have to make some tough decisions about who they do and don’t have on staff.

In this situation, technology consultant and O’Reilly author Sam Newman told The New Stack, “You either say you don’t care about it or you offload that work to somebody else. This is why very small shops should be using the public cloud because you are buying expertise at this point. Even if you are running managed virtual machines, the cloud provider is going to do a better job of patching those machines and monitoring them for foul play than you can.”

Startups do have the advantage of a clean slate though. At one where I was lead architect, we brought in a security person before the design phase, working together with developers, helping educate them, documenting security policies and best practices, and coaching everyone to adopt a security mindset. He also worked with the business analysts to, as he memorably put it, “encourage them all to think like devious weasels.”

Then, as we shifted our focus to building the system, he manually audited the code, which uncovered a number of issues, such as passwords being logged in plain text in debug mode (my fault, embarrassingly), and helped educate the developers on best practices.
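
A minimal Go sketch of the kind of fix such an audit prompts, with illustrative type and field names (nothing here is from the actual project): implementing fmt.Stringer means the secret is redacted at the one point every log statement passes through.

```go
package main

import (
	"fmt"
	"log"
)

// Credentials implements fmt.Stringer so that logging the struct, even in
// verbose debug output, never reveals the password. The type and field
// names are illustrative, not taken from the project described above.
type Credentials struct {
	Username string
	Password string
}

// String is what the fmt and log packages call when formatting the value,
// so every log path goes through this single redaction point.
func (c Credentials) String() string {
	return fmt.Sprintf("Credentials{Username: %q, Password: [REDACTED]}", c.Username)
}

func main() {
	c := Credentials{Username: "alice", Password: "s3cret"}
	log.Printf("debug: authenticating with %v", c) // prints the redacted form
}
```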

This focus on best practices works well for smaller companies, Chainguard’s Adrian Mouat told us, and is now supported by some of the tools.

“As an example, if you look at something like docker init, which is currently in beta, I can use that and it will produce a new Go, Python or Node project, and the Dockerfile it creates already has some security best practices baked in,” he said.
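
To make that concrete, here is a rough sketch of the sort of Dockerfile such scaffolding produces for a Go service. The exact output of docker init varies by version, and the image tags below are assumptions, but it shows the kind of defaults Mouat describes: a multi-stage build that ships no toolchain, and a final image that runs as a non-root user.

```dockerfile
# Build stage: the Go toolchain lives only here and never ships.
FROM golang:1.20 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/server .

# Final stage: only the binary, running as an unprivileged user.
FROM alpine:3.18
RUN adduser -D -u 10001 appuser
COPY --from=build /bin/server /bin/server
USER appuser
ENTRYPOINT ["/bin/server"]
```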

The introduction of scanners for container images has allowed some of that manual auditing to be automated as well, and running the scan earlier in the process is a good idea, Mouat said.

“When scanners first came out, we thought we’d put the image scanner just before we deployed to production so anything with vulnerabilities doesn’t get deployed. The problem is there are so many vulnerabilities that that doesn’t really work. You are much better off having developers do it because they can realize they’ve added this vulnerability in their code, and then they can fix it.”
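
In practice that can be as simple as the following sketch, which uses Trivy, a popular open source scanner, as a stand-in (Mouat doesn’t name a specific tool): the developer scans the image immediately after building it, so a newly introduced vulnerability surfaces while the change is still fresh.

```sh
# Shift the scan left: build locally, then scan right away, before the
# image ever reaches a pipeline. Trivy stands in here for any image
# scanner; the image name is illustrative.
docker build -t myapp:dev .
trivy image myapp:dev
```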

Security tools that are aimed primarily at developers, such as Snyk and Docker Scout (currently in early access), can be really helpful here, since they not only highlight vulnerabilities but also provide some guidance on how to address them.

Of course, using an up-to-date minimal image with no known vulnerabilities, such as Chainguard Images, is a good idea as well. Many vulnerabilities are found in the extraneous and unnecessary “clutter” of an image, and things like removing a shell from a base image close potential access points for attackers.
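
As an illustration of why that helps, here is a minimal sketch (image names are illustrative) in which a static Go binary is copied into Chainguard’s shell-less static base, so the final image contains the application and essentially nothing else.

```dockerfile
# Build a fully static binary so it needs no libc in the final image.
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server .

# The shell-less base leaves no shell or package manager for an
# attacker to leverage.
FROM cgr.dev/chainguard/static:latest
COPY --from=build /server /server
ENTRYPOINT ["/server"]
```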

Shifting Left in Large Organizations

Useful though these tools are, in larger organizations it is the cultural aspects that tend to come to the fore, since so many factors, from entrenched processes to budget to corporate politics, can get in the way.

Because of this, the cultural changes necessary to shift security left mean new ways of working for everybody involved, but “what it doesn’t mean,” Newman told us, “is developers doing all the work.”

Mouat agreed, telling The New Stack, “There is a lot of security knowledge required even in quite basic stuff like setting up container images and Dockerfiles, and because it is quite easy to get this wrong, having security people embedded right from the start is a big advantage.”

Ultimately, according to Newman, the focus for the security professional needs to be getting to yes — “I’m going to do my best to work out how we can do what you want to do in the safest way possible.”

However, this doesn’t happen if you aren’t incentivized to do it. “So,” Newman went on, “shifting left is about getting people involved early, having aligned objectives and aligned goals, but also having both hard and soft incentives aligned. Without that, I don’t think it works at all.”

SafeStack CEO Laura Bell suggested there are two dysfunctions that can happen with security professionals. The first is the one we started with (security person says no); the other is that a security person, with the best of intentions, suggests a tool for the CI/CD pipeline. However, since they are not necessarily a developer, or perhaps haven’t been one for a while, the tool is too heavyweight and blows the build time out, or provides huge amounts of information that the developers just don’t know what to do with.

Newman suggested that a mitigation here might be to involve the developers in a threat-modeling exercise.

“I know that a threat model is typically quite a transactional activity,” he told us, “but as part of it, a security expert will look at what are our assets, threats and risks. There’s no reason why developers can’t be involved in that process, and coming out of that, they will have a better understanding of the kind of things that security people are worried about. Now when a security person says we shouldn’t do x, y or z, the developer will have a better understanding of the impact of getting it wrong, and they might also find other ways to mitigate those threats that fit better with their ways of working.”

Of course, even if you’ve done a threat-modeling exercise on a system before, there is no reason it can’t be repeated to help developers and security professionals gain shared context. This has other advantages. It is a bit of a generalization, but developers do have a tendency to focus on the new, shiny thing; so, Bell said, we might read in the press about state-sponsored actors hacking systems and start thinking about protecting ourselves from those problems. In reality, though, security incidents are generally much more prosaic, and a threat assessment can help you understand this better.

According to Newman, another issue is that developers tend to ignore four of the five NIST Cybersecurity Framework functions (Identify, Protect, Detect, Respond and Recover), focusing solely on Protect. This is because developers think it is the only thing they have any real control over. But actually, they can have an impact across all five functions. Bell’s advice here was to role-play a security incident with both techies, including developers, and non-techies.

Newman added, “As a developer in that room, you suddenly realize there are all these other things that need to happen, and we haven’t got the necessary stuff in place to spot these things in the first place.”

In terms of establishing best practices at scale, one option is analogous to how a platform team operates. In this model, security becomes an enabler of good practices, providing a golden path and guardrails that keep applications and infrastructure protected without putting barriers in the way of engineering delivery. But we must stress that this is not a substitute for having meaningful conversations.

“There has to be an ongoing conversation about what are the things I should do as an engineer, versus the things you should do as the expert. There isn’t a right answer for that for all organizations, and the answer will vary across an organization,” Newman said.

In other words, if your development team and security team have different motivations, you need to spend some time trying to reach a common goal and set of values. Typically in my experience, this comes down to being motivated by a desire to build a good product.

We should add that it is unreasonable to ask developers to take security seriously and simultaneously expect the same rate of feature delivery to be maintained. So at an organizational level, you have to be willing to slow down a little to focus on quality.

Finally, we should say that, in the same way that there is no such thing as software with zero bugs, there is no such thing as software with zero security vulnerabilities. So a part of this has to be a discussion around how many, and which, security issues are acceptable, as with an error budget in site reliability engineering practice.
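
One lightweight way to encode such a budget, sketched below with Trivy’s --severity and --exit-code flags (the tool choice and thresholds are illustrative, not the author’s recommendation): fail the build only on findings above an agreed line, and merely report the rest.

```sh
# A crude "vulnerability budget" expressed as a pipeline gate: HIGH and
# CRITICAL findings fail the build, while LOW and MEDIUM issues are still
# reported but tolerated, analogous to an error budget in SRE practice.
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest
```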

TNS owner Insight Partners is an investor in: Pragma, The New Stack, Docker.