Ask any bug hunter what the most annoying part of their job is and they’ll likely point to vulnerability disclosure and vendor notification. That’s not really a surprise because while finding flaws is mentally challenging, like solving a puzzle, reporting them is a bureaucratic process that can take weeks of back-and-forth emails.
And since few companies have dedicated product security teams, these communications are often with people who don’t immediately understand the reported problems and their security implications, which can be frustrating.
Coding bug-free applications is nearly impossible, which is why data security strategies should focus on reducing risk rather than eliminating it entirely. The decisions companies make about how they handle security reports can mean the difference between learning about vulnerabilities from ethical hackers and finding their customer data up for sale on the underground market.
Today, most companies are ill-equipped to receive security notifications about their own systems. Many only list a general-purpose contact email address on their website that’s used for all sorts of issues. Over time, those mailboxes accumulate so many irrelevant messages that sifting through them consistently becomes very hard. They’re not an appropriate way to receive time-sensitive security reports.
It’s not rare for ethical hackers to spend hours or days hunting down a company’s technical staff on social media platforms in order to report issues directly because other attempts to contact the organization have failed. On top of that, there are companies and developers who get very defensive and simply dismiss security reports or, worse, respond with legal threats.
A recent survey of nearly 1,700 ethical hackers who participated in bug bounty programs through the HackerOne platform revealed that one in four have at some point given up on reporting a vulnerability because the affected vendor didn’t have a proper channel for disclosing it. And this wasn’t for lack of trying to contact those organizations.
“For companies that do not have a vulnerability disclosure policy (VDP) in place, which is a published process and channel that publicly states how a vulnerability can be safely reported and provides ‘safe harbor’ language for the hacker, the most common (and legally safest path) for a white hat hacker with knowledge of a vulnerability is non-disclosure — because there’s no way to disclose it,” HackerOne said in its report.
“Many well-intentioned people simply give up and don’t report serious security incidents when the effort is too high or the risk is too great,” security expert Troy Hunt said in a recent blog post about data breach disclosure. “That has to change.”
Hunt, who runs the HaveIBeenPwned.com notification service, testified in front of the U.S. Congress in November on the frequency and impact of data breaches. He followed that up with a series of posts on his personal blog on how the situation can be improved in this area, including the disclosure process.
Hunt gives Tesla’s vulnerability reporting policy as a good example to follow. The policy is short, easy to understand and touches on all the right points: it acknowledges the valuable work done by security researchers, encourages responsible disclosure, provides a contact email address dedicated to security issues, supplies a PGP key for encrypted email communications and sets a timeframe for response. It also commits the company to not taking legal action against reporters who follow a few ethical guidelines, including making a good-faith effort to avoid privacy violations, the destruction or modification of other people’s data, and the interruption or degradation of the company’s services.
“A security vulnerability reporting policy is an acknowledgment that it’s possible someone may find something in your online assets that needs to be reported,” Hunt said. “That much alone — simply acknowledging that people may want to report a security thing — is a massive step in the right direction.”
Such policies don’t have to be complicated — a couple hundred words will do — and there’s even an internet standard in the works that makes it easy to share your security policy and contact information with the world.
The standard was started by web developer and security researcher Ed Foudil, who is also the author of the Bug Bounty Guide. It’s based on a simple idea: create a file called security.txt in a specific path on your server that should, at the very least, contain a contact method for security issues.
“Security.txt is a text file that should be located under the /.well-known/ path (‘/.well-known/security.txt’) [RFC5785] for web properties,” the draft RFC says. “For file systems and version control repositories a .security.txt file should be placed in the root directory.”
The file can have several fields, called directives. The “Contact:” directive is the only mandatory one and its value can be an email address, a phone number, the URL of a security-dedicated page with more information, or all of them.
Other available directives include “Encryption:”, which specifies the location of a public PGP key for encrypted communication; “Signature:”, which points to a PGP signature for the entire security.txt file; “Policy:”, the location of the security policy; “Acknowledgement:”, a hall-of-fame-type page where security researchers are recognized for their reports; and “Hiring:”, a page listing security-related job openings.
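Put together, a minimal security.txt following the draft might look like this (the addresses and URLs below are placeholders, not real endpoints):

```
Contact: mailto:security@example.com
Contact: https://example.com/security-contact.html
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/security-policy.html
Acknowledgement: https://example.com/hall-of-fame.html
Hiring: https://example.com/security-jobs.html
```

Only the “Contact:” directive is required; everything else can be added as a company’s disclosure process matures.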
There is a web-based tool for generating security.txt files on the Securitytxt.org website, but there are also plug-ins for various types of web frameworks and content management systems: WordPress, Drupal, Node.js Express, Laravel and Ruby.
The use of security.txt is not restricted to web properties. It can be added to internal hosts, domains, IP addresses, file systems and source code repositories, but one file should only apply to the specific system it exists in. This means that if you have multiple applications running on different domains or subdomains on the same server, each one should have its own security.txt file, even if the information within those files might be the same.
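Because the format is just plain text with “Directive: value” lines, it’s also trivial to consume programmatically. As a rough sketch (not part of the standard, and using a placeholder domain), a researcher could fetch and parse a site’s security.txt with a few lines of Python:

```python
# Sketch: fetch a site's security.txt and extract its directives.
# The domain below is a placeholder; any names here are illustrative.
from urllib.request import urlopen

def parse_security_txt(text):
    """Collect 'Directive: value' lines into a dict of lists
    (a directive such as Contact: may appear more than once)."""
    directives = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, sep, value = line.partition(":")
        if sep:
            directives.setdefault(name.strip(), []).append(value.strip())
    return directives

def fetch_security_txt(domain):
    # Per the draft RFC, the file lives under the /.well-known/ path.
    url = f"https://{domain}/.well-known/security.txt"
    with urlopen(url) as resp:
        return parse_security_txt(resp.read().decode("utf-8"))

sample = """# Example file
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
"""
print(parse_security_txt(sample)["Contact"])  # ['mailto:security@example.com']
```

A real client would also want to handle redirects, timeouts and missing files gracefully, since most sites don’t publish the file yet.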
“A fundamental part of fixing data breaches is that we need to collectively strive to do better,” Hunt said. “We must all acknowledge that none of us are immune to security vulnerabilities and it must be one of our highest priorities to engage with those wanting to bring them to our attention.”
One piece of encouraging news is that bug hunters who responded to HackerOne’s survey feel that companies have become more open to outside security research. Seventy-two percent of participants said that, in their experience, companies have grown more receptive to vulnerability reports in recent times.
While almost a quarter of ethical hackers said they choose their targets based on whether bug bounties are involved, 13 percent said they choose services they like and 10 percent said they choose companies based on their responsiveness to reports. Another 20 percent said they do it for the challenge and the opportunity to learn.
This means that even if you don’t offer monetary rewards, providing clear security contact methods and inviting external security contributions can increase your chance of learning about serious issues before they’re exploited by cybercriminals and result in data breaches.
Feature image via Pixabay.