Prisma Cloud by Palo Alto Networks sponsored this article.
A proliferation of ransomware attacks has created ripple effects worldwide. These criminal campaigns have grown in scale and magnitude, shutting down hospitals and critical infrastructure targets.
The threat is very real for organizations with data assets and applications that they require to function — in other words, pretty much everyone. And the cost of paying ransom for such attacks is climbing rapidly.
The average ransom paid has skyrocketed, from $115,123 in 2019 to $312,493 in 2020, a 171% year-over-year increase, according to a report by Palo Alto Networks and its threat-intelligence arm, Unit 42. From 2015 to 2019, the highest ransom demanded was $15 million; in 2020, that figure doubled to $30 million. The highest ransom actually paid also doubled last year, to $10 million, according to the report.
So far in 2021, the highest ransom paid stemmed from an attack on JBS, a food-processing and meat-packing conglomerate, that shut down the company's logistics infrastructure in the U.S. and parts of Australia. JBS paid $11 million in Bitcoin to restore operations.
Sadly, IT departments are generally ill-prepared to properly mitigate the attacks or apply damage-control processes when they occur.
“Had we collectively done our automation homework, this would be a much smaller problem,” Torsten Volk, an analyst at Enterprise Management Associates, told The New Stack. “But as most organizations are still relying on human security staff to manually handle all kinds of access requests, applying the principle of least privilege can be excruciating and not maintainable. Once you start making exceptions, it is a slippery slope toward inviting ransomware and other evils into the organization.”
So Many Holes to Plug
Not that DevOps team leaders and CTOs are bad people for failing to completely protect their organizations’ operations against ransomware attacks. Given the scale and magnitude of the threats, it is easy to feel helpless to tighten up security when there are so many holes to plug.
The day-to-day demands for the IT department can easily take precedence over ransomware security, as DevOps teams might be under relentless pressure to deliver software or remediate post-deployment fixes at ever-faster paces.
“Ransomware attacks are such a big threat to all of us as they mercilessly exploit our own negligence in terms of applying best practices for hardening software application stacks,” Volk said. “In reality, almost all of us are cutting a few corners when it comes to locking down our apps and data, simply because of the added workload it takes to then provide tightly controlled access for legitimate internal and external parties.”
However, there are mitigating measures, more accessible than you might think, that can make your organization reasonably secure against ransomware attacks.
Best Practices to Implement Now
In June, the U.S. government released a memo that reflected the severity of ransomware threats, addressing how ransomware menaces business operations both in the U.S. and internationally. Among the key takeaways from the memo were these best practices for mitigating the threat:
- Scheduled, consistent backups with regular testing.
- Prompt updating and patching of systems.
- Response-plan testing.
- Third-party checks of the security team's practices.
- Network segmentation.
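The first item on the list, consistent backups with regular testing, only pays off if a restore can later be verified. As a minimal sketch (the function name and file layout are illustrative, not from any specific backup tool), a backup job can record a SHA-256 manifest alongside the archive so that every restored file can be checked against it:

```python
import hashlib
import json
import tarfile
from pathlib import Path

def create_backup(source_dir: str, backup_path: str, manifest_path: str) -> None:
    """Archive source_dir and record a SHA-256 manifest for later verification."""
    source = Path(source_dir)
    manifest = {}
    with tarfile.open(backup_path, "w:gz") as tar:
        for file in sorted(source.rglob("*")):
            if file.is_file():
                rel = str(file.relative_to(source))
                # Hash the file contents before archiving so a test restore
                # can detect silent corruption or tampering.
                manifest[rel] = hashlib.sha256(file.read_bytes()).hexdigest()
                tar.add(file, arcname=rel)
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
```

A real pipeline would also encrypt the archive and ship it off-site, but the core idea is the same: a backup without a recorded integrity baseline cannot be meaningfully tested.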
“The recent executive order from President Biden on improving the nation’s cybersecurity can play a role. Providing concrete, actionable guidelines is important and helpful not just for federal agencies, but for businesses in general,” said Scott Devens, CEO of Untangle, a security solutions provider. “Seeing how malicious actors exploit vulnerabilities, it’s good to see detailed guidelines, with actionable timelines, that all companies can follow to protect against attacks that can cost millions.”
Microsegmentation Means Zero Trust
Among the five best practices described in the White House memo — all of which are important — network segmentation can serve as a particularly efficient way to protect the applications and data that are the lifeblood of the organization.
Also known as microsegmentation — or Zero Trust Segmentation, as part of Palo Alto Networks’ Prisma Cloud offering — this zoning-off process of data and applications can serve to protect attack surfaces. It can also thwart lateral attempts by attackers to access critical data, once they have penetrated the network-security perimeter.
“Microsegmentation is verifying the connectivity that you have inside your data center, organization, or cloud and whether or not that connectivity should even be allowed, and if so, under what circumstances,” said Jason Williams, a product marketing manager for Palo Alto Networks’ Prisma Cloud. “The reason you want to limit that connectivity is so you can prevent lateral movement in the event of a breach.”
Some security decision-makers may balk at the microsegmentation concept, fearing that it is too complex to implement and manage. Others, such as DevOps teams, may have concerns about slowing down business processes, including software development, by blocking network access to certain data sets and applications.
“As soon as you block a network connection, you need to make sure you have 110% confidence in doing that, because if you block the wrong connection, you can take down an entire application, and then that can take down operations,” Williams said. “So, we want to help build confidence in our customers, because microsegmentation is proven to prevent lateral attacks.”
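Williams' point about verifying connectivity boils down to a default-deny allow list for east-west traffic: a flow between two workloads is blocked unless it has been explicitly approved. A simplified, hypothetical sketch (the tier names and ports are illustrative) of such a policy check:

```python
# Hypothetical allow list: (source tier, destination tier, port) tuples
# that represent the only east-west flows the application actually needs.
ALLOW_LIST = {
    ("web", "app", 8080),
    ("app", "db", 5432),
}

def is_allowed(src_tier: str, dst_tier: str, port: int) -> bool:
    """Default-deny: a flow is permitted only if explicitly allow-listed."""
    return (src_tier, dst_tier, port) in ALLOW_LIST
```

Under this model, a compromised web server cannot open a connection straight to the database (`is_allowed("web", "db", 5432)` is false), which is exactly the lateral movement microsegmentation is meant to prevent.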
When Locking Down the Perimeter Isn’t Enough
Understandably, many organizations — as a kind of knee-jerk reaction — will seek to lock down perimeter security as a starting point. This attempt is not unreasonable given that, in theory, total firewall-like perimeter protection would thwart any intruder from entering the network and initiating a ransomware attack.
But given the cat-and-mouse challenge of patching vulnerabilities without complete knowledge of when and where they occur, along with the risks tied to human behavior, the consensus is that it is all but impossible to prevent ransomware attacks by locking down the network alone.
“Had we collectively done our automation homework, this would be a much smaller problem.”
—Torsten Volk, analyst, Enterprise Management Associates
While developing the requisite culture and adopting sound policy to prevent ransomware attacks certainly helps, for example, all it takes is for an employee to become the unwitting entry point for a ransomware attack by clicking on a link in an email, said Oliver Tavakoli, CTO at Vectra, which offers an artificial-intelligence threat detection and response platform.
“Trying to achieve perfect security at the perimeter of an organization has proven to be impossible,” Tavakoli said. “It’s just that with the recent spate of ransomware, the potential end result of such intermittent failures has become more dire.”
It is also necessary to achieve a degree of resilience to attacks that have already gotten past your first line of defense, said Tavakoli, who advises: “Build the capability to detect them and to respond to them with a sense of urgency.”
Typically, a ransomware operator or other attacker enters the network through an unpatched security hole, a compromised user account or another avenue, then remains hidden in the network for weeks or even months until finding a way to access the data needed to orchestrate an attack (the lateral movement described previously). The assumption thus must be made that no network is completely secure, a concept also known as zero trust.
“While we advocate microsegmentation, we also urge the adoption of zero-trust policies: that is, to assume anything inside all of the network has the same level of trust as everything outside the network,” Williams said.
How Good Are Your Backups?
If and when an attack occurs, microsegmented, protected data and application layers, along with the ability to get back up and running if you are locked out of your company's operations, are critical. The stakes are high because backed-up data (whether systems of record for user data, past transactions, inventory or other information critical for the organization to function) represents a key and lucrative target for ransomware attackers.
Attackers thus focus on deploying ransomware to encrypt corporate data and systems to create system downtime for the victim entities, said Kevin Dunne, president at Pathlock, an access orchestration security provider. They also will exfiltrate sensitive corporate data for monetization on the dark web, often inflicting the additional pain on victims in the form of fines and lost revenue, Dunne said.
“Backups and data replication can help to minimize the effect of downtime,” he said. “However, they cannot prevent the exfiltration of corporate data, which is a major concern for large entities storing sensitive customer information.”
Additionally, distributed application architectures have increased the difficulty of checking backup integrity, Volk said. “There is nothing worse than seemingly having all the code and data restored, but then not getting it to run because of small details not being included in the backup.”
The best insurance against ransomware is the ability for an organization to revert to a known and recent state of data and applications, Volk said.
“As much as it makes us all cringe, there is no alternative to automated regular test restores when it comes to proofing our organization against ransomware,” he said. “This is similar to the early days of high availability data, where it was often just too painful to buy application hardware twice, just in case failover would be necessary. But once the time came and the failover environment was needed, we learned that it was worth every penny we spent on it.”
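The automated test restores Volk describes can be sketched in a few lines. The following hypothetical example (function and file names are illustrative) extracts a backup archive into a scratch directory and checks every file against a SHA-256 manifest recorded at backup time, which is what turns a backup you hope works into one you know works:

```python
import hashlib
import json
import tarfile
import tempfile
from pathlib import Path

def verify_restore(backup_path: str, manifest_path: str) -> bool:
    """Restore the archive into a throwaway directory and check every
    file against the SHA-256 manifest recorded when the backup was made."""
    manifest = json.loads(Path(manifest_path).read_text())
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(backup_path, "r:gz") as tar:
            tar.extractall(scratch)
        for rel, expected in manifest.items():
            restored = Path(scratch) / rel
            # Fail if a file is missing or its contents have changed.
            if not restored.is_file():
                return False
            if hashlib.sha256(restored.read_bytes()).hexdigest() != expected:
                return False
    return True
```

Running a check like this on a schedule, rather than waiting for an incident, catches the "small details not being included in the backup" while there is still time to fix them.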
Encrypted replication and viable disaster recovery platforms are thus critical, especially in distributed Kubernetes environments. This, Volk said, is because “Kubernetes relentlessly exposes our application’s weaknesses by following a completely policy-driven approach toward deployment, operations, scalability and upgrades.
“Unless we bake in replication and disaster recovery requirements at the policy level, our applications are universally exposed, wherever they may run.”
Learn more about protecting your organization from ransomware from a recent New Stack podcast:
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: MADE, Real.