Reckoning with the Human Factor in Observability

Observability is widely misunderstood, but in an age of increasing security breaches and more business being conducted online, it’s never been more important. How should organizations be thinking about their resources in multicloud environments? What strategies should they adopt to catch gaps in their security before hackers do? And what cultural changes might DevOps teams make to strengthen their observability?
In this episode of The New Stack Makers podcast, Maya Levine, technical marketing engineer and cloud native and cybersecurity evangelist for Check Point, joined co-hosts Alex Williams, The New Stack’s founder and publisher, and Heather Joslyn, TNS’s features editor, for a discussion of what observability means now.
Observability, Levine told the Makers audience, isn’t just about foiling major attacks — or, as she put it, “catching the breadcrumbs that cybercriminals leave behind.” It’s about reckoning with the human factor in creating and perpetuating security vulnerabilities: For instance, organizations that grant permissions broadly in-house, a convenience that can easily turn into a security nightmare if a bad actor uses that open door to gain more access.
The popularity of multicloud and hybrid-cloud architectures adds additional layers of complexity to observability strategies. No longer, Levine told us, can teams assume they are “safe in the cloud.”
Aligning with this new reality, she said, requires a major shift in thinking about observability and security. “The challenge is that you need security for every step of it,” she told the Makers audience. “And that’s why you need to move to implementing security as part of the development process.”
While security needs to be embedded in application development from the start, Levine said, DevOps teams also need to be cognizant of how AI is being used as a tool by hackers.
If hackers are using AI to exploit your system’s vulnerabilities, Levine explained, the situation calls for a response that uses similar technology. “Manual response to automated attacks is always going to be too slow, especially in the cloud,” she said. “So the more protections that you can apply that get deployed automatically, without even human intervention, the better you’re going to be.”
On the other hand, she noted, AI can improve observability in and of itself, helping to catch aberrations sooner: “We have the ability to identify what is regular behavior versus irregular behavior.”