
The New Stack’s Top Podcasts for 2021

Dec 30th, 2021 6:00am

Our listeners have spoken: The New Stack’s most popular podcasts for 2021 were about security — particularly ransomware — and how to select the right tools to manage today’s highly decentralized and distributed computing environments.

We all know ransomware, and especially those who profit from and sponsor these attacks, is evil. What we need to know is how to protect our data and how to react, not if but when, these attacks occur. That urgency showed in the numbers: ransomware- and security-related episodes were not only the four most-accessed shows of the year, but also accounted for a large share of the total audience across the 10 most popular podcasts.

Beyond security protection and management tools, listeners were drawn to the challenges of increasingly decentralized and distributed environments: managing storage and APIs, stateful data management, disaster recovery and more. And, of course, our readers and listeners expressed strong interest in selecting the best tools to manage the explosion in data, which in short order will likely be measured in zettabytes rather than exabytes (1 zettabyte equals 1,000 exabytes; the data in tens of billions of video games would amount to roughly a single zettabyte).

Here are the 10 most popular episodes of The New Stack Makers that we produced this year.

1. What It Requires to Secure APIs for Microservices

Guest: Viktor Gamov, principal developer advocate for Kong, an API-connectivity company. (Episode)

Gamov likened securing APIs to ships transporting cheese from one port to another. The security checks include making sure that the cheese is indeed cheese — and not contraband — and monitoring the routes to check that the designated ship will arrive at the right port.

“You need to have some sort of secure channel or pathway” that prevents someone from accidentally disrupting or stealing microservices data, Gamov said.

Securing the API gateways involves protecting data en route between endpoints. This is “how we can ensure that someone who’s logged into our system is who they say they are,” Gamov said.

Identity and access management can be offloaded to a third-party security API provider, which in turn can help control and verify access to different APIs in microservices environments. “We can offload some of the work to Identity Providers [IdPs]” so developers don’t have to manage this process, Gamov said.

To use the cheese analogy again, an identification service can check that an API is “bringing Parmesan cheese.” “This is where we’re checking everything, like can you bring this department… Roquefort cheese, gouda cheese or some other things,” Gamov said. “So, this is where we are expanding it [to determine] what you can do with this system.”
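Gamov’s point about offloading checks to an identity provider can be sketched with a toy token flow: the gateway first verifies that the token is authentic (the cheese is really cheese), then checks its scopes before letting the request through. The signing scheme, scope names and secret below are simplified illustrations, not Kong’s or any real IdP’s implementation.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # in practice the gateway fetches verification keys from the IdP

def sign(payload: dict) -> str:
    """Create a toy HMAC-signed token (a stand-in for a real IdP-issued JWT)."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + sig).decode()

def authorize(token: str, required_scope: str) -> bool:
    """Gateway-side check: verify the signature first, then the scope claim."""
    body, _, sig = token.encode().partition(b".")
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered token: reject before even reading the claims
    claims = json.loads(base64.urlsafe_b64decode(body))
    return required_scope in claims.get("scopes", [])

token = sign({"sub": "svc-orders", "scopes": ["cheese:read"]})
print(authorize(token, "cheese:read"))   # True
print(authorize(token, "cheese:write"))  # False
```

The signature check runs before the claims are parsed, mirroring Gamov’s sequencing: establish that the caller is who they say they are, then decide what they may do.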

2. Ransomware Is More Real Than You Think

Guest: Jason Williams, product marketing manager for Prisma Cloud at Palo Alto Networks.

Ransomware attacks exploded in both magnitude and severity in 2021, with the Colonial Pipeline and other attacks showing just how vulnerable U.S. infrastructure, and ultimately national security, is to such brazen attacks. In the wake of the attacks, the U.S. government released the memo “What We Urge You To Do To Protect Against The Threat of Ransomware” in June to communicate the magnitude of the ransomware threat, while calling on organizations in both the private and public sectors to take stricter measures. The memo also outlined specific ways for organizations to protect themselves.

Attackers blocking access to critical data are also demanding, and in too many cases getting, outrageous ransoms. According to a Palo Alto Networks Unit 42 report, the highest ransom in 2020 was $30 million, up from $15 million in 2019.

However, before a ransomware attacker can access all the critical assets that “the network allows them to,” an organization can limit that access by segmenting critical data, Williams said. “You need to segment the network because not every application or every workload needs to communicate with each other just because it can,” he said.

The memo’s specific steps included improving backups, applying security patches, implementing network segmentation and other protective measures. But while all of these steps are important, patch management and isolating systems that are critical “away from systems that are less critical is particularly important,” Williams said.
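Williams’ segmentation advice can be reduced to a default-deny rule: a flow is permitted only if the source/destination pair is explicitly on an allowlist. The segment names below are hypothetical, and in practice the rule is enforced by firewalls or network policy rather than application code; this sketch just shows the logic.

```python
# Default-deny segmentation: traffic is allowed only if the (source, destination)
# pair appears on an explicit allowlist. Segment names are hypothetical.
ALLOWED_FLOWS = {
    ("web", "api"),        # the web tier may call the API tier
    ("api", "orders-db"),  # only the API tier may reach the database
}

def is_allowed(src_segment: str, dst_segment: str) -> bool:
    """Permit a flow only when it was deliberately approved."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

print(is_allowed("web", "api"))        # True: an approved path
print(is_allowed("web", "orders-db"))  # False: the web tier never talks to the database directly
```

Note that the rule is directional: even though `api` may call `orders-db`, nothing lets `orders-db` initiate a connection back, which is exactly the “just because it can” traffic Williams wants cut off.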

3. Infoblox: How DDI Can Help Solve Network Security and Management Ills 

Guest: Anthony James, vice president of product marketing for network management and security provider Infoblox. (Episode)

Network connections can be likened to attending an amusement park, where the Dynamic Host Configuration Protocol (DHCP) serves as the ticket to enter the park and the domain name system (DNS) is the map around the park. Infoblox made a name for itself by collapsing those two core pieces into a single platform that lets enterprises control where IP addresses are assigned and how they manage network creation and movement.

Infoblox’s name for this unified service is DDI, which is shorthand for DHCP, DNS and IPAM (IP Address Management — a repository for every device that gets an IP address).

“The way we think about DDI is that it is the basic foundational element for anyone to connect to a network, and then from the network outside the network, to places like the internet,” James said.

Organizations often struggle when these IP address services become separated and are not converged. “When [organizations] build a network, what usually happens is people take the services for granted,” James said.

Organizations might, for example, install and configure a Microsoft server and add an active directory for authentication, while “just inherently turning on DNS and DHCP,” James described. “That’s a common way to implement those services…What’s the challenge with that? There’s no coordination,” James said. “If you have a security incident where you get notified that an IP address has been possibly attacked, or that it went to a malicious website, now you’ve got to go and look at all those different services, the DNS logs and the DHCP logs that are now on two different infrastructures or on two different sets of infrastructure. It’s hard to figure that out.”
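The incident-response chore James describes, joining DNS query logs against DHCP leases held on separate infrastructure, looks roughly like this when done by hand. The log records and field names are hypothetical; the point of a converged DDI platform is that this correlation already exists in one place.

```python
from datetime import datetime

# Hypothetical, simplified log records from two separate systems.
dhcp_leases = [
    {"ip": "10.0.0.42", "mac": "aa:bb:cc:dd:ee:ff", "host": "laptop-17",
     "start": datetime(2021, 11, 3, 9, 0), "end": datetime(2021, 11, 3, 17, 0)},
]
dns_queries = [
    {"ip": "10.0.0.42", "qname": "malicious.example.com",
     "ts": datetime(2021, 11, 3, 10, 15)},
]

def who_asked(qname: str):
    """For each DNS query to `qname`, find the device that held that IP at the time.

    The join key is (IP, time window): an IP alone is not enough, because DHCP
    may have leased it to a different device an hour later.
    """
    hits = []
    for q in (q for q in dns_queries if q["qname"] == qname):
        for lease in dhcp_leases:
            if lease["ip"] == q["ip"] and lease["start"] <= q["ts"] <= lease["end"]:
                hits.append((lease["host"], lease["mac"], q["ts"]))
    return hits

print(who_asked("malicious.example.com"))
```

The time-window condition is what makes separated logs painful: an analyst has to pull both data sets and line up lease periods by hand for every incident.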

4. The DevSecOps Skill Sets Required for Cloud Deployments

Guest: Ashley Ward, technical director, office of the Chief Technology Officer, Palo Alto Networks. (Episode)

DevSecOps should apply throughout continuous integration/continuous delivery (CI/CD), with automation serving as the cornerstone of its implementation. But before deployments on the cloud begin, many might wonder if they have the requisite DevSecOps skill sets needed for the job. Many tools and processes exist, but how does an organization determine if the team members are up to the task of successfully adopting DevSecOps best practices?

Unfortunately, when organizations make the shift to cloud environments, they’re often short on the requisite skill sets, according to the “2021 Ransomware Threat Report,” by Palo Alto Networks’ research arm Unit 42. “What I see people doing is they’re reaching out — they’re usually speaking to partners, they’re speaking to people to get information about what other companies are doing and how people are coping with the skills gap,” Ward said.

Where to start? “Whatever organization it is, you look at things that are going to give — not necessarily a quick win — but continuous improvement,” Ward said. “So, if it was me, because my background is in containers and container security, I would say start with container images — it’s an easy thing that we can pick off the shelf, and we can start immediately showing benefits and we can start immediately showing the consumers of the images that things are improving.”
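Ward’s suggestion to start with container images can begin with very cheap checks, even before a scanner is in place. This sketch flags two common Dockerfile problems; the checks are illustrative only and no substitute for a real image scanner, which inspects the built layers rather than just this file.

```python
def quick_image_checks(dockerfile_text: str) -> list:
    """Cheap, illustrative Dockerfile checks: unpinned base images and root users."""
    findings = []
    lines = [line.strip() for line in dockerfile_text.splitlines()]
    for line in lines:
        # A base image with no tag, or tagged :latest, is not reproducible.
        if line.upper().startswith("FROM") and (":" not in line or line.endswith(":latest")):
            findings.append(f"unpinned base image: {line}")
    # Without a USER instruction, the container runs as root by default.
    if not any(line.upper().startswith("USER") for line in lines):
        findings.append("no USER instruction: container will run as root")
    return findings

print(quick_image_checks("FROM python:latest\nRUN pip install flask\n"))
print(quick_image_checks("FROM python:3.10-slim\nUSER app\n"))  # []
```

Checks like these deliver the “continuous improvement” Ward describes: consumers of the images see measurable progress immediately, while deeper vulnerability scanning is rolled out behind them.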

5. Business Innovation Across Multiclouds

Guests: Dormain Drewitz, senior director of product marketing for VMware Tanzu; Mandy Storbakken, cloud technologist for VMware; Shawn Bass, chief technology officer for VMware’s end-user computing business; and Jo Peterson, vice president of cloud and security services at Clarify360. (Episode)

The shift to multicloud environments away from the traditional data center-centric model can represent significant opportunities. But with these opportunities come challenges, including many decisions to make about infrastructure, toolsets and security before multicloud deployments even begin.

Making the shift also requires a shift in mindset, one which, among other things, allows for experimentation and accepts failure.

“Fundamentally, when you think about innovation, you have to put psychological safety into the conversation — and a lot of that comes down to whether you have created an environment where folks can fail as they’re trying things,” Drewitz said.

There is also a discussion around open source, especially in the enterprise, which “is a matter of getting capabilities into the hands of people that are trying new things very quickly, without having to go through a huge buying cycle or to buy enterprise software before they can even try something out,” Storbakken said. “Innovation comes from … giving people permission to try new things.”

There is also a movement “across the industry” from imperative to declarative infrastructure, Bass said. “There has been a shift happening over the last 5 to 10 years for companies to start thinking of desired state management — what is the configuration we want to deploy to a particular target — and let the capabilities on the endpoint automatically correct that device and bring it to that desired state management,” said Bass.
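Bass’s desired-state idea reduces to a reconciliation loop: compare what the endpoint should look like with what it reports, and emit the corrections that converge it. The configuration keys and values below are hypothetical.

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired vs. actual configuration and emit corrective actions.

    In a declarative model, the operator states only `desired`; an agent on the
    endpoint runs this comparison repeatedly and applies the actions itself.
    """
    actions = []
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            actions.append(f"set {key}={want} (was {have})")
    for key in actual.keys() - desired.keys():
        actions.append(f"remove {key}")  # anything not declared is drift
    return actions

desired = {"agent_version": "2.4", "disk_encryption": "on"}
actual = {"agent_version": "2.3", "screensaver": "off"}
print(reconcile(desired, actual))
```

The contrast with imperative management is that nobody scripts the individual `set` and `remove` steps; they fall out of the comparison, so the same declaration works for a fresh device and a drifted one.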

The adoption of new tools and processes to support innovation will almost inevitably involve adapting legacy infrastructure when making the shift.

“You end up in this situation where you’re building net new in the cloud, and you’re caring for legacy systems, and what you really want is a bridge between the two,” Peterson said.

6. When Is Decentralized Storage the Right Choice?

Guests: Ben Golub, CEO of Storj, and Krista Spriggs, software engineering manager at the company. (Episode)

The amount of data generated across the IT industry continues to roughly double every year, and the security and operational advantages of extending storage beyond a single data center or cloud provider are straightforward; the recent widespread Amazon Web Services outage underscored the risk of relying on one provider. Decentralized storage, and especially its supporting infrastructure, offers a host of security and operational benefits.

Security concerns alone make a strong case for decentralized storage, Golub said. “Anytime you’re storing data in clear text in a centralized location, it’s one mistake or one bad hacker away from being compromised,” he said. “And ultimately I think we probably don’t want a world where 80% of the cloud is controlled by three of the largest companies on the planet who happen to all be in the business of selling data.”
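Golub’s clear-text argument can be illustrated with a client-side encrypt-then-shard flow: the client keeps the key, and each storage node holds only an unreadable fragment. The cipher below is a deliberately toy construction for illustration; it is not Storj’s protocol, and real systems use vetted encryption plus erasure coding rather than plain splitting.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy hash-counter keystream (illustration only, not a production cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def shard(blob: bytes, n: int) -> list:
    """Split ciphertext across n nodes; no single node holds usable data."""
    size = -(-len(blob) // n)  # ceiling division
    return [blob[i * size:(i + 1) * size] for i in range(n)]

key = b"client-held key"
pieces = shard(encrypt(key, b"secret records"), 3)
# Reassembling and decrypting with the client's key recovers the plaintext:
print(encrypt(key, b"".join(pieces)))  # b'secret records'
```

Because encryption happens before the data leaves the client, a compromised node reveals nothing, which is exactly the failure mode of the “clear text in a centralized location” design Golub warns against.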

With data lakes, how valuable much of that data is remains unknown “until you have a question,” Spriggs said. “If you don’t store all of the data that you have available to store, you might be painting yourself into a corner where you can’t ask the type of question that you want to ask. And you’ll never get that data back.”

Decentralized data storage, Golub said, is destined to become more widely adopted as organizations themselves grow more distributed. “If we look back in 10 years we’ll see that decentralized cloud in general, not just decentralized storage, makes sense — the same kind of sense that we realized that decentralized telecommunications did 20 years ago, when the internet came about. It is inherently faster, inherently better, inherently more scalable and inherently more flexible.”

7. CNCF Assesses Tools for Kubernetes Multicluster Management

Guests: Federico Hernandez, principal engineer at social media analysis provider Meltwater, and Simone Sciarrati, Meltwater engineering team lead. (Episode)

Kubernetes remains a work in progress, and DevOps teams often struggle to select the best tools and processes for the shift. At the same time, many organizations are managing not just a single cluster, but multiple cluster deployments. Managing multiple clusters, however, requires a new set of tools, ones that automate many routine and manual tasks. So, for its fifth Tech Radar report, the Cloud Native Computing Foundation surveyed the tools available for multicluster management, based on input from its end-user community.

“If you start with one [cluster], then you might not consider that at some point, you’re going to have multiple clusters,” said Sciarrati. “And, so, which tools are going to be adaptable to that scenario? The CNCF is providing this kind of guidance and overview of what is available, and giving some kind of guidance of things you should think about.”

But one of the more interesting takeaways of the report was how organizations continue to rely on in-house developed tools — even those end-users already relying on Kubernetes managed service providers. “The reason for that is to adapt the Kubernetes building blocks to the internal ways of doing things to the applications that are being handled, and run by Kubernetes, and the entire ecosystem that lives inside that company,” said Hernandez. “So, you have a managed Kubernetes, but you need to somehow manage the managed Kubernetes, so it works the way you want it to in your company.”
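Hernandez’s “manage the managed Kubernetes” pattern often amounts to a thin wrapper that stamps company conventions onto every manifest before it is fanned out to each cluster. The cluster names, namespace prefix and labels below are all hypothetical, and the real API calls are stubbed out so the convention layer is visible on its own.

```python
# A thin in-house wrapper over managed Kubernetes: the same deployment request is
# sent to every cluster, with company conventions applied first. Names are hypothetical.
CLUSTERS = ["prod-eu", "prod-us", "staging"]

def with_conventions(manifest: dict, cluster: str) -> dict:
    """Return a copy of the manifest with company policy stamped on."""
    out = dict(manifest)
    out["namespace"] = f"acme-{manifest['namespace']}"  # namespace-prefix policy
    out["labels"] = {**manifest.get("labels", {}),
                     "owner": "platform-team", "cluster": cluster}
    return out

def apply_everywhere(manifest: dict) -> list:
    """Stand-in for the real API calls: return what would be sent to each cluster."""
    return [with_conventions(manifest, c) for c in CLUSTERS]

for m in apply_everywhere({"kind": "Deployment", "name": "web", "namespace": "shop"}):
    print(m["labels"]["cluster"], m["namespace"])
```

The managed provider still runs the clusters; the wrapper exists only so that the company’s internal ways of doing things survive no matter which cluster a workload lands on.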

8. CDRA Completes the CI/CD Software Development Lifecycle

Guest: Anders Wallgren, CloudBees vice president of technology strategy. (Episode)

Continuous delivery and release automation (CDRA) is a way for organizations to deliver better quality software faster and more securely, by automating digital pipelines and improving end-to-end management and visibility, according to Forrester Research. It can also be thought of as a support process, or even an extension of, continuous integration/continuous delivery (CI/CD).

In many respects, CDRA is very similar to CD, insofar as it is a foundation for committing, then delivering software, Wallgren said. “Value-stream CDRA is really about making your value stream executable and visible on an ongoing real-time basis. So it’s taking all of the things, all of the activities, all of the tools, all of the platforms, all of the software — everything that you do to build, test, qualify, deploy and release your software — and automating that,” Wallgren said.

“Ideally, our point of view is about one platform, over-the-top orchestration and tying together all the islands of automation that you already have,” he said.

Many organizations are already fairly accomplished in adopting continuous integration and automating the process. “However, there are about 58 other things that you have to do to your software before it’s ready to go live on the website, get burned into the chip, dropped into the box and those sorts of things,” Wallgren said. “CDRA is really just an acknowledgment that there are still release activities that most large mature software companies engage in, and we’re not yet quite living in a world where I compile the software on my laptop and then two minutes later, it’s in production.”

9. Data Persistence and Storage over Pancakes at KubeCon EU 2021

Guests: Itzik Reich, Dell Technologies vice president, and Nivas Iyer, Dell senior principal product manager. (Episode)

Storage used to be largely about hardware support and selecting the best network-attached storage (NAS) to store and manage data in data center environments. Now that cloud native architectures have entered the fray, with workloads often distributed across multiple Kubernetes clusters and clouds, storing stateful data across those clusters is one of the many challenges of Kubernetes adoption.

What DevOps teams especially want is simplicity in data persistence and storage management. Developers, for example, “want to treat storage just like we all treat electricity in our houses,” Reich said, in an interview at KubeCon+CloudNativeCon 2021. “We want to flip a button and, within magic that’s going to occur, you know storage will basically become persistent to those containers. And of course, in order for this to come, storage needs to be super smart but it’s also super simple to use,” he said.

Traditional architecture is usually three-tiered, with security out on the third tier managed by a storage or database administrator, Iyer said.

“Now you’re seeing that each individual small development team is managing their own databases inside the microservice, and so there isn’t a centralized data hygiene that’s being done,” Iyer said. “So now in Kubernetes, I see a huge opportunity… they’re looking at storage on demand, but, at the same time it also needs to be something that is available, resilient, reliable and recoverable.”

10. Which Comes First: Istio or Kubernetes?

Guests: Zack Butcher, part of the founding engineering team at Tetrate, a service mesh company, and Varun Talwar, co-founder at Tetrate. (Episode)

The use of a service mesh to manage and support Kubernetes environments is generally seen as essential. But what comes first, even for sandbox projects: deploying Kubernetes and then implementing a service mesh, or implementing the service mesh first? In the case of Istio, which is seen as conducive to managing large-scale environments, the argument is for starting with the service mesh.

Deploying a Kubernetes environment and then adding a service mesh to manage it all is not recommended. Nor is it a good idea for organizations to begin their digital transformations by implementing Kubernetes and service meshes, such as Istio, at the same time. While it may “run a little bit counter to what a lot of people think, I think you should start with Istio,” said Butcher.

“Maybe the shortest answer to why you shouldn’t do both at the same time is that it’s really hard to change the engine on the car and the tires at the same time — while we’re going down the interstate,” said Butcher. “There’s a lot of operational complexity and organizational learning that needs to go on to be able to adopt either technology.”

Meanwhile, Istio “at its core remains the same,” said Talwar. From the outset, Istio was created “to help connect, secure and observe services,” he explained. “Istio is still the same — it does a fantastic job of doing that within a single Kubernetes cluster, and continues to do so,” said Talwar. “The direction where the project is going… is going to make it more reliable and more usable. So all the roadmap items are more around the usability and reliability of what it does. There is some work [taking place for] extensions and a bunch of those things, but at the core, it is the same.”

TNS owner Insight Partners is an investor in: Unit, The New Stack, Real.