If the only thing you’ve heard about Helm 3 is that it removes Tiller, the in-cluster server component that managed releases, you already know the most important thing about the new release of the definitive Kubernetes package manager: the project has undergone a major rewrite to catch up with the state of Kubernetes and to remove long-standing security concerns. That sign of maturity is also reflected in the other key developments, like better patch merging, big improvements to usability including release management, and a clear support lifecycle for Helm 2 charts.
Being an evolutionary update for security and stability, rather than a checklist of new features with bells and whistles, is a good sign, Bridget Kromhout, Helm principal program manager at Microsoft, told The New Stack.
“Helm 3 being simpler and more secure might not make for a flashy headline, but these features are delightful for anyone who has operated systems at scale. Adventure, excitement: an on-call engineer craves not these things. Adding features upon features is an easy default, while a carefully-considered edit is a welcome sign of project maturity,” she said.
“How many projects do you know that remove large chunks of code? That’s a sign of maturity,” maintained Lachlan Evenson, principal program manager on Azure Container Compute and release lead for Kubernetes 1.16. “They said this is no longer what people want; let’s go ahead and get rid of it, let’s improve our security posture. Because what we heard in the Helm 2 community was, we love Helm but the security posture of Helm isn’t what we want in production. So we basically took that and said we’re going to deliver that in Helm 3. Let’s get the security posture, exactly where we need it to be.”
Taking out Tiller
Helm is something of a vintage project, started by Deis not long after Kubernetes itself and announced at the very first KubeCon. Helm 2 was built in conjunction with Google’s Kubernetes Deployment Manager team and added the Tiller component and gRPC to handle installing and managing Helm charts, rendering the charts and pushing them to the Kubernetes API server.
Tiller allowed teams to share a Kubernetes cluster, with multiple operators able to work with the same set of releases, but as the Kubernetes API has evolved, Tiller hasn’t kept up, leading to some concerns about the broad range of permissions granted to anyone with access to Tiller.
“The concerns with Helm, in part due to how long it’s been around, were a lot around security, like how do you take off-the-shelf software and deliver that securely into a cluster?” explained Gabe Monroy, director of Azure Compute, whose team at Deis started the project. “Helm 3 has been rearchitected to remove Tiller and move a lot of this logic into more modern Kubernetes techniques like operators and a client-side template.”
Removing Tiller was possible because, since Helm 2 was released in 2016, Kubernetes has added important features like Role-Based Access Control (RBAC) and Custom Resource Definitions (CRDs). “At the time, there weren’t CRDs, there wasn’t RBAC. God mode was the only mode; there was no other way to do it,” Monroy pointed out.
Now that CRDs are available, Tiller is no longer needed to maintain state or be the central hub for information about releases deployed through Helm; all that information can be stored as records in Kubernetes. User authentication and authorization is now done by Kubernetes and Helm permissions are just Kubernetes permissions, so they use RBAC and cluster admins can choose the granular Helm permissions they want to use.
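Because Helm 3 permissions are just Kubernetes permissions, a cluster admin can scope what a Helm user may do with an ordinary Role and RoleBinding. A minimal sketch (all names, namespaces and resource lists here are hypothetical, and a real release may touch more resource kinds):

```shell
# Write a namespace-scoped Role covering the resources a typical Helm
# release creates, plus the Secrets Helm 3 uses to store release state,
# and bind it to a (hypothetical) user. Apply with:
#   kubectl apply -f helm-user-rbac.yaml
cat > helm-user-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-user           # hypothetical name
  namespace: staging        # hypothetical namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["secrets", "configmaps", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-user-binding
  namespace: staging
subjects:
  - kind: User
    name: jane              # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: helm-user
  apiGroup: rbac.authorization.k8s.io
EOF
```

Anyone bound to this Role can install releases into the staging namespace but nowhere else; there is no Tiller service account to widen the blast radius.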
Instead of the client-server model of Tiller, Helm becomes a Kubernetes client; that removes layers of complexity but still gives operators the tools they need.
If you’re used to Helm 2, this is a major change in mindset, IBM senior software engineer and Helm core maintainer Martin Hickey told us. “With Helm 2, people talked a lot about Tiller; they worried about security, they worried about setting this thing up. You could go in and make Tiller secure, but we all take systems and use the defaults and very rarely do we change them. Security is now pushed to Kubernetes. We’ve pushed it to the place where it should be, the place that understands it, where the RBAC is.”
Helm 3 also reduces the complexity of setup and operation. “Essentially you were putting a proxy in there and that adds an extra layer of complexity,” Hickey explained. It was possible to lock down Tiller permissions, but especially for multitenant clusters, the defaults (designed to make Helm easy to adopt) were just too open for comfort and could lead to confusion.
“When you deployed a release [with Helm], it didn’t get stored in the namespace inside the cluster. It got stored in the namespace of Tiller. And by default, that was kube-system. That’s your system namespace! Say somebody didn’t change the Tiller namespace, the next thing you saw was hundreds and thousands of config maps in kube-system and you went holy moley! If you didn’t know how to search for that and that the owner was Tiller, you didn’t know where they came from.”
That no longer happens with Helm 3 — so if you’re used to the way Helm 2 works, the namespace improvements can be slightly confusing because you’ll have to look for your deployments in the right place, Hickey warns. “Because Tiller was running in god mode, unless you had reconfigured it differently, when you did helm ls it returned everything. Now it’s going to go into the namespace of where you asked it to go — and yes, now you have to create the namespace.”
(If you don’t want to do that by hand, there’s a namespace plugin that can still create the namespaces for you.)
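The namespace-scoped workflow Hickey describes looks something like this (release, chart and namespace names are hypothetical, and the commands need a live cluster plus the Helm 3 client, so this is an illustration rather than a runnable script):

```shell
# Helm 3: the namespace must exist before you install into it,
# and listing is scoped to a namespace rather than global.
kubectl create namespace blog                      # Helm 3 won't create it for you
helm install wordpress stable/wordpress --namespace blog
helm ls --namespace blog                           # only releases in "blog"
helm ls --all-namespaces                           # closest equivalent to Helm 2's global list
```

Note that Helm 3 also makes the release name a positional argument; the Helm 2 `--name` flag is gone.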
Releases, Configurations and Charts
Without Tiller, Helm needs a way to track the state of the different releases in the cluster. Helm 2 stored release objects in ConfigMaps by default, with Secrets as an option; in Helm 3, Secrets are the default.
The release information that’s stored in the namespace with the release is also very different: it includes both the Release instance, which is information about the specific installation of the chart, and the ReleaseVersion secret which stores the details of the upgrade, rollback or deletion of the release. If you install WordPress using a Helm chart, that creates a WordPress release and if you upgrade WordPress using Helm, that creates a new ReleaseVersion secret with the details of the upgrade operation and modifies the Release object to point to the new secret. If you need to go back to an older version, you can use the old ReleaseVersion secret to roll the release back to an earlier state.
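As a sketch of what that looks like in practice (hypothetical release and namespace names; the commands require a cluster with the release installed), the per-revision release Secrets can be inspected and rolled back with standard commands:

```shell
# Helm 3 stores release state as Secrets in the release's own namespace,
# one Secret per revision, labeled owner=helm.
kubectl get secrets --namespace blog -l owner=helm
#   e.g. sh.helm.release.v1.wordpress.v1, sh.helm.release.v1.wordpress.v2
helm history wordpress --namespace blog      # list the stored revisions
helm rollback wordpress 1 --namespace blog   # restore the state recorded in revision 1
```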
Storing the release information in the relevant namespace also makes it easier to repeat names if you need to install the same application in multiple places (like having separate WordPress instances as part of different systems running on the same cluster). With Helm 2, if you had a single Tiller instance running for the cluster, release names had to be unique.
With Tiller gone and Helm now talking to the Kubernetes API directly, helm init and helm home have also been removed because you don’t need them to create and store configurations; instead, configuration files are now stored using the XDG specification. “XDG is a standard way of where you put your configuration on a file system, whether it be Windows, Linux, whatever; it’s platform-independent,” Hickey noted. “So we can now do lazy creation. When you need that configuration it can be created on the fly; it doesn’t need to be done upfront. So you literally click on your binary and away you go; that’s blowing people’s minds!”
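Concretely, on Linux the XDG base-directory convention resolves to paths like the following (defaults shown; on a machine with Helm 3 installed, `helm env` prints the values Helm actually resolved):

```shell
# Where Helm 3 keeps its files under the XDG base-directory spec.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/helm"     # e.g. repositories.yaml
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/helm"        # downloaded chart/repo cache
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/helm"    # installed plugins
echo "$config_dir $cache_dir $data_dir"
```

None of these need to exist before you run Helm; they are created lazily when first needed, which is what replaces helm init.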
Despite Helm being basically rewritten to remove Tiller, Helm 2 charts will still work with Helm 3 (with a few exceptions that will mean they need to be updated). That was a very deliberate decision, said Monroy, because the value of Helm isn’t so much the engine as the wealth of community content.
“The architecture is very different, it’s much more modern, but it still has all the same benefits of the content that was created out there in the ecosystem that gives you that great experience of typing helm install kafka and you have a Kafka cluster running, now just with added security.”
“One of the things we were very cognizant of is the Helm 2 chart should still render and deploy in Helm 3,” added Hickey. “Charts should still be able to run and if they’re not running, come talk to us. There are a few little nuances to that which are namespaces and CRDs and the reasons for that were to align more with Kubernetes itself.” Dependency management is also slightly different.
The changes around CRDs reflect the fact that the Kubernetes ecosystem is still deciding how to manage CRDs to avoid the equivalent of DLL hell. “An application can own a CRD or it can be owned by many applications, and we were trying to do management of the CRD with the chart. We’ve simplified it and redesigned it to just create CRDs; so no more modifications or deletions,” Hickey said. CRDs will now be installed automatically, but if they already exist they won’t be installed again, and there’s a --skip-crds flag to skip installing them at all. This fits with the increasingly common DevOps pattern of installing CRDs with one chart and then installing applications afterward with their own charts.
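The Helm 3 convention is a top-level crds/ directory in the chart; a minimal sketch (the chart name and CRD are hypothetical, and the v1beta1 API shown was the common one when Helm 3 shipped):

```shell
# A chart that ships a CRD in crds/. Helm 3 installs anything in this
# directory before the rest of the chart, skips it if the CRD already
# exists, and never modifies or deletes it on upgrade or uninstall.
mkdir -p mychart/crds mychart/templates
cat > mychart/crds/widgets.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com      # hypothetical CRD
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
EOF
# To leave CRD installation to a separate, dedicated chart:
#   helm install my-release ./mychart --skip-crds
```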
If you want charts to work with both Helm 2 and Helm 3, make sure that they create the namespace and use both crd-install hooks and the crds/ directory; the hook will be ignored by Helm 3 with a warning but will still work in Helm 2.
But if you’re ready to move to Helm 3, there are new chart features you can take advantage of; if you do, you should mark your Helm 3 charts as using v2 of the chart API. Again, that may be confusing at first: the chart API version was v1 for the first release of Helm and stayed v1 for Helm 2, so with Helm 3 the chart API moves to v2.
“We apologize for that, but we bumped the API version number up to v2 so that we’re able to understand in the engine if it’s a Helm 3 chart or a Helm 2 chart,” Hickey explained. “The newer charts will not render in Helm 2 because the capabilities aren’t there.”
Those capabilities include library charts, which replace common charts from Helm 2 as a way to help you share and reuse snippets of code that you need in multiple charts for consistency (and to avoid typing and copy-paste errors). Set the type field for a chart to library rather than application and you can create a chart that doesn’t have any deployment objects and doesn’t create any release artifacts but can be referenced by other charts. This is a neater way of differentiating chart types and avoids the problem in Helm 2 where accidentally installing a common chart produced errors.
Requirements and dependencies move into the chart.yaml file, so they need to be specified differently for Helm 2 and 3. Charts can also now have JSON schemas for validating the install, upgrade, template and lint commands, to make it clearer what values need to be set.
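Pulling those pieces together, a Helm 3 chart definition looks something like this (all names, versions and the repository URL are hypothetical; the schema keys are illustrative):

```shell
# A minimal Helm 3 Chart.yaml plus a values schema.
mkdir -p mychart
cat > mychart/Chart.yaml <<'EOF'
apiVersion: v2            # v1 = Helm 2-compatible; v2 = Helm 3 chart features
name: mychart
version: 0.1.0
type: application         # or "library" for a reusable, non-installable chart
dependencies:             # lived in a separate requirements.yaml under Helm 2
  - name: common
    version: "1.x.x"
    repository: https://charts.example.com   # hypothetical repository
EOF
cat > mychart/values.schema.json <<'EOF'
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["replicaCount"],
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 1 }
  }
}
EOF
```

With the schema in place, install, upgrade, template and lint all fail fast if a required value is missing or has the wrong type.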
When you use Helm to upgrade releases, it now does a three-way merge that takes the live cluster state into account alongside the most recent and the proposed chart manifests. Looking only at the manifests meant losing anything injected as a sidecar, like an Istio proxy or Linkerd service mesh, or through other out-of-band methods, because it wouldn’t be in either chart. Those changes are now preserved during upgrades.
Simpler, Secure and Ready to Graduate in 2020
Another result of the re-architecture of Helm 3 is that the Go SDK has also been restructured. The CLI is a wrapper around the SDK to make it easy to work with, but if you want to create more complex deployment patterns you want to work directly with the SDK and in Helm 2 that was tricky to do. “Now it’s been nicely extricated and the packages are better encapsulated away from the CLI part,” Hickey explained. If you want to make Helm part of a pipeline where you need to use part of what Helm does and insert your own steps between, you can call the packages in the Go SDK directly to do that. That’s actually how the plugin for migrating from Helm 2 to 3 works.
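That migration plugin is helm-2to3, and using it is a short sequence of commands (the release name is hypothetical, and the commands need both Helm clients’ data plus cluster access, so this is an illustration):

```shell
# Migrate from Helm 2 to Helm 3 with the 2to3 plugin,
# which drives the Helm Go SDK under the hood.
helm plugin install https://github.com/helm/helm-2to3
helm 2to3 move config         # move repos and plugins from ~/.helm to the XDG paths
helm 2to3 convert wordpress   # convert one Helm 2 release to Helm 3 storage
helm 2to3 cleanup             # remove Helm 2 config, release data and Tiller
```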
The emphasis on the maturity of both the Helm engine and the community chart content is a step towards the project graduating from its Cloud Native Computing Foundation incubation status, as is passing its prerequisite third-party security audit with flying colors.
“We want to move Helm into a graduated project and we wanted to make sure that this was a secure offering and supported the security posture of Helm 3,” said Evenson. “The TLDR was glowing; it was recommended for production use, and I think it was second only to Linkerd in code quality. It had one very minor issue that was patched the next day.”
In conjunction with the Helm security audit, Snyk also ran a security audit of charts in the public Helm repo, looking for vulnerabilities in the images those charts reference.
“We use Helm internally and we knew about the security audit, but the scope of that was the project,” Snyk director of product management Gareth Rushgrove told us. “Obviously Helm itself needs to be secure, but Helm is a project where people use content. You’re not using it for the engine, you’re using it because it’s how you get to content, and actually no one’s looked at the security of that content.”
Because a Helm chart is such a convenient way to get applications, addressing vulnerabilities there will have a major impact on getting people onto updated versions of software. So while a number of charts referenced out-of-date images with known vulnerabilities, chart maintainers welcomed the audit and moved to fix issues. “I give them credit for their mature approach to being approached by a security company saying we’ve been looking at the security of your ecosystem,” Rushgrove said. The tool Snyk used to check charts for vulnerabilities is now available as a Helm plugin.
Rushgrove also views the clearly defined scope of Helm 3 as a sign of maturity. “They could have got carried away; there was a risk of Helm 3 being like Perl 6. Then they said, let’s actually ship this, let’s focus on the main things we want to get out there.”
If it’s surprising that something as widely used as Helm — with more than 1 million downloads a month it’s the third-most-adopted technology from the CNCF — is still in incubation, it’s worth remembering that it used to be a top-level Kubernetes project and only became an independent project in 2018 after Kubernetes itself moved to the CNCF. Although the immediate next steps for Helm are about documentation and opening up to accepting feature additions, with the security audit complete, Hickey suggests that Helm is likely to graduate early in 2020.
Helm’s incubation status doesn’t reflect the size of the Helm Chart community either. “The community around Helm is massive and diverse and continues to grow at an incredible pace,” Monroy notes. “I think the value proposition of being able to install complex software with a single command is getting to a workflow that the Kubernetes and CNCF ecosystem is still struggling with. Helm enables a helm install pattern that the rest of the ecosystem is still trying to deliver.”
Being able to make major internal changes that keep the project relevant to current concerns without disruption to users is an impressive achievement and Helm is well-positioned to stay relevant despite the wide variety of tools for packaging and installation in the cloud native world.
“Platforms that stand the test of time have an ability to evolve as the technology substrate changes,” Monroy notes. “Kubernetes is demonstrating an ability to evolve in that way, and I think the value proposition that Helm brings is the same thing. People are still going to be using Helm now that it has undergone some serious re-architecture; the value of that as a key enabling technology is going to be very long-lived.”
The Cloud Native Computing Foundation is a sponsor of The New Stack.