Catching up with the Founder and CEO of Portainer

Recently, I had the pleasure of meeting the CEO and founder of Portainer, Neil Cresswell. He’s passionate about his product (which, as I’ve made abundantly clear, is one of the best GUI container managers on the market, in my opinion) and passionate about open source software.
For those who’ve never experienced Portainer, it’s a lightweight UI for both Docker and Kubernetes that makes deploying and managing both technologies incredibly simple. I’ve been using Portainer for some time now and have found it essential not only for managing containers but also for learning Docker and Kubernetes themselves. It’s easy to deploy, simple to use, and makes life with containerized applications considerably more effective and efficient.
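If you want to try it yourself, Portainer’s Community Edition typically runs as a container alongside Docker. The sketch below shows one common way to stand it up; the image tag, published port, and volume name are illustrative and may differ for your version and setup.

    # Create a volume for Portainer's data, then run the Portainer CE container.
    # The published port and image tag vary by release; check the Portainer docs for yours.
    docker volume create portainer_data
    docker run -d --name portainer --restart=always \
      -p 9443:9443 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest

Once the container is up, the UI is reachable in a browser on the published port, and the rest of the setup is point-and-click.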
Cresswell has spent 25 years in systems engineering, IT consulting, and, more recently, IT management roles. He started his career at IBM, where he spent 12 years in leading roles across server, storage, and virtualization systems engineering.
From IBM, Cresswell moved into self-employment with two business partners, who together created one of New Zealand’s leading VMware consulting practices, ViFX. During this time, Cresswell was based in Singapore and was responsible for the design and deployment of the largest and most complex VMware deployments in the region. He then moved into a CEO role at a startup cloud service provider, and later served as a contract CIO at an eCommerce provider.
Cresswell’s exposure to Docker technologies came in 2016 when, at the helm of the cloud service provider, he wanted to bring a “Containers as a Service” offering to market (well before many of the hyperscalers had such offerings). He attempted to find self-service portals the service provider could deploy to help its customers consume the service, but none were available, so he had Portainer built.
Cresswell now runs the Portainer.io business and is responsible for the overall product strategy and the team entrusted to deliver on that strategy.
So, without further ado, let’s get to the interview.
TNS: What made you want to create Portainer?
Cresswell: I got exposed to Docker technology back in 2016, and at that time thought “this is the next big thing.” I could foresee the future transformational impacts of the technology, but at the same time, just how “raw” it was. Whilst Docker itself was quite easy to get my head around (due to my Linux background), the orchestration layer that sat on top of Docker (at that stage Swarm Mode) was quite complicated, and really did require someone with a brain the size of a planet to understand what it was doing and how it worked.
I was, however, sure that Docker with a cluster/orchestration layer would become the de facto standard, and replace VMs as the primary way apps would be deployed and operated across enterprises.
At the time, I had seen the Virtualization wave from the very early days. I knew that VMware initially struggled to get widespread adoption, but once [VMware] cracked it, adoption picked up thick and fast. I could see the similarities between Docker and the early days of VMware.
I asked myself, why did it take VMware 10+ years to really crack into the mainstream and start to make a real impact in the data center? What was the thing that unlocked the mainstream for VMware? My answer was that mainstream adoption was being held back by constrained access to suitably skilled people to deploy and operate the complicated tech, and that adoption really took off once there was tooling that allowed in-house IT to deploy, manage, and use the VMware platform without first having to become VMware-certified experts.
In order to deploy early versions of VMware, you needed highly skilled (and highly paid) engineers. No way could anyone without a VCP deploy VMware safely (heck, even VMware promoted VCDX as the level of skill required to build an enterprise platform).
Mainstream IT pushed back on VMware (and the market) and demanded tooling that allowed them to safely gain access to tech without needing to use experts. They wanted their internal IT teams to be able to deploy and manage the tech, safely.
Once VMware released the expanded tech stack, which included the required tooling, deployments of even the most complex environments became “wizard” driven. Further, the tooling required to successfully operate “day 2 onwards” was also bundled, and easy to use. This removed the resource constraints, and bam… rapid adoption (it’s also the moment the “value/salary” of a VCP and VCDX decreased dramatically).
So, I thought to myself, if this was the pivot point for virtualization, then it’s highly likely that container technology will also follow this same trajectory. So what happens if I can provide, for containers, the same kind of simplicity tooling? Can I bring forward the mainstream adoption of the technology? And this is how Portainer was born… trying to make the underlying technology, Docker initially, and then later Kubernetes, as simple to deploy and manage as VMware VMs were at the time.
What can Portainer do for a company that standard Docker and/or Kubernetes cannot?
Portainer is an overlay management layer; it isn’t designed to replace Docker or Kubernetes. What Portainer does is make the technology significantly easier to understand and use.
There is no need to memorize the hundreds of CLI commands that Docker and Kubernetes need to operate. At the end of the day, if you don’t know a command or capability exists, how can you know to go look for it?
We make the capabilities of the platform immediately obvious. Want to see the performance of a host, or a container/pod? No problem, click on the “stats” button to see it.
Want to deploy a stateful application? No problem, follow our UI prompts and you can get something live in under a minute. Want to see the logs of your app? Easy, click the logs button… The list goes on.
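[For context, the buttons Cresswell describes surface information a user would otherwise pull from the CLI, roughly with the commands below; exact flags vary, and the container and pod names here are placeholders.]

    # Approximate CLI equivalents of Portainer's "stats" and "logs" buttons
    docker stats my-container      # live CPU/memory usage for a container ("my-container" is a placeholder)
    docker logs -f my-container    # stream a container's logs
    kubectl top pod my-pod         # pod resource usage (requires metrics-server)
    kubectl logs -f my-pod         # stream a pod's logs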
Basically, we make the technology available to your internal IT teams without them needing to first become highly trained and certified. We enable them to be successful with the tech right now, regardless of their current skills.
Docker, and Kubernetes especially, have such a steep learning curve that it would take your average IT admin/operator months to become technically competent enough to use the technology in critical production scenarios.
That’s too long, and we turn months into days. It’s not uncommon to hear stories of companies investing 6+ months to design and build a Kubernetes platform. The problem is, they thought it would take a month, so their projects are behind schedule. Worse, over those six months they realize just how complex the tech is, and how much ongoing effort it’s going to need to keep it maintained and delivering to its service-level agreement.
What are the biggest hurdles for container deployments at the moment?
Two things.
The first is the misunderstanding around the actual requirements to “go live” in production with Docker/Kubernetes. So many organizations have a false sense that the technology is easy, and to be honest, it is easy to get something stood up and running. The problem is, what you have just deployed is not secure, not scalable, and likely not supportable.
There are a lot of tools out there that help you build a cluster, and these tools are pretty easy to use. This gives a false sense of simplicity.
The problem is, these tools don’t help you configure or support the tech post-deployment. Docker and Kubernetes, by themselves, are almost useless. They are not a platform; they are a component of a platform. To use either of these, you will end up deploying tools to help with deployments, observability, monitoring/alerting, security, and logging.
Go take a look at the Cloud Native Computing Foundation landscape, and understand there are over 2,000 projects vying for your attention and wanting to be part of your “platform.” How do you know which to choose? And once you decide on your stack, are you sure they are all interoperable? What about when updates come out? How quickly do the tools you chose update themselves?
This is the real hurdle, understanding that genuinely delivering a production-grade platform is significantly more complex than just spinning up a cluster.
The second thing is access to skilled engineers. Right now, without using Portainer, you really do need Kubernetes-trained/certified engineers to safely deploy and manage the technology. The problem is, these people are in extremely limited supply. The good ones are being vacuumed up by Silicon Valley (yes, they hire internationally, as their need for engineering talent surpasses what’s available IN Silicon Valley).
Not sure if you saw, but Apple apparently hired 8,000 Kubernetes engineers last year. This skills shortage becomes really apparent in emerging markets, where there may only be a handful of skilled engineers.
Why did you choose to open source Portainer?
We open sourced the product because we believe strongly that Portainer is an enabler for the mainstream, and we want the mainstream to have no barriers to initial adoption. By making the product open source, we let companies get started, without any commercial hurdles/commitments. It de-risks the entire project for them and also reduces the product research needed, as they can just “give it a go” and see firsthand the benefits the product brings.
What has been the biggest challenge in creating a cloud native or container-centric company?
I think it’s the adoption velocity of the technology we build upon. It’s fair to say that it’s taken quite a while for containers to be generally accepted in enterprises, and to become widespread enough that there is sufficient market demand for a product like Portainer. It’s only now, with ISVs (independent software vendors) defaulting to shipping their apps as containers, that the mainstream (which we built Portainer for) has a real motivation to build Docker/Kubernetes platforms.
What feature of Portainer do you think would benefit companies and developers the most?
This is a hard one, as there is not really one feature per se; it’s the UX we have elected to deliver that is the biggest benefit. We purposely decided NOT to expose the raw technology lingo to our users; we wanted to ensure that people without knowledge of the lingo could still use Portainer because, after all, we are trying to enable the masses without them needing deep experience in the tech we overlay. If I had to choose one feature, it would be our no-code deployment experience… which is a massive enabler for those who want to get their app running but don’t possess the knowledge of how to write a YAML deployment definition file.
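[To give a sense of what that no-code experience spares a newcomer from writing, below is a minimal Kubernetes Deployment definition of the sort a form-based UI can generate behind the scenes; the application name and image here are hypothetical.]

    # A minimal Kubernetes Deployment manifest; the name and image are hypothetical examples
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
            - name: example-app
              image: nginx:1.25    # placeholder image
              ports:
                - containerPort: 80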
What advice would you give new container developers or companies wanting to add containerization into the mix?
For developers, containers are 100% the best way to ship your apps to your users. They give you so much deployment consistency and really do remove the “works on my machine” problem. You need to get comfortable developing in containers and building container images. The investment will be worth it.
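[As a concrete starting point for that investment, a container image is usually described in a Dockerfile. Below is a minimal sketch for a hypothetical Node.js service; the base image, port, and entry point are chosen purely for illustration.]

    # Minimal Dockerfile for a hypothetical Node.js service (server.js and port 3000 are assumptions)
    FROM node:20-alpine
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci --omit=dev
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]

Building it with docker build -t example-app . and running it with docker run -p 3000:3000 example-app produces the same artifact everywhere it’s deployed.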

For companies, when you are provided software, whether from your devs or from your ISV, if it’s a container image you know that whatever they have provided will work, regardless of your environment.
This consistency is gold. Also, containers enable absolute portability, so they unlock hybrid/multicloud possibilities, allowing you to deploy your software where it makes the most sense for you.
What (if anything) prevents teams and companies from adopting a GUI for container deployment?
There is a lot of industry pressure to do “everything as code,” be that Infrastructure as Code, full CI/CD, or GitOps… these are the buzzwords, and apparently if you are not doing this, then you are doing containers wrong… This simply increases the barriers to adoption, adds more tooling for your teams to learn before you can go live, and further narrows the available engineers capable of creating all this code.
Whilst we agree that “as code” is aspirational, right now you can gain the benefits of containers with a GUI, whilst your team learns how to define apps as code in the background, allowing you to evolve to it over time.
There is also a lot of peer intimidation, where anyone that uses a GUI is seen as somehow inferior to someone that uses the CLI…
In reality, there is nothing stopping companies from adopting a GUI for container deployment, so get going!
What has been Portainer’s biggest win to date?
I think it’s the ongoing love we get from our users. We have a consistently growing community of users, and even with the hundreds of new tools on the market, we continue (after six years) to attract new users, and retain existing ones. This is a big win.