Part one of a two-part interview with Mitchell Hashimoto, who, at the age of 25, has already spent a long time in the tech world building open source projects. At the time of the interview, Hashicorp had just announced $10 million in funding and the launch of the service they call Atlas.
I’m Alex Williams, here with Mitchell Hashimoto of Hashicorp. Mitchell has been around for quite a long time in the tech world, building open source projects, and he has just announced some funding and the launch of the service that they call Atlas. I thought we would spend a little time getting to know you, Mitchell. Thank you for joining us; I appreciate it.
Mitchell: Thanks for having me.
Can you tell us where you grew up, and about your first introduction to computers or software?
Mitchell: I grew up in a beach city, in Southern California, literally on the beach.
Mitchell: Redondo Beach. I grew up within walking distance from the beach and I had a good time there.
I started programming at the age of twelve, in middle school. I got into it almost by accident: I wanted to know how to make cheats for video games. I had this realization one day where I was using all these cheats for video games and I said, “Wait a minute – someone had to have made these, and why can’t that person be me?” So, I googled how to make an EXE, or something along those lines, and that brought up terrible results, but it kind of led me in the right direction.
I started with Visual Basic. I did that for a long time and then got into PHP and did some web stuff. I think a pretty standard language is good for beginners, and it kind of led from there, but I always had a lot of focus on automation.
Even from that start you were looking for ways to make it more automated?
Mitchell: I didn’t make this realization until a few years ago, and then I blogged about it. What I’ve always found fascinating about computers is that they could do things for people: things that you used to do manually, they could do for you. So, my cheats for video games weren’t about hacking or doing anything illegal in that way. It was about, “How do I make the cheat pretend to be a person and actually play the game so I don’t have to play the game?” That’s what it was always about. I was never the person finding hacks in the game.
Moving past that, I started a website that automatically built community forum software, with open source software: you entered the details and I would install it for you. I didn’t understand anything about security then, so it was horribly insecure and it was bad in a lot of ways, but I just progressed and continued this trend of automating things.
Why did you choose the community forum, and was that related to all those community forums you’d been seeing?
Mitchell: One of the first things that I found when I googled how to make an EXE was forums for programmers, and forums for programmers specifically making video game cheats. So I was always into forums and I wanted to make it easier to make forums. That thread slowly runs through all the projects I’ve worked on.
How did you first start to learn about open source?
Mitchell: I think the first major impact open source had was back when I used Visual Basic, which of course isn’t open source. But when I used Visual Basic I was having a really hard time — as a twelve-year-old would, I guess — I was having a really hard time grasping a lot of these programming concepts and figuring out how to do basic things — now, in hindsight, really basic things — like loading a web page and parsing the contents to find the user name and things like that.
A few of my “friends” — internet friends on the forums — they would just send me their source code. Then they started making the source code public on the forums to help other people learn. So I started doing the same thing with my cheats. The ones that had been caught and didn’t work anymore — I’d make those open so people could see how they worked and learn from it. I don’t think I could have learned how to program without these people showing me how certain things are done. That was my first introduction to what could kind of be considered open source.
That same ethos is very much alive and well today. What is the difference? Is it a larger community? Is it just more people? Obviously, far more people are coding now than at that time. What was unique about that time for you, looking back?
Mitchell: All of that still exists today, but a lot of open source — especially open source that I’m involved in today — is viewed as “high-stakes” open source. People feel like it has to have an impact. They feel pressure with their open source for some reason. It wasn’t about that then. I mean, there were bad things because it wasn’t about that — you would just put up the .ZIP file with your source and you would never touch that again. But it was more about, “Here is how you do this thing,” so that you can do it in the future.
It was a lot more about learning rather than trying to build some core part of someone’s infrastructure or software.
It was a much smaller world then. It didn’t have as many of these market dynamics, either.
Mitchell: The oldest person on that forum must have been 18 or something, so we were all kids. We all wanted to make money. But to us, making money was like finding a way to make $100. That would have been huge. We didn’t do this for money, really. It turned into that: a bunch of us from that forum started a subscription service for our cheats, so we found a way to make money from it. But at its core, it wasn’t that.
What are those people doing today?
Mitchell: I only keep up with one of them now, and he is an engineer at Palantir. I think they all turned into engineers or business folks and they’re all doing pretty well. They’re good guys.
You went to the University of Washington. Did you go there to study Computer Science?
Mitchell: I applied there specifically because of their Computer Science program’s reputation.
Why did you choose Washington?
Mitchell: It wasn’t my top choice, actually. My top choice was Carnegie Mellon in Pittsburgh. I actually got in, which I still think is a miracle, but I got in. I visited the campus. On the same trip I visited Pittsburgh, I visited Seattle for the first time, and I also visited one other campus. I visited Pittsburgh and I didn’t like the city that much and I also didn’t like the feel of the campus. It was too small, I think. Carnegie Mellon is a very small college, and that’s great for some people, but I didn’t like the feel of it. And when I went to Seattle, immediately, it just felt right. It’s a huge campus, it’s really pretty, the environment looks nice, and I knew they had a good program, too. Carnegie Mellon is known for having one of the top, top, top programs, but I think UW was a better balance for me.
What was your first taste of distributed systems when you were at the University of Washington?
Mitchell: Well, my first taste of distributed systems where I knew it was distributed systems? My first taste without knowing was a job I had where we were doing infrastructure, basically. I was a consultant at a developer shop and we had to scale websites – those are inherently distributed systems.
The first time I actually knew I was working on distributed systems and had to think about distributed systems was a research project that I joined in my second year of college. It was kind of like Folding@home, in which code would run on your computer when your computer was idle, to help science. It was kind of like that, except that students could run arbitrary code that they wrote, on millions of computers worldwide that were volunteered. So it was a little trickier because we had to figure out, “How is that safe? How do we make sure they’re not doing anything illegal? How do we make sure they’re not accessing files they shouldn’t be?” We did it across Mac, Windows and Linux, to millions of computers worldwide. We also used Linux containers, so that was a really new concept for me, and that’s when I was introduced to Linux containers. It was 2008, I guess.
How were Linux containers to use at that time?
Mitchell: Pretty terrible to use, compared to today. They’ve come leaps and bounds in terms of usability.
Why were they using containers?
Mitchell: The way the security model, or isolation model, worked in that project was defense in depth, like an onion. Wherever we could get another layer of security, we would just wrap it in another layer. Linux containers were known to be insecure at that time, but they did help isolate certain things, so we just wrapped it in that layer as well as multiple others.
What did you learn, now that you look back, from using those first generation distributed architectures?
Mitchell: One of the big takeaways from that project and that experience was, being on a research project, learning to think about things academically; you don’t do Wild West things so much when you’re coding. You could really think through the problem and solve it on paper before you get to an editor. Another was understanding the difficulties of deploying things in a production-like environment that millions of people are relying on. It was made up of so many components, and knowing that if you deploy something and it goes down, that’s not good. Or if you deploy something and it’s unavailable for the duration of the deploy, that’s not good, so, thinking through all these problems…
The last part, which I wouldn’t say came from the research project, but from my job around the same time, was developing a deep appreciation for user experience. There are a lot of things in academia that are fundamentally sound. You could read the proofs and the research, and they’re going to work, but getting them to work is just a terrible experience. Nothing’s documented and the output isn’t consistent: sometimes it does this, sometimes it does other things, when you’d think they should be uniform. Wanting things to work the way a person would expect them to work became very important to me.
Why is that? Is it just a result of an academic environment?
Mitchell: I don’t want to throw academia under the bus or anything, but I would say that’s not a high-value item for them. I think they value the results and the fundamentals behind it much more, which is very important, but other things suffer because of it.
When did you realize distributed systems were actually distributed systems?
Mitchell: That research project. It was a tough love thing. The research professor would assign me a task and I would solve it. I would give him the solution, and I guess he had higher expectations, because he would look at what I did and say, “This can never work; this is stupid. This is what happens when the network fails. This no longer works at all.” And then I’m realizing, “Oh, yeah, you’re right, nothing works,” and that made me think harder about these things. We’d have ten computers and I treated it like, “Let’s just pretend it’s one – but ten single computers – and that’s distributed.” But there’s so many more aspects to it; I had to discover that the hard way during that process.
So when did you start to see the intersection of distributed systems with open source?
Mitchell: The eye-opening experience for me with distributed systems and open source was when I first used this database called Riak, from Basho. After I graduated college, I stopped thinking about distributed systems for a couple of years. I became an operations engineer and I wrote Chef and Puppet all day, and I really liked that, but I hadn’t really thought about it in the formal way that I did in academia, for a while. Then I deployed Riak into our infrastructure as a solution to a problem, and it was a distributed system.
It was a distributed system with a really good user experience, too. The way it just auto-healed, the way data was distributed properly around the ring, and the way they documented very clearly the failure scenarios, the functioning scenarios, and its limits – things like that really appealed to me in the sea of GitHub-sort-of software that had a readme but gave you no idea what its stated goal was, or where it started and where it stopped.
After that, when we started this company and I was talking to my co-founder, Armon, who was my best friend in college as well, I told him, “Well, if we design production software, let’s make it as nice to use as Riak was.” That led us back to our undergraduate research and thinking through things in that way.
So tell us about how Vagrant came to be then?
Mitchell: While I was doing this research project, I mentioned here and there that I had a full-time job at this consultancy as a developer, and we would see a lot of projects; every six-ish weeks we would get a new project. Between those six weeks we would often get pulled off to do maintenance work on previous projects, and it was all across the board. We were using the latest and greatest technologies, and getting those running on your machine was really difficult; I got frustrated with having to fight my Mac to run every website, and I wanted to isolate them all in some way. Vagrant was my attempt at solving that, and I just got lucky that it worked. It was my first try and it worked the first time, but it was really to solve this problem of repeatable development environments.
You guys were starting to think broader at that time too, weren’t you? You weren’t just thinking about Vagrant. We covered this when we talked at DockerCon. Can you talk about the philosophy you were starting to develop at that point?
Mitchell: Concurrently with Vagrant, as Vagrant was 0.1, I was working on this other slated-to-be-open-source-but-never-reached-that-point project with Armon, who is now my co-founder, and it was called Cockpit. It was this idea of getting code from your laptop and then doing all the difficult tasks of getting it to production. It was designed around the use cases of deploying this research project we were on, which was made up of a bunch of services that were across all these operating systems. Some were kernel modules, some were desktop applications, some were backend services; it was everything you can imagine.
So we designed this thing called Cockpit and we just realized that the problem was very difficult. The problem’s scale was actually pretty massive. We didn’t really have the tools we wanted to solve everything the way we felt they should be, so we started breaking down that problem into “what do we need?” Luckily, Vagrant was a pretty key part of that.
One of the things we needed was a consistent way to get a development environment, because from a consistent development environment you could consistently push to production. If you have all of these different development styles it’s hard to create a single command to get to production. That became a key part of it, but we also mapped out a bunch of other things we needed. Over time, those were what became the open source projects we have today.
Now you’ve developed Atlas, which is the unification of those different things. Can you tell us what Atlas is exactly?
Mitchell: Atlas is that idea of getting from development through production on any infrastructure, on any operating system, with one tool and one workflow. The idea is that you could be writing Python, you could be writing a kernel module in C, you could be making a .NET application; and it could be going to a work station, it could be going to a huge server, it could be going anywhere. But you want to be able to deploy, configure how deploys work, see where the deploys are, see who deployed, see all this information in one dashboard. Atlas is the manifestation of that idea.
It actually is Cockpit: it’s what we set out to build with Cockpit, but done in a much better way, because we broke down the problem into individual open source projects and now we’ve unified them. That made the problem a lot simpler, by lowering the surface area we had to solve in each individual piece.
What are some of the core complexities that come from the data center that you are trying to help solve for the customer?
Mitchell: The core complexity we’re trying to solve is that getting from development to production hits a lot of common steps, but in each of these common steps there’s a lot of choice. There is usually a CI step; there are a lot of different CIs. There’s a config management step; there are a lot of different config management systems. There is the step where you choose whether you want a container or a VM. It goes to multiple environments: staging, production, per developer environments. It goes to different infrastructure providers.
Maybe development and staging is on your machine or on a local server in the office, whereas production is out in your data center, so there’s that complexity. Then there’s the complexity, more prevalent now, of distributed systems, of how to monitor these things, how to keep them up, how to orchestrate what order they come up in and how they communicate.
With Atlas we’ve broken down this whole pipeline into five distinct stages: dev, build, artifact registry, deploy, and monitor & maintain. We’ve put our open source projects into each step, so we could guide the whole deployment process through using our open source software, but then use Atlas as the single pane of glass to view into that. Each of our open source projects integrates with all of the matrix of complexity at its stage.
So, for config management, we integrate with Chef and Puppet and Ansible and Salt and all those guys. For infrastructure, for deployment, we integrate with AWS and OpenStack, physical providers, like your own custom stuff. What we like to say is that we build our tools to sit at the workflow layer and not the technology layer; the idea is that the workflows will stay the same – the end goal of what you want to achieve stays the same – but the technology and software underneath that will evolve and improve, and that’s how we designed this thing.