
System Initiative Could Be Lego for Deployment

Adam Jacob, who previously created Chef, has a new DevOps 'power tool' called System Initiative. Could it be Lego for the deployment world?
Jul 22nd, 2023 4:00am by

Listening to Adam Jacob on the Changelog podcast, I was interested in his reference to how Unity (the game development platform, and a competitor to Unreal) uses a GUI hierarchy model to tie together code, configuration and components. He wanted to do something similar to help DevOps make more sense. Adam Jacob is the guy behind Chef, so he knows about infrastructure woes.

So what is System Initiative (SI)?

System Initiative is a collaborative power tool designed to remove the paper cuts from DevOps work.

This isn’t a very arresting elevator pitch, initially.

Now Mr. Jacob’s theory is that with so many “best in class” systems that have no common glue between them, big projects tend to get knotty fast. He talks about ‘paper cuts’: the many small annoyances that add up to frustrating deployment paths. For example, one tool stores data in JSON, another in YAML. One tool allows scripting in Ruby, another in Python. But deeper than that, they choose to reveal state at different times, making overall conclusions about a system difficult. Think of all the “dashboards” that have come about just to make simple assertions about your deployment.

As a consultant, I’ve seen devs try to write all-encompassing toolchains, with the result that they slowly lock out existing solutions, sometimes ones that are used elsewhere in the same organization. Mr. Jacob correctly states that most organizations don’t deploy more than once a week, although that is almost certainly down to political theatre. Rapid deployment means rapidly cutting people out of the loop, which is one less thing for those people to write in their performance reviews. So people are probably DevOps-ing hard enough. But the complexity of operations does affect basic agility, and this is where SI could gain headway.

Because loosely coupled systems have, by design, no knowledge of each other, it tends to be hard to make them collaborate or share intelligence. Feedback loops can be long, and context switching between different tools can be mind-numbing. Strangely, this setup is not too dissimilar to the siloed departments DevOps was designed to break down!

You can see the arguments laid out in a historical context in this previous article.

The video at the SI site shows you the diagram and asset sets that the user manipulates. The eventual example deployment is a website for a dubious cat delivery business; but at the end of the day we are all just delivering cats.

System Initiative chooses to go down the digital twin route: a diagrammatic representation of common components (and their relationships) stands in for their real backend equivalents. Creating the “real objects”, such as EC2 assets, can be deferred until the frontend model is ready. So SI lets you connect your frontend twin components via a diagram and a library of common assets: EC2 images, security groups, regions and so on. Because of the connections between these frontend components, SI can make intelligent guesses about names and intentions. For example, a Docker image’s exposed port can be connected to an AWS Ingress component with a single-line connection.
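As a thought experiment, the connected-components idea can be sketched as a small graph of typed components in which a downstream component infers values from whatever is wired into it. To be clear, this is my own illustrative model — SI's actual data model isn't public, and every name below is hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a twin component graph; SI's real model is
# not public, so the class and property names here are illustrative.

@dataclass
class Component:
    kind: str                                   # e.g. "DockerImage", "AwsIngress"
    props: dict = field(default_factory=dict)   # locally set properties
    inputs: list = field(default_factory=list)  # upstream connected Components

    def connect(self, upstream: "Component") -> None:
        """Draw a 'single-line connection' from an upstream component."""
        self.inputs.append(upstream)

    def resolve(self, key: str):
        """Prefer a locally set prop, else infer it from upstream components."""
        if key in self.props:
            return self.props[key]
        for up in self.inputs:
            value = up.resolve(key)
            if value is not None:
                return value
        return None

# A Docker image exposes a port; the connected ingress infers it.
image = Component("DockerImage", {"exposed_port": 8080})
ingress = Component("AwsIngress")
ingress.connect(image)
print(ingress.resolve("exposed_port"))  # 8080
```

The point of the sketch is just that the connection itself carries intent: the ingress never states a port, yet the graph can still answer the question.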

An important point comes out early on: both the twin and the “real object” have to be considered sources of truth. The trick is to keep them in sync, or to inform the user quickly when they are not. Differences are referred to as “qualifications”. Sometimes the frontend design is “ahead” of the actual backend; think of this like differences in git.
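To make the two-sources-of-truth idea concrete, here is a minimal sketch — mine, not SI's implementation — of comparing the twin's desired properties against the reported real-world state and flagging each difference, loosely analogous to a git diff:

```python
def qualifications(twin: dict, real: dict) -> list:
    """Compare desired (twin) state against actual (real) state.

    Returns (key, desired, actual) tuples for every mismatch.
    Purely illustrative; SI's real qualification mechanism is richer.
    """
    diffs = []
    for key in sorted(set(twin) | set(real)):
        desired, actual = twin.get(key), real.get(key)
        if desired != actual:
            diffs.append((key, desired, actual))
    return diffs

# The twin is "ahead" of the backend: the ingress port isn't applied yet.
twin = {"instance_type": "t3.micro", "ingress_port": 443}
real = {"instance_type": "t3.micro", "ingress_port": None}

for key, desired, actual in qualifications(twin, real):
    print(f"{key}: twin={desired!r} real={actual!r}")
```

Either side can drift, so the same comparison also surfaces changes made directly on the backend that the twin has not yet absorbed.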

The standard lifecycle of a digital twin in the industry is probably not quite the same as here. For instance, if you have a twin of a wind turbine, clearly the real object is considerably more expensive than the digital version, but the real-world environment can be reasonably modeled. If sensory equipment on the turbine reports something anomalous, this can be compared with the expected modeling at the twin. You can see how these facts are quite different from, say, an AWS security group not working. If Jeff Bezos’s operation does something “anomalous”, you won’t send out a helicopter full of engineers to save your asset.

There is an interesting trend with dev guys finally learning from game devs to take UI seriously, and what would have been a poorly written diagram app a few years ago now looks fairly slick. And with powerful laptops, managing powerful frontends is getting much less precarious.

There is a beta available, which I applied for. To check flexibility, I first wondered whether they supported, say, Digital Ocean, and there it was in the beta sign-up form.

The system is in closed beta, but everything I experienced there helps give an added hands-on sense of reality to the project as a whole.

The mapping of SI to Unity development is probably as productive as thinking about digital twins. In Unity, you maintain a hierarchy of game objects and components, behind which you write code in C# to control the behavior. (I believe SI is mainly written in Rust with JavaScript components.) Initially, I was reluctant to give up controlling everything within code, but then it dawned on me that this was a fairly hidebound way to manage all the various data-defined objects. Dragging an image into a frame was not some type of demonic sacrifice to an evil system; it was just convenient. Sometimes the Unity model has difficulty holding together lighting, sound, physics, animation and all the other multifarious objects needed in even a modest game. DevOps components, by contrast, have far fewer degrees of freedom, allowing for a much better understanding of user intention.

Inevitably, a project like SI can get “captured” by the makers of its most required components. Nobody would be surprised if AWS bought them up as they matured. But to escape that, SI might make a wide-spectrum effort to maintain an ecosystem of as many components as possible, becoming the Lego of the deployment world. As with Lego, the large set of interconnecting components defines the business. Either way, those cats need to be delivered.

TNS owner Insight Partners is an investor in: Docker.