The New Stack is two years old today. We launched on April 29, 2014, with several posts, including a story about Docker as a symbol of the new stack.
We have published more than 1,350 stories, produced close to 200 podcasts, and now serve pancakes (and breakfast tacos) at events. Today, we officially launched our third ebook about the Docker and container ecosystem. Life is good.
It’s that story about Docker’s symbolism that we return to, from time to time, to get our bearings. But now it is a bit different. It’s less about Docker, even though its impact and that of containers will be apparent for the next several years, and more about automation and the still-emerging thinking about its effects on:
- Application development and management at scale.
- Identity and how automation affects the roles people play in the new stack ecosystem.
- Constructs and how language about the new stack is evolving to understand the abstractions that automation provides.
- Open source ecosystems.
Topics related to automation will become an increasingly prominent theme in our coverage over the next year as we explain and analyze complex distributed platforms and their consequences for application development and management at scale.
For example, we came to the OpenStack Summit here in Austin with hopes of better understanding how technologists view “the new stack of networking and storage.” We were reminded of how changing roles and responsibilities reflect the way technologies are abstracted and complexity shifts. A network administrator may have an identity crisis when it becomes apparent that the systems administrator is managing the network, as Lee Calcote, a contributor to The New Stack, explained to us this week. With software abstractions, running networks is manageable for administrators and developers alike. Roles are changing.
But abstractions and the automations they enable only move the complexity elsewhere, which in turn demands more skills from an organization adopting the new stack. As Adrian Cockcroft has noted, squeezing a balloon just pushes the air somewhere else. With software abstractions in a container context, the complexity now surfaces in the orchestration platforms, where networking plays a vital role.
And here’s what we find noteworthy about this complexity. It’s only now, on our second birthday, that it makes sense for us to write about topics such as networking and storage. They have become more relevant as thinking about application development and management at scale matters to more people, and as the need for clarity about the impacts of automation affects us all. It speaks to how containers will have a lasting impact because they serve both developers and people in operations roles, as IBM’s Jason McGee pointed out to us this week at OpenStack. We’ve seen how containers change the way developers package applications. Now we are seeing new ways containers are changing the context for networking and storage — the plumbing, so to speak, of the new stack infrastructure. And that, in turn, merits explanation and analysis.
This all means there’s a need for deeper coverage of new approaches, with attention to how the constructs are changing. Constructs such as IP and Ethernet are well understood, but new distribution mechanisms mean there are now new constructs to consider, such as dynamic addressing, said JR Rivers, co-founder and CTO of Cumulus Networks. In the past, network providers defined the networking models. Now people everywhere are developing their own frameworks to connect systems. These frameworks run on servers, which makes for malleable, flexible ways to manage networks. They are all software-based, and each has its own approach and its own constructs to describe how the network functions. This change in constructs creates the need for a deeper understanding of the complex automations beneath them.
Out of all this comes a surge of interest in open source. There is simply no way to scale a service, and manage it accordingly, without it. It is from these open source communities that we expect to find much of our source material for measuring the impact of automation on the new stack ecosystem.
What This Means For Our Coverage
We’re a technology-first tech site, of course, with an emphasis on the architectural ideas governing the technology, the business value these technologies bring, and also the people driving the technologies forward.
Over time, we’ve narrowed down which technologies to cover — those that help organizations scale up their operations. This requires, we’re finding, scale-out capabilities (meaning adding capacity should be super-easy) and, most of all, automation. Anything with a human in the loop does not scale.
We’ve narrowed our focus to a select set of technology areas — areas we feel we cover better than pretty much any other site out there. “The New Stack,” if you will. Containers and container orchestration are our sweet spots and, of course, the areas we feel are the best way forward for businesses. We’re watching the serverless space closely as well. Scalability requires statelessness — at least at the computational level.
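The statelessness point can be sketched in a few lines. This is a toy illustration of ours, not anyone’s actual system: when a handler keeps no per-instance state and pulls everything it needs from the request and a shared store, any replica can answer any request, so scaling out is just adding copies. The names (`handle`, `session_store`) are purely illustrative.

```python
# A minimal sketch of why stateless compute scales out: the handler
# remembers nothing between calls, so every replica behaves identically.

def handle(request: dict, session_store: dict) -> str:
    # All context arrives with the request or comes from shared storage;
    # no state is kept inside the handler between calls.
    user = session_store.get(request["session_id"], "anonymous")
    return f"hello, {user}"

# Shared state lives outside the compute layer (a database or cache in
# a real system; a plain dict here, for illustration only).
store = {"s1": "alice"}

# Two "replicas" are just two calls to the same function; they agree
# because there is no hidden per-instance state.
replica_a = handle({"session_id": "s1"}, store)
replica_b = handle({"session_id": "s1"}, store)
assert replica_a == replica_b == "hello, alice"
```

A stateful version — say, one that cached the session in a module-level variable — would tie each user to one instance, which is exactly what makes adding capacity hard.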
We also keep a close eye on the infrastructure tools for scaling out operations. We keep tabs on the cloud providers and the infrastructure and platform services they offer. Of course, we are most interested in the tools that allow enterprises to bring these cloud operations in-house, or at least manage them in-house, most notably OpenStack and Cloud Foundry. Add to this the oft-overlooked science of API management. And, as stated, over the next year you’ll see a lot more from us on the emerging crop of scale-out networking (SDN, NFV) and storage technologies (parallel file systems).
And this need for scalability and agility is spilling into other areas of IT as well, some of which we are keeping an eye on. We’re not big fans of the “big data” buzzphrase (we wonder how much of what is done on Hadoop could actually be done on a beefy PostgreSQL server), but the fields of databases and business intelligence software are being disrupted by their own sets of scaled-out tools. New NoSQL databases and stream-processing engines seem to pop up weekly to address a range of new use cases that standard SQL is ill-suited for. And we’re following the work in artificial intelligence and machine learning that, again, relies on scaled-out infrastructure and, one day, we bet, will even be used to help manage scaled-out systems themselves, particularly around security and resource management.
And because everyone needs some downtime, we’ll turn our weekend coverage to offering inspiration and instruction for building your own DIY hacking projects, as well as giving you a peek at how others are pushing the limits of IT to extreme (and sometimes preposterous) new heights. When walking along the cutting edge, it helps to intuitively understand the fundamentals — which is to say, the hardware and the code that make it all happen.
Cloud Foundry, Docker and IBM are sponsors of The New Stack.