It’s difficult to imagine that every piece of software ever provided as a service over a network will eventually evolve to the scale of Twitter. Yet there is one emerging fact about Apache Aurora — the service-scheduling framework Twitter built to run its microservices architecture, and which it is contributing to the open source community — that is impossible to ignore: nothing about it suggests or mandates that it must be deployed at Twitter’s scale to be useful to someone else.
This is why Aurora has become an open source project, and why it may yet change the way every data center works, everywhere.
In Part 1 of our series on Twitter’s stewardship of the Apache Aurora project, The New Stack introduced you to Bill Farner. In prior years, as a Google developer, Farner observed the cluster management system Google was building internally, called Borg. In particular, he took note of its shortcomings, and applied those lessons to his later work as Aurora’s lead engineer at Twitter.
In the past decade, the “fail whale” became symbolic of the deficiencies of monolithic architectures as they attempt to scale to the size of… well, of Twitter. So in 2010, Farner led the effort to move Twitter into a completely new frame of existence: one where the tables are turned, and the server is subservient to the job.
At Internet scale, servers are more like ants. They don’t contain a job, but rather contribute to it. Jobs are far bigger than any one server or server cluster. When servers fail, the job compensates for their loss and the work goes on. So managing a process as huge as Twitter becomes an exercise in herding ants.
If you’ve ever compared anthills in North America with anthills in South America, you’ll have noticed that no two colonies are built alike. The same holds for data centers: no two facilities provide the same performance profile.
“A typical Aurora/Mesos cluster is localized within one physical facility, where the intra-datacenter latency is assumed relatively uniform,” Farner explains in an e-mail exchange with The New Stack. “This allows users of the platform to make homing decisions based on their upstream and downstream service dependencies. In other words, there is a ‘human scheduling’ decision that happens at the global level.”
So even though the machines in Twitter’s various facilities will not be carbon copies of one another, Farner goes on, it’s up to the owners of the services running within Twitter to select the clusters they’ll use and the capacity they’ll consume in each one. “It is the responsibility of capacity planners and product forecasting to ensure that we [Twitter] have sufficient capacity to not stall product development,” he says.
Still, manufacturers such as Intel can certainly change the performance profiles of processors. And competitors such as ARM can add new architectures to the mix of servers. Even though high-volume purchasers such as Facebook may provide guidelines for how much they’ll allow these profiles to vary, server hardware technology evolves much faster than the service lifetime of any single server. So some software, somewhere, must compensate for these performance differences, because even the concept of “plain vanilla,” with respect to servers, changes radically within a three-year timeframe.
What used to mitigate these variations from generation to generation was the operating system. If you wrote a piece of software that was tightly coupled to the OS, then when that OS migrated to a new generation of hardware, the software didn’t break.
Today’s microservices environments, including those running on Apache Mesos with scheduling and orchestration provided by Aurora, invert the entire relationship between software and hardware. A conventional operating system in this environment, such as Linux, may still keep each server unit functional. Another Linux instance may maintain the operating environment inside the container for applications and services. But an OS such as Linux or Windows Server is no longer a job control system. In this inverted world, no single server runs a domain.
“One thing that has been very important to me while building Aurora,” pronounces Twitter’s Farner, “is that we do not become tightly-coupled to external systems.
“Aurora currently assumes that all cores are equal, which is absolutely not the case,” he continues. “It’s helpful to minimize the number of hardware profiles, but in large clusters there will always be an old platform in use while a new platform is replacing it. The strategy to combat this is extremely resilient software that can handle individual hosts being slow (since that is a reality regardless of platform differences), and a preference towards horizontal scaling. While this is effectively leaning on the law of large numbers, it is important that we [Aurora] do not allow our developers to build software that is coupled to the hardware platform.”
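The “law of large numbers” strategy Farner describes can be made concrete with a toy calculation — this is not Aurora code, just arithmetic showing why horizontally scaled, replicated services absorb slow hosts. If any single host is slow at a given moment with probability p, a request that can be served by any of k independent replicas is only stuck when all k are slow at once:

```python
# Toy illustration (not from the Aurora codebase): horizontal scaling
# turns individually unreliable hosts into a reliable aggregate.

def all_replicas_slow(p: float, k: int) -> float:
    """Probability that every one of k independent replicas is slow,
    given each host is slow with probability p."""
    return p ** k

# Suppose 5% of hosts are slow at any moment:
p = 0.05
for k in (1, 2, 3):
    print(f"{k} replica(s): chance of a slow request = {all_replicas_slow(p, k):.6f}")
# 1 replica(s): chance of a slow request = 0.050000
# 2 replica(s): chance of a slow request = 0.002500
# 3 replica(s): chance of a slow request = 0.000125
```

One slow host in twenty is a reality “regardless of platform differences,” as Farner puts it; three replicas reduce the odds of a stalled request to roughly one in eight thousand, without any software ever knowing which hardware generation it landed on.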
It’s similar to the problem every technical writer faces when being published for a global audience: the message has to translate to all eyes and ears. Resilient software in a Mesos-driven environment, as Farner perceives it, treats servers and clusters as non-distinct units, and perceives individual tasks as stateless services.
The concept sounds more austere than it actually is. If you’ve developed for the Web, you know that an API call expects a discrete package of data to be exchanged in a prescribed, often uniform, manner. A client program never expects to have to care about the state of the server function handling the call — whether the last client it serviced changed it somehow, or whether it utilizes certain registers that may impact the quality or reliability of the output in some unforeseen way. The state of the server doesn’t matter to the client, and that’s what makes REST protocols work over the Web.
If state mattered in a microservices architecture then, as Farner tells us, those services would not be relocatable between machines or between operating systems. However, if developers designed their microservices architecture to be truly stateless, and utilized a service registration and discovery system (Twitter uses Apache ZooKeeper) to provide an abstraction layer between services and the differently configured systems they run on, then the process of migrating these services onto a Mesos/Aurora platform, he says, would be “a no-op.”
As you might imagine, it wasn’t such a no-op for Twitter right at first, as Farner recalls:
“In reality, many of the services we [Aurora] on-boarded early were following best practices, but they were also trying to minimize their hardware cost before moving to Aurora. This meant that they attempted to have each process (or instance) consume the entirety of CPU and RAM on their machines. In some unfortunate cases, this actually influenced the architecture of the software (which, again – is problematic as moving to a new platform would be difficult for them in any case). For a platform like Aurora, we strongly prefer that instances be sized at a fraction of a machine, as this allows Aurora to perform better bin-packing and achieve greater global hardware utilization.”
Twitter’s principal function in the world is exceedingly simple to explain: the dissemination of short messages to groups of people.
Every function Twitter performs is an extension of this principal job. Yet this simple job, at the scale Twitter has attained, is impossible with the monolithic architectures devised for computing in the 20th century. Mesos and Aurora are as different from the first distributed operating systems as the Saturn V rocket was from the Titan II. But this tremendously different way of working may not have been conceived had Twitter not encountered the fail whale one too many times.
Even then, Farner believes, the distinguishing factor for the Aurora project is not intrinsic to Twitter. “The biggest differentiating factor, of course,” he says, “is that our platform is open for anyone to take advantage of and contribute to.”
Featured image via Flickr Creative Commons.