InfluxData sponsored this post.
A popular plotline for science fiction films is the notion of an autonomous machine, perhaps a robot or supercomputer, developing a human-like level of intelligence and a sentient persona. Because this is a movie and audiences want an exciting plotline, the machine develops a renegade consciousness with the intent to defy its human creators. The storyline makes for a great summer blockbuster.
At the risk of sounding cliché, real life is beginning to imitate art.
The end goal of many human-designed systems is, in fact, autonomy; that is, the ability of the system to learn and act independently of its human creators. In today’s AI-enabled, software-driven world, that end goal is increasingly realistic. But fortunately for us humans — at least for the time being — these devices aren’t likely to become sentient and go rogue any time soon. Rather, they learn from our algorithmic coaching to operate completely independently.
Industries such as manufacturing, automotive, healthcare, information technology, and others are racing toward implementing these autonomous systems. Perhaps the best-known examples today are autonomous vehicles, or self-driving cars. They are quickly becoming a reality, with companies such as Tesla, Uber, Waymo and Apple striving to bring fully autonomous vehicles to the open road. Meanwhile, partially autonomous cars are all around us, with useful features such as automatic emergency braking and hands-off self-parking boosting safety and convenience.
While driverless cars sound really cool, autonomous systems are important in other industries as well. In the information technology space, the big trend is toward application containers and virtualization, where servers and microservices are autonomously spun up and down as needed to support varying workloads. In manufacturing, “Industry 4.0” is the new industrial revolution that utilizes automation and data exchange of cyber-physical systems to create a smart factory that includes machines making decisions for themselves based on production parameters. For example, a product on the production line that diverges from quality specifications (i.e., has defects) can be removed from the line without human intervention.
These are just some early examples of what will turn out to be the defining force in the next generation of economic growth, taking us beyond the current Information Economy. Let’s call it the “Automation Economy.” I think that term captures the inevitable movement from simple automation to sophisticated control systems, and eventually to self-learning, fully autonomous systems that are fueled by the rise of inexpensive sensors coupled with exponential gains in compute power. While the base technologies are available and will improve over time, it’s the expertise and skill in developing these systems that will differentiate companies and countries. Fundamentally, the process and the steps therein are almost always the same: instrument, observe, learn, automate, repeat.
The Steps to an Autonomous System
For these types of systems, instrumentation is the first step toward autonomy. This means putting sensors or gauges on all sorts of physical and virtual elements to capture point-in-time data. Measuring and understanding what is going on over time with thousands or millions of individual elements of these systems is vital. How fast is the car moving? How close is it to the vehicle in front? Is the road surface wet or dry? These and millions of other data points are critical inputs for a self-driving vehicle. What’s more, the data must keep flowing in as long as the machine is operating.
In the virtual or software world, instrumentation of the infrastructure, microservice or container is just as important. How much disk space is available? What is the queue depth? How much RAM is free? What is the response rate of the microservice? These and millions of other data points are critical inputs for creating a self-healing and self-scaling system.
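In code, this kind of software instrumentation can be as simple as periodically sampling a handful of readings and time-stamping each one. Below is a minimal Python sketch; the `collect_metrics` helper and the in-memory `queue` are illustrative inventions, and a real deployment would pull these values from a metrics agent or the runtime itself:

```python
import shutil
import time

def collect_metrics(queue):
    """Sample a few point-in-time metrics; each reading gets a timestamp.

    `queue` stands in for a real work queue. In production, values like
    disk space and queue depth come from the OS or a metrics agent.
    """
    usage = shutil.disk_usage("/")
    return {
        "timestamp": time.time(),       # when the measurement was taken
        "disk_free_bytes": usage.free,  # how much disk space is available
        "queue_depth": len(queue),      # how many items are waiting
    }

sample = collect_metrics(queue=["job-1", "job-2"])
print(sample["queue_depth"])  # 2
```

A real system would run this in a loop, shipping each sample to a time-series store so the data keeps flowing as long as the service is operating.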
The next step is to observe. Observing systems in real time is critical, because complex systems by nature behave and fail in unpredictable ways. If we have instrumented well and are collecting the right kinds of data, then making sense of that data gives us the foundation for learning. Do we care if the car in front is blue? What does it mean if the tire pressure is low? Does queue depth really matter? Do things change based on time of day, week or month? Observation is key to determining what to do with the data: we watch the metrics coming from the instruments, observe their impact, and then learn from those observations so we can take action on them.
In the learning process, we want to teach the system what the data means. We observe the trends and provide learned (or expected) behavior from these observations, but after a while we want the system to learn for itself. Self-learning is a key factor in autonomy, and thus machine learning is an essential technology within the system. For example, we want the vehicle to learn for itself how much pressure to apply to the brakes if the road surface is slick and the tires are under-inflated. Humans can’t be involved in this type of split-second decision.
Autonomy or automation is a journey. A system that can scale up based on incoming request volume is one level of autonomy, but we always want more. We want the system to be self-healing and self-learning. In the physical world, fully self-driving cars are still a goal, but anti-skid technology is one form of autonomy that is already quite mature.
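That first level of autonomy can be illustrated with a toy scaling rule driven by queue depth. The per-replica capacity and the replica bounds below are invented for the example; a real autoscaler would tune them from observed load:

```python
def desired_replicas(queue_depth, per_replica_capacity=10,
                     min_replicas=1, max_replicas=20):
    """Toy scaling rule: one replica per `per_replica_capacity` queued
    items, clamped to [min_replicas, max_replicas]. All thresholds are
    illustrative, not a recommendation."""
    needed = -(-queue_depth // per_replica_capacity)  # ceiling division
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(35))    # 4  (ceil of 35 / 10)
print(desired_replicas(1000))  # 20 (capped at max_replicas)
```

Plugging a rule like this into a control loop that reads queue depth every few seconds gives a crude but genuinely autonomous behavior; self-healing and self-learning build on the same pattern with richer inputs.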
Of course, all this data needs to be ingested quickly so that it can be analyzed, and actions can be taken while they still matter. A truly autonomous machine — especially one making life or death decisions such as whether to brake or hit a concrete wall — has to learn quickly about its current conditions, decide what action to take, and then take it.
The Platform of Record in the Drive to Autonomy
As you can imagine, there are sophisticated technology requirements underlying the move to autonomy. Purpose-built platforms are designed to handle the very specific needs of an autonomous system, needs that can’t be properly addressed with general-purpose tools.
In designing these systems, one must first consider what data is important to store and how frequently to collect it. For instance, if you need the autonomous system to autoscale on queue depth, you need to be storing that measurement, along with many others, for each discrete time element. The frequency of recording, which we call precision, is also very important: we support nanosecond-level precision, which is used primarily in network monitoring and some algorithmic trading applications. If you want to automate against a 99.999 percent uptime target, your precision has to be at least at the second level, because you can only afford about five minutes of downtime a year. Naturally, you also need to cluster this functionality for redundancy to support high availability.
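The five-nines arithmetic is easy to check. A quick back-of-envelope in Python (ignoring leap years):

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def downtime_budget_seconds(availability):
    """Seconds of downtime per year allowed at a given availability target."""
    return SECONDS_PER_YEAR * (1 - availability)

# 99.999 percent uptime leaves roughly 5.26 minutes per year.
print(round(downtime_budget_seconds(0.99999) / 60, 2))
```

With only a few minutes of slack in the whole year, a platform sampling at minute granularity could burn most of the budget before noticing anything is wrong, which is why second-level (or finer) precision matters.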
With every aspect of an autonomous system instrumented with sensors and gauges, the technology platform must be able to handle incredible volumes of data generated at very fast rates. This requires an infrastructure that can immediately ingest and store the data. What’s more, the platform must be capable of compressing the data so it doesn’t quickly consume all the storage capacity.
Time is a fundamental constituent of any platform built to enable autonomy. Every data point has a time stamp so the system can understand precisely when something was measured or when an event occurred. Applications need to support time-based functions, such as calculating the rolling average of the data, or comparing how a data point differs now compared to the same measurement taken in a different time period.
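As a sketch of one such time-based function, the rolling average below keeps only the last few readings; a time-series database would typically express this as a query rather than application code, and the window size here is arbitrary:

```python
from collections import deque

class RollingAverage:
    """Rolling mean over the last `window` readings, a common
    time-based aggregation over time-stamped data."""

    def __init__(self, window):
        # deque with maxlen automatically drops the oldest reading
        self.readings = deque(maxlen=window)

    def add(self, value):
        self.readings.append(value)
        return sum(self.readings) / len(self.readings)

avg = RollingAverage(window=3)
for value in [10, 20, 30, 40]:
    current = avg.add(value)
print(current)  # 30.0, the mean of the last three readings (20, 30, 40)
```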
The system must be able to down-sample the data; that is, get rid of some (but not all) of the data points after a period of time. It might be important to look at the freshest data at a very granular level now, but over time, the value of so much data diminishes. The less acute data is fine for observing trends over time, and removing unnecessary data points saves on storage space and cost.
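Down-sampling can be sketched as bucketing time-stamped points and keeping one averaged point per bucket. The bucket size below is arbitrary, and a real platform would run this as a background retention task:

```python
def downsample(points, bucket_seconds):
    """Collapse (timestamp, value) points into one averaged point per
    time bucket, preserving the trend while shedding detail."""
    buckets = {}
    for ts, value in points:
        bucket_start = ts // bucket_seconds * bucket_seconds
        buckets.setdefault(bucket_start, []).append(value)
    return [(start, sum(vals) / len(vals))
            for start, vals in sorted(buckets.items())]

raw = [(0, 1.0), (10, 3.0), (60, 5.0), (70, 7.0)]
print(downsample(raw, bucket_seconds=60))  # [(0, 2.0), (60, 6.0)]
```

Four raw points become two, yet the minute-by-minute trend survives, which is exactly the trade the text describes: less acute data, lower storage cost.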
The platform must be able to interpret and analyze data in real-time in order to take action while it’s still meaningful. Waiting ten minutes to apply a self-driving car’s brakes to avoid a collision is not viable; action must be taken immediately, as soon as the data indicates a collision could be imminent.
And finally, the platform must be designed for control type functions. You want to use your critical time-stamped data in order to do things, such as apply the vehicle brakes to avoid a collision. Having visibility into a situation is only useful if you are able to control what happens next.
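A control function of this kind boils down to turning fresh measurements into an action. The sketch below is purely illustrative: the time-to-collision threshold and the `brake` callback are invented for the example, not a real automotive specification:

```python
def control_step(distance_m, closing_speed_mps, brake, reaction_margin_s=1.5):
    """If the time to collision drops below the margin, actuate the brake.
    Returns True when the brake was applied."""
    if closing_speed_mps <= 0:
        return False  # not closing on the obstacle; nothing to do
    time_to_collision = distance_m / closing_speed_mps
    if time_to_collision < reaction_margin_s:
        brake()       # visibility turned into control
        return True
    return False

actions = []
control_step(distance_m=10, closing_speed_mps=20,
             brake=lambda: actions.append("brake"))
print(actions)  # ['brake']  (0.5 s to collision is under the 1.5 s margin)
```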
We are on the cusp of a new economic era: the Automation Economy. Mechanized systems across a wide span of industries are quickly evolving toward autonomy through the processes of self-learning, self-healing and self-acting. One must also consider how large a time window of data to keep available. In auto-scaling systems, for example, the load on Black Friday might be higher than on any other day, so you need the historical data from a year ago to make precise decisions.
The fundamental information technologies and the human knowledge and expertise are converging to lead the development of autonomous machines intended to make our lives better and to do things that people simply cannot do; for example, explore the planets around us, or go to the depths of this planet’s oceans. Renegade robots aside, the drive to autonomy looks very promising indeed.