When Microsoft started building the system that became the open-source .NET library Trill, processing streams of data and messages to detect and handle a trillion events a day wasn’t a common requirement. Whether it’s IoT, telemetry, web advertising, financial markets or machine learning, dealing with massive amounts of data is now very common. Trill gives you a temporal query language that can deliver real-time analytics by looking for complex patterns in streaming data, at enormous scale.
In fact, the trillion events a day it was named for might be conservative for the latest version, with some operations taking place at memory bandwidth speeds of billions of events per second.
But Trill can also look at historical data (and it processes those offline relational queries with the throughput you’d get from a columnar DBMS like SQL Server), making it a single tool that scales across the full set of analytics needs. Developers might want to use logs to develop the queries they’re going to run on real-time data streams, or compare current and historical data; Trill lets you do that with the same engine and data model (and performance).
Trill is widely used inside Microsoft, from Exchange and Azure Networking to analyzing telemetry in Windows. It’s what powers the Azure Stream Analytics service (which took only ten months to build thanks to the ability to compile SQL queries to Trill expressions). It’s used for telemetry, game monitoring and debugging for Halo (hosted in Orleans for scale). When Bing adopted Trill in 2015, analyzing petabytes of data went from taking 24 hours to near real-time. Turning clicks and impressions into reports for customers about Bing Ads campaign performance and ad keyword bids was taking 30 minutes to an hour; Trill dropped that to only a few minutes.
It’s already getting adopted externally by some software developers; financial machine learning platform Financial Fabric uses Trill for real-time portfolio and risk analytics on investment data. But what’s so different about it?
Microservices and Abstractions
- Componentized so you can use whatever distribution fabric you want to scale, like Orleans or Kubernetes.
- Architecturally simple by splitting a well-known query language from the commands to get the chunk of data you want, which can be based on simple time windows or complex windows based on some property of the data.
- Crazy fast thanks to extremely efficient coding, making the data into columns under the covers.
- Looking at the shape of the data to find the most efficient way of working with it.
- Compiling the queries you write so they're as efficient as the built-in methods. --M.B.
The reason Trill is both so effective and so easy to work with comes down to the simplicity of the API and the fact that it’s a component rather than an entire analytics system. Microsoft already had a system called StreamInsight, a monolithic application that had to be installed on a server. Instead of lifting and shifting that to Azure, the Trill team refactored it to create a one-pass, in-memory streaming analytics engine as a single-node .NET library that developers can integrate into different systems (using it as part of a machine learning pipeline instead of an analytics system, say) and use by calling the IStreamable abstraction, Trill’s equivalent of IObservable, the standard .NET interface for pushing data.
If you need to scale it out beyond a single node, Trill can be used with a distribution fabric like Orleans or YARN; it also has a checkpoint/restore feature for high availability. There’s an experimental distributed Trill implementation in development using Microsoft’s Quill platform (quadrillion tuples per day) and the Common Runtime for Applications, a library to simplify creating distributed applications with Kubernetes and YARN.
Many streaming systems are monoliths that give you a full end-to-end experience rather than components that can be reused and redistributed, so this makes Trill more flexible for developers to build into their solutions, explains Microsoft’s James Terwilliger, who worked on Trill and helped open source it.
“Want to run Trill on Azure? Cool. Want to run it on an on-prem server? Awesome. Want to run it on your edge device running Linux? No problem. As it turns out, the fabric piece itself makes for a pretty good component on its own, on Orleans, or Jonathan Goldstein’s new project Ambrosia,” Terwilliger said, referring to the microservice fabric built using CRA by one of the original developers of Trill. “The most that Trill will do thread scheduling by itself is on a single node when given a thread scheduler — in that case, Trill will distribute some of the load of aggregation or join operations across threads. Beyond that, Trill expects the fabric to do the work of scaling out.”
To make queries simpler to write, Trill separates the query logic from the management of the temporal data you’re analyzing. Trill-LINQ uses standard C# LINQ queries, including lambda expressions, and operates over any arbitrary .NET type, nested or un-nested, while separate operations handle grouping and aggregating data from the window of time you care about. Lifetime operations set which events the query operations run over, and Trill then runs the query over all the data in that snapshot.
Every event in your data has an associated lifetime, represented either as a single interval or as separate start and end events. If data arrives out of order, it has to be put into temporal order as it’s ingested into Trill; that means the operators can be simpler and more efficient because they don’t have to spend time checking the order.
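That snapshot model is easy to picture in a few lines of Python. This is a conceptual sketch of the semantics only, not Trill's API; the Event class and snapshot function are invented names:

```python
from dataclasses import dataclass

@dataclass
class Event:
    start: int      # lifetime start (inclusive)
    end: int        # lifetime end (exclusive)
    payload: float

def snapshot(events, t):
    """All payloads whose lifetime contains time t, i.e. the set of
    data a temporal query would see at that instant."""
    return [e.payload for e in events if e.start <= t < e.end]

events = [Event(0, 10, 1.0), Event(5, 15, 2.0), Event(12, 20, 3.0)]
snapshot(events, 7)    # the first two events are both alive at t=7
```

Because each operator only ever has to answer "what is alive right now," keeping events sorted by time at ingestion makes every downstream operator a simple forward scan.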
If you want a standard tumbling window (taking, say, the last hour of data every hour), there’s a built-in operation to do that, but it works by calling a general AlterEventLifetime method that changes the lifespan of each event. Similarly, if you want a hopping window (where the snapshots of data overlap, like looking at the average number of sales over the last ten minutes, every five minutes), or a sliding window of data, there are built-in operations for both, but again they use that same underlying AlterEventLifetime operator to change the lifespan.
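The idea that a window is nothing more than a rewrite of event lifetimes can be sketched in Python. Here tumbling_window and window_sum are illustrative stand-ins for the behavior described, not Trill's AlterEventLifetime itself:

```python
# Events as (lifetime_start, lifetime_end, payload) tuples.
def tumbling_window(events, size):
    """Snap each event's lifetime to the tumbling window holding its start,
    so any snapshot inside that window sees every event from the window."""
    out = []
    for start, _end, payload in events:
        w = (start // size) * size          # window boundary at or before start
        out.append((w, w + size, payload))
    return out

def window_sum(events, t):
    """Sum of payloads alive at time t (a snapshot, in Trill's model)."""
    return sum(p for s, e, p in events if s <= t < e)

stream = [(1, 2, 10), (3, 4, 20), (61, 62, 5)]   # three point events
hourly = tumbling_window(stream, 60)
window_sum(hourly, 30)    # events from minutes 0-59 contribute: 10 + 20
```

A hopping or sliding window is the same trick with a different lifetime rewrite rule, which is why one general operator covers all of them.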
And you can use AlterEventLifetime or the built-in macro ProgressiveQuantizeLifetime to create your own, more complex windows based on data rather than time, like windows where you recalculate the sum at the boundary of every tumbling window. You can also clip event lifetimes; for example, collecting ad impressions and setting their lifetime to be either ten minutes or the time of the first click after the ad is seen, whichever is shorter. That gives you a more granular set of results than most streaming systems could deliver, Terwilliger explains.
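That ad-impression example might look like the following sketch, where clip_to_first_click is an invented helper that mimics the clipping behavior described, not Trill's clipping operator:

```python
def clip_to_first_click(impressions, clicks, max_life):
    """Give each (time, ad) impression a lifetime ending at the first
    later click on the same ad, or after max_life, whichever is sooner."""
    out = []
    for t, ad in impressions:
        later_clicks = [ct for ct, cad in clicks if cad == ad and ct > t]
        end = min([t + max_life] + later_clicks)   # clip at first click
        out.append((t, end, ad))
    return out

impressions = [(0, "ad1"), (5, "ad2")]
clicks = [(4, "ad1")]
clip_to_first_click(impressions, clicks, 10)
# ad1's lifetime is cut short by the click at t=4; ad2 runs the full 10
```

Once lifetimes are clipped this way, an ordinary windowed count over the result answers questions like "impressions that led to a click within ten minutes" with no special-case operator.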
That makes Trill extremely flexible and extensible. “All of the usual relational operations — join, aggregation, filtration, projection — are supported, plus some other more complicated operations such as advanced pattern matching.” You can group data, apply queries to each group and select a value with the GroupApply operation (essentially doing Map-Reduce).
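The map-reduce shape of that group-and-apply pattern can be approximated in plain Python; group_apply here is a conceptual stand-in, not the Trill operator:

```python
from collections import defaultdict

def group_apply(events, key_fn, query_fn):
    """Partition a stream by key, run the same sub-query on each
    partition, and collect the results per key."""
    groups = defaultdict(list)
    for e in events:
        groups[key_fn(e)].append(e)        # "map": shard by key
    return {k: query_fn(v) for k, v in groups.items()}   # "apply"

sales = [("EU", 100), ("US", 250), ("EU", 40)]
group_apply(sales, key_fn=lambda e: e[0],
            query_fn=lambda g: sum(v for _, v in g))
# one total per region
```

Because the sub-query is an ordinary query, each shard can run independently, which is exactly what makes the pattern easy to scale out across cores or nodes.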
Much like in SQL Server, user-defined aggregations are made up from the same primitives used by the built-in aggregates: initializers that create objects, accumulators that add events to the object, de-accumulators that remove them, differentiators that compute the difference between objects and result constructors that convert the object to fit the expected output. “There are some clever ways to use this framework to do things like implement rules engines and pattern matching, though we also have a separate feature that does advanced pattern matching as well in a more regular-expression-like way.”
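Those five primitives can be sketched for a running mean. The method names below follow the article's description, not Trill's actual aggregate API:

```python
class MeanAggregate:
    """An incremental mean built from the five aggregate primitives:
    initializer, accumulator, de-accumulator, differentiator, result."""
    def initialize(self):                    # create a fresh state object
        return (0, 0.0)                      # (count, total)
    def accumulate(self, state, value):      # event enters the window
        return (state[0] + 1, state[1] + value)
    def deaccumulate(self, state, value):    # event leaves the window
        return (state[0] - 1, state[1] - value)
    def difference(self, left, right):       # subtract one state from another
        return (left[0] - right[0], left[1] - right[1])
    def compute_result(self, state):         # convert state to output
        return state[1] / state[0] if state[0] else 0.0

agg = MeanAggregate()
s = agg.initialize()
for v in (10.0, 20.0, 30.0):
    s = agg.accumulate(s, v)
s = agg.deaccumulate(s, 10.0)    # 10.0 slid out of the window
agg.compute_result(s)            # mean of the remaining 20.0 and 30.0
```

The de-accumulator is what makes sliding windows cheap: as the window moves, the engine removes expired events from the state instead of recomputing the aggregate from scratch.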
Trill can also be used as a “progressive” query processor, giving early results from partial data.
You don’t always need to see all the results from a query to get an answer. That might be because the partial results tell you that the query isn’t correctly written, so you can save time and resources by stopping it after a few results. Sometimes you don’t need to process all the data to get the answer: if you’re querying for the maximum temperature across all sensors when what you actually want to know is whether any outliers are reporting temperatures above 35 degrees, the first result that shows a higher value is enough to answer the question.
“Sometimes the question we are trying to answer isn’t exactly the same thing as the query we send to the database, and a partial answer will give us all the information we need,” Terwilliger points out. “This is a feature that would be useful in any environment, temporal or not, relational or not.” More interactive ways of delivering partial results can take more memory and reduce throughput, so Trill again manipulates data lifespans. “The way that you can get progressive results using Trill is by being clever with timeline management — after all, ‘time’ to a computer is nothing but a monotonically-increasing value.”
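One way to picture that, with arrival order standing in for the timeline (progressive is an invented helper for illustration, not a Trill feature):

```python
def progressive(stream, query, every):
    """Emit a refined answer after every `every` events, treating
    arrival order itself as the monotonically increasing 'time'."""
    seen = []
    for i, event in enumerate(stream, 1):
        seen.append(event)
        if i % every == 0:
            yield query(seen)    # partial answer over the data so far

readings = [3, 9, 1, 40, 2, 7]
list(progressive(readings, max, every=2))   # running maximum: 9, 40, 40
```

A consumer watching these partial answers can stop as soon as one of them settles the real question, such as the first maximum above a threshold, without waiting for the full data set.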
Columns and Compilation
Part of the way Trill gets its impressive performance is by letting you pick the latency you need — how long you’re willing to wait to get the best results from your data — and batching up the stream of data to deliver that latency. Larger batches give you better throughput, but smaller batches give you lower latency; the default size is 80,000 rows per batch.
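The latency/throughput trade-off comes down to a simple batching loop. This sketch uses batch size alone; a real engine like Trill would also flush on a timer so a slow stream still meets its latency target:

```python
def batches(stream, batch_size):
    """Group an incoming stream into fixed-size batches. Bigger batches
    amortize per-batch overhead (throughput); smaller ones ship sooner
    (latency)."""
    buf = []
    for event in stream:
        buf.append(event)
        if len(buf) == batch_size:
            yield buf
            buf = []
    if buf:                     # flush the final partial batch
        yield buf

list(batches(range(10), 4))     # two full batches plus a partial one
```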
Trill takes a single pass over the data it’s handed, partly to avoid the overhead of repeatedly loading and unloading large amounts of data, but also to avoid depending on the performance and availability of the endpoints the data streams from. “Internal to Trill, we just expect the data to be ephemeral. If something like indexing can be utilized to speed up query processing, that can be done, just it’s done outside of and oblivious to Trill,” Terwilliger said.
The batches of data are also converted into a columnar format for everything except complex types.
Even though data is transformed into columns, users write queries as if the data was in rows and those queries are rewritten into tight loops with minimal memory access and no method calls in the loop to make them more efficient. If you’ve written your own aggregation methods they will also be rewritten to be more efficient on columns. “Trill tries to be as efficient as possible to only keep as much state around as is needed,” Terwilliger said.
Co-locating the data for a specific field or property can speed up a query by keeping the current operation in the instruction cache, and some expressions, like selecting a field, are more efficient with columnar data because you can do the selection by just picking the column that holds that field and putting it in the output batch. Trill also looks at the shape of the data in order to pick the most efficient operators for the rewritten queries. That could be typical database optimization, like whether there’s a property that serves as a key within each timestamp, or it could be more specific to timestamped data, like whether the lifespans of the data all have the same length or fall on specific time boundaries.
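The field-selection shortcut is easy to demonstrate; to_columns and select_field are illustrative names, not Trill internals:

```python
def to_columns(rows):
    """Pivot a batch of row records into one list per field."""
    return {field: [r[field] for r in rows] for field in rows[0]}

def select_field(col_batch, field):
    """In columnar form, projecting a field is just handing over the
    one array that holds it; no per-row work happens at all."""
    return col_batch[field]

rows = [{"ts": 1, "temp": 21.5}, {"ts": 2, "temp": 22.0}]
cols = to_columns(rows)
select_field(cols, "temp")    # the whole 'temp' column in one step
```

In row form the same projection touches every record; in column form it is a constant-time handoff of an existing array, which is one reason column batches pay off even before code generation.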
All of those small benefits add up, and there are more optimizations planned, like using SIMD to run a single computation over a whole vector of data at once.
The Future of Trill
Other work in progress includes managing operator state with Microsoft’s recently open-sourced FASTER key-value store, which handles large amounts of state efficiently, and new ways of dealing with out-of-order data that let users specify multiple levels of latency. Currently, Trill deals with out-of-order data either by stamping late-arriving data with the time it arrives, by throwing it away or by throwing an exception. The plan is to use Impatience sort, a sorting technique for streaming data sets that are almost but not quite in order — a network delay or a hardware failure may result in a few out-of-order results that can be rearranged. TrillDSP will add the kind of signal processing usually done in numerical frameworks like R, but that code is still at the prototype stage.
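A simple reordering buffer shows the general idea of repairing slightly out-of-order streams, though this is not Impatience sort itself; reorder and max_delay are invented names for the sketch:

```python
import heapq

def reorder(stream, max_delay):
    """Emit (time, payload) events in timestamp order, assuming no event
    arrives more than max_delay behind the newest timestamp seen."""
    heap, newest = [], None
    for t, payload in stream:
        newest = t if newest is None else max(newest, t)
        heapq.heappush(heap, (t, payload))
        # Anything older than newest - max_delay can no longer be
        # overtaken, so it is safe to release in order.
        while heap and heap[0][0] <= newest - max_delay:
            yield heapq.heappop(heap)
    while heap:                  # input exhausted: drain the buffer
        yield heapq.heappop(heap)

list(reorder([(1, "a"), (3, "b"), (2, "c"), (5, "d")], max_delay=2))
```

The max_delay bound is the trade-off: a larger bound tolerates later arrivals but holds events back longer before releasing them downstream.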
Currently, Trill is a .NET library, but there’s no reason it couldn’t work with other languages, Terwilliger says. “F# is something that is definitely on our radar. Its syntax lends itself well to streaming data. We do have a colleague looking into making an API wrapper that makes the F# usage much more idiomatic. That said, there is very little about the architecture that is specific to .NET. There are a few features, such as LINQ expressions, unsafe code, and the Roslyn compiler, that make implementing Trill actually fun. But looking into implementations in Python or Scala may become higher priority depending on community interest. We know that Scala has some of the expression management and manipulation features that would be needed for good performance.”
Now that it’s an open-source project, the direction will depend on developer adoption and community interest, but Trill has already had a major impact. “Many of the key insights from the original research publication and tech report have been subsequently added to other systems, such as code-generated processing in Spark. Going forward though, I think there are a few places where Trill fills a gap in the existing competitive landscape, with opportunities to influence. The clean separation between data operators and timeline management makes for a very simple API once users get the hang of the temporal model. And the regular expression and signal processing features (the former already in the product, the latter in alpha state) have real potential to solve big problems. In the short to medium term, I think you will see some other open-source projects within Microsoft, maybe a few outside Microsoft, take a dependency on Trill now that they can.”
Feature image via Pixabay.
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: MADE, Real.