Cloud Native / Machine Learning / Monitoring / Sponsored

Ruxit’s AI Engine in a Dynatrace Culture Used to Living by the Rules

16 Oct 2015 7:35am

Dynatrace CEO John Van Siclen stood before an assembled group of 900 people Wednesday morning at the keynote address of his company’s Perform 2015 conference in Orlando, Florida. He signaled to the group that the company is making a transition to a future that puts weighted importance on a new platform called Ruxit and on a term we hear a lot about lately: “cloud native.”

“Now, I talk to a lot of C-level executives … and they all have a challenge on their hands: sort of a bifurcation of their application and infrastructure landscapes,” Van Siclen explained. “Of course, they’re spending most of their time on their core applications of record and engagement — the applications that many of you help them manage, develop, architect. But there’s also another world that’s being driven top-down in most of these organizations, where they’re being told, ‘In the next three to five years, you need to have 80 percent of your new application development over on the right,’” he said, pointing to a slide behind him depicting cloud-based applications as the smaller of two orbs.


The reference to cloud-native architectures speaks to a shift affecting the many companies that built their businesses over the past 20 years by offering on-premises technology. The Dynatrace business is application performance monitoring (APM): specifically, the use of agents injected into web applications that report key events (e.g., page load intervals, time to fetch resources from multiple URLs, rendering errors) generated by web browsers (or rendering engines) and the JavaScript frameworks that may accompany them. The original Dynatrace JavaScript Agent is an injectable JS module that reports browser-generated events back to a server.
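The general shape of such an injectable agent can be sketched in a few lines. This is an illustrative sketch only, not Dynatrace’s actual agent: the class name, the `record`/`flush` methods, and the pluggable transport are all hypothetical, standing in for whatever batching and beacon mechanism a real agent uses.

```javascript
// Hypothetical sketch of an event-reporting agent. In a real browser
// deployment, the transport would POST batches to a collector endpoint.
class MonitoringAgent {
  constructor(send) {
    this.send = send;   // transport function: (events) => void
    this.buffer = [];
  }
  record(type, detail) {
    // Each event carries a timestamp so the backend can build timelines.
    this.buffer.push({ type, detail, ts: Date.now() });
  }
  flush() {
    if (this.buffer.length === 0) return;
    this.send(this.buffer.splice(0)); // drain the buffer as one batch
  }
}

// In a browser, the agent would hook real events, e.g.:
//   window.addEventListener('load', () => agent.record('pageLoad', { url: location.href }));
```

Batching and a pluggable transport matter here: an agent that fired a network request per event would itself distort the page-load timings it is trying to measure.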

This stands in contrast to Ruxit, a completely new system from Dynatrace, based on a single agent that would replace the current Dynatrace system. Ruxit employs sophisticated network auto-discovery that applies semantics to the data the agent monitors, producing what the company calls “quality” data. That analysis is designed to provide a monitoring environment extending from the user through the application stack to the underlying infrastructure.

Monitoring in the Web World

What first made web applications viable in the enterprise and in e-commerce was the depth to which they could be monitored. This capability was not native to web architectures, and for a great many years it was not and could not be standardized. JavaScript code has never had a precise order of execution for instructions; instead, developers have relied upon the events that browsers generate as a way of timing the execution of functions in response. For most of the Web’s existence, whether the events recognized by web browsers resembled one another to any degree depended upon the good will of their manufacturers … one of which was Microsoft.

No distributed application platform has ever really been good at natively reporting its own performance. HTML5 has sought to give browsers at least a standard mechanism for recording the most ordinary metrics — for example, with the User Timing API. But the harvesting of even these standard signals from browsers has required a fairly sophisticated backend platform, capable of making sense of those signals in real time. Thus far, Dynatrace’s platform has consisted of a JavaScript agent attached to web pages and a backend console that receives the signals sent by thousands of these agents at any one time, processing those signals to reveal detailed timelines.
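The User Timing API mentioned above is simple to demonstrate: code drops named marks and then measures the interval between them, and any monitoring agent can later harvest the resulting entries. The mark names below are arbitrary examples; the API itself (`performance.mark`, `performance.measure`, `performance.getEntriesByName`) is the standard one, available in HTML5-era browsers and also in Node.js.

```javascript
// Drop a mark before and after the work being timed.
performance.mark('fetch-start');
// ... the work being timed would run here ...
performance.mark('fetch-end');

// Create a named measure spanning the two marks.
performance.measure('fetch', 'fetch-start', 'fetch-end');

// An agent can later read the entry back from the performance timeline.
const [entry] = performance.getEntriesByName('fetch', 'measure');
console.log(`${entry.name} took ${entry.duration}ms`);
```

This is exactly the kind of “standard signal” the article refers to: the browser records it uniformly, but turning thousands of such entries into a coherent timeline still requires a backend.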

In more recent years, JS frameworks — especially jQuery — added richer events to the mix. But even these frameworks were discretionary add-ons. Only in the recent era of HTML5 has there ever been a set of generally common events that was both reliable and rich.

What the Dynatrace Agent does (and what the various Dynatrace plug-ins for hosted apps do) is echo these events to a server that acts as a listener. Under the wing of its former parent Compuware, Dynatrace built a kind of visualization system that has come to resemble an analytics platform.

Its transaction monitor is indeed quite sophisticated. With it, an IT/DevOps pro creates a series of rules that aggregate the event signals the monitor processes in real time. Those rules, based on any number of correlated conditions, fire off their own events representing metrics that may pertain more directly to the business, such as completed transactions or positive customer feedback. The process of creating these rules does not resemble scripting at all. In fact, anyone familiar with the Microsoft System Center style of managing servers, or with using Outlook to manage multiple email accounts, will recognize the “wizard” process of assembling conditions and responses from drop-down lists.
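The essence of such wizard-assembled rules is that each rule is data — a condition plus a business metric it feeds — rather than a script. The sketch below is a loose illustration of that idea, not Dynatrace’s rule engine; the rule names, event shapes, and the `aggregate` helper are all invented for the example.

```javascript
// Each rule pairs a business-metric name with a condition over raw events,
// mirroring what a wizard's drop-down lists would assemble.
const rules = [
  {
    name: 'completedTransactions',
    when: (e) => e.type === 'checkout' && e.status === 'success',
  },
  {
    name: 'positiveFeedback',
    when: (e) => e.type === 'survey' && e.score >= 4,
  },
];

// Aggregate a batch of event signals into business-level counts.
function aggregate(events) {
  const metrics = {};
  for (const rule of rules) {
    metrics[rule.name] = events.filter(rule.when).length;
  }
  return metrics;
}
```

Because the conditions are plain data, a UI can build them from pick-lists and the monitor can evaluate them against the live event stream without anyone writing code.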

Ruxit and its AI Engine

Rather than rely upon IT/DevOps to write sophisticated rules for how to traverse corporate networks and hybrid cloud platforms, Ruxit uses auto-discovery methods that the company describes in AI terms. Once deployed within nodes strategically placed in corporate networks (a strategy alluded to thus far, but not yet explained in detail), the Ruxit agent detects not only the topology of the network and its many dependencies, but also the overlaying topology of software components in the distributed application stack. This is said to include Docker containers and the kinds of components that run in them, such as database managers and web servers.
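At its simplest, topology auto-discovery means turning the connections an agent observes into a dependency graph, with no hand-written rules. The following is a bare illustration of that idea under assumed inputs — the connection records and component names are hypothetical, and a real system would layer semantics (process type, container, service) on top of the raw graph.

```javascript
// Build a dependency map from connections an agent has observed,
// e.g. process-to-process network calls. Purely illustrative.
function buildTopology(connections) {
  const topo = new Map(); // component -> Set of components it depends on
  for (const { from, to } of connections) {
    if (!topo.has(from)) topo.set(from, new Set());
    topo.get(from).add(to);
  }
  return topo;
}
```

The point of the sketch is the direction of the workflow: the operator deploys the agent and the graph emerges from observation, rather than the operator declaring the graph up front.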

The Ruxit environment bears essentially zero resemblance to Dynatrace as we know it today. Some of the platform’s advocates here, including Dynatrace’s early adopter partners, speak of it as a kind of “fire-and-forget” system, with one fellow suggesting it could eliminate many of the jobs that the attendees listening to him are being paid to do.

Taking the stage Thursday morning, Dynatrace/Ruxit CTO Bernd Greifeneder repeated several times to attendees that the move to Ruxit was not a migration. Don’t think of it as a migration, Greifeneder told them, pointing to a slide where the word “migration” was crossed out, but as a gradual unification.

Dynatrace wants to make a play to be part of the community that has been defining what cloud native really means. That ambition speaks to the presence at the conference of executives from NGINX, NodeSource and Splunk, all companies representative of a new stack reality.

Dynatrace has the potential to fit with this community. The entire technology stack is changing; more than once we heard people here talk about the “software-defined” nature of this new stack. The emerging players in this new world would be well suited to partnering with Dynatrace because of its deep customer base and its new AI platform for monitoring a full technology stack. But that potential can be realized only if Dynatrace can adapt to the culture of open ecosystems that defines the companies of these new times. It will also mean clearly articulating and demonstrating what the company means by a “unified” platform. For it is those longtime customers that represent the foundation of the Dynatrace business.

The New Stack’s Alex Williams contributed to this story.

Dynatrace is a sponsor of The New Stack.

Feature image: “Florida Fall Sunset” by Olin Gilbert is licensed under CC BY 2.0.
