
Cloudlets, Platform Analytics and the New Generation of Cloud App Management

Sep 2nd, 2016 9:28am by Brian Wheeler
Brian Wheeler oversees the technology team at Morpheus Data. Prior to Morpheus, Brian founded a software development consulting firm which designed and developed solutions for a variety of industries including power grid management, ticketing systems, online trading, social networking and gaming, SOX compliance, and e-commerce. Brian holds a bachelor’s degree in Chemistry from Pomona College.

Everybody talks about how wonderfully scalable the cloud is. That’s true when you have simple n-tier applications with a few dozen instances on an AWS cloud, as Timothy Prickett Morgan writes in The Next Platform. However, in coming years, this will be a decidedly atypical cloud scenario.

In the real world, applications will continue to scale up, adding layers of complexity and interdependence. The tools currently used to manage the growing tangle of interrelationships tend to specialize in particular aspects of stateless and stateful apps and data.

Orchestration is intended to stitch all the tools together into a seamless whole. Considering the ever-growing range of systems and devices represented in the cloud – especially IoT – amalgamating such a panoply of unique, standalone resources is, practically speaking, nearly impossible.

Continuous Deployment Applies to the Platform as well as to Applications

All the talk about busting apps and data out of the silos they inhabited in the traditional IT model neglects to acknowledge the management benefits of having all of a system’s key components present and accounted for. You knew where the resources lived, how well they functioned, and (usually) which other components they interoperated with.

Now that your applications, data, and other digital resources have been liberated from the data center, managing them becomes a game of hide-and-seek. It is left to open-source tools such as Chef and Puppet to locate all the pieces that comprise an application or database from wherever they may reside in the cloud, and then stitch them together for delivery on demand to the customer.

In an August 2016 article on TechTarget, analyst Tom Nolle describes how orchestration now encompasses all four of the distinct toolsets and processes that comprise app management: on the development side are version control and dev management, and on the operations side are deployment and application lifecycle management. Orchestration automates and integrates what were formerly four separate processes.

On the front end, apps and data have to be agile to accommodate any platform or network type, and they must be personalized to the unique circumstances of the customer in terms of device, context (time and place), and present need. On the back end, the systems have to support the fully virtualized, microservice-based components that comprise modern cloud-native applications.

Where Chef and Puppet Come Up Short

Both Chef and Puppet, the two most popular orchestration tools, now support modular declaration of resources, despite Chef being inherently imperative and Puppet being declarative, as TechTarget’s Nolle points out. Each orchestrator allows component and service deployments to be defined virtually and modularly. This is a capability second-generation tools such as Red Hat’s Ansible and SaltStack offered from the start.
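The imperative/declarative distinction Nolle draws can be made concrete with a toy sketch. The Python below is not Chef or Puppet code; the state keys and function names are illustrative assumptions, meant only to show why the declarative style (describe the end state, let a converger act) is safely repeatable where the imperative style (spell out each step) is not inherently so.

```python
# Imperative style: the caller spells out each step, in order.
def deploy_imperative(state: dict) -> dict:
    state = dict(state)
    state["package_installed"] = True   # step 1: install package
    state["config_written"] = True      # step 2: write config file
    state["service_running"] = True     # step 3: start service
    return state

# Declarative style: the caller describes only the desired end state,
# and a converger decides which steps are still needed (idempotent).
DESIRED = {
    "package_installed": True,
    "config_written": True,
    "service_running": True,
}

def converge(state: dict, desired: dict) -> dict:
    result = dict(state)
    for key, value in desired.items():
        if result.get(key) != value:    # only act where state diverges
            result[key] = value
    return result
```

Running `converge` a second time against an already-converged system changes nothing, which is the property that lets declarative deployments be re-applied safely.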

Rather than focusing on tools that integrate and mediate all these separate DevOps processes and services, the industry will inevitably move toward a single, continuous lifecycle flow, which Nolle argues is the only way for app development to keep pace with constantly evolving business processes.

The results of RightScale’s most recent State of the Cloud report indicate Docker’s popularity among IT managers, 35 percent of whom plan to use Docker for DevOps in the coming year. This compares to the 19 percent who intend to use Chef, and the 18 percent who will go with Puppet. Also growing in popularity, according to RightScale’s findings, are the container orchestration tools Kubernetes, Swarm, and Mesosphere.

Reimagining Component Orchestration from the Ground Up

As more organizations plan to go all-in by transferring their entire IT operation to the cloud, a piecemeal approach to managing the many elements and resources that comprise modern applications simply won’t do. Several efforts are underway to create a method/platform for seamless management of the burgeoning continuous lifecycle flow.

Three such proposals are those from Fugue, a new company founded by former AWS executives; from Carnegie Mellon researchers in the form of “cloudlets”; and from the Linux Foundation, which recently adopted the open-source project called Platform for Network Data Analytics, or PNDA.

The most ambitious of the three concepts is Fugue, which is intended as “a true operating system for the cloud,” according to The Next Platform. Fugue co-founder Josh Stella learned from his years working at AWS that as applications scale up and interdependent layers are added, they become more complex. Fugue makes all cloud components programmable, so entire systems are developed, updated, and deleted just as software applications are.

The tools to accomplish this goal comprise the Fugue cloud management stack: deployment; lifecycle and virtual infrastructure management; and monitoring application instances to ensure no unnecessary instances are running. Fugue doesn’t replace Linux or Windows Server because it manages APIs rather than devices.
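The idea of programmable infrastructure that also polices unnecessary instances can be sketched in a few lines. This is not Fugue's actual API; the instance names and the `enforce` helper are hypothetical, showing only the pattern of a declared system compared continuously against what is really running.

```python
# Hypothetical declared instances: the infrastructure as plain data.
DECLARED = {"web-1", "web-2", "db-1"}

def enforce(declared: set, running: set) -> tuple:
    """Compare desired infrastructure against reality.

    Returns (instances to launch, instances to terminate)."""
    to_launch = declared - running       # declared but not yet running
    to_terminate = running - declared    # running but never declared
    return to_launch, to_terminate
```

Terminating anything outside the declaration is what "ensuring no unnecessary instances are running" amounts to in this model.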

Creating a ‘Cloudlet’

Much has been made of the benefits of cloud-native apps running in a centralized hyperscale public cloud. Cloud-native apps share three characteristics, as Oracle executive Robert Shimp writes in an InfoWorld article: they’re written as microservices, packaged in containers, and orchestrated to deliver the finished app to the customer.

Not all applications are suited to this centralized cloud approach, though. Some apps thrive in distributed environments on the edge of the network, particularly those that entail engagement (people rather than processes) and control (in real time among intelligent devices). The key is for organizations to station themselves as close as possible to the network endpoints to better capture the people and devices that connect there. It’s harder to engage people and connect to devices from the center of the cloud, according to Shimp.

This is where cloudlets come into the picture. Researchers at Carnegie Mellon developed the concept to serve as the middle tier in a three-tier system linking intelligent devices in the top tier to the cloud in the bottom tier. Shimp identifies four key cloudlet attributes:

  • A maintenance-free appliance design that’s small, inexpensive, and based on cloud standards
  • Secure, connectable, and powerful
  • Intended only for microservices and containers, so only soft state is maintained
  • Housed near the network edge to facilitate communication with devices

The cloudlet concept makes it possible for a business to have thousands of temporary, mobile outposts on the edge of the network. This is something like having an on-demand presence that you can situate as close as possible to wherever your customers – and their many devices – may be at any given time.

Cloudlets would also serve as an added layer of redundancy to protect against outages: Should any portion of the network fail, your organization’s other cloudlets would automatically reconfigure themselves to recover the lost connections.
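The soft-state, middle-tier role described above can be sketched as an edge cache in front of the cloud tier. This is a minimal Python toy, not Carnegie Mellon's implementation; the class and method names are assumptions. The point it illustrates is the fourth attribute in reverse: because only soft state lives in the cloudlet, losing one loses no truth, and a replacement can repopulate itself from the cloud.

```python
class Cloudlet:
    """Toy middle tier: answers devices from a cache, falls back to the cloud."""

    def __init__(self, cloud_fetch):
        self._cache = {}            # soft state only: safe to lose
        self._fetch = cloud_fetch   # callable into the central cloud tier

    def get(self, key):
        if key not in self._cache:  # miss: go back to the cloud
            self._cache[key] = self._fetch(key)
        return self._cache[key]     # hit: answered at the network edge

    def reset(self):
        """Simulate losing the cloudlet; no durable state is lost."""
        self._cache.clear()
```

After a `reset`, the next request simply repopulates from the cloud tier, which is what makes thousands of disposable edge outposts practical.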

Big Data Analytics Gets a New Open Source Platform

Just as cloudlets are designed to move businesses closer to their customers, the Linux Foundation proposes to move analytics directly to the underlying network infrastructure through PNDA, as Smolaks reports in a Datacenter Dynamics article on the release of the initial PNDA implementation.

The PNDA engine is based on the Apache components Spark, Kafka, and ZooKeeper, along with Grafana. It combines log data, metrics, and network telemetry, all of which it stores in “the rawest form possible,” according to Smolaks, to facilitate analysis of streaming data in real-time and batch modes. PNDA is seen by its supporters as a way to complement software-defined networks, network functions virtualization, and such network orchestration projects as OpenDaylight and OPNFV.
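The payoff of storing telemetry in its rawest form is that the same records can feed both a streaming path and a batch path, and the two must agree. The Python below is a toy sketch of that pattern, not the PNDA API; the record fields and function names are assumptions.

```python
RAW_LOG = []                        # append-only store of raw records

def ingest(record: dict):
    """Keep the record raw; no aggregation happens at ingest time."""
    RAW_LOG.append(record)

def stream_count(record: dict, counts: dict) -> dict:
    """Streaming path: update running per-host counts one record at a time."""
    counts[record["host"]] = counts.get(record["host"], 0) + 1
    return counts

def batch_count(log: list) -> dict:
    """Batch path: recompute the same counts from the raw log at any time."""
    counts = {}
    for record in log:
        counts[record["host"]] = counts.get(record["host"], 0) + 1
    return counts
```

Because nothing was aggregated away at ingest, a later batch job can always recompute, correct, or extend whatever the streaming path produced.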

Of course, you can realize many of the benefits promised by such projects as Fugue, cloudlets, and PNDA without the wait by migrating your applications to our Morpheus cloud application management platform. Morpheus lets you provision databases, apps, and app stack components on any server or cloud in just seconds, whether they’re located on-premises or in a private, public, or hybrid cloud.

There’s no such thing as vendor lock-in with Morpheus, which supports API connectivity to any app, any database, and any cloud through a single intuitive dashboard. Morpheus is the simple, efficient, and scalable solution to your cloud management needs.

Feature image by Kalen Emsley via Unsplash.

TNS owner Insight Partners is an investor in: Docker.