
Vapor IO and The IoT Cloud

Mar 10th, 2015 4:46pm

There’s a new cloud on the horizon. It’s an Internet of Things cloud that will bifurcate the cloud ecosystem, tying hardware more closely to increasingly complex workloads across a network of embedded urban micro data centers. That, at least, is the belief behind Vapor IO, a startup created by Anso Labs founder Cole Crawford, who has set his sights on the IoT cloud.

The company launched today at the Open Compute Project (OCP) Summit with a reciprocal licensing model. Under it, Vapor IO is offering the Open Data Center Runtime Environment (Open DCRE) and the Vapor Open Core Operating Runtime Environment (Vapor CORE), which provide ways to create deeper integrations between workloads and modular hardware environments.

Open DCRE provides the capability to measure environmental factors such as temperature and humidity and correlate those variables with a workload. Vapor CORE adds an intelligence layer on top of Open DCRE, offering analytics to optimize workloads across data centers.
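
To make that concrete, here is a minimal sketch, in Python, of how a scheduler might consume Open DCRE-style readings and factor them into workload placement. The endpoint path, field names, and thresholds below are hypothetical illustrations of the pattern, not the actual Open DCRE API.

# A minimal sketch of consuming Open DCRE-style sensor readings when
# placing workloads. The endpoint, JSON fields, and thresholds are
# hypothetical; they illustrate the pattern, not the real API.
import requests

DCRE_BASE = "http://dcre.example.local:5000"  # hypothetical endpoint

def read_sensor(rack_id: str, sensor: str) -> float:
    """Fetch a single environmental reading (e.g., humidity) for a rack."""
    resp = requests.get(f"{DCRE_BASE}/read/{sensor}/{rack_id}", timeout=5)
    resp.raise_for_status()
    return float(resp.json()["value"])

def placement_weight(rack_id: str) -> float:
    """Down-weight racks that are running hot when placing new workloads."""
    temp_c = read_sensor(rack_id, "temperature")
    humidity = read_sensor(rack_id, "humidity")
    weight = 1.0
    if temp_c > 27.0:    # roughly the ASHRAE-recommended upper bound
        weight *= 0.5
    if humidity > 60.0:  # arbitrary illustrative threshold
        weight *= 0.8
    return weight

if __name__ == "__main__":
    for rack in ["rack-01", "rack-02"]:
        print(rack, placement_weight(rack))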

The licensing model is critical for this to work, said Crawford, who is also one of the founders of OpenStack and the former executive director of the OCP initiative. Contributions go back to the community, a tenet borrowed from the Linux world. The licensing is based on patents rather than copyright, which puts it more in line with the intellectual property embodied in hardware. The rationale: if openness stops at the hardware, it becomes implausible that a data center can be “software-defined,” Crawford said. You need to look lower, into the critical environments, so workloads can be better managed.

The issue becomes critical when considering the lack of integration in data center environments; for instance, different server makers run different versions of the Intelligent Platform Management Interface (IPMI). That incompatibility often means a lot of work for the data center operator to integrate the different environments and correlate workloads.

For example, it becomes a problem just to know the real power consumption of different workloads across multiple data centers. Without visibility into these lower-level environments, power gets wasted.
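
This is the kind of glue code operators end up writing today. The rough Python sketch below polls per-host power draw through ipmitool’s DCMI extension and sums it per workload; the host names, credentials, and host-to-workload mapping are invented for illustration, and the output parsing may well need per-vendor tweaks, which is exactly the integration problem Crawford describes.

# A rough sketch of today's glue code: poll per-host power draw over
# IPMI (DCMI) and sum it per workload. Hosts, credentials, and the
# host-to-workload mapping are hypothetical; parsing may vary by vendor.
import re
import subprocess
from collections import defaultdict

# Hypothetical mapping of hosts to the workloads they run.
HOST_WORKLOADS = {
    "node-a.dc1.example": "analytics",
    "node-b.dc1.example": "analytics",
    "node-c.dc2.example": "streaming",
}

def power_watts(host: str) -> float:
    """Read instantaneous power draw via ipmitool's DCMI extension."""
    out = subprocess.check_output(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", "admin", "-P", "secret", "dcmi", "power", "reading"],
        text=True,
    )
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    if match is None:
        raise RuntimeError(f"unrecognized IPMI output from {host}")
    return float(match.group(1))

def power_by_workload() -> dict:
    """Aggregate host power readings into per-workload totals."""
    totals = defaultdict(float)
    for host, workload in HOST_WORKLOADS.items():
        totals[workload] += power_watts(host)
    return dict(totals)

if __name__ == "__main__":
    print(power_by_workload())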

Today’s general-purpose data centers cost millions of dollars to build. They require miles of cabling and carry high overhead costs to maintain. But what happens when there are 40 billion connected devices and 40 zettabytes of data, as is projected by 2020? That IoT cloud will require a different architecture, one that Crawford and his team see as far more local than what we have now.
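
A quick back-of-envelope check shows why locality matters at that scale: 40 zettabytes spread across 40 billion devices averages out to roughly a terabyte per device, far more than it makes sense to haul to a distant server farm.

# Back-of-envelope scale check for the 2020 projection cited above.
devices = 40e9               # 40 billion connected devices
data_zb = 40                 # 40 zettabytes
data_bytes = data_zb * 1e21  # 1 ZB = 10**21 bytes

per_device_tb = data_bytes / devices / 1e12
print(f"{per_device_tb:.0f} TB per device on average")  # -> 1 TB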

This new cloud will be woven into the urban infrastructure we already have, Crawford maintains. Low-powered silicon and lightweight instruction sets will be the rule (think ARM mbed). In the world of the IoT cloud, the high-rise data centers downtown should be returned to office space. The data center will be modular and as close to the data as possible.

As I wrote in TechCrunch in 2013, “The interactions between humans and machines creates a new set of relationships that we are just beginning to understand. We are becoming more machine-like, which changes the way we co-exist with each other. Data, once impeded, now flows in countless ways between people, machines and the infinite abstractions that force us to re-examine everything in our lives.”

The shift is going to disrupt how we view every workload imaginable, including containers. The container is in many ways symbolic of the general-purpose cloud. But containers also behave more like processes than like traditional hosts, which signals a new way of thinking about compute and the patterns it will take to power workloads. It’s a paradigm that will change how we view the way data gets transported and stored to fulfill the tasks that come with a network fabric of connected devices.

Netflix, for example, caches data that Internet service providers then deliver to the customer as a streaming service. That’s a model we can expect to see employed for all kinds of devices. The connections and the networks will require the data center to be closer to the data than ever before. So instead of a server farm 200 miles away, there might be a micro data center in your office or even your apartment complex.
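
In code, the placement logic an IoT cloud implies is simple to state: route each request to the nearest micro data center rather than to a distant region. The toy Python sketch below does just that; the site names and coordinates are made up for illustration.

# A toy sketch of IoT-cloud placement: send each request to the nearest
# micro data center instead of a distant regional farm. Site names and
# coordinates are invented for illustration.
import math

MICRO_DCS = {
    "office-basement": (30.2672, -97.7431),  # hypothetical in-building site
    "apartment-annex": (30.3072, -97.7560),  # hypothetical neighborhood site
    "regional-farm":   (32.7767, -96.7970),  # the "200 miles away" option
}

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) pairs via haversine."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_site(client_pos):
    """Pick the micro data center closest to the client."""
    return min(MICRO_DCS, key=lambda s: distance_km(client_pos, MICRO_DCS[s]))

if __name__ == "__main__":
    print(nearest_site((30.27, -97.74)))  # -> "office-basement"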

Feature image via Flickr Creative Commons.
