
TidalScale Creates a Single Ocean of DRAM for Large-Scale Applications

9 Aug 2017 9:00am

Focused on clients that need huge amounts of memory, such as for computational genomics and large-scale simulations and analytics, TidalScale has created a platform that aggregates commodity server hardware into a single software-defined supercomputer without the accompanying expense.

TidalScale’s software-defined servers just got a point-and-click control panel to make it easier to create and effectively allocate resources to servers on the fly.

The WaveRunner control panel makes it possible to dynamically control server, storage and network infrastructure from a single interface. The company also introduced a new RESTful API design that allows TidalScale to be run with third-party REST-compliant testing and data integration platforms and tools, such as Jenkins, Docker and more.
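The article doesn’t document TidalScale’s endpoint schema, so the sketch below is only a hypothetical illustration of how a CI tool such as Jenkins might drive a REST-compliant control plane like WaveRunner; the base URL, path and field names are all invented for the example.

```python
import json
from urllib.request import Request

# Hypothetical endpoint and payload: TidalScale's actual REST schema
# is not documented in the article, so these names are invented.
API_BASE = "https://waverunner.example.com/api/v1"

def build_create_server_request(name, memory_gb, cores):
    """Build (but do not send) a request asking the control plane to
    assemble a software-defined server of the given size."""
    payload = {"name": name, "memory_gb": memory_gb, "cores": cores}
    return Request(
        url=f"{API_BASE}/servers",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A CI job would build a request like this, then submit it with
# urllib.request.urlopen() and wait for the server to come up.
req = build_create_server_request("genomics-run", memory_gb=4096, cores=256)
print(req.full_url)
```

The point of driving provisioning through REST rather than the control panel is that a pipeline can create a right-sized server, run its workload and tear the server down again without a human in the loop.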

“The thing we’ve tried to address is the need for a flexible data center. The types of data and the content analysis run on them, those are exploding and they’re unpredictable. Things are changing so quickly that you really can’t statically define your servers and their performance,” said TidalScale product manager Chuck Piercey.

“A software-defined server lets you take commodity hardware and [set up] a server that’s exactly the right size for whatever your workload is.”

The Campbell, Calif.-based company was founded in 2012 by Ike Nassi, who was chief scientist at SAP when it developed its HANA in-memory database.

In what it calls “inverse virtualization,” TidalScale creates a single ocean of DRAM (dynamic random-access memory) and computing power. By virtualizing the CPU, memory and I/O resources, it makes the combined pool available to workloads as needed. Those resources can flow in and out like the tide, hence the company’s name, Piercey explained.

The technology is based on the FreeBSD hypervisor bhyve.

It employs a thin distributed hypervisor layer, which it calls the HyperKernel, running on each node of a cluster of computers; TidalScale calls such a software-defined server a Tidalpod. The HyperKernel uses machine learning to optimize resource flow, scaling and sharing CPUs, memory, storage and I/O across the physical servers.

Applications don’t require any code changes and aren’t even aware that they are running across multiple servers.

The HyperKernel boots a single system image (SSI) as a guest on that cluster. It takes a non-uniform memory access (NUMA) cluster and, through that thin HyperKernel layer running across the nodes, presents a uniform memory architecture to the guest.

Inverse Virtualization

In scaling out, rather than running multiple instances of the operating system and database, the HyperKernel consolidates them into a single OS and a single database.

Internally, TidalScale uses Docker for just one container, Piercey said, but one of its university clients lets students deploy their projects as Docker containers within the resource pool that TidalScale produces.

Users can work with the tools they are used to, such as Hadoop, MongoDB, Spark or MySQL; to the application, it’s just one big computer. Developers don’t need to spend time worrying about sharding or processor locality because the HyperKernel takes care of that under the hood, Piercey said.
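The contrast can be caricatured in a few lines: on a sharded scale-out cluster the application must route every key to the correct node itself, while on a single system image it simply uses one data structure. This is an illustrative sketch only, not TidalScale code.

```python
# Scale-out: the application owns the shard routing.
shards = [{}, {}, {}]  # one dict standing in for each physical node

def shard_put(key, value):
    # The app must compute which node holds the key.
    shards[hash(key) % len(shards)][key] = value

def shard_get(key):
    return shards[hash(key) % len(shards)][key]

# Single system image: the hypervisor layer hides node boundaries,
# so the application just sees one address space and one store.
store = {}

shard_put("genome:chr1", "ACGT")
store["genome:chr1"] = "ACGT"
print(shard_get("genome:chr1") == store["genome:chr1"])
```

In the single-image case, placement of the data on physical nodes still happens, but it is the hypervisor’s problem rather than the developer’s.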

Last summer, the company created a 15TB cluster on IBM Bluemix using an unmodified CentOS 7.2 operating system at IBM’s Cloud Data Center in San Jose.

“At [The University of Texas at San Antonio], we are implementing software-defined servers so we can provide computing and storage cloud capacity to our researchers in an agile and responsive way as their workload demands change,” said Jeff Prevost, University of Texas assistant professor of electrical and computer engineering. “The new capabilities in TidalScale’s software will help us achieve the flexibility and granular control we need to scale and re-provision resources on demand.”

Software-defined servers also are used as a reference implementation of a modern cloud data center for research institutions in the university’s Open Cloud Institute.

In March, TidalScale announced a partnership to provide hosted software-defined servers on the OrionVM Wholesale Cloud Platform, providing capacity from 1.3TB to 13TB of DRAM and up to hundreds of cores for enterprises struggling with big data, analytics and memory-intensive computing challenges.

In naming the company among its Cool Vendors for 2017, Gartner analyst Stanley Zaffos wrote that “TidalScale’s claims of delivering near linear performance scalability to compute and/or memory-intensive applications are credible, if not initially fully proven, taking into account the nature of the applications likely to use this technology.”

Among its attractions, he cited its ability to shrink demand for large (greater than four sockets), expensive servers and to productively redeploy older servers.

He also noted the security implications of being able to use software-defined servers with on-premises storage resources, which eliminates the need to move data into direct-attached storage (DAS) or to the cloud before running applications.

At the same time, he noted:

“TidalScale’s primary value proposition of spreading an application across loosely coupled physical servers, while scaling performance and throughput linearly, defies the experience of many IT professionals who understand the difficulty of writing multithreaded code and are aware of the performance bottlenecks associated with loosely coupled multiprocessor (MP) configurations (also called MP factors).”

Feature image by Thierry Meier on Unsplash.
