
Varnish Now Offers Cache Persistence for Large Datasets

23 Mar 2016 9:02am

With the explosive use of Web-based video and photo sharing, companies have been finding that scaling memory is inefficient. With this in mind, Varnish Software recently launched the Varnish Massive Storage Engine (MSE) 2.0.

Varnish is an open source HTTP accelerator for large Web-based sites, and the new MSE module provides a way to cache large amounts of data that can be quickly reloaded should the system crash.

“Video sucks up memory,” explained Per Buer, founder and chief technology officer of Varnish Software, speaking at the recent Varnish Summit in San Francisco. Additionally, memory allocation designed for gigabyte workloads becomes unreliable under high pressure when a system moves into terabytes or petabytes of storage.

There is a big change in the marketplace, according to Buer. Varnish customers and others in the market are moving away from content delivery network (CDN) providers toward managing their own content distribution.

Varnish is helping them make this switch. Twitch, for example, which handles massive video distribution, now distributes its videos in-house. Tesla and Pinterest, which both presented at the Summit, run their own CDNs.

Varnish works both for traditional CDNs and for companies moving into this space. “If there is a CDN war, you don’t want to be in the war, you want to be the arms provider,” explained Buer.

While building on the Varnish API, Varnish engineers discovered that the traditional file and malloc storage backends suffered severe performance deterioration at scale. Enter MSE, which focuses on three basic areas:

  • A fragmentation-proof allocation algorithm
  • Higher cache hit rates, with LRU (Least Recently Used) eviction replaced by LFU (Least Frequently Used)
  • An optionally persistent datastore

File-based storage has performance and fragmentation issues; it uses memory maps, which tie the process to synchronous reads. Buer calls the synchronous read “a complete waste,” because it limits processing to the read/write capacity of the system.

Varnish created a process called Hole Expansion. “Instead of writing implicitly through the memory map, Varnish uses explicit write code, then uses the kernel’s watermark system for allocation,” he explained. This makes I/O capacity, rather than read/write capacity, the limiting factor, allowing for faster response times.
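The difference between the two write paths can be sketched as follows. This is an illustrative sketch only, not Varnish source code; the file path is a temporary file created for the demo, and the comments about kernel behavior describe Linux writeback in general terms.

```python
import mmap, os, tempfile

# Illustrative sketch (not Varnish code) contrasting the two write paths.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.truncate(4096)  # reserve one page of backing storage

# 1) mmap-backed "implicit" write: storing through the map can page-fault,
#    tying the worker thread to the disk's synchronous read/write speed.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[0:5] = b"hello"   # the kernel may block here to fault the page in
    m.close()

# 2) Explicit write: the application issues the write itself; the kernel
#    buffers it and flushes in the background, governed by dirty-page
#    watermarks (e.g. vm.dirty_ratio on Linux), so I/O capacity becomes
#    the limit rather than synchronous read/write latency.
with open(path, "r+b") as f:
    f.write(b"world")

with open(path, "rb") as f:
    first5 = f.read(5)
os.remove(path)
print(first5)
```

Both paths end up with the same bytes on disk; the difference is whether the worker thread waits on the page fault or lets the kernel schedule the writeback.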

“Put in front of an application server, it is super simple, and therefore also 200 – 1,000x faster. So every time you move data from the caching layer to the application server, Varnish will supply the data in 30 – 40 microseconds, as opposed to a typical cache, which takes 10 – 20 milliseconds,” Buer said.

Multiply that by millions of video or photo views a day.
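Taken at face value, the quoted latencies roughly support the claimed speedup. A quick back-of-the-envelope check (the hit count of one million per day is an assumed figure for illustration):

```python
# Back-of-the-envelope check of the quoted figures: 30-40 µs per hit for
# Varnish vs. 10-20 ms for a typical cache.
varnish_us = (30, 40)
typical_us = (10_000, 20_000)   # 10-20 ms expressed in microseconds

speedup_low = typical_us[0] / varnish_us[1]    # worst case: 250x
speedup_high = typical_us[1] / varnish_us[0]   # best case: ~667x

# At an assumed one million cache hits per day, total time spent serving them:
hits = 1_000_000
varnish_s = hits * varnish_us[1] / 1e6   # 40 seconds of cumulative latency
typical_s = hits * typical_us[1] / 1e6   # 20,000 seconds (~5.5 hours)
print(speedup_low, speedup_high, varnish_s, typical_s)
```

The computed 250 – 667x range sits inside the 200 – 1,000x figure Buer quotes.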

How Hole Expansion Works

The algorithm combines the free space into a hole large enough for the new cache object before inserting it.

Varnish engineers created an algorithm to consolidate fragmented space: the allocator virtualizes the allocation, then relies on the kernel to do the work. This creates a contiguous space into which the new object can be inserted whole, instead of fragmenting it.

This allows systems using Varnish to run for years without accumulating memory fragmentation, Buer said. “Fragmented space slows things down.”
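The core idea of merging adjacent free extents into one hole can be sketched in a few lines. This is a hypothetical simplification, not the actual MSE algorithm; the function name `coalesce_for` and the extent representation are invented for the example.

```python
# Hypothetical sketch (not the actual MSE algorithm): merge runs of adjacent
# free extents into one hole large enough for the new object, so it can be
# stored contiguously instead of being fragmented.

def coalesce_for(extents, needed):
    """extents: sorted list of (offset, size) free ranges.
    Returns the offset of a merged hole of at least `needed` bytes, or None."""
    i = 0
    while i < len(extents):
        off, size = extents[i]
        # Absorb every extent that starts exactly where the hole ends.
        j = i + 1
        while j < len(extents) and extents[j][0] == off + size:
            size += extents[j][1]
            j += 1
        if size >= needed:
            # Replace the merged run with the leftover tail, if any,
            # and hand the caller a contiguous region at `off`.
            leftover = [(off + needed, size - needed)] if size > needed else []
            extents[i:j] = leftover
            return off
        i = j
    return None

free = [(0, 100), (100, 100), (300, 50)]   # two adjacent 100-byte holes
assert coalesce_for(free, 150) == 0        # merged into one 200-byte hole
assert free == [(150, 50), (300, 50)]      # 50-byte tail survives
```

The real allocator additionally has to decide *which* objects to evict to create adjacent free space; the sketch only shows the coalescing step.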

To accomplish this holy grail of performance, Varnish does not use sendfile, considered an industry standard for scaling, a decision that drew scathing criticism a while back. Buer shrugged off the criticism: sendfile is unnecessary because everything is already mapped into memory, so using it would replicate existing functionality and significantly slow the system down.

LRU vs. LFU

Traditional memory caching uses LRU over LFU.

MSE is equipped with a hybrid ‘least frequently used/least recently used’ cache eviction algorithm, providing smarter selection criteria within the cache to automatically evict the least-accessed objects when space is needed.
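The difference in victim selection can be shown with a toy trace. This is purely illustrative (Varnish MSE’s hybrid policy is more sophisticated): the trace, with one hot key and three one-off keys, is invented for the example.

```python
from collections import OrderedDict, Counter

# Toy comparison of eviction policies on the same access trace:
# "a" is hot; "b", "c", "d" are each touched once.
trace = ["a", "b", "a", "c", "a", "d"]

# LRU: evict the least *recently* used key.
lru = OrderedDict()
for key in trace:
    lru.pop(key, None)   # move-to-back on re-access
    lru[key] = True
lru_victim = next(iter(lru))  # the key touched longest ago

# LFU: evict the least *frequently* used key.
counts = Counter(trace)
lfu_victim = min(counts, key=lambda k: counts[k])  # a one-off key

print(lru_victim, lfu_victim)
```

With this trace both policies happen to pick a one-off key, but the guarantees differ: LFU can never evict the hot key `"a"` while cold keys remain, whereas LRU would evict `"a"` the moment it went briefly untouched.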

Persistence

New with MSE 2.0 is the option to use persistence. It adds little overhead, said Buer, but the benefits in crash recovery time are impressive.

“If a server should crash, rebuilding content in memory can take a lot of time. While it is building up the content, performance suffers,” said Buer. “We’ve added persistence to the Varnish Massive Storage Engine 2.0 to ensure that our users can repair and maintain their sites as quickly as possible.”
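The payoff of a persistent datastore is that a restart reloads the cache from disk instead of refetching everything from the backend while performance suffers. A minimal sketch, assuming a JSON file as the on-disk store (the `PersistentCache` class and file layout are invented for illustration and bear no relation to the MSE on-disk format):

```python
import json, os, tempfile

# Hypothetical sketch (not the MSE on-disk format): a cache whose contents
# survive a process restart, so the warm-up phase is skipped.
class PersistentCache:
    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):        # warm restart: reload from disk
            with open(path) as f:
                self.data = json.load(f)

    def put(self, key, value):
        self.data[key] = value
        with open(self.path, "w") as f:  # a real store batches/appends instead
            json.dump(self.data, f)

    def get(self, key):
        return self.data.get(key)

path = os.path.join(tempfile.mkdtemp(), "cache.json")
PersistentCache(path).put("video:123", "rendition-manifest")

# Simulate a crash/restart: a fresh process finds the object already cached.
restarted = PersistentCache(path)
print(restarted.get("video:123"))
```

Rewriting the whole file on every `put` is what Buer’s “little overhead” caveat is about in miniature; a production store amortizes that cost.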

Be warned: this is not a one-size-fits-all solution. Buer said it only makes sense for medium and larger clients. “It’s like a local bakery,” he said. “You just don’t need that much software if you are not managing terabytes of data.”
