How Redis Fits with a Microservices Architecture

Why the Redis NoSQL database is a perfect fit for microservices architectures.
Dec 20th, 2019 11:33am by Kyle J. Davis and Loris Cro
Feature image from Redis.
This blog post was adapted from our new book “Redis Microservices for Dummies” by Kyle Davis with Loris Cro. Here’s how to download the complete volume, or request a physical copy.

Kyle J. Davis
Kyle is head of developer advocacy, Redis Labs. Kyle is an enthusiastic full-stack developer and works frequently with Node.js and Redis, documented in his long-running blog series on the pair. Previously, Kyle worked in higher education, helping to develop tools and technologies for academics and administrators.

Many of today’s widely used database systems were developed in an era when a company adopted a single database across the entire enterprise. This single database system would store and run all the functions of the enterprise in one place. You can probably picture it: a room full of refrigerator-sized machines, many sporting oversize reel-to-reel tape drives.

But Redis evolved differently than many other popular database systems. Built in the NoSQL era, Redis is a flexible and versatile database specifically designed not to store massive amounts of mostly idle data. A microservices architecture has related goals: each service is designed to fit a particular use — not to run everything in the business.

Redis is designed to store active data that will change and move often, with a flexible structure and no concept of relations. A Redis database has a small footprint and can serve massive throughput even with minimal resources. Similarly, an individual service in a microservices architecture is concerned only with its input, its output, and data private to that service, meaning Redis databases can back a wide range of different microservices, each with its own individual data store. That’s important, because the very nature of having many services means that each service must perform as fast as possible to make up for the connection and latency overhead introduced by inter-service communication.

Describing a Redis-Powered Microservice Architecture

Loris Cro
Loris, a developer advocate for Redis Labs, studied Bioinformatics in Milan and participated in 2014's Google Summer of Code, where he worked on tools that make use of NoSQL databases to support analysis of NextGen sequencing data. After that, he worked for a year as a data engineer in a criminology research centre. More recently he moved to a back-end developer role, where he draws on his previous experience to build high-performance services using Go and Redis.

A key characteristic of a microservices architecture is that each individual service stands on its own — the service is not tightly coupled with another service. That means microservices must maintain their own state, and to maintain state you need a database.

Microservices architectures can comprise hundreds or even thousands of services, and overhead is the enemy of scale. An infrastructure that consumes lots of resources just to run would dilute the benefits of a microservices architecture.

Ideally, the service data would be completely isolated from other data layers, allowing for uncoupled scaling and avoiding cross-service contention for slow resources. Since services are specifically designed to fill a single role (in terms of business processes), the state they store is inherently non-relational and well suited to NoSQL data models. Redis may not be a blanket solution for all data storage in a microservices architecture, but it certainly fits well with many of the requirements.

Once you have built a service, it needs to talk to other services. In a traditional microservice environment, this occurs over private HTTP endpoints using REST or similar conventions. Once a request is received, the service begins processing the request.
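To make the request/response pattern concrete, here is a minimal sketch of one service calling another over a private HTTP endpoint, using only the Python standard library. The "inventory" service, its `/stock/sku-42` route, and the payload are hypothetical names invented for illustration, not part of any real API:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical "inventory" service exposing one private REST endpoint.
class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/stock/sku-42":
            body = json.dumps({"sku": "sku-42", "in_stock": 7}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def start_inventory_service(port=8901):
    # Bind immediately; serve requests on a background thread.
    server = HTTPServer(("127.0.0.1", port), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def fetch_stock(port=8901):
    # Another service handles its own request by calling the inventory endpoint.
    with urlopen(f"http://127.0.0.1:{port}/stock/sku-42") as resp:
        return json.loads(resp.read())
```

In a real deployment each service would run in its own process or container, and the endpoint would typically sit behind service discovery rather than a hard-coded host and port.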

While the HTTP approach works and is widely used, there is an alternate method of communicating where services write to and read from log-like structures. In this case, that’s Redis Streams, which allows for an asynchronous pattern where every service announces events on its own stream, and listens only to streams belonging to services it’s interested in. Bidirectional communication at that point is achieved by two services observing each other’s stream.
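The log-like pattern can be sketched without a running Redis server. The tiny in-memory `Stream` class below mimics the shape of the Redis Streams commands involved — appending an entry (as `XADD` does) and reading everything after a last-seen ID (as `XREAD` does) — so the producer/consumer flow is visible; the "orders" and "shipping" services are hypothetical, and in production these calls would map onto a real Redis client such as redis-py's `xadd()`/`xread()`:

```python
import itertools

# Minimal in-memory stand-in for the Redis Streams pattern.
class Stream:
    def __init__(self):
        self._entries = []                 # list of (entry_id, fields)
        self._counter = itertools.count(1)

    def xadd(self, fields):
        # Append an event to this service's stream, like Redis XADD.
        entry_id = next(self._counter)
        self._entries.append((entry_id, dict(fields)))
        return entry_id

    def xread(self, last_id=0):
        # Return every entry published after last_id, like Redis XREAD.
        return [(eid, f) for eid, f in self._entries if eid > last_id]

# "orders" announces events on its own stream...
orders_stream = Stream()
orders_stream.xadd({"event": "order_created", "order_id": "1001"})
orders_stream.xadd({"event": "order_paid", "order_id": "1001"})

# ...and a "shipping" consumer polls it, tracking its own position.
last_seen = 0
for entry_id, fields in orders_stream.xread(last_seen):
    last_seen = entry_id
```

Because each consumer remembers only its own last-seen ID, services stay decoupled: the producer never needs to know who is listening, which is exactly the asynchronous property described above.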

However, even in services that do not use Redis for storage or communication, Redis can still play a vital role. To deliver a low-latency final response, each individual service must respond as fast as possible to its own requests, often outside the performance threshold of traditional databases. Redis, in this case, plays the role of a cache: the developers of the service decide when data need not be retrieved directly from the primary database and can instead be pulled from Redis much more quickly.

Similarly, external data services that need to be accessed through an API will also likely be far too slow, and Redis can be used here to prevent unneeded and lengthy calls from impacting the system’s overall performance.

For more detail on how Redis is specifically used in the microservices architecture, download a copy of our e-book or request a physical copy.
