
Google’s gRPC: A Lean and Mean Communication Protocol for Microservices

Sep 9th, 2016 7:30am
Feature image via Pixabay.

First, a Short History of Remote Execution

Ever since the industry began interconnecting machines over networks, the quest for the most efficient remote communication mechanism has been on. While operating systems such as Unix, Windows, and Linux had internal protocols for remote communication, the challenge was to expose a framework to application developers.

During the 1990s, when TCP/IP matured into the gold standard for networking, the focus shifted to cross-platform communication, in which one computer could initiate an action on another computer across a network. Technologies such as CORBA, DCOM, and Java RMI created a developer-friendly abstraction layer over the core networking infrastructure. They also attempted to promote language-agnostic communication, which was essential for client/server architectures.

The early 2000s saw the evolution of the web, and HTTP gradually became the de facto standard for communication. HTTP combined with XML offered a self-descriptive, language-agnostic, and platform-independent framework for remote communication. This combination led to the standardization of SOAP and WSDL, which promised interoperability among various runtimes and platforms.

The next wave to hit the internet was the programmable web. Many developers found the combination of HTTP and XML defined by the SOAP standard too restrictive. This was when JavaScript and JSON started to become popular. The Web 2.0 phenomenon, in which APIs played a key role, saw JSON replace XML as the preferred wire format. The potent combination of HTTP and JSON gave rise to a new unofficial standard called REST. SOAP was confined to large enterprise applications that demanded strict adherence to standards and schema definitions, while REST was a hit among contemporary developers.

HTTP, REST, and Microservices

Thanks to the rise of JavaScript frameworks, Node.js, and document databases, REST has become wildly popular among web developers. Many applications rely on REST even for internal serialization and communication. But is HTTP the most efficient protocol for exchanging messages across services running in the same context, on the same network, and possibly on the same machine? HTTP's convenience comes with a significant performance trade-off, which takes us back to the problem of finding the most efficient communication framework for microservices.

Enter gRPC, the modern, lightweight communication protocol from Google. It's a high-performance, open source, universal remote procedure call (RPC) framework that works across a dozen languages and runs on any OS.

Within the first year of its launch, gRPC was adopted by CoreOS, Netflix, Square, and Cockroach Labs, among others. etcd, CoreOS's distributed key/value store, uses gRPC for peer communication. Telecom companies such as Cisco, Juniper, and Arista use gRPC to stream telemetry data and network configuration from their networking devices.

What is gRPC?

When I first encountered gRPC, it reminded me of CORBA. Both frameworks declare a service in a language-agnostic Interface Definition Language (IDL) and then generate language-specific bindings.


Both CORBA and gRPC are designed to make clients believe that the server is on the same machine. A client invokes a method on a stub, and the call is transparently handled by the underlying protocol. But the similarities mostly end there.

gRPC's secret sauce lies in the way serialization is handled. It is based on Protocol Buffers, an open source, language- and platform-neutral mechanism for serializing structured data. Like XML, Protocol Buffers are structured and self-descriptive, but they are smaller, faster, and more efficient than text-based wire formats. Any custom data type that needs to be serialized is defined as a Protocol Buffer message in gRPC.

The latest version of Protocol Buffers is proto3, which supports code generation in Java, C++, Python, Java Lite, Ruby, JavaScript, Objective-C, and C#. When a Protocol Buffer definition is compiled for a specific language, it comes with accessors (setters and getters) for each field.

Compared to the REST+JSON combination, gRPC offers better performance and security. It strongly promotes the use of SSL/TLS to authenticate the server and to encrypt all data exchanged between client and server.

Why should microservices developers use gRPC? It uses HTTP/2 to support highly performant and scalable APIs. Its use of binary rather than text keeps the payload compact and efficient. HTTP/2 multiplexes requests over a single TCP connection, allowing multiple concurrent messages to be in flight without straining network resources. And it uses header compression to reduce the size of requests and responses.
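The size advantage of a binary payload over text is easy to see with a small sketch. Plain `struct` packing stands in for Protocol Buffers here; it is not the actual protobuf wire format, just an illustration of binary versus textual encoding:

```python
import json
import struct

# A tiny payload: two 32-bit integers.
payload = {"a": 20, "b": 10}

# Text encoding: JSON, as a typical REST API would send it.
as_json = json.dumps(payload).encode("utf-8")

# Binary encoding: two big-endian 32-bit integers (a stand-in for protobuf).
as_binary = struct.pack("!ii", payload["a"], payload["b"])

print(len(as_json), len(as_binary))  # the binary form is a fraction of the JSON size
```

Real Protocol Buffers go further with varint encoding and field tags, but the principle is the same: no field names, quotes, or braces on the wire.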

Getting Started with gRPC

The workflow to create a gRPC service is simple:

  1. Create the service definition and payload structure in the Protocol Buffer (.proto) file.
  2. Generate the gRPC code from the .proto file.
  3. Implement the server in one of the supported languages.
  4. Create the client that invokes the service through the Stub.
  5. Run the server and client(s).

To get familiar with gRPC, we will create a simple calculator service in Python, consumed by both a Python client and a Node.js client. These steps were tested on Mac OS X.

You can clone the GitHub repository to access the source code and build the sample on your machine.

Set up gRPC

Configure Python

Install gRPC and gRPC Tools
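The gRPC runtime and code-generation tools for Python can be installed with pip (package names as published on PyPI):

```shell
# grpcio is the runtime; grpcio-tools bundles protoc and the Python plugin
pip install grpcio grpcio-tools
```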

Create the directories for the Protocol Buffer, Server, and Clients

Create the Protocol Buffer (Proto/Calc.proto) in the Proto directory
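The calculator's contract might look like the following. The message, field, and service names here are assumptions (the article's repository may use different ones); only the four arithmetic operations are implied by the expected output listed later:

```proto
syntax = "proto3";

package calc;

// Request carrying the two operands.
message BinaryOperation {
  int32 first = 1;
  int32 second = 2;
}

// Response carrying the computed value.
message CalcResponse {
  float result = 1;
}

// The four RPCs the sample clients invoke.
service Calculator {
  rpc Add (BinaryOperation) returns (CalcResponse);
  rpc Subtract (BinaryOperation) returns (CalcResponse);
  rpc Multiply (BinaryOperation) returns (CalcResponse);
  rpc Divide (BinaryOperation) returns (CalcResponse);
}
```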

Generate Python client and server code and copy it to the directories
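Assuming the directory layout above, the Python bindings can be generated with the protoc plugin that ships with grpcio-tools (the directory names are assumptions):

```shell
# Emits Calc_pb2.py (messages) and Calc_pb2_grpc.py (service stubs)
python -m grpc_tools.protoc -I Proto \
    --python_out=Server --grpc_python_out=Server Proto/Calc.proto
```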

Create the Server (Server/
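A minimal servicer might look like this. The filename and the generated-module names are assumptions, based on a `Calc.proto` like the one sketched above:

```python
# Server/Calc_Server.py -- hypothetical filename
from concurrent import futures

import grpc

import Calc_pb2        # generated messages (assumed module name)
import Calc_pb2_grpc   # generated service stubs (assumed module name)


class CalculatorServicer(Calc_pb2_grpc.CalculatorServicer):
    """Implements the four RPCs declared in Calc.proto."""

    def Add(self, request, context):
        return Calc_pb2.CalcResponse(result=request.first + request.second)

    def Subtract(self, request, context):
        return Calc_pb2.CalcResponse(result=request.first - request.second)

    def Multiply(self, request, context):
        return Calc_pb2.CalcResponse(result=request.first * request.second)

    def Divide(self, request, context):
        return Calc_pb2.CalcResponse(result=request.first / request.second)


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    Calc_pb2_grpc.add_CalculatorServicer_to_server(CalculatorServicer(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
```

The insecure port keeps the sample simple; a production service would use `server.add_secure_port` with TLS credentials, in line with gRPC's security guidance.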

Launch the Server

Create the Python Client (Client/Python/
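One client shape that matches the expected output (30, 10, 200, and 2.0, from operands 20 and 10) is sketched below; the filename and module names are assumptions:

```python
# Client/Python/Calc_Client.py -- hypothetical filename
import grpc

import Calc_pb2        # generated messages (assumed module name)
import Calc_pb2_grpc   # generated service stubs (assumed module name)


def run():
    # Insecure channel for the local sample; production use should switch
    # to grpc.secure_channel with TLS credentials.
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = Calc_pb2_grpc.CalculatorStub(channel)
        request = Calc_pb2.BinaryOperation(first=20, second=10)
        for op in ("Add", "Subtract", "Multiply", "Divide"):
            response = getattr(stub, op)(request)
            print(response.result)


if __name__ == "__main__":
    run()
```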

Launch the Python Client

The above command prints 30, 10, 200, and 2.0, confirming that the client can invoke the methods on the server.

Create the Node.js Client (Client/Node/Calc_Client.js)
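A Node.js client along these lines would use the original `grpc` npm package, which loads the `.proto` file at runtime rather than relying on generated code. The paths and names are assumptions:

```javascript
// Client/Node/Calc_Client.js
// The grpc package parses the .proto at runtime -- no generated stub needed.
var grpc = require('grpc');

var PROTO_PATH = __dirname + '/../../Proto/Calc.proto';
var calc = grpc.load(PROTO_PATH).calc;

var client = new calc.Calculator('localhost:50051',
    grpc.credentials.createInsecure());

// Method names are lower-camel-cased by the loader.
client.add({first: 20, second: 10}, function (err, response) {
  if (err) { return console.error(err); }
  console.log(response.result);
});
```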

Launch the Node.js Client

Note that the Node.js client doesn't need a generated stub. Assuming the Protocol Buffer file is accessible, it can talk to the server directly.

The client and server can be run on separate machines as long as the network ports are open and accessible.

In the upcoming articles, I will walk you through the steps of using gRPC with microservices running in Kubernetes. Stay tuned!
