Google’s gRPC: A Lean and Mean Communication Protocol for Microservices

First, a Short History of Remote Execution
Ever since the industry discovered networking by interconnecting machines, the quest for the most efficient remote communication mechanism has been on. While operating systems like Unix, Windows, and Linux had internal protocols for remote communication, the challenge was to expose that capability to developers as a framework.
During the 1990s, when the TCP/IP protocol matured to become the gold standard for networking, the focus shifted to cross-platform communication, whereby one computer could initiate an action on another computer across a network. Technologies such as CORBA, DCOM, and Java RMI created a developer-friendly abstraction layer over the core networking infrastructure. These technologies also attempted to promote language-agnostic communication, which was essential for the client/server architecture.
The early 2000s saw the evolution of the web, and HTTP gradually emerged as the de facto standard for communication. HTTP combined with XML offered a self-descriptive, language-agnostic, and platform-independent framework for remote communication. This combination resulted in the standardization of SOAP and WSDL, which promised interoperability among various runtimes and platforms.
The next wave to hit the internet was the programmable web. Many developers found the combination of HTTP and XML defined by the SOAP standard too restrictive. This was the time when JavaScript and JSON started to become popular. The Web 2.0 phenomenon, where APIs played a key role, saw JSON replace XML as the preferred wire format. The potent combination of HTTP and JSON resulted in a new, unofficial standard called REST. SOAP was confined to large enterprise applications that demanded strict adherence to standards and schema definitions, while REST was a hit among contemporary developers.
HTTP, REST, and Microservices
Thanks to the rise of JavaScript frameworks, Node.js, and document databases, REST has become wildly popular among web developers. Many applications started to rely on REST even for internal serialization and communication patterns. But is HTTP the most efficient protocol for exchanging messages across services running in the same context, same network, and possibly the same machine? HTTP’s convenience comes with a huge performance trade-off, which takes us back to the issue of finding the most optimal communication framework for microservices.
Enter gRPC, the modern, lightweight communication protocol from Google. It’s a high-performance, open-source, universal remote procedure call (RPC) framework that works across a dozen languages and runs on any OS.
Within the first year of its launch, gRPC was adopted by CoreOS, Netflix, Square, and Cockroach Labs, among others. Etcd, CoreOS’s distributed key/value store, uses gRPC for peer communication. Telecom companies such as Cisco, Juniper, and Arista are using gRPC for streaming telemetry data and network configuration from their networking devices.
What is gRPC?
When I first encountered gRPC, it reminded me of CORBA. Both frameworks declare the service in a language-agnostic Interface Definition Language (IDL) and then generate language-specific bindings.
Both CORBA and gRPC are designed to make clients believe that the server is on the same machine. Clients invoke a method on the Stub, which is transparently handled by the underlying protocol. But the similarities mostly end there.
gRPC’s secret sauce lies in the way serialization is handled. It is based on Protocol Buffers, an open-source, language- and platform-neutral mechanism for serializing structured data. Like XML, Protocol Buffer definitions are self-descriptive, but the binary messages they produce are smaller, faster, and more efficient than text-based wire formats. Any custom data type that needs to be serialized is defined as a Protocol Buffer message in gRPC.
The latest version of Protocol Buffers is proto3, which supports code generation in Java, C++, Python, Java Lite, Ruby, JavaScript, Objective-C, and C#. When a Protocol Buffer message is compiled for a specific language, it comes with accessors (setters and getters) for each field definition.
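For instance, a minimal proto3 definition looks like this (a hypothetical Person message, not part of the calculator sample built below):

```proto
syntax = "proto3";

message Person {
  // Field numbers (= 1, = 2) identify fields in the binary encoding.
  string name = 1;
  int32 id = 2;
}
```

Compiling this with protoc produces, for Python, a Person class whose name and id fields can be read and written as plain attributes.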
Compared to the REST+JSON combination, gRPC offers better performance and security. It heavily promotes the use of SSL/TLS to authenticate the server and to encrypt all the data exchanged between the client and the server.
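As a minimal sketch of the client side of this (assuming the grpcio package is installed; the address below is a placeholder, not a real service):

```python
import grpc

# Verify the server against the default root certificates.
credentials = grpc.ssl_channel_credentials()

# Placeholder address; gRPC connects lazily, so no traffic is sent
# until a stub actually uses the channel.
channel = grpc.secure_channel('calc.example.com:443', credentials)
```

A stub created over this channel authenticates the server and encrypts all traffic; grpc.ssl_server_credentials provides the server-side counterpart.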
Why should microservices developers use gRPC? It uses HTTP/2 to support highly performant and scalable APIs. The use of binary rather than text keeps the payload compact and efficient. HTTP/2 requests are multiplexed over a single TCP connection, allowing multiple concurrent messages to be in flight without compromising network resource usage. It uses header compression to reduce the size of requests and responses.
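As a rough illustration of why a binary payload is more compact than text, compare the same two-integer request in JSON and in a fixed-width binary encoding (used here for simplicity; the actual Protocol Buffers varint encoding differs, but the point stands):

```python
import json
import struct

# The same two-integer request used by the calculator sample below.
n1, n2 = 20, 10

# Text wire format: a JSON object that spells out each field name.
json_payload = json.dumps({"n1": n1, "n2": n2}).encode("utf-8")

# Binary wire format: two little-endian 32-bit integers, no field names.
binary_payload = struct.pack("<ii", n1, n2)

print(len(json_payload))    # 20 bytes
print(len(binary_payload))  # 8 bytes
```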
Getting Started with gRPC
The workflow to create a gRPC service is simple:
- Create the service definition and payload structure in the Protocol Buffer (.proto) file.
- Generate the gRPC code from the .proto file.
- Implement the server in one of the supported languages.
- Create the client that invokes the service through the Stub.
- Run the server and client(s).
To get familiar with gRPC, we will create a simple calculator service in Python. It will be consumed by both a Python client and a Node.js client. These steps were tested on Mac OS X.
You can clone the GitHub repository to access the source code and build the sample on your machine.
Set up gRPC
Configure Python
```shell
python -m pip install virtualenv
virtualenv venv
source venv/bin/activate
python -m pip install --upgrade pip
```
Install gRPC and gRPC Tools
```shell
python -m pip install grpcio
python -m pip install grpcio-tools
# For the Node.js client, run this inside Client/Node (created below);
# the legacy "grpc" npm package is deprecated.
npm install @grpc/grpc-js @grpc/proto-loader
```
Create the directories for the Protocol Buffer, Server, and Clients
```shell
mkdir Proto
mkdir Server
mkdir -p Client/Python
mkdir -p Client/Node
```
Create the Protocol Buffer (Proto/Calc.proto) in the Proto directory
```proto
syntax = "proto3";

package calc;

service Calculator {
    rpc Add (AddRequest) returns (AddReply) {}
    rpc Substract (SubstractRequest) returns (SubstractReply) {}
    rpc Multiply (MultiplyRequest) returns (MultiplyReply) {}
    rpc Divide (DivideRequest) returns (DivideReply) {}
}

message AddRequest {
    int32 n1 = 1;
    int32 n2 = 2;
}

message AddReply {
    int32 n1 = 1;
}

message SubstractRequest {
    int32 n1 = 1;
    int32 n2 = 2;
}

message SubstractReply {
    int32 n1 = 1;
}

message MultiplyRequest {
    int32 n1 = 1;
    int32 n2 = 2;
}

message MultiplyReply {
    int32 n1 = 1;
}

message DivideRequest {
    int32 n1 = 1;
    int32 n2 = 2;
}

message DivideReply {
    float f1 = 1;
}
```
Generate Python client and server code and copy it to the directories
```shell
# Run from the Proto directory. This generates Calc_pb2.py (messages)
# and Calc_pb2_grpc.py (service classes).
python -m grpc_tools.protoc --proto_path=. --python_out=. --grpc_python_out=. Calc.proto
cp Calc_pb2.py Calc_pb2_grpc.py ../Server
cp Calc_pb2.py Calc_pb2_grpc.py ../Client/Python
cp Calc.proto ../Client/Node
```
Create the Server (Server/Calc_Server.py)
```python
from concurrent import futures
import time

import grpc

import Calc_pb2
import Calc_pb2_grpc

_ONE_DAY_IN_SECONDS = 60 * 60 * 24


class Calculator(Calc_pb2_grpc.CalculatorServicer):

    def Add(self, request, context):
        return Calc_pb2.AddReply(n1=request.n1 + request.n2)

    def Substract(self, request, context):
        return Calc_pb2.SubstractReply(n1=request.n1 - request.n2)

    def Multiply(self, request, context):
        return Calc_pb2.MultiplyReply(n1=request.n1 * request.n2)

    def Divide(self, request, context):
        return Calc_pb2.DivideReply(f1=request.n1 / request.n2)


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    Calc_pb2_grpc.add_CalculatorServicer_to_server(Calculator(), server)
    server.add_insecure_port('[::]:50050')
    server.start()
    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == '__main__':
    serve()
```
Launch the Server
```shell
python Calc_Server.py
```
Create the Python Client (Client/Python/Calc_Client.py)
```python
from __future__ import print_function

import grpc

import Calc_pb2
import Calc_pb2_grpc


def run():
    channel = grpc.insecure_channel('localhost:50050')
    stub = Calc_pb2_grpc.CalculatorStub(channel)
    response = stub.Add(Calc_pb2.AddRequest(n1=20, n2=10))
    print(response.n1)
    response = stub.Substract(Calc_pb2.SubstractRequest(n1=20, n2=10))
    print(response.n1)
    response = stub.Multiply(Calc_pb2.MultiplyRequest(n1=20, n2=10))
    print(response.n1)
    response = stub.Divide(Calc_pb2.DivideRequest(n1=20, n2=10))
    print(response.f1)


if __name__ == '__main__':
    run()
```
Launch the Python Client
```shell
python Calc_Client.py
```
The above command prints 30, 10, 200, and 2.0, confirming that the client is able to invoke the methods on the server.
Create the Node.js Client (Client/Node/Calc_Client.js)
```javascript
var PROTO_PATH = 'Calc.proto';

var grpc = require('@grpc/grpc-js');
var protoLoader = require('@grpc/proto-loader');

// grpc.load() from the legacy grpc package is deprecated;
// @grpc/proto-loader loads the .proto file at runtime instead.
var packageDefinition = protoLoader.loadSync(PROTO_PATH);
var calc_proto = grpc.loadPackageDefinition(packageDefinition).calc;

var params = {n1: 20, n2: 10};

function main() {
  var client = new calc_proto.Calculator('localhost:50050',
                                         grpc.credentials.createInsecure());
  client.divide(params, function(err, response) {
    console.log(response.f1);
  });
  client.multiply(params, function(err, response) {
    console.log(response.n1);
  });
  client.substract(params, function(err, response) {
    console.log(response.n1);
  });
  client.add(params, function(err, response) {
    console.log(response.n1);
  });
}

main();
```
Launch the Node.js Client
```shell
node Calc_Client.js
```
Note that the Node.js client doesn’t need generated stub code. As long as the Protocol Buffer file is accessible, it can load the service definition at runtime and talk to the server directly. Because the calls are asynchronous, the four results may print in any order.
The client and server can be run on separate machines as long as the network ports are open and accessible.
In the upcoming articles, I will walk you through the steps of using gRPC with microservices running in Kubernetes. Stay tuned!