How to configure `MAX_CONNECTION_AGE` on Python gRPC server? - grpc

I noticed that a maintainer of gRPC suggested using MAX_CONNECTION_AGE to load balance long-lived gRPC streams in their video, Using gRPC for Long-lived and Streaming RPCs - Eric Anderson, Google. I also found an implemented proposal that only lists C, Java, and Go:
Status: Implemented
Implemented in: C, Java, Go
In practice, how do I use MAX_CONNECTION_AGE in a python server?

You can use grpc.server()'s options argument:
options – An optional list of key-value pairs (channel_arguments in
gRPC runtime) to configure the channel.
You would use GRPC_ARG_MAX_CONNECTION_AGE_MS, defined in grpc_types.h:
/** Maximum time that a channel may exist. Int valued, milliseconds.
* INT_MAX means unlimited. */
#define GRPC_ARG_MAX_CONNECTION_AGE_MS "grpc.max_connection_age_ms"
For 30 days as suggested in Using gRPC for Long-lived and Streaming RPCs - Eric Anderson, Google, GRPC_ARG_MAX_CONNECTION_AGE_MS should be 2592000000 (30*24*60*60*1000)
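For example, a minimal Python sketch (the port, thread pool size, and servicer registration are placeholders, not part of the original answer):

from concurrent import futures
import grpc

# 30 days, in milliseconds, as suggested in the talk.
MAX_CONNECTION_AGE_MS = 30 * 24 * 60 * 60 * 1000

server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=10),
    options=[
        # Channel argument from grpc_types.h, passed as a (key, value) pair.
        ("grpc.max_connection_age_ms", MAX_CONNECTION_AGE_MS),
        # Optional: "grpc.max_connection_age_grace_ms" adds a grace period so
        # in-flight RPCs can finish before the connection is closed.
    ],
)
# Register your servicers here, then:
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()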

Related

Get gRPC service metrics using java-grpc-prometheus

I am trying to get some metrics using the java-grpc-prometheus library.
I would like to get metrics like the following:
gRPC sessions,
number of calls,
their respective durations,
number of API calls made to internal APIs,
timeouts,
interface resets
My question: we are using a bidirectional streaming API, and I read that it is set up on a single TCP session that is reused by the clients. How do I know how many client sessions have been initiated?
Get all root channels using Channelz.GetTopChannels(), which will give you all client sessions.
A short introduction to Channelz
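For illustration, a minimal sketch using the Python grpcio-channelz package (the Java channelz service is analogous; the port and addresses here are placeholders):

from concurrent import futures
import grpc
from grpc_channelz.v1 import channelz, channelz_pb2, channelz_pb2_grpc

# Server side: expose the Channelz service alongside your own servicers.
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
channelz.add_channelz_servicer(server)
server.add_insecure_port("[::]:50051")
server.start()

# Client side (can be a separate process): list the root channels.
with grpc.insecure_channel("localhost:50051") as channel:
    stub = channelz_pb2_grpc.ChannelzStub(channel)
    response = stub.GetTopChannels(
        channelz_pb2.GetTopChannelsRequest(start_channel_id=0)
    )
    for ch in response.channel:
        print(ch.ref.channel_id, ch.ref.name)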

Adding more http/2 connections in a grpc channel

There's a property called SocketsHttpHandler.EnableMultipleHttp2Connections in .NET's gRPC library that enables a channel to create additional HTTP/2 connections when the concurrent stream limit is reached. Is there anything available in Go that could help me achieve the same?
The grpc-go library's documentation also doesn't detail how to create gRPC channels.
There's no existing API in gRPC-Go for the same functionality.
The closest you can get is to write a custom resolver or balancer that creates multiple connections, but it won't know about the stream limit.
Documentation and examples are available in the repo:
https://github.com/grpc/grpc-go/tree/master/examples
https://github.com/grpc/grpc-go/tree/master/Documentation

protobuf vs gRPC

I am trying to understand protobuf and gRPC and how I can use both. Could you help me understand the following:
Considering the OSI model, what sits where? For example, is Protobuf at layer 4?
Thinking through a message transfer, what is the "flow"? What does gRPC do that protobuf misses?
If the sender uses protobuf, can the server use gRPC, or does gRPC add something that only a gRPC client can deliver?
If gRPC makes synchronous and asynchronous communication possible, Protobuf is just for the marshalling and therefore has nothing to do with state - true or false?
Can I use gRPC in a frontend application instead of REST or GraphQL?
I already know - or assume I do - that:
Protobuf
Binary protocol for data interchange
Designed by Google
Uses a generated "struct"-like description on the client and server to un-/marshal messages
gRPC
Uses protobuf (v3)
Again from Google
Framework for RPC calls
Makes use of HTTP/2 as well
Synchronous and asynchronous communication possible
I again assume it's an easy question for someone already using the technology. I would still ask you to be patient with me and help me out. I would also be really thankful for any networking deep dive into these technologies.
Protocol buffers is (are?) an Interface Definition Language and serialization library:
You define your data structures in its IDL, i.e. describe the data objects you want to use
It provides routines to translate your data objects to and from binary, e.g. for writing/reading data from disk
gRPC uses the same IDL but adds "rpc" syntax, which lets you define Remote Procedure Call method signatures using the Protobuf data structures as data types:
You define your data structures
You add your rpc method definitions
It provides code to serve up and call the method signatures over a network
You can still serialize the data objects manually with Protobuf if you need to (see the sketch after the answers below)
In answer to the questions:
gRPC works at layers 5, 6 and 7. Protobuf works at layer 6.
When you say "message transfer", Protobuf is not concerned with the transfer itself. It only works at either end of any data transfer, turning bytes into objects
Using gRPC by default means you are using Protobuf. You could write your own client that uses Protobuf but not gRPC to interoperate with gRPC, or plugin other serializers to gRPC - but using gRPC would be easier
True
Yes you can
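As a sketch of using Protobuf on its own for manual serialization, without gRPC, here is a minimal Python example assuming a hypothetical generated module hello_pb2 containing a HelloRequest message with a string field myname (like the one shown in a later answer):

# Manual Protobuf serialization without gRPC.
# Assumes: message HelloRequest { string myname = 1; } compiled into hello_pb2.
import hello_pb2

request = hello_pb2.HelloRequest(myname="Alice")

# Serialize to bytes (e.g. to write to disk or send over any transport).
data = request.SerializeToString()

# Deserialize back into an object.
decoded = hello_pb2.HelloRequest.FromString(data)
assert decoded.myname == "Alice"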
Actually, gRPC and Protobuf are 2 completely different things. Let me simplify:
gRPC manages the way a client and a server can interact (just like a web client/server with a REST API)
protobuf is just a serialization/deserialization tool (just like JSON)
gRPC has 2 sides: a server side, and a client side that is able to dial a server. The server exposes RPCs (i.e. functions that you can call remotely). And you have plenty of options there: you can secure the communication (using TLS), add an authentication layer (using interceptors), ...
You can use protobuf inside any program; it has no need to be client/server. If you need to exchange data and want it to be strongly typed, protobuf is a nice option (fast & reliable).
That being said, you can combine both to build a nice client/server system: gRPC will be your client/server code, and protobuf your data protocol.
PS: I wrote this paper to show how one can build a client/server with gRPC and protobuf using Go, step by step.
gRPC is a framework built by Google, and it is used in production projects at Google itself. Hyperledger Fabric is built with gRPC, and there are many other open-source applications built with it.
Protobuf is a data representation, like JSON, and it is also from Google. In fact, thousands of .proto files are generated in their production projects.
gRPC
gRPC is an open-source framework developed by Google
It allows us to define the request and response for an RPC and lets the framework handle the rest
REST is CRUD-oriented, but gRPC is API-oriented (no constraints)
Built on top of HTTP/2
Provides auth, load balancing, monitoring, and logging
[HTTP/2]
HTTP/1.1 was released in 1997, a long time ago
HTTP/1 opens a new TCP connection to the server for each request
It doesn't compress headers
No server push; it only works with request/response
HTTP/2 was released in 2015 (it evolved from SPDY)
Supports multiplexing
Client & server can push messages in parallel over the same TCP connection
Greatly reduces latency
HTTP/2 supports header compression
HTTP/2 is binary
Protobuf is binary, so it is a great match for HTTP/2
[TYPES]
Unary
Client streaming
Server streaming
Bidirectional streaming
gRPC servers are async by default
gRPC clients can be sync or async
Protobuf
Protocol buffers are language-agnostic
Parsing protocol buffers (binary format) is less CPU-intensive
[Naming]
Use CamelCase for message names
underscore_separated for field names
Use CamelCase for enums and CAPITALS_WITH_UNDERSCORES for value names
[Comments]
Support //
Support /* */
[Advantages]
Data is fully typed
Data is compactly serialized (less bandwidth usage)
A schema (message) is needed to generate code and to read the data
Documentation can be embedded in the schema
Data can be read across any language
The schema can evolve over time in a safe manner
Faster than XML
Code is generated for you automatically
Google invented Protobuf; they use 48,000 protobuf messages and 12,000 .proto files
Lots of RPC frameworks, including gRPC, use protocol buffers to exchange data
gRPC is an instantiation of the RPC integration style, based on the protobuf serialization library.
There are five integration styles: RPC, File Transfer, MOM, Distributed Objects, and Shared Database.
RMI is another example of an instantiation of the RPC integration style. There are many others. MQ is an instantiation of the MOM integration style, and so is RabbitMQ. An Oracle database schema is an instantiation of the Shared Database integration style. CORBA is an instantiation of the Distributed Objects integration style. And so on.
Avro is an example of another (binary) serialization library.
gRPC (Google Remote Procedure Call) is a client-server framework.
Protocol buffers are a language-neutral, platform-neutral extensible mechanism for serializing structured data.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloResponse) {}
}
message HelloRequest {
  string myname = 1;
}
message HelloResponse {
  string responseMsg = 1;
}
Protocol buffers are used to exchange data between the gRPC client and the gRPC server; they act as the protocol between the two. The protocol buffer is written as a .proto file in a gRPC project. It defines the interface (i.e. the service) provided by the server side, the message formats exchanged between client and server, and the rpc methods the client uses to access the server.
Both the client and the server side have the same proto files. (One real example: envoy xds grpc client side proto files, server side proto files.) It means that both the client and the server know the interface, the message formats, and the way the client accesses services on the server side.
The proto files (i.e. the protocol buffers) are compiled into a real programming language.
The generated code contains both stub code for clients to use and an abstract interface for servers to implement, both with the method defined in the service.
The service defined in the proto file is translated into an abstract class xxxxImplBase (i.e. the interface on the server side).
newBlockingStub() (synchronous) or newStub() (asynchronous) creates the stub through which the client makes a remote procedure call (i.e. an rpc in the proto file).
The methods which build request and response messages are also implemented in the generated files.
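To make this concrete, a minimal Python sketch, assuming the proto above was compiled with grpcio-tools from a file named greeter.proto (so the module names greeter_pb2 and greeter_pb2_grpc are assumptions, as are the port and thread pool size):

from concurrent import futures
import grpc
import greeter_pb2
import greeter_pb2_grpc

# Server side: implement the generated abstract servicer class.
class GreeterServicer(greeter_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        return greeter_pb2.HelloResponse(responseMsg=f"Hello, {request.myname}")

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
greeter_pb2_grpc.add_GreeterServicer_to_server(GreeterServicer(), server)
server.add_insecure_port("[::]:50051")
server.start()

# Client side: call the rpc method through the generated stub.
with grpc.insecure_channel("localhost:50051") as channel:
    stub = greeter_pb2_grpc.GreeterStub(channel)
    reply = stub.SayHello(greeter_pb2.HelloRequest(myname="Alice"))
    print(reply.responseMsg)

server.stop(0)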
I re-implemented simple client and server-side samples based on samples in the official doc. cpp client, cpp server, java client, java server, springboot client, springboot server
Recommended Useful Docs:
cpp/helloworld/README.md#generating-grpc-code,
cpp/basics/#generating-client-and-server-code,
cpp/basics/#defining-the-service,
generated-code/#client-stubs,
a blocking/synchronous stub
StreamObserver
how-to-use-grpc-with-spring-boot
Others: core-concepts,
gRPC can use protocol buffers as both its Interface Definition Language (IDL) and as its underlying message interchange format
In its simplest form, gRPC is like a public vehicle. It exchanges data between the client and the server.
The protocol buffer is the protocol, like your bus ticket, that decides where you can and cannot go.

What does "transport-based" mean?

On the engine.io website it says:
Engine.IO is the implementation of transport-based
cross-browser/cross-device bi-directional communication layer for
Socket.IO.
What does "transport-based" mean? I presume simply that it uses TCP?
It means the ability to use different underlying transports to support the Socket.IO API. The two core transports it uses are polling (XHR / JSONP polling) and websocket (WebSocket).
From the docs:
The main premise of Engine, and the core of its existence, is the
ability to swap transports on the fly. A connection starts as
xhr-polling, but it can switch to WebSocket.
The central problem this poses is: how do we switch transports without
losing messages?
Located here

What is the equivalent of RPC in new Unity Networking?

Unity has upgraded its networking system and refers to the old one as legacy networking.
So how do we change our RPC calls into the new Unity Networking?
What is the equivalent of this approach?
Should we write our own methods for it? (Sending byte arrays etc.)
[ClientRpc] is the equivalent in the new Networking system.
See here for more information - http://docs.unity3d.com/Manual/UNetActions.html
In response to your comment:
Exactly. You [Command] from clients up to the server and [ClientRpc] from the server down to all clients.
In addition, you can send messages to individual clients using the Send() function on the connectionToClient of a NetworkBehaviour.
http://docs.unity3d.com/ScriptReference/Networking.NetworkConnection.Send.html
