I noticed that a maintainer of gRPC suggested using MAX_CONNECTION_AGE to load-balance long-lived gRPC streams in his talk, "Using gRPC for Long-lived and Streaming RPCs" (Eric Anderson, Google). I also found an implemented proposal, which only lists C, Java and Go:
Status: Implemented
Implemented in: C, Java, Go
In practice, how do I use MAX_CONNECTION_AGE in a python server?
You can use grpc.server()'s options argument:
options – An optional list of key-value pairs (channel_arguments in
gRPC runtime) to configure the channel.
You would use GRPC_ARG_MAX_CONNECTION_AGE_MS, defined in grpc_types.h:
/** Maximum time that a channel may exist. Int valued, milliseconds.
* INT_MAX means unlimited. */
#define GRPC_ARG_MAX_CONNECTION_AGE_MS "grpc.max_connection_age_ms"
For 30 days, as suggested in "Using gRPC for Long-lived and Streaming RPCs" (Eric Anderson, Google), GRPC_ARG_MAX_CONNECTION_AGE_MS should be 2592000000 (30*24*60*60*1000).
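For example, a minimal sketch of a Python server configured this way (servicer registration is elided, and the pool size and port are arbitrary):

from concurrent import futures
import grpc

# 30 days in milliseconds, as suggested in the talk
MAX_CONNECTION_AGE_MS = 30 * 24 * 60 * 60 * 1000

server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=10),
    options=[("grpc.max_connection_age_ms", MAX_CONNECTION_AGE_MS)],
)
# ... register your servicers here ...
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()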
I am trying to understand protobuf and gRPC and how I can use both. Could you help me understand the following:
Considering the OSI model, what sits where? For example, is Protobuf at layer 4?
Thinking through a message transfer, what is the "flow"? What does gRPC do that Protobuf doesn't?
If the sender uses Protobuf, can the server use gRPC, or does gRPC add something which only a gRPC client can deliver?
gRPC makes synchronous and asynchronous communication possible; Protobuf is just for the marshalling and therefore has nothing to do with state - true or false?
Can I use gRPC in a frontend application instead of REST or GraphQL?
I already know - or assume I do - that:
Protobuf
Binary protocol for data interchange
Designed by Google
Uses a generated "struct"-like description at client and server to (un)marshal messages
gRPC
Uses protobuf (v3)
Again from Google
Framework for RPC calls
Makes use of HTTP/2 as well
Synchronous and asynchronous communication possible
Again, I assume it's an easy question for someone already using the technology. I would still ask you to be patient with me and help me out. I would also be really thankful for any network deep dive into these technologies.
Protocol buffers is (are?) an Interface Definition Language and serialization library:
You define your data structures in its IDL i.e. describe the data objects you want to use
It provides routines to translate your data objects to and from binary, e.g. for writing/reading data from disk
gRPC uses the same IDL but adds the "rpc" syntax, which lets you define Remote Procedure Call method signatures using the Protobuf data structures as data types:
You define your data structures
You add your rpc method definitions
It provides code to serve up and call the method signatures over a network
You can still serialize the data objects manually with Protobuf if you need to (see the sketch below)
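For example, a minimal sketch of manual serialization in Python, assuming a HelloRequest message was compiled with protoc (the module name helloworld_pb2 is hypothetical):

from helloworld_pb2 import HelloRequest

msg = HelloRequest(myname="Alice")  # build the data object
data = msg.SerializeToString()      # object -> bytes, e.g. for writing to disk
parsed = HelloRequest()
parsed.ParseFromString(data)        # bytes -> object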
In answer to the questions:
gRPC works at layers 5, 6 and 7. Protobuf works at layer 6.
When you say "message transfer", Protobuf is not concerned with the transfer itself. It only works at either end of any data transfer, turning bytes into objects and objects into bytes
Using gRPC by default means you are using Protobuf. You could write your own client that uses Protobuf but not gRPC to interoperate with gRPC, or plug other serializers into gRPC - but using gRPC would be easier
True
Yes, you can
Actually, gRPC and Protobuf are two completely different things. Let me simplify:
gRPC manages the way a client and a server can interact (just like a web client/server with a REST API)
protobuf is just a serialization/deserialization tool (just like JSON)
gRPC has 2 sides: a server side, and a client side that is able to dial a server. The server exposes RPCs (i.e. functions that you can call remotely). And you have plenty of options there: you can secure the communication (using TLS, as sketched below), add an authentication layer (using interceptors), ...
You can use protobuf inside any program; it has no need to be client/server. If you need to exchange data, and want it to be strongly typed, protobuf is a nice option (fast & reliable).
That being said, you can combine both to build a nice client/server system: gRPC will be your client/server code, and protobuf your data protocol.
PS: I wrote this paper to show how one can build a client/server with gRPC and protobuf using Go, step by step.
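To illustrate the TLS option mentioned above, here is a minimal sketch of a secured gRPC server in Python (the key/certificate file names are placeholders, and servicer registration is elided):

from concurrent import futures
import grpc

with open("server.key", "rb") as f:
    private_key = f.read()
with open("server.crt", "rb") as f:
    certificate_chain = f.read()

# Credentials built from one (private key, certificate chain) pair
creds = grpc.ssl_server_credentials([(private_key, certificate_chain)])

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
# ... register your servicers here ...
server.add_secure_port("[::]:50051", creds)
server.start()
server.wait_for_termination()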
gRPC is a framework built by Google, and it is used in production projects by Google itself. Hyperledger Fabric is built with gRPC, and there are many open-source applications built with gRPC.
Protobuf is a data representation, like JSON, and it is also by Google. In fact, they have thousands of .proto files generated in their production projects.
gRPC
gRPC is an open-source framework developed by Google
It allows us to define the request & response for an RPC and lets the framework handle the rest
REST is CRUD-oriented, but gRPC is API-oriented (no constraints)
Built on top of HTTP/2
Provides auth, load balancing, monitoring, and logging
[HTTP/2]
HTTP/1.1 was released in 1997, a long time ago
HTTP/1.1 opens a new TCP connection to the server for each request
It doesn't compress headers
No server push; it just works with request/response
HTTP/2 was released in 2015 (it evolved from SPDY)
Supports multiplexing
Client & server can push messages in parallel over the same TCP connection
Greatly reduces latency
HTTP/2 supports header compression
HTTP/2 is binary
Protobuf is binary, so it is a great match for HTTP/2
[TYPES]
Unary
Client streaming
Server streaming
Bidirectional streaming (client-side calls for all four are sketched after this list)
gRPC servers are async by default
gRPC clients can be sync or async
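To make the four types concrete, here is how they look from a Python client; the stub and the Request message are hypothetical stand-ins for code generated from a .proto file:

# `stub` is a generated client stub with one method per RPC type and
# `Request` is a generated Protobuf message; both names are made up.
def demo(stub, Request):
    reply = stub.Unary(Request())                 # one request, one response
    reply = stub.ClientStream(iter([Request()]))  # request iterator in, one response
    for reply in stub.ServerStream(Request()):    # one request, response iterator out
        print(reply)
    for reply in stub.BidiStream(iter([Request()])):  # iterators both ways
        print(reply)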
Protobuf
Protocol buffers are language agnostic
Parsing protocol buffers (a binary format) is less CPU-intensive than parsing text formats
[Naming]
Use CamelCase for message names
Use underscore_separated names for fields
Use CamelCase for enum names and CAPITALS_WITH_UNDERSCORES for enum value names
[Comments]
Support //
Support /* */
[Advantages]
Data is fully Typed
Data is compactly encoded in binary (less bandwidth usage)
A schema (message definition) is needed to generate code and to read the data
Documentation can be embedded in the schema
Data can be read across any language
Schema can evolve any time in a safe manner
Faster than XML
Code is generated for you automatically
Google invented Protobuf; they use some 48,000 Protobuf message types across 12,000 .proto files
Lots of RPC frameworks, including gRPC, use protocol buffers to exchange data
gRPC is an instantiation of the RPC integration style, built on the Protobuf serialization library.
There are five integration styles: RPC, File Transfer, MOM, Distributed Objects, and Shared Database.
RMI is another instantiation of the RPC integration style; there are many others. MQ is an instantiation of the MOM integration style, and so is RabbitMQ. An Oracle database schema is an instantiation of the Shared Database integration style. CORBA is an instantiation of the Distributed Objects integration style. And so on.
Avro is an example of another (binary) serialization library.
gRPC (Google Remote Procedure Call) is a client-server framework.
Protocol buffers are a language-neutral, platform-neutral extensible mechanism for serializing structured data.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloResponse) {}
}

message HelloRequest {
  string myname = 1;
}

message HelloResponse {
  string responseMsg = 1;
}
Protocol buffers are used to exchange data between the gRPC client and the gRPC server; they are the protocol between the two. In a gRPC project the protocol is written down in a .proto file, which defines the interface (i.e. the service) provided by the server side, the message formats exchanged between client and server, and the rpc methods the client uses to access the server.
Both the client and the server side have the same proto files. (One real example: the envoy xDS gRPC client-side proto files and server-side proto files.) This means that both the client and server know the interface, the message format, and the way that the client accesses services on the server side.
The proto files (i.e. the protocol buffer definitions) are compiled into a real programming language.
The generated code contains both stub code for clients to use and an abstract interface for servers to implement, both with the methods defined in the service.
The service defined in the proto file is translated into an abstract class xxxxImplBase (i.e. the interface on the server side).
In the Java implementation, newBlockingStub() creates a synchronous stub and newStub() an asynchronous one; the stub is what the client uses to invoke the remote procedure calls (the rpc methods in the proto file).
And the methods which build request and response messages are also implemented in the generated files.
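For example, a minimal Python client sketch, assuming the Greeter service above was compiled with protoc into modules named helloworld_pb2 and helloworld_pb2_grpc (the module names and address are illustrative):

import grpc
import helloworld_pb2
import helloworld_pb2_grpc

# Open a channel and create the generated client stub for the Greeter service
channel = grpc.insecure_channel("localhost:50051")
stub = helloworld_pb2_grpc.GreeterStub(channel)

# Build the request message and invoke the rpc defined in the proto file
reply = stub.SayHello(helloworld_pb2.HelloRequest(myname="world"))
print(reply.responseMsg)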
I re-implemented simple client and server-side samples based on samples in the official doc. cpp client, cpp server, java client, java server, springboot client, springboot server
Recommended Useful Docs:
cpp/helloworld/README.md#generating-grpc-code,
cpp/basics/#generating-client-and-server-code,
cpp/basics/#defining-the-service,
generated-code/#client-stubs,
a blocking/synchronous stub
StreamObserver
how-to-use-grpc-with-spring-boot
Others: core-concepts,
gRPC can use protocol buffers as both its Interface Definition Language (IDL) and as its underlying message interchange format
In its simplest form, gRPC is like a public vehicle: it exchanges data between client and server.
Protocol buffers are the protocol, like your bus ticket, which decides where you may and may not go.
Does the MPI standard implement the request-reply communication pattern?
Reading about MPI I found that there are point-to-point routines like:
Synchronous send
Blocking send / blocking receive
Non-blocking send / non-blocking receive
Buffered send
Combined send/receive
"Ready" send
Maybe a developer can implement the request-reply communication pattern using these routines, but it seems that MPI does not directly implement it.
Edit: For clarification, request-reply (request-response) is a message exchange pattern in which a requestor sends a request message to a replier system, which receives and processes the request, ultimately returning a message in response. This is a simple but powerful messaging pattern that allows two applications to have a two-way conversation with one another over a channel. The pattern is especially common in client-server architectures, and it may be synchronous or asynchronous.
This is not available as-is.
That being said, this is trivial to implement.
The requestor can MPI_Sendrecv() and the replier can MPI_Recv() the request and then MPI_Send() the answer.
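For example, a minimal sketch using mpi4py (one Python binding for MPI); run it with two ranks, e.g. mpiexec -n 2 python reqrep.py:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Requestor: send the request and wait for the reply in one call
    reply = comm.sendrecv("request", dest=1, source=1)
    print("requestor got:", reply)
else:
    # Replier: receive the request, process it, send back the answer
    request = comm.recv(source=0)
    comm.send("reply to %r" % request, dest=0)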
I am trying to develop my own online game for training purposes.
I have some questions, and I can't find any answers that satisfy me.
Does there exist an online protocol for exchanging data between two players? I don't mean TCP or UDP, but a high-level protocol. I am looking for something like a web service or Remoting with events. For now I use Protocol Buffers, but I need more flexibility (like events). I could develop my own protocol, but I suspect a network protocol with events already exists.
I will use the "Command" design pattern or Flex/Bison to parse the query. Is there a better way?
EDIT
For the protocol, I use Protocol Buffers.
So I have two options:
Translate my custom protocol into events and callbacks.
Use a protocol/tool that already has events. Is there such a tool?
I think you're mixing up the protocol with the protocol implementation. A protocol defines what the messages you send mean and how they are serialized. A message can generate an event at a client, but that's an implementation detail.
The idea is to allow two peer processes to exchange messages (packets) over TCP as asynchronously as possible.
The way I'd like it to work is for each process to have an outbox and an inbox: the send operation is just a push onto the outbox, and the receive operation is just a pop from the inbox. The underlying protocol would take care of the communication details.
Is there a way to implement such a mechanism using a single TCP connection?
How would that be implemented using BSD sockets and modern OO Socket APIs (like Java or C# socket API)?
Yes, it can be done with a single TCP connection. For one obvious example, (though a bit more elaborate than you really need) you could take a look at the NNTP protocol (RFC 3977). What you seem to want would be similar to retrieving and posting articles.
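To illustrate, here is a minimal Python sketch of the inbox/outbox idea over a single TCP connection, using length-prefixed framing and one daemon thread per direction (names are illustrative and error handling is minimal):

import queue
import socket
import struct
import threading

class Peer:
    """Full-duplex messaging over one TCP connection via inbox/outbox queues."""

    def __init__(self, sock):
        self.sock = sock
        self.outbox = queue.Queue()  # send() is just a push here
        self.inbox = queue.Queue()   # receive() is just a pop here
        threading.Thread(target=self._writer, daemon=True).start()
        threading.Thread(target=self._reader, daemon=True).start()

    def send(self, payload):
        self.outbox.put(payload)

    def receive(self):
        return self.inbox.get()

    def _writer(self):
        while True:
            payload = self.outbox.get()
            # A 4-byte big-endian length prefix frames each message on the stream
            self.sock.sendall(struct.pack(">I", len(payload)) + payload)

    def _reader(self):
        while True:
            (length,) = struct.unpack(">I", self._read_exactly(4))
            self.inbox.put(self._read_exactly(length))

    def _read_exactly(self, n):
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            buf += chunk
        return buf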