What is the equivalent of RPC in the new Unity Networking?

Unity has upgraded its networking system and now refers to the old one as legacy networking.
So how do we convert our RPC calls to the new Unity Networking?
What is the equivalent of this approach?
Should we write our own methods for it (sending byte arrays, etc.)?

[ClientRpc] is the equivalent in the new Networking system.
See here for more information - http://docs.unity3d.com/Manual/UNetActions.html
In response to your comment:
Exactly. You use [Command] to go from clients up to the server and [ClientRpc] to go from the server down to all clients.
In addition, you can send messages to individual clients using the Send() function on the connectionToClient of a NetworkBehaviour.
http://docs.unity3d.com/ScriptReference/Networking.NetworkConnection.Send.html
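As a small illustration of that split, here is a minimal sketch (the class and method names are made up; UNet requires [Command] methods to be prefixed with Cmd and [ClientRpc] methods with Rpc):

using UnityEngine;
using UnityEngine.Networking;

public class PlayerShooter : NetworkBehaviour
{
    void Update()
    {
        // Commands may only be issued from the client that has authority over this object.
        if (isLocalPlayer && Input.GetButtonDown("Fire1"))
            CmdFire();
    }

    // Invoked on a client, runs on the server (client -> server).
    [Command]
    void CmdFire()
    {
        // ... server-side validation and game logic ...
        RpcShowMuzzleFlash();
    }

    // Invoked on the server, runs on all clients (server -> clients).
    [ClientRpc]
    void RpcShowMuzzleFlash()
    {
        // ... purely cosmetic client-side effect ...
    }
}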

Related

protobuf vs gRPC

I am trying to understand protobuf and gRPC and how I can use both. Could you help me understand the following:
Considering the OSI model, what sits where? For example, is Protobuf at layer 4?
Thinking through a message transfer, what is the "flow"? What does gRPC do that protobuf misses?
If the sender uses protobuf, can the server use gRPC, or does gRPC add something that only a gRPC client can deliver?
gRPC makes synchronous and asynchronous communication possible, while Protobuf is just for the marshalling and therefore has nothing to do with state - true or false?
Can I use gRPC in a frontend application to communicate, instead of REST or GraphQL?
I already know - or assume I do - that:
Protobuf
Binary protocol for data interchange
Designed by Google
Uses a generated "struct"-like description at the client and server to marshal/unmarshal messages
gRPC
Uses protobuf (v3)
Again from Google
Framework for RPC calls
Makes use of HTTP/2 as well
Synchronous and asynchronous communication possible
I assume this is an easy question for someone already using the technology, but I would still ask you to be patient with me and help me out. I would also be really thankful for any network deep dive into these technologies.
Protocol buffers is (are?) an Interface Definition Language and serialization library:
You define your data structures in its IDL, i.e. you describe the data objects you want to use
It provides routines to translate your data objects to and from binary, e.g. for writing/reading data from disk
gRPC uses the same IDL but adds the "rpc" syntax, which lets you define Remote Procedure Call method signatures using the Protobuf data structures as data types:
You define your data structures
You add your rpc method definitions
It provides code to serve up and call the method signatures over a network
You can still serialize the data objects manually with Protobuf if you need to (see the sketch after this answer)
In answer to the questions:
gRPC works at layers 5, 6 and 7. Protobuf works at layer 6.
When you say "message transfer", Protobuf is not concerned with the transfer itself. It only works at either end of any data transfer, turning bytes into objects.
Using gRPC by default means you are using Protobuf. You could write your own client that uses Protobuf but not gRPC to interoperate with gRPC, or plug other serializers into gRPC - but using gRPC would be easier.
True
Yes you can
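To make the Protobuf-only part (and the manual-serialization option mentioned above) concrete, here is a minimal C# sketch; the Google.Protobuf runtime is assumed, and Person is a hypothetical message type generated from a .proto file:

using Google.Protobuf;

// Person is assumed to be generated from: message Person { string name = 1; int32 id = 2; }
var person = new Person { Name = "Alice", Id = 42 };

// Object -> bytes: this is all Protobuf does on the sending side,
// e.g. before writing to disk or handing the bytes to your own transport.
byte[] bytes = person.ToByteArray();

// Bytes -> object: and this is all it does on the receiving side.
Person decoded = Person.Parser.ParseFrom(bytes);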
Actually, gRPC and Protobuf are two completely different things. Let me simplify:
gRPC manages the way a client and a server can interact (just like a web client/server with a REST API)
protobuf is just a serialization/deserialization tool (just like JSON)
gRPC has two sides: a server side, and a client side that is able to dial a server. The server exposes RPCs (i.e. functions that you can call remotely). And you have plenty of options there: you can secure the communication (using TLS), add an authentication layer (using interceptors), and so on.
You can use protobuf inside any program; it has no need to be client/server. If you need to exchange data and want it to be strongly typed, protobuf is a nice option (fast and reliable).
That being said, you can combine both to build a nice client/server system: gRPC will be your client/server code, and protobuf your data protocol.
PS: I wrote this paper to show how one can build a client/server with gRPC and protobuf using Go, step by step.
gRPC is a framework built by Google, and it is used in production projects by Google itself; Hyperledger Fabric is built with gRPC, and there are many other open-source applications built with gRPC.
Protobuf is a data representation, like JSON, and also comes from Google; in fact, thousands of .proto files are used in their production projects.
grpc
gRPC is an open-source framework developed by Google
It lets us define the request and response of an RPC and have the framework handle the rest
REST is CRUD-oriented, but gRPC is API-oriented (no constraints)
Built on top of HTTP/2
Provides auth, load balancing, monitoring, and logging
[HTTP/2]
HTTP/1.1 was released in 1997, a long time ago
HTTP/1 opens a new TCP connection to the server on each request
It doesn't compress headers
No server push; it only works with request/response
HTTP/2 was released in 2015 (it grew out of SPDY)
Supports multiplexing
The client and server can push messages in parallel over the same TCP connection
Greatly reduces latency
HTTP/2 supports header compression
HTTP/2 is binary
Protobuf is binary, so it is a great match for HTTP/2
[TYPES]
Unary
Client streaming
Server streaming
Bidirectional streaming
gRPC servers are async by default
gRPC clients can be sync or async
protobuf
Protocol buffers are language-agnostic
Parsing protocol buffers (a binary format) is less CPU-intensive
[Naming]
Use CamelCase for message names
Use underscore_separated for field names
Use CamelCase for enums and CAPITALS_WITH_UNDERSCORES for enum value names
[Comments]
Support //
Support /* */
[Advantages]
Data is fully typed
Data is serialized in a compact binary form (less bandwidth usage)
A schema (message definition) is needed to generate code and to read the data
Documentation can be embedded in the schema
Data can be read by any language
The schema can evolve over time in a safe manner
Faster than XML
Code is generated for you automatically
Google invented protobuf; they use 48,000 protobuf messages and 12,000 .proto files
Lots of RPC frameworks, including gRPC, use protocol buffers to exchange data
gRPC is an instantiation of the RPC integration style, based on the protobuf serialization library.
There are five integration styles: RPC, File Transfer, MOM, Distributed Objects, and Shared Database.
RMI is another example of instantiation of RPC integration style. There are many others. MQ is an instantiation of MOM integration style. RabbitMQ as well. Oracle database schema is an instantiation of Shared Database integration style. CORBA is an instantiation of Distributed Objects integration style. And so on.
Avro is an example of another (binary) serialization library.
gRPC (Google Remote Procedure Call) is a client-server structure.
Protocol buffers are a language-neutral, platform-neutral extensible mechanism for serializing structured data.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloResponse) {}
}

message HelloRequest {
  string myname = 1;
}

message HelloResponse {
  string responseMsg = 1;
}
Protocol buffers are used to exchange data between the gRPC client and the gRPC server; they are the protocol between the two. The protocol buffer is written as a .proto file in a gRPC project. It defines the interface (the service) that the server side provides, the message formats exchanged between client and server, and the rpc methods the client uses to access the server.
Both the client and server sides have the same .proto files. (One real example: envoy xds grpc client side proto files, server side proto files.) This means that both the client and the server know the interface, the message format, and the way the client accesses services on the server side.
The .proto files (i.e. the protocol buffers) are compiled into a real programming language.
The generated code contains both stub code for clients to use and an abstract interface for servers to implement, both with the methods defined in the service.
The service defined in the .proto file is translated into an abstract class, xxxxImplBase (the interface on the server side).
A stub created with newStub() (or newBlockingStub() for synchronous calls) is what the client uses to invoke the remote procedures (the rpc entries in the .proto file).
And the methods which build request and response messages are also implemented in the generated files.
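To show what the generated code looks like in practice, here is a minimal C# sketch built around the Greeter service above. This assumes the .proto was compiled with the C# plugin, which emits Greeter.GreeterBase and Greeter.GreeterClient and maps the myname field to a Myname property; the Grpc.Core API and the port number are also assumptions.

using System.Threading.Tasks;
using Grpc.Core;

// Server side: implement the generated abstract base class.
class GreeterImpl : Greeter.GreeterBase
{
    public override Task<HelloResponse> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloResponse { ResponseMsg = "Hello " + request.Myname });
    }
}

class Demo
{
    static void Main()
    {
        // Host the service implementation.
        var server = new Server
        {
            Services = { Greeter.BindService(new GreeterImpl()) },
            Ports = { new ServerPort("localhost", 50051, ServerCredentials.Insecure) }
        };
        server.Start();

        // Client side: the generated stub hides the HTTP/2 and protobuf plumbing.
        var channel = new Channel("localhost", 50051, ChannelCredentials.Insecure);
        var client = new Greeter.GreeterClient(channel);
        HelloResponse reply = client.SayHello(new HelloRequest { Myname = "world" });

        channel.ShutdownAsync().Wait();
        server.ShutdownAsync().Wait();
    }
}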
I re-implemented simple client and server-side samples based on samples in the official doc. cpp client, cpp server, java client, java server, springboot client, springboot server
Recommended Useful Docs:
cpp/helloworld/README.md#generating-grpc-code,
cpp/basics/#generating-client-and-server-code,
cpp/basics/#defining-the-service,
generated-code/#client-stubs,
a blocking/synchronous stub
StreamObserver
how-to-use-grpc-with-spring-boot
Others: core-concepts,
gRPC can use protocol buffers as both its Interface Definition Language (IDL) and its underlying message interchange format.
In its simplest form, gRPC is like a public vehicle: it carries data between the client and the server.
The protocol buffer is the protocol, like your bus ticket, that decides where you can and cannot go.

What is the difference between the DEALER and ROUTER socket archetypes in ZeroMQ?

What is the difference between the ROUTER and the DEALER socket archetypes in zmq?
And which should I use, if I have a server, which is receiving messages and a client, which is sending messages? The server will never send a message to a client.
EDIT: I forgot to say that there can be several instances of the client.
For details on the ROUTER/DEALER Formal Communication Pattern, do not hesitate to consult the API documentation. There are many features important for ROUTER/DEALER (XREQ/XREP) that offer nothing beneficial for your indicated use case.
Many just send, one just listens?
Given N clients that purely .send() messages to one server, which exclusively .recv()s messages but never sends any message back,
the design may benefit from a PUB/SUB Formal Communication Pattern (a minimal sketch follows at the end of this answer).
In case some other preferences outweigh this trivial approach, one may set up a more complex "wiring" using another one-way type of infrastructure based on PUSH/PULL, together with a reversed PUB/SUB setup: each new client (the PUB side) .connect()-s to the SUB side, given a server-side .bind() access point on a known, static IP address. The client self-advertises on this signalling channel that it is alive (a keep-alive with IP-address:port#), whereupon the server side initiates a new PUSH-to-PULL .connect() onto the client-advertised, .bind()-ready PULL-side access point.
Complex? Rather a limitless tool, only our imagination is our limit.
After some time, one realises all the powers of multi-functional SIG/MSG-infrastructure, so do not hesitate to experiment and re-use the elementary archetypes in more complex, mutually-cooperating distributed systems computing.
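For the trivial many-senders/one-listener variant above, a minimal rendering could look like the following sketch. NetMQ as the .NET binding and the endpoint are assumptions, the two halves live in different processes, and a real client would keep its socket open rather than fire-and-dispose, since a PUB message sent immediately after connecting can be dropped before the subscription propagates.

using NetMQ;
using NetMQ.Sockets;

// Server process: one SUB socket binds and only ever receives.
using (var server = new SubscriberSocket())
{
    server.Bind("tcp://*:5555");
    server.Subscribe("");                              // accept messages on every topic
    while (true)
    {
        string message = server.ReceiveFrameString();  // blocks until a client publishes
        // ... process the incoming message ...
    }
}

// Each client process: a PUB socket connects and only ever sends.
using (var client = new PublisherSocket())
{
    client.Connect("tcp://127.0.0.1:5555");
    client.SendFrame("status update from client #1");
}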

Understanding websockets in terms of REST and Server vs Client Events

For a while now I have been implementing a RESTful API in the design of my project because in my case it is very useful for others to be able to interact with the data in a consistent format (and I find REST to be a clean way of handling requests). I am now trying to not only have my current REST API for my resources, but the ability to expose some pieces of information via a bidirectional websocket connection.
Upon searching for a good .NET library that implements the WebSocket protocol, I found out about SignalR. There were a few problems I had with it (maybe specific to my project?):
I want to be able to initialize a WebSocket connection through my existing REST API. (I don't know the proper practice for doing this, but I figured a custom header would work fine.) I would like the client to be able to close the connection and get an HTTP response back (101?) to signify its completion.
The problems I had with SignalR were:
there was no clean way, outside of a hub instance, to get a user's connection id and map it to an external controller, where the REST call made determines which piece of data gets broadcast to the specific client (I don't want to use external memory)
the huge reliance on client-side code. I really want to make this process as simple as possible for the client and handle the majority of the work on the server side (which I had hoped modifying my current REST API would accomplish). The only responsibility I see for a client is to disconnect peacefully.
So now the question..
Is there a good server-side WebSocket library for .NET that implements the latest WebSocket protocol? The client can use any client library that adheres to the protocol. What is the best practice for incorporating both WebSocket connections and a RESTful API?
ASP.NET supports WebSockets itself if you have IIS 8 (only Windows 8/2012 and later). SignalR is just a polyfill.
If you do not have IIS8, you can use external WebSocket frameworks like mine: http://vtortola.github.io/WebSocketListener/
Cheers.
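For reference, here is a minimal sketch of accepting a WebSocket upgrade directly in ASP.NET 4.5+ on IIS 8, without SignalR (the handler name, route and echo behaviour are just placeholders):

using System;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using System.Web.WebSockets;

public class EchoHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        if (context.IsWebSocketRequest)
            context.AcceptWebSocketRequest(HandleSocket);   // hands the request over for the 101 upgrade
        else
            context.Response.StatusCode = 400;
    }

    private async Task HandleSocket(AspNetWebSocketContext wsContext)
    {
        WebSocket socket = wsContext.WebSocket;
        var buffer = new ArraySegment<byte>(new byte[4096]);

        while (socket.State == WebSocketState.Open)
        {
            WebSocketReceiveResult result = await socket.ReceiveAsync(buffer, CancellationToken.None);
            if (result.MessageType == WebSocketMessageType.Close)
            {
                await socket.CloseAsync(WebSocketCloseStatus.NormalClosure, "bye", CancellationToken.None);
            }
            else
            {
                // Echo the received frame back to the client.
                var outgoing = new ArraySegment<byte>(buffer.Array, 0, result.Count);
                await socket.SendAsync(outgoing, result.MessageType, result.EndOfMessage, CancellationToken.None);
            }
        }
    }
}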

How to integrate the ASP.NET thread model and ZeroMQ sockets?

I'm building an ASP.NET service (a simple aspx) that requires a REQ call to a ZeroMQ REP node.
So I have to use the REQ/REP pattern, but I can't figure out the proper way to initialize the ZeroMQ context in the ASP.NET pipeline.
Moreover, can I share a single connection among the different ASP.NET threads and if so how?
Edit: after some study it looks to me like an inproc router in a dedicated thread should be the way to go, since it would handle synchronization.
But more questions arise:
Should the other end of such an inproc node be a DEALER? If so, should it connect to the REQ node? Or should it bind to a TCP port, with the REP server node coded to connect to it (the latter would be a bit cumbersome, since I could have different servers exposing the service)?
As an alternative, is it correct to build an inproc node bound to a ROUTER socket at one end and connecting with REQ on the other? If so, should I code the node so that it handles a manual envelope for each message, just to be able to send responses back to the correct requesting thread?
Is Application_Start the correct pipeline point to initialize the thread handling such a router?
At the moment a ROUTER/DEALER inproc node that connects to the REQ server looks like the best option, but I'm not sure it is possible to connect from a DEALER socket. This is still just speculation and could be entirely wrong.
The zmq_socket manual states:
ØMQ sockets are not thread safe. Applications MUST NOT use a socket from multiple threads except after migrating a socket from one thread to another with a "full fence" memory barrier.
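One way to honour that rule (and the "dedicated thread" idea from the question) is to let a single background thread own the REQ socket and have ASP.NET worker threads hand work to it through a queue. This is only a hedged sketch: NetMQ is assumed as the .NET binding, and the class names, endpoint and message type are placeholders.

using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using NetMQ;
using NetMQ.Sockets;

public static class ZmqGateway
{
    private class Pending
    {
        public string Request;
        public TaskCompletionSource<string> Reply = new TaskCompletionSource<string>();
    }

    private static readonly BlockingCollection<Pending> Queue = new BlockingCollection<Pending>();

    // Call once, e.g. from Application_Start.
    public static void Start(string endpoint)
    {
        var worker = new Thread(() =>
        {
            using (var req = new RequestSocket())
            {
                req.Connect(endpoint);                // only this thread ever touches the socket
                foreach (var item in Queue.GetConsumingEnumerable())
                {
                    req.SendFrame(item.Request);      // strict REQ/REP: send, then receive
                    item.Reply.SetResult(req.ReceiveFrameString());
                }
            }
        }) { IsBackground = true };
        worker.Start();
    }

    // Called from any ASP.NET request thread.
    public static Task<string> SendAsync(string request)
    {
        var pending = new Pending { Request = request };
        Queue.Add(pending);
        return pending.Reply.Task;
    }
}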

In VxWorks, under what circumstances will the send() API get stuck or take more time?

I am using two machines, A and B, both with the same VxWorks image and the same hardware; the only difference is the application. Suppose machine A is the server and machine B is the client. While communicating over Ethernet, the client machine is not able to send the data: it gets stuck in send() and the task state goes to PEND.
/* Blocking send of the whole transmit packet over the accepted connection. */
wState = send(vstCCEUSerSocket.wCCEUAcceptFD, (char *)vstCCEUAppTask.rgubyCCEUTxPkt,
              sizeof(vstCCEUAppTask.rgubyCCEUTxPkt), 0);
/* logMsg("\nTrmtd = %d\t", wState); */
if (wState == ERROR)
{
    perror("write");
    close(vstCCEUSerSocket.wCCEUAcceptFD);   /* close the fd on error */
}
From the VxWorks OS Libraries API Reference
On pages 497/498 you can find info about connect(), but there is also connectWithTimeout().
On pages 1203/1204 you might find some interesting items for TCP sockets, for example the KEEP_ALIVE option.
If you rely on a quick connection time and want to keep control, you can combine connectWithTimeout() with keep-alive.
It can take another day for me to recall old code to check how I ever solved this in one of my projects.
VxWorks 5.5 Network Programmer's Guide - Stream Sockets

Resources