gRPC address prefix - grpc

When I use WCF's NetTcp binding, the endpoint address looks like net.tcp://localhost:51111/MyService/.
When I use WebSockets, the endpoint address looks like ws://localhost:port/Esv/ocp, and for a secure connection wss://localhost:port/Esv/ocp.
Is there any common prefix for gRPC services? Or is plain 192.168.1.1:51111 OK, since the called method is bound to the gRPC server by:
ServerServiceDefinition.CreateBuilder().AddMethod(_myCommunicationMethodName, CallProcessingMehtod).Build();

In a gRPC service, the URL of the underlying HTTP/2 request is determined by the name of the service and the method name (as found in the .proto definition). Beyond that, there is no "prefix", and using one would go against the recommendations of the gRPC wire protocol spec (more on that in github.com/grpc/grpc-dotnet/issues/110).
Note that under normal circumstances, you don't need to know the exact URL used by a gRPC method call as this is something that's encapsulated by the gRPC client and server logic.
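For illustration, here is a minimal Go (grpc-go) sketch of what that path looks like on the wire; the convention is the same in every implementation. The helloworld.Greeter/SayHello service and the generated package pb are hypothetical stand-ins for your own .proto:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"

	pb "example.com/hello/helloworld" // hypothetical package generated from helloworld.proto
)

func main() {
	// Plain host:port is all the client needs; there is no scheme-style
	// prefix like net.tcp:// or ws://.
	conn, err := grpc.Dial("192.168.1.1:51111", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The generated stub does exactly this internally: the HTTP/2 ":path"
	// pseudo-header becomes "/helloworld.Greeter/SayHello", derived purely
	// from the .proto package, service and method names.
	var reply pb.HelloReply
	err = conn.Invoke(context.Background(), "/helloworld.Greeter/SayHello",
		&pb.HelloRequest{Name: "world"}, &reply)
	if err != nil {
		log.Fatal(err)
	}
}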

Related

Why can grpc-go run a gRPC server and an HTTP server at the same address and port, but grpc-node cannot?

I have read this answer: https://stackoverflow.com/a/56943771/6463558; it says that there is no way to run a gRPC server and an HTTP server at the same address and port using the grpc-node package.
But I can create a gRPC server and an HTTP server at the same address and port (e.g. both using localhost:3000) with the grpc-go package. Here is an example: https://github.com/mrdulin/grpc-go-cnode/blob/master/cmd/server/main.go#L79
So why do grpc-node and grpc-go behave inconsistently? Does this make sense?
The result I expect is that the behavior is consistent no matter which language the gRPC implementation is written in, so a gRPC server should be able to share a port with a server created by Node's standard http library in the same process.
It is all about implementation. Each language has its own gRPC implementation, and there are many differences between them, some due to language capabilities and some due to maintainer choices. Each project is a separate project.
In this case, we cannot really say that the gRPC and HTTP servers are sharing the same address. There is only the HTTP server running; however, the Go gRPC implementation has an option to serve gRPC through an HTTP server.
Calling
server.ServeHTTP()
instead of
server.Serve()
That is possible because, under the hood, the gRPC server is built on top of HTTP/2.
This snippet from the link you shared makes that clear:
if request.ProtoMajor != 2 {
	mux.ServeHTTP(writer, request)
	return
}
if strings.Contains(request.Header.Get("Content-Type"), "application/grpc") {
	grpcServer.ServeHTTP(writer, request)
	return
}
If you want to do the same in Node, you need to check whether the grpc-node implementation offers something similar.
Your example uses http.NewServeMux(), which is provided by the Go standard library. The Node standard library does not provide an equivalent feature, so you can't share the port that way.
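For reference, the Go-side wiring from that example boils down to roughly the sketch below. The cert.pem/key.pem files and the commented-out generated registration call are placeholders; TLS is used because Go's net/http server only negotiates HTTP/2 over TLS by default (plaintext HTTP/2 would need the h2c helper).

package main

import (
	"log"
	"net/http"
	"strings"

	"google.golang.org/grpc"
)

func main() {
	grpcServer := grpc.NewServer()
	// pb.RegisterMyServiceServer(grpcServer, &myService{}) // hypothetical generated registration

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("plain HTTP response"))
	})

	// One handler on one port: HTTP/2 requests with the gRPC content type
	// go to the gRPC server, everything else goes to the ordinary mux.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ProtoMajor == 2 && strings.Contains(r.Header.Get("Content-Type"), "application/grpc") {
			grpcServer.ServeHTTP(w, r)
			return
		}
		mux.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServeTLS(":3000", "cert.pem", "key.pem", handler))
}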

grpc - is TLS necessary if https enabled?

I'm a gRPC newbie and have played with simple gRPC clients in Java, Go, and Python. I know basic HTTP and HTTPS but am not familiar with the protocol details, so this question may seem ridiculous to you, but I didn't find any explanation online.
I know gRPC has an insecure mode (Go: grpc.WithInsecure(), Python: grpc.insecure_channel, Java: usePlaintext()) and a secure mode (TLS), and that gRPC is based on HTTP/2, which has its own secure mode (HTTPS).
So what if I use insecure gRPC with HTTPS? Is the overall data transfer safe?
And what if I use TLS gRPC with HTTPS? Is there a performance overhead (because I think the messages would be encrypted twice)?
Thank you for any answer; links to existing webpages explaining this topic would be even better!
Insecure implies HTTP, and TLS implies HTTPS. So there is no way to "use insecure gRPC with HTTPS"; at that point it is simply HTTP.
There is no double-encryption. The gRPC security mode is the same as the HTTP security mode.
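To make the two modes concrete, here is a minimal Go sketch of the two kinds of client channel; the addresses and the ca.pem file are placeholders:

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// Insecure mode: plaintext HTTP/2; nothing on this channel is encrypted.
	insecureConn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer insecureConn.Close()

	// Secure mode: TLS on the same HTTP/2 connection - this is the "https"
	// case, and it is the only layer of encryption involved.
	creds, err := credentials.NewClientTLSFromFile("ca.pem", "my.server.example")
	if err != nil {
		log.Fatal(err)
	}
	secureConn, err := grpc.Dial("my.server.example:443", grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatal(err)
	}
	defer secureConn.Close()
}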
Using gRPC over TLS is highly recommended if your gRPC server serves requests coming from outside (an external network). For example, say you're building a front-end app in JavaScript that serves user requests. Your JavaScript app calls your gRPC server for the APIs the server provides, communicating through a stub created on the JavaScript side. On the gRPC server side, you need to set up TLS to secure the communication between the JavaScript app and the gRPC server (because the requests come from outside).
gRPC is mostly used for communication between internal services inside an internal network in a microservice architecture. You don't necessarily need TLS for internal network usage, since the requests come from your own environment, within your control.
If you want something like "gRPC over HTTPS", then you need a gateway (such as grpc-gateway) to map your HTTP calls to your gRPC server. Check this out.
You need to compile your proto file into gateway service definitions as well, using the provided tools. Then you can create a normal HTTP server with TLS enabled through something like http.ListenAndServeTLS(...). Don't forget to register your gRPC server with the HTTP server using the service definitions compiled from the proto file. With this, all requests to your HTTP server are encrypted with TLS, just like normal REST APIs, but get proxied to the gRPC server you defined. There's no need to enable TLS on the gRPC server itself, since it is already enabled on the HTTP server.
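A rough Go sketch of that setup, assuming grpc-gateway, a service called MyService, and its generated gateway package (all names here are hypothetical):

package main

import (
	"context"
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/grpc"

	gw "example.com/myapp/gen/gw" // hypothetical package generated by the gateway plugin
)

func main() {
	ctx := context.Background()

	// The gateway mux translates incoming JSON/HTTP requests into gRPC calls.
	gwmux := runtime.NewServeMux()

	// RegisterMyServiceHandlerFromEndpoint is the generated helper for a
	// hypothetical MyService; it proxies to the plaintext gRPC server on
	// localhost:50051.
	err := gw.RegisterMyServiceHandlerFromEndpoint(ctx, gwmux, "localhost:50051",
		[]grpc.DialOption{grpc.WithInsecure()})
	if err != nil {
		log.Fatal(err)
	}

	// TLS terminates here at the HTTP server; the hop to the gRPC server
	// behind it stays plaintext. cert.pem and key.pem are placeholders.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", gwmux))
}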

What is gRPC programming surface?

In the gRPC Concepts document
http://www.grpc.io/docs/guides/concepts.html
The gRPC programming surface concept is mentioned without a definition. Does anyone know exactly what the gRPC programming surface is?
Quoting their documentation:
Starting from a service definition in a .proto file, gRPC provides protocol buffer compiler plugins that generate client- and server-side code. gRPC users typically call these APIs on the client side and implement the corresponding API on the server side.
On the server side, the server implements the methods declared by the service and runs a gRPC server to handle client calls. The gRPC infrastructure decodes incoming requests, executes service methods, and encodes service responses.
On the client side, the client has a local object known as stub (for some languages, the preferred term is client) that implements the same methods as the service. The client can then just call those methods on the local object, wrapping the parameters for the call in the appropriate protocol buffer message type - gRPC looks after sending the request(s) to the server and returning the server’s protocol buffer response(s).
In short, the surface is the contract between the client and the service, and its realization as the client-side layer (the stub) and the server-side implementation.
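As a concrete illustration, here is a minimal Go sketch of both sides of that surface; the Greeter service and the generated package pb are hypothetical:

package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "example.com/hello/helloworld" // hypothetical package generated from the .proto
)

// Server side of the surface: implement the methods declared by the service.
type greeterServer struct {
	pb.UnimplementedGreeterServer
}

func (s *greeterServer) SayHello(ctx context.Context, req *pb.HelloRequest) (*pb.HelloReply, error) {
	return &pb.HelloReply{Message: "Hello " + req.GetName()}, nil
}

// Client side of the surface: call the same methods on the generated stub as
// if they were local; gRPC takes care of transport and (de)serialization.
func callGreeter(conn *grpc.ClientConn) (*pb.HelloReply, error) {
	client := pb.NewGreeterClient(conn)
	return client.SayHello(context.Background(), &pb.HelloRequest{Name: "world"})
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	server := grpc.NewServer()
	pb.RegisterGreeterServer(server, &greeterServer{}) // wire the implementation into a gRPC server
	log.Fatal(server.Serve(lis))
}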

Apache Camel and Netty as a TCP sticky balancer

I'm trying to load balance TCP connections over multiple backend servers via Apache Camel and Netty.
I want each connection to Camel to be mapped to its own connection to the backend. Something like this:
Client connects to Camel.
Camel selects a backend server and connects to it.
Client sends something to Camel.
Camel sends it to the associated backend server.
Backend server replies to Camel.
Camel sends it back to client.
...
My protocol is stateful and the connection between the client and Camel stays open. I also need messages originating from the backend and going to the client.
So far, so good. This is working quite nice.
My problem starts when I connect a new client that goes to the same backend server: it looks like Camel reuses the connection that is already open, so to the backend server it looks as if the first client sent the message; it never receives a new connection request.
I've looked at the Apache Camel Netty component documentation and didn't find anything to configure this behaviour.
Is it possible to do this?
Sidenote: I'm using Camel because I need to inspect the messages in the protocol to select a backend server, i.e. I need a custom load-balancing strategy. The problem occurs with any load-balancing strategy provided by Camel, so it's not related to my code.
Camel has a sticky load balancer; you just need to set up an expression that tells Camel which value's hash code to use for stickiness.
from("direct:start").loadBalance().
sticky(header("source")).to("mock:x", "mock:y", "mock:z");

Comparison between HTTP and RPC

The RPC protocol uses TCP as an underlying protocol, and HTTP again uses TCP as an underlying protocol. So why is HTTP so widely accepted?
Why does SOAP use HTTP as an underlying protocol - why not RPC?
Remote Procedure Call (RPC) is not a protocol; it's a principle that is also used in SOAP.
SOAP is an application protocol that uses HTTP for transport (so it doesn't have to deal with encoding, message boundaries and so on). One of the reasons to use SOAP over HTTP is that HTTP usually requires no extra firewall rules, and the HTTP infrastructure is mature and widely deployed.
RPC does not require HTTP. Basically, RPC describes any mechanism that is suitable to invoke some piece of code remotely. The transport mechanism used to perform the RPC could be SOAP over HTTP. It could also be a REST call returning some JSON data over HTTP.
SOAP can also be used via email, and AFAIK (not sure here) BizTalk Server supports this scenario. Even something as exotic as SOAP over Avian Carriers can be considered an RPC, although its latency may not be acceptable for real-world applications.
Think of an RPC as somehow sending some kind of message to a destination in order to initiate a specific action and (optionally) getting some information back after the action has completed. Which particular technology you choose to transmit these messages does not really matter.
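As a trivial illustration, the following Go sketch performs an "RPC" as nothing more than an HTTP POST with a JSON body to a hypothetical /rpc/Add endpoint; swap the transport for SOAP, gRPC or anything else and the principle stays the same:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// "Call" the remote procedure Add(2, 3): name the procedure in the URL,
	// send the arguments as the message body.
	args, _ := json.Marshal(map[string]int{"a": 2, "b": 3})
	resp, err := http.Post("http://example.com/rpc/Add", "application/json", bytes.NewReader(args))
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	// Read the (optional) result the remote side sends back.
	var result struct{ Sum int }
	json.NewDecoder(resp.Body).Decode(&result)
	fmt.Println("remote Add returned:", result.Sum)
}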
