Is it safe to create multiple Kestrel instances inside 1 process? - asp.net

We are building an orchestrator within a microservice architecture. We chose WebSockets as the RPC protocol, to set up a streaming pipeline that can be scaled by a WebSocket-capable server like Kestrel. The orchestrator will primarily run on Linux servers (dockerized).
For admin and monitoring purposes, we plan to use http://dotnetify.net/ to build a reactive Web Admin portal (which could show the number of calculations and clients in semi-realtime, with push notify).
DotNetify uses SignalR, but we can't use the SignalR layer on top of WebSockets: we require minimal overhead on top of the TCP protocol. WebSocket in itself is a beautiful standard, and lightweight enough, but SignalR adds support for things we don't really need for this (LAN, microservices). We do consider WAMP, but for the proof of concept we will use a plain and simple custom handshake within the WebSocket bus. Another reason is that our main backend is IBM AIX, and the RDBMS process engine is a commercial prebuilt binary, so it is very cumbersome (near impossible) to implement the SignalR protocol there. But we don't have to, because we don't want to.
A possible solution to having both [A] a "pure" and [B] a SignalR WebSocket server within one process is to start multiple Kestrels. I tried this (on Windows and Ubuntu) and it seems to run without problems. I simply used a Task.Run() array, followed by Task.WaitAll(backgroundTasks): one Kestrel with SignalR, one without, running on separate ports.
Note: I could not find a proper way to make one Kestrel listen on multiple ports and exclude SignalR from one of those ports.
My question is: although this seems to run just fine, can anybody confirm that it is safe, especially with regard to libuv and OS signal handling?

You can use SignalR as normal, and just listen for WebSocket connections on a specific path for talking to your AIX (and other back-end) boxes. Just do something like this (taken from the Microsoft Docs):
app.Use(async (context, next) =>
{
    if (context.Request.Path == "/ws")
    {
        if (context.WebSockets.IsWebSocketRequest)
        {
            WebSocket webSocket = await context.WebSockets.AcceptWebSocketAsync();
            await Echo(context, webSocket);
        }
        else
        {
            context.Response.StatusCode = 400;
        }
    }
    else
    {
        await next();
    }
});
I don't see any reason why you would need to start two Kestrel instances. Obviously replace the /ws portion of the path above with whatever endpoint you want to use for hooking up your WebSockets for your backend service.

Related

How does communication between 2 microservices with grpc work?

Let's say you have an application like a bookstore, and you split it into two simple microservices in the backend ->
Microservice 1: Book purchasers (with Accounts)
Microservice 2: Book list.
Let's say you make a request from the front end; it goes into a reverse proxy, and the request goes to microservice 1.
How exactly can you visualize how microservice 1 communicates with microservice 2?
Do you containerize microservices, and inside it you have a grpc client and server?
Does the client communicate with microservice 1's server and also microservice 2's server?
In this image here it looks like you containerize the client and server separately...?
How exactly does gRPC communicate between microservices?
IIUC you are asking how the gRPC server implements the functionality described by the protobuf?
I think you're referring to this example
The protobuf compiler generates client and server stubs that you must implement. You can implement these in any supported language. When you implement the server, you are entirely responsible for ensuring that e.g. ListBooks() (for a shelf) returns any books added to the shelf by CreateBook().
The implementation is independent of gRPC.
rpc ListBooks(ListBooksRequest) returns (ListBooksResponse) {}
rpc CreateBook(CreateBookRequest) returns (Book) {}
gRPC conceptually simply ensures that your client(s) think they're calling a local method, CreateBook(), when in fact they're calling a local stub that transfers the request over a network to the remote server, which receives the CreateBook() request and does something about it.
So, let's focus on the server: it will likely use some form of persistence to record shelves and books. In practice this will be some kind of database:
type Server struct {
	db db // persistence layer shared by the gRPC methods
}

func (s *Server) CreateBook(ctx context.Context, r *pb.CreateBookRequest) (*pb.Book, error) {
	shelf := s.db.Get(r.GetShelf()) // generated getters are CamelCase in Go
	return shelf.Add(r.GetBook()), nil
}

func (s *Server) ListBooks(ctx context.Context, r *pb.ListBooksRequest) (*pb.ListBooksResponse, error) {
	shelf := s.db.Get(r.GetShelf())
	return &pb.ListBooksResponse{Books: shelf.Books()}, nil
}
NOTE: In the above, the server implementation of the gRPC service includes a database connection that the gRPC methods use to interact with the database. This could represent some other micro-service too... turtles all the way down!
So, to answer your question, somewhere in the bowels of your micro-services, there's some form of shared state (e.g. database or similar) where e.g. books are persisted (in shelves).
Whether the clients and/or servers are containerized, while probably good practice, is irrelevant to the question of how communication occurs.

What is the equivalent of RPC in new Unity Networking?

Unity has upgraded its networking system and now calls the old one legacy networking.
So how do we change our RPC calls into the new Unity Networking?
What is the equivalent of this approach?
Should we write our own methods for it? (Sending byte arrays etc.)
[ClientRpc] is the equivalent in the new Networking system.
See here for more information - http://docs.unity3d.com/Manual/UNetActions.html
In response to your comment:
Exactly. You [Command] from clients up to the server and [ClientRpc] from the server down to all clients.
In addition, you can send messages to individual clients using the Send() function on the connectionToClient of a NetworkBehaviour.
http://docs.unity3d.com/ScriptReference/Networking.NetworkConnection.Send.html

Understanding websockets in terms of REST and Server vs Client Events

For a while now I have been implementing a RESTful API in my project, because in my case it is very useful for others to be able to interact with the data in a consistent format (and I find REST to be a clean way of handling requests). I now want not only my current REST API for my resources, but also the ability to expose some pieces of information via a bidirectional WebSocket connection.
Upon searching for a good .NET library that implements the WebSocket protocol, I found out about SignalR. There were a few problems I had with it (maybe specific to my project?):
I want to be able to initialize a WebSocket connection through my existing REST API. (I don't know the proper practice for this, but I figured a custom header would work fine.) I would like the client to be able to close the connection and get an HTTP response back (101?) to signify its completion.
The problem I had with SignalR was:
there was no clean way, outside of a hub instance, to get a user's connection ID and map it to an external controller, where the REST call made affects which piece of data gets broadcast to the specific client (I don't want to use external memory)
the huge reliance on client-side code. I really want to make this process as simple as possible for the client and handle the majority of the work on the server side (which I had hoped modifying my current REST API would accomplish). The only responsibility I see for a client is to disconnect peacefully.
So now the question..
Is there a good server-side WebSocket library for .NET that implements the latest WebSocket protocol? The client can use any client library that adheres to the protocol. What is the best practice for incorporating both WebSocket connections and a RESTful API?
ASP.NET supports WebSockets itself if you have IIS 8 (only Windows 8/2012 and later). SignalR is just a polyfill.
If you do not have IIS8, you can use external WebSocket frameworks like mine: http://vtortola.github.io/WebSocketListener/
Cheers.

Ensure that the root user is running the client program that is trying to connect to the server program

I have a server program which listens on a particular port.
I have a requirement that any client program that tries to connect to my server must be initiated by a root user.
How do I ensure this in the server program?
How do I ensure [anything about the client program] in the server program?
You can't. If your security model requires the server to know whether the client is root, you don't have security.
Let's consider one possibility: your network protocol includes a notification like this:
My-Uid-Is: 0
Your client, the perfectly secure version that you wrote, might implement this notification like this:
fprintf(socketFd, "My-Uid-Is: %d\n", getuid()); // send server my identity
But my client, the one that I wrote without your knowledge or consent, will implement the notification like this:
fprintf(socketFd, "My-Uid-Is: 0\n"); // lie to server about my identity
Pop quiz: how can your server know whether it is talking to your truthful client, or my lying client? Answer: it can't. In fact, if you generalize this concept, you realize that the server can't rely upon the validity (whether that means the truthfulness, the format, the range-checking, etc.) of anything the client says.
In this specific case, using the client's source port number is as unreliable as any other choice. Yes, many operating systems require root privileges to bind to low-numbered source ports. But my PC might not be running your favorite operating system; I might be connecting from my own PC running my own OS, which doesn't have that feature. Remember: you can't trust anything the client says.
There are techniques involving public-key encryption that can be used to guarantee that the program you are talking to has access to specific secrets. That, assuming that the secrets are adequately protected, can be used to guarantee that a specific person, computer, or account generated the request. I'll let someone else discuss PKI and how it might apply to your situation.
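To make the "access to specific secrets" point concrete, here is a hedged Go sketch of a challenge-response check: the server trusts possession of a shared secret, not anything the client claims about itself. The secret value and how it is distributed are assumptions for this sketch; a real deployment would use TLS client certificates or a proper PKI.

```go
package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sharedSecret is assumed to be distributed out-of-band to authorized
// clients only (illustrative value; never hard-code real secrets).
var sharedSecret = []byte("distributed-out-of-band")

// newChallenge returns a fresh random challenge so proofs cannot be replayed.
func newChallenge() []byte {
	c := make([]byte, 16)
	rand.Read(c)
	return c
}

// clientProve is what an authorized client does with the challenge:
// compute an HMAC over it using the secret.
func clientProve(challenge, secret []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write(challenge)
	return hex.EncodeToString(mac.Sum(nil))
}

// serverVerify recomputes the HMAC and compares in constant time.
// Note the server never asks the client "who are you?"; it only checks
// whether the client could have produced this proof.
func serverVerify(challenge []byte, proof string, secret []byte) bool {
	got, err := hex.DecodeString(proof)
	if err != nil {
		return false
	}
	mac := hmac.New(sha256.New, secret)
	mac.Write(challenge)
	return hmac.Equal(mac.Sum(nil), got)
}

func main() {
	ch := newChallenge()
	fmt.Println(serverVerify(ch, clientProve(ch, sharedSecret), sharedSecret))           // true
	fmt.Println(serverVerify(ch, clientProve(ch, []byte("wrong-secret")), sharedSecret)) // false
}
```

A client that lies (like the fprintf example above) cannot produce a valid proof without the secret, which is the property the uid claim lacks.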
The client should bind to a port below 1024 before connecting. This port range is reserved for root.

how to intercept and modify HTTP responses on server side?

I am working with a client/server application which uses HTTP, and my goal is to add new features to it. I can extend the client by hooking my own code to some specific events, but unfortunately the server is not customizable. Both client and server are in a Windows environment.
My current problem is that performance is awful when a lot of data is received from the server: it takes time to transmit it and time to process it. The solution could be to have an application on the server side do the processing and send only the result (which is much smaller). The problem is that there are no built-in functions to manipulate responses from the server before sending them.
I was thinking to listen to all traffic on port 80, identifying relevant HTTP responses and send them to my application while blocking the response (to avoid sending huge data volume which won't be processed by the client). As I am lacking a lot of network knowledge, I am a bit lost when thinking about how to do it.
I had a look at some low-level packet-intercepting methods like WinPcap, but it seems to require a lot of work to do what I need. Moreover, I think it is not possible to block or modify responses with this API.
A reverse proxy which allows user scripts to be triggered by specific requests or responses would be perfect, but I am wondering if there is a simpler way to do this intercept-and-send-elsewhere work.
What would be the simplest and cleanest method to enable this behavior?
Thanks!
I ended up making a simple reverse proxy to access the HTTP server. The reverse proxy extracts the relevant information from the server response and sends it to the server-side processing component, and replaces the extracted information with an ID the client can use to request the processing results from the other component.
The article at http://www.codeproject.com/KB/web-security/HTTPReverseProxy.aspx was very helpful to make the first draft of the reverse proxy.
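As a language-agnostic sketch of the same idea, Go's standard library reverse proxy exposes a ModifyResponse hook that does exactly this intercept-and-replace step. In the runnable example below, the origin server and processElsewhere are made-up stand-ins for the uncustomizable server and the server-side processing component:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
	"strings"
)

// processElsewhere stands in for the server-side processing component;
// here it just turns the big payload into a small result ID.
func processElsewhere(payload []byte) string {
	return fmt.Sprintf("result-id-%d", len(payload))
}

// demo wires origin -> reverse proxy -> client and returns what the client sees.
func demo() (string, error) {
	// Stand-in for the uncustomizable origin server returning a big response.
	origin := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "HUGE-PAYLOAD")
	}))
	defer origin.Close()

	target, err := url.Parse(origin.URL)
	if err != nil {
		return "", err
	}
	proxy := httputil.NewSingleHostReverseProxy(target)
	// ModifyResponse intercepts the origin's response before it reaches the
	// client: hand the large body to the processing component and replace it
	// with the small ID the client will use to fetch results later.
	proxy.ModifyResponse = func(resp *http.Response) error {
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		resp.Body.Close()
		id := processElsewhere(body)
		resp.Body = io.NopCloser(strings.NewReader(id))
		resp.ContentLength = int64(len(id))
		resp.Header.Set("Content-Length", fmt.Sprint(len(id)))
		return nil
	}

	front := httptest.NewServer(proxy)
	defer front.Close()

	res, err := http.Get(front.URL)
	if err != nil {
		return "", err
	}
	defer res.Body.Close()
	small, err := io.ReadAll(res.Body)
	return string(small), err
}

func main() {
	out, err := demo()
	fmt.Println(out, err) // the client only ever sees the small ID
}
```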
Hmm... too many choices.
Two ideas:
configure an HTTP proxy on all clients. There are some out there that let you manipulate what goes through in both directions (with scripts, plugins).
or
make a pass-through project that listens on port 80 and forwards the needed stuff to port 8080 (where your original server app runs)
The question is: what software is the server app running, and what (dev) knowledge do you have?
Ah, and what is "huge data"? Kilobytes? Megabytes? Gigabytes?
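The second idea (a pass-through listener in front of the original server) can be sketched in a few lines of Go. The ports and the echo backend below are placeholders so the example is self-contained and runnable; in the question's setup the front listener would be port 80 and the backend the original app on 8080:

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// forward implements the pass-through: copy bytes in both directions between
// the client and the original server. Inspection or filtering hooks would go
// in these copy paths.
func forward(client net.Conn, backendAddr string) {
	backend, err := net.Dial("tcp", backendAddr)
	if err != nil {
		client.Close()
		return
	}
	go func() {
		io.Copy(backend, client)            // client -> server
		backend.(*net.TCPConn).CloseWrite() // propagate the half-close
	}()
	io.Copy(client, backend) // server -> client
	client.Close()
	backend.Close()
}

// demo stands up a fake backend (here an echo server), the pass-through
// listener, and runs one request through the chain.
func demo() (string, error) {
	backend, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer backend.Close()
	go func() {
		for {
			c, err := backend.Accept()
			if err != nil {
				return
			}
			go func(c net.Conn) { io.Copy(c, c); c.Close() }(c) // echo back
		}
	}()

	front, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer front.Close()
	go func() {
		for {
			c, err := front.Accept()
			if err != nil {
				return
			}
			go forward(c, backend.Addr().String())
		}
	}()

	conn, err := net.Dial("tcp", front.Addr().String())
	if err != nil {
		return "", err
	}
	fmt.Fprint(conn, "hello")
	conn.(*net.TCPConn).CloseWrite() // done sending
	reply, err := io.ReadAll(conn)
	return string(reply), err
}

func main() {
	fmt.Println(demo())
}
```

This only relays raw TCP; to extract and rewrite HTTP responses, a proper reverse proxy (as in the accepted answer) is the cleaner tool.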
