How many threads does Unary gRPC use?

I'm new to gRPC, and their documentation (for C#) is annoying and outdated.
I'm trying to build a single-threaded client/server application.
The server exposes a network API, and the client executes RPCs against it (unary).
I don't understand how the server handles threading. I've read that unary gRPC can be sync or async.
Can either of them be configured to be single-threaded? If not, is it possible to make the thread pool size 1?

Threading and unary RPCs are two completely different pieces of the system.
Unary just means that you get back a single response rather than a stream.
I don't think it is possible to set the server thread pool to size 1; that is the wrong use case for gRPC. You could use a mutex to lock every request so it executes sequentially.
Or use something like Vert.x, which can give you single-threaded networked applications.
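The mutex idea can be sketched outside of gRPC entirely. This Python fragment (illustrative names only, no gRPC API involved) shows how a single lock serializes handlers even when many server threads invoke them at once:

```python
import threading
import time

# A single lock forces handlers to run one at a time, even if the
# server dispatches them from many threads. All names are illustrative.
handler_lock = threading.Lock()
active = 0
max_active = 0

def handle_request(x):
    """Pretend RPC handler; the lock serializes all concurrent calls."""
    global active, max_active
    with handler_lock:
        active += 1
        max_active = max(max_active, active)
        time.sleep(0.01)  # simulate work
        active -= 1
    return x * 2

# Simulate several "server threads" arriving at once.
threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_active)  # 1: handlers never overlapped
```

The trade-off is that requests queue up behind the lock, so long-running handlers will stall every other caller.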

For C#, I found the following to work.
GrpcEnvironment.SetThreadPoolSize(1);


Asynchronous GRPC?

I am working on designing a new system that will take an array of hashes of car data and use it to call a separate API that returns a Boolean; I will then return the car model and either true or false to the original caller.
The system needs to be callable from other applications, so I am looking into gRPC to solve the problem. My question is how to implement this solution in gRPC, and whether something like RabbitMQ would be better.
Would it make sense to build a bidirectional streaming gRPC solution where the client streams in the list of cars, I spawn a delayed job for each request on the server, and when each delayed job finishes processing I return its value to the original caller in the stream?
Is this an elegant solution, or are there better ways to achieve my goal? Thanks.
The streaming system of gRPC is designed for asynchronous communication, so it should fit your use case neatly.
The general design philosophy here is to treat each message sent in the stream as independent. Basically, make sure your proto message contains all the information it needs to be parsed and processed by your application, without needing any context from previous calls.
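As an illustration of that philosophy, a hypothetical proto definition for the car example might look like the sketch below (every name here is made up for illustration; each streamed request carries everything the server needs, with no dependence on earlier messages):

```proto
syntax = "proto3";

// Each request is self-contained: model plus the data to check.
message CarCheckRequest {
  string model = 1;        // car model being checked
  string data_hash = 2;    // hash of the car data to validate
}

// The model is echoed back so the caller can correlate out-of-order replies.
message CarCheckResponse {
  string model = 1;
  bool   valid = 2;        // result from the downstream API
}

service CarChecker {
  rpc CheckCars (stream CarCheckRequest) returns (stream CarCheckResponse);
}
```

Because responses may complete out of order, echoing a correlation field (here, `model`) back to the caller is what lets each response stand alone.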

How to have both a gRPC server and client in the same process to achieve bi-directional communication (not server/client streaming)

I am pretty new to gRPC. I am thinking of using gRPC (Java) for inter-node (server) communication in my use case:
I have my own app logic that does some bookkeeping work on each node;
a node needs to communicate with the others to reach consensus (part of the app logic), which means each node needs to be both a client and a server;
so how can I achieve this? The server seems to block after I call server.awaitTermination(), right? Is there also an async version of the gRPC server in Java? I bet yes, but I am not yet sure how to leverage it.
For example, with nodes A, B, and C, I need gRPC serverA, serverB, and serverC to start first, and each server, say A, needs clients connected to B and C. In addition to the communication part, the app logic in node A should be able to send messages to the other nodes (B and C) via the corresponding clients if needed; and of course the app logic should be notified when requests come in from B and C (because A is itself a server).
I've been searching online for days and have gone through grpc/grpc-java material and code examples; however, I find there are not that many examples showing the best practices and patterns for gRPC... I'd really like to hear whatever suggestions you may have.
Thanks in advance!
Calling server.awaitTermination() in your main() is not required. The examples do so because grpc-java uses daemon threads by default. Thus, in the examples the only non-daemon thread is the main thread, and you need at least one non-daemon thread to keep the JVM running. See the documentation for java.lang.Thread.
awaitTermination() is not a serve_forever() method that processes new requests; awaitTermination() simply blocks the current thread until the grpc server has terminated. Processing happens on other threads.
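The daemon-thread point can be demonstrated without the JVM: Python threads have the same daemon/non-daemon distinction, so this sketch (illustrative only, not grpc-java code) shows why a process whose only remaining threads are daemons exits immediately:

```python
import subprocess
import sys
import textwrap

# The child process starts a daemon thread that would sleep for 60s,
# then lets its main thread return. Because daemon threads do not keep
# the process alive, the child exits at once instead of waiting.
prog = textwrap.dedent("""
    import threading, time
    threading.Thread(target=lambda: time.sleep(60), daemon=True).start()
    print("main thread done")
""")

result = subprocess.run([sys.executable, "-c", prog],
                        capture_output=True, text=True, timeout=10)
print(result.stdout.strip())  # "main thread done" (child returned immediately)
```

This is exactly why the grpc-java examples park the main thread in awaitTermination(): without at least one live non-daemon thread, the whole process would exit while the server's daemon threads were still serving.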

How to make a Windows Service listen for additional request while it is already processing the current request?

I need to build a Windows service in VB.NET under Visual Studio 2003. This service should read a flat file (a huge file of about a million records) from a local folder and upload it to the corresponding database table. This should be done in rollback mode (a database transaction). While transferring data to the table, the service should also be listening for additional client requests, so if a client requests a cancel operation mid-transfer, the service should roll back the transaction and give feedback to the client. The service also writes continuously to two log files with status and error records.
My client is an ASPX page (a website).
Can somebody explain how to organize and achieve this functionality in a Windows service (processing and listening for additional client requests simultaneously, e.g. a cancellation request)?
Also, could you suggest the ideal way of achieving this (e.g. whether it is best implemented as a web service, a Windows service, a remote object, or some other way)?
Thank you all for your help in advance!
You can architect your service to spawn "worker threads" that do the heavy lifting, while it simply listens for additional requests. Because future calls are likely to have to deal with the current worker, this may work better than, say, architecting it as a web service using IIS.
The way I would set it up is: service main thread is listening on a port or pipe for a communication. When it gets a call to process data, it spawns a worker thread, giving it some "status token" (could be as simple as a reference to a boolean variable) which it will check at regular intervals to make sure it should still be running. Thread kicks off, service goes back to listening (network classes maintain a buffer of received data so calls will only fail if they "time out").
If the service receives a call to abort, it will set the token to a "cancel" value. The worker thread will read this value on its next poll and get the message, rollback the transaction and die.
This can be set up to have multiple workers processing multiple files at once, belonging to callers keyed by their IP or some unique "session" identifier you pass back and forth.
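The status-token polling described above can be sketched in Python (the names, the record loop, and the simulated transaction are all illustrative, not .NET APIs):

```python
import threading
import time

# The "status token": the worker polls this shared flag at regular
# intervals and rolls back when it is set.
cancel = threading.Event()
log = []

def worker():
    for record in range(1_000_000):
        if cancel.is_set():
            log.append("rolled back")   # undo the transaction and die
            return
        # ... process one record inside the transaction ...
        if record % 1000 == 0:
            time.sleep(0.001)           # simulate I/O; gives other threads a turn
    log.append("committed")

t = threading.Thread(target=worker)
t.start()
time.sleep(0.01)   # main thread keeps listening; a cancel request arrives
cancel.set()
t.join()
print(log)  # ['rolled back']
```

A boolean checked under a lock works too, but an event-style flag avoids any visibility questions between the listener thread and the worker.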
You can model your design on FTP, which uses two ports: one for commands and another for data transfer.
Consider two classes, one for command parsing and another for data transfer, each running on a separate thread.
Use a communication channel (such as a concurrent queue) between the threads. You can use System.Collections.Concurrent, along with more threading features like CancellationToken, if you move to .NET 4.0 or later.
WCF has advantages over a web service, but comparing it to a Windows service needs more details of your project. In general, WCF is easier to implement than a Windows service.
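The command-channel idea can be sketched with Python's thread-safe queue, playing the role that a System.Collections.Concurrent collection would play in .NET (all names are illustrative):

```python
import queue
import threading

# The communication channel between the command-parsing side and the
# data-transfer side: a thread-safe FIFO queue.
commands = queue.Queue()
transferred = []

def transfer_worker():
    while True:
        cmd = commands.get()      # blocks until a command arrives
        if cmd == "STOP":
            break
        transferred.append(cmd)   # pretend this moves a chunk of data

t = threading.Thread(target=transfer_worker)
t.start()

# The "command" side enqueues work and finally a stop request.
for c in ["chunk-1", "chunk-2", "chunk-3"]:
    commands.put(c)
commands.put("STOP")
t.join()
print(transferred)  # ['chunk-1', 'chunk-2', 'chunk-3']
```

Using a sentinel value ("STOP") to shut the worker down keeps the whole protocol inside the one channel, just as FTP carries its quit command on the control connection.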

How can a LuaSocket server handle several requests simultaneously?

The problem is the inability of my Lua server to accept multiple requests simultaneously.
I attempted to have each client message processed in its own coroutine, but this seems to have failed.
while true do
  local client = server:accept()
  coroutine.resume(coroutine.create(function()
    GiveMessage(client)
  end))
end
This code does not actually seem to accept more than one client message at a time. What is wrong with this method? Thank you for helping.
You will not be able to get true simultaneous handling with coroutines alone: coroutines are for cooperative multitasking, and only one coroutine executes at a time.
The code that you've written is no different from calling GiveMessage() directly in a loop. For that approach to work, you need to write a coroutine dispatcher and find a sensible point for GiveMessage() to yield.
There are at least three solutions, depending on the specifics of your task:
Spawn several forks of your server and handle operations in coroutines in each fork. Control the coroutines with Copas, lua-ev, or a home-grown dispatcher; nothing wrong with that. I recommend this way.
Use Lua states instead of coroutines: keep a pool of states, a pool of worker OS threads, and a queue of tasks, and execute each task in a free Lua state on a free worker thread. This requires some low-level coding and is messier.
Look for existing, more specialized solutions; there are several, but to advise on that I would need to know more about what kind of server you're writing.
Whatever you choose, avoid using a single Lua state from several threads at the same time. (It is possible, with the right amount of coding, but a bad idea.)
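The dispatcher idea can be sketched with Python generators, which behave much like Lua coroutines (all names are illustrative). A coroutine that never yields is just a function call; once the handlers yield between steps, a simple round-robin dispatcher interleaves them:

```python
def give_message(client_id, steps):
    """Pretend per-client handler: does a little work, then yields so
    other clients get a turn (the cooperative-multitasking point)."""
    for step in range(steps):
        yield f"client {client_id} step {step}"

def dispatch(tasks):
    """Round-robin dispatcher: resume each coroutine until all are done."""
    order = []
    while tasks:
        task = tasks.pop(0)
        try:
            order.append(next(task))  # resume the coroutine one step
            tasks.append(task)        # not finished: requeue it
        except StopIteration:
            pass                      # this coroutine is done
    return order

order = dispatch([give_message("A", 2), give_message("B", 2)])
print(order)
# ['client A step 0', 'client B step 0', 'client A step 1', 'client B step 1']
```

The interleaved output is the difference the answer describes: without the yields, client B would not run until client A had finished completely.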
AFAIK coroutines don't play nicely with LuaSocket out of the box, but there is Copas, which you can use.

ASP.net Request Processing

I heard that when a request goes from the browser (client) to IIS, after extension filtering (aspnet_isapi.dll), several named pipe connections are established between the ISAPI DLL and the worker process (w3wp.exe).
What are the names of those pipes? Will those pipes act as a communication channel like the one we have with WCF?
You will find here a superb explanation by Rick Strahl of how ASP.NET works. Yes, named pipes are used in IIS 5 for communication between the ISAPI DLL in the inetinfo process and the worker process, but in IIS 6 this is no longer necessary, since the lowest level of the HTTP stack was moved into the kernel driver HTTP.SYS, which passes requests directly to the worker process.
Named pipes are objects managed by the operating system kernel, for which there is a specific Win32 API. WCF named pipe bindings are built on top of these, but involve a great deal more layered on top of the raw pipe transport. Even in IIS5 where named pipes are used for ASP.NET, these are not used in anything like the same way that WCF uses them, so there is no reason to think of them as connected or analogous in any way. The types in the System.IO.Pipes namespace are a nearer comparison, being much thinner wrappers over the OS pipe API.
This is just a binary pipe, one of the standard ways to communicate between processes in Windows (the others being shared memory and COM+, IIRC). You can obviously have several worker processes, so I am not sure there is a single name for the pipe. And I highly doubt it uses any kind of .NET-serialized data, though I am not sure about this.
