Does each gRPC request on a channel create a new thread on the gRPC server side? - grpc

I'm new to gRPC. I want to know whether the server starts a new thread to process each request when a gRPC client sends one.

There may be up to one Runnable enqueued to the server's executor for application processing at any given time. Each request may generate more than one Runnable over its lifetime, but only one at a time. The default executor is an unbounded cached thread pool, so in the worst case each request initially gets its own thread, but later requests will generally reuse threads created for earlier ones.
It is good practice for high-QPS services to specify a fixed-size executor, to avoid an excessive number of threads and to reduce context-switch thrashing.
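A minimal sketch of that advice in gRPC Java (the service implementation name is hypothetical): the server is given a fixed-size executor instead of the default cached thread pool, so application processing never uses more than a bounded number of threads.

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedExecutorServer {
    public static void main(String[] args) throws Exception {
        // Bounded pool for application processing, regardless of how many RPCs are in flight.
        ExecutorService appExecutor = Executors.newFixedThreadPool(16);

        Server server = ServerBuilder.forPort(8080)
                .executor(appExecutor)            // replaces the default unbounded cached pool
                .addService(new MyServiceImpl())  // hypothetical generated service implementation
                .build()
                .start();

        server.awaitTermination();
    }
}
```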

Related

What does HttpClient do in the face of concurrent requests at exactly the same time?

As we know, it's best practice to use HttpClient as a shared instance (singleton) in .NET Core instead of creating and disposing of it for each request. We do this to prevent the port exhaustion problem (more info here).
My question is: what does HttpClient do when faced with concurrent requests at exactly the same time while it is shared?
For example, assume that request A and request B are sent to a shared HttpClient object at the same time.
My first assumption about its behavior is that it keeps requests A and B in a queue and runs them one after another: first A, then B.
My second assumption is that it uses two separate ports (with two internal threads) and answers both of them at the same time.
Is either of these assumptions correct? If not, any other ideas?
It's safe to say that your first assumption is incorrect and the second assumption is closer to the truth. Each HttpClient instance manages its own connection pool to execute requests concurrently, with int.MaxValue as the default limit on the number of concurrent connections per server.
This means HttpClient will execute all requests concurrently. Every time you send a new request, it tries to reuse an existing connection to the server (if one is idle); otherwise it opens a new connection to execute the request.
There's an excellent article explaining this in detail here:
https://www.stevejgordon.co.uk/httpclient-connection-pooling-in-dotnet-core
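The same shared-client behavior can be demonstrated with a rough Java analogue (java.net.http.HttpClient also keeps a connection pool per instance and is safe to share); the URLs below are placeholders. Both requests are in flight at the same time rather than queued behind each other.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class SharedClientDemo {
    // One shared client for the whole application; it manages its own connection pool.
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) {
        HttpRequest a = HttpRequest.newBuilder(URI.create("https://example.com/a")).build();
        HttpRequest b = HttpRequest.newBuilder(URI.create("https://example.com/b")).build();

        // Both requests are issued immediately; neither waits for the other to finish.
        CompletableFuture<HttpResponse<String>> fa =
                CLIENT.sendAsync(a, HttpResponse.BodyHandlers.ofString());
        CompletableFuture<HttpResponse<String>> fb =
                CLIENT.sendAsync(b, HttpResponse.BodyHandlers.ofString());

        CompletableFuture.allOf(fa, fb).join();
        System.out.println(fa.join().statusCode() + " / " + fb.join().statusCode());
    }
}
```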

High response time vs queuing

Say I have a web service used internally by other web services with an average response time of one minute.
What are the pros and cons of such a service with "synchronous" responses, versus making the service return the id of the request, process it in the background, and have the clients poll for results?
Are there any cons to HTTP connections that stay active for more than one minute? Does the default TCP keep-alive matter here?
Depending on your application, it may matter. A couple of things are worth mentioning:
HTTP protocol is sync
There is a widespread misconception that HTTP is async. HTTP is a synchronous protocol, but your client can deal with it asynchronously. E.g. when you call a service over HTTP, your HTTP client may schedule the call on a background thread (async). However, the HTTP call itself waits until it either times out or the response comes back; during all that time the HTTP call chain is waiting synchronously.
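A small Java sketch of that distinction (the URL and timeout are illustrative): the request is scheduled off the calling thread, but the exchange itself still occupies a connection until the response arrives or the timeout fires.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.CompletableFuture;

public class AsyncClientSyncProtocol {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/slow"))
                .timeout(Duration.ofSeconds(90))  // the exchange may wait up to this long
                .build();

        // Async from the caller's point of view: the calling thread is free immediately...
        CompletableFuture<HttpResponse<String>> future =
                client.sendAsync(request, HttpResponse.BodyHandlers.ofString());

        // ...but nothing completes until the server responds or the timeout is hit.
        future.thenAccept(r -> System.out.println("Status: " + r.statusCode())).join();
    }
}
```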
Sockets
HTTP uses sockets, and there is a hard limit on sockets. Every HTTP connection (if a new one is created every time) opens a new socket. If you have hundreds of requests at a time, you can imagine how many HTTP calls are scheduled synchronously, and you may run out of sockets. I'm not sure about other operating systems, but on Windows, even when you are done with a request, its sockets are not disposed of straight away and linger for a couple of minutes.
Network Connectivity
Keeping an HTTP connection alive for a long time is not recommended. What if you lose the network partially or completely? Your HTTP request would time out and you wouldn't know the status at all.
Keeping all these things in mind, it's better to schedule long-running tasks as a background process.
If you keep the user waiting while your long job is running on the server, you are tying up a valuable HTTP connection while waiting.
Best practice from a RESTful point of view is to reply with HTTP 202 (Accepted) and return a response with a link to poll.
If you want to hang the client while waiting, you should set a request timeout at the client end.
If you have firewalls in between, they might drop connections that are inactive for some time.
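A minimal sketch of the 202-and-poll pattern in a Java servlet (the URL layout, class names, and the in-memory queue/status map are illustrative stand-ins for a real message queue and job store): the request is acknowledged immediately with 202 Accepted and a Location header the client can poll, while the long work runs on a background worker.

```java
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;
import java.io.IOException;
import java.util.UUID;
import java.util.concurrent.*;

@WebServlet("/jobs")
public class SubmitJobServlet extends HttpServlet {
    // In-memory stand-ins for a real message queue and job-status store.
    private static final ExecutorService WORKERS = Executors.newFixedThreadPool(4);
    private static final ConcurrentMap<String, String> STATUS = new ConcurrentHashMap<>();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String jobId = UUID.randomUUID().toString();
        STATUS.put(jobId, "PENDING");

        WORKERS.submit(() -> {             // the one-minute job runs off the HTTP thread
            doExpensiveWork();             // hypothetical long-running task
            STATUS.put(jobId, "DONE");
        });

        resp.setStatus(HttpServletResponse.SC_ACCEPTED);           // 202
        resp.setHeader("Location", "/jobs/" + jobId + "/status");  // where the client polls
    }

    private void doExpensiveWork() { /* ... */ }
}
```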
Higher Response Throughput
Typically, you want your OLTP system (web server) to respond as quickly as possible. Since you are queuing the task in the background, your web server can handle more requests, which results in higher response throughput and processing capacity.
More Memory Friendly
Queuing long-running tasks as background jobs via messaging queues prevents excessive use of web server memory. This is good because it raises the out-of-memory threshold of your application.
More Resilient to Server Crash
If you queue the task in the background and something goes wrong, the job can be moved to a dead-letter queue, which helps you ultimately fix the problem and re-process the request that caused the unhandled exception.

Why create a new thread with startAsync instead of doing the work in the servlet thread?

In Servlet 3.0 one can use startAsync to put long work in another thread, so that you can free up the servlet thread.
It seems that I'm missing something, because I don't see why not just use the servlet thread for the work. Is the thread created by startAsync somehow cheaper?
In most situations when handling requests you are blocking or waiting on some external resource/condition. In this case you are occupying the thread (hence a lot of memory) without doing any work.
With Servlet 3.0 you can serve thousands of concurrent connections, many more than the available threads. Think about an application that provides downloading of files with limited throughput. Most of the time your threads are idle because they are waiting to send the next chunk of data. In ordinary servlets you cannot serve more clients than the number of your HTTP threads, even though most of the time these threads are idle/sleeping.
In servlet 3.0 you can have thousands of connected clients with few HTTP threads. You can find a real world example in my article: Tenfold increase in server throughput with Servlet 3.0 asynchronous processing inspired by this question: Restrict download file bandwidth/speed in Servlet
Is the thread created by startAsync somehow cheaper?
There is no thread created by startAsync! It just tells the servlet container: hey, although the doGet/doPost method has finished, I am not done with this request, please do not close it. That's the whole point - you generally won't create a new thread per async request. Here is another example - you have thousands of browsers waiting for a stock price change using Comet. In standard servlets this would mean thousands of idle threads waiting for some event.
With Servlet 3.0 you can just keep all asynchronous requests waiting in an ArrayList or some queue. When the stock price change arrives, send it to all clients one after another. No more than one thread is needed in this scenario - and all HTTP threads are free to process other requests.
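A minimal sketch of that idea (class names and the onPriceChange hook are hypothetical; error handling is omitted): the servlet parks each request's AsyncContext in a queue without holding a thread, and whatever observes the price change completes them all.

```java
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

@WebServlet(urlPatterns = "/price", asyncSupported = true)
public class StockPriceServlet extends HttpServlet {
    // Parked requests; no HTTP thread is tied up while they wait.
    private static final Queue<AsyncContext> WAITING = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();  // doGet returns, but the response stays open
        ctx.setTimeout(30_000);
        WAITING.add(ctx);
    }

    // Called by whatever component observes the price change (hypothetical hook).
    public static void onPriceChange(String newPrice) {
        AsyncContext ctx;
        while ((ctx = WAITING.poll()) != null) {
            try {
                ctx.getResponse().getWriter().write(newPrice);
            } catch (IOException ignored) {
            }
            ctx.complete();                   // now the request is really finished
        }
    }
}
```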
With Servlet 3.0 you can just keep all asynchronous requests waiting in an ArrayList or some queue
The problem is this: you still need some thread to process the request and to pick it up again to finally send the response.
So we free up HTTP threads, but we still have to create some thread to process the request.

What are the different approaches in developing a web-server?

What are the different approaches in developing a web-server?
So I guess there are (1) multi-threaded and (2) event-loop; is there anything else? What are the pros/cons of each approach? When would you use each? Can you list specific implementations for each approach?
Different approaches can be:
1. Single-threaded: all connections are handled by a single thread that listens for connections and processes requests. It is simple to implement, but it is the most useless kind of server, as it can only serve one request at a time.
2. Multithreaded: the server listens for requests, and each incoming request is allocated a new thread to handle it, so each client connection is handled by its own dedicated thread. Unlike (1), this approach supports concurrent processing of client requests, but it does not scale well, since each new request creates a new thread on the server and that takes a lot of resources; eventually the server will hit a limit.
3. Multithreaded with pools: the same idea as (2), but instead of creating a new thread to handle each incoming request, a thread from a thread pool is used; i.e. threads are created and placed in a pool for later reuse. This scales very well, supports many concurrent client requests, and is the standard approach; e.g. Tomcat works like this (see the sketch after this list).
4. Event queue: each incoming request is placed into a queue and is processed by a background thread that takes requests off the queue. It is non-blocking, and this type of asynchronous processing also scales well. To be honest, I am not sure whether it is better than (3) in performance; I think Tomcat can be configured for this using the NIO architecture.
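A rough Java sketch of approach (3), with handleRequest as a trivial placeholder: a single accept loop hands each connection to a fixed-size thread pool.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(64);  // bounded worker pool

        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                Socket client = serverSocket.accept();     // one thread accepts connections...
                pool.submit(() -> handleRequest(client));  // ...pooled workers handle them
            }
        }
    }

    // Placeholder for real request parsing and response writing.
    private static void handleRequest(Socket client) {
        try (client) {
            client.getOutputStream().write(
                    "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n".getBytes());
        } catch (IOException ignored) {
        }
    }
}
```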
You should add non-blocking I/O. Have a look at Netty.
Some servers, like G-WAN, mix thread pools and event queues, letting the server saturate CPU cores with each thread processing many connections.
Disclaimer: I am involved in the development of this project.
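For contrast, here is a bare-bones sketch of the non-blocking style that Netty (and Tomcat's NIO connector) builds on, under the simplifying assumption that requests are handled inline: a single thread multiplexes many connections with a Selector instead of dedicating a thread to each one.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class EventLoopServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();  // one thread waits for any channel to become ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {       // new connection: register it, no new thread
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {  // data ready: handle it inline
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) == -1) {
                        client.close();         // peer closed the connection
                    }
                    // a real server would parse the request and queue a response here
                }
            }
        }
    }
}
```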

Erlang accept incoming tcp connections dynamically

What I am trying to solve: have an Erlang TCP server that listens on a specific port (the code should reside in some kind of external-facing interface/API), where each incoming connection is handled by a gen_server (that is, even the gen_tcp:accept should be coded inside the gen_server), but I don't actually want to initially spawn a predefined number of processes that accept incoming connections. Is that somehow possible?
Basic Procedure
You should have one static process (implemented as a gen_server or a custom process) that performs the following procedure:
Listens for incoming connections using gen_tcp:accept/1
Every time it returns a connection, tell a supervisor to spawn a worker process (e.g. another gen_server process)
Get the pid for this process
Call gen_tcp:controlling_process/2 with the newly returned socket and that pid
Send the socket to that process
Note: You must do it in that order, otherwise the new process might use the socket before ownership has been handed over. If this is not done, the old process might get messages related to the socket when the new process has already taken over, resulting in dropped or mishandled packets.
The listening process should only have one responsibility, and that is spawning workers for new connections. This process will block when calling gen_tcp:accept/1, which is fine because the started workers will handle ongoing connections concurrently. Blocking on accept ensures the quickest response time when new connections are initiated. If the process needs to do other things in between, gen_tcp:accept/2 could be used with other actions interleaved between timeouts.
Scaling
You can have multiple processes waiting with gen_tcp:accept/1 on a single listening socket, further increasing concurrency and minimizing accept latency.
Another optimization would be to pre-start some socket workers to further minimize latency after accepting the new socket.
A third and final optimization would be to make your processes more lightweight by implementing the OTP design principles in your own custom processes using proc_lib (more info). However, you should only do this if you benchmark and conclude that it is the gen_server behavior that slows you down.
The issue with gen_tcp:accept is that it blocks, so if you call it within a gen_server, you block the server from receiving other messages. You can try to avoid this by passing a timeout, but that ultimately amounts to a form of polling, which is best avoided. Instead, you might try Kevin Smith's gen_nb_server; it uses an internal undocumented function, prim_inet:async_accept, and other prim_inet functions to avoid blocking.
You might want to check out http://github.com/oscarh/gen_tcpd and use the handle_connection function to convert the process you get to a gen_server.
You should use "prim_inet:async_accept(Listen_socket, -1)" as Steve said. The incoming connection will then be accepted by your handle_info callback (assuming your interface is also a gen_server), since you have used an asynchronous accept call.
On accepting the connection you can spawn another gen_server (I would recommend gen_fsm) and make that the "controlling process" by calling "gen_tcp:controlling_process(CliSocket, Pid of spawned process)".
After this, all the data from the socket will be received by that process rather than by your interface code. In the same way, a new controlling process will be spawned for each new connection.
