What is the behavior of FlurlClient with simultaneous requests?

As far as I know, in HTTP 1.1 you can use the same TCP/IP connection for multiple requests, but you can't execute more than one request at a time on that connection. In other words, it has to go like: Request, Response, Request, Response, Request, .... You can't do something like: Req1, Req2, Resp1, Req3, Resp3, Resp2. Maybe you can with HTTP/2, I don't know.
Anyway, my question is: what happens if you try to send multiple simultaneous requests with FlurlClient?
Like:
using (var client = new FlurlClient("https://api.com"))
{
    var req1Task = client.Request("path1").PostJsonAsync(thing);
    var req2Task = client.Request("path2").GetAsync();
    await Task.WhenAll(req1Task, req2Task);
    // Do something with responses.
}
I know the answer for HttpClient.
The answer is that if you try to start another request on HttpClient when a request is already pending, HttpClient will create a new TCP/IP connection. Whereas if you had waited until the first request was done, HttpClient would reuse the connection for the 2nd request.
My guess is that FlurlClient is the same.

Your assumption is correct about FlurlClient behaving the same as HttpClient in that regard. Flurl is just a thin abstraction layer on top of HttpClient, which is itself an abstraction on top of several more layers. Eventually you hit the platform-specific networking stack that actually implements the protocols.
It is valid and (usually) smart to have multiple calls happening concurrently, as you've done in your example. Once the connection limit is hit (which is adjustable via ServicePointManager), requests simply queue up until a connection becomes available. Just make sure that limit doesn't get too high, or you'll likely start receiving errors from the server. Also, as with HttpClient, be sure you're reusing a FlurlClient instance as much as possible so you don't run into socket exhaustion.
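To make that concrete, here is a minimal sketch of the reuse advice, assuming the .NET Framework stack (the host, paths, and the limit of 20 are made-up values): one FlurlClient is created once and shared, and ServicePointManager.DefaultConnectionLimit caps how many connections are opened to the host before further requests queue.

using System.Net;
using System.Threading.Tasks;
using Flurl.Http;

public static class ApiGateway
{
    // One client per remote host, reused for the lifetime of the app so its
    // underlying HttpClient (and its connections) can be pooled.
    private static readonly FlurlClient Client = new FlurlClient("https://api.com");

    static ApiGateway()
    {
        // Cap concurrent connections per host (applies to the .NET Framework stack);
        // extra requests queue until a connection frees up.
        ServicePointManager.DefaultConnectionLimit = 20;
    }

    public static async Task CallBothAsync(object thing)
    {
        var post = Client.Request("path1").PostJsonAsync(thing);
        var get = Client.Request("path2").GetAsync();
        await Task.WhenAll(post, get); // both calls are in flight concurrently
    }
}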

Related

how top-level async web requests handle the response

I have a fundamental question about how async requests work at the top level.
Imagine we have a top-level route called HomePage(). This route is async, and within it we call 10 different APIs before sending the response (imagine it takes about 5 seconds; remember, this is an example to understand the concept and these numbers are just for learning purposes). All of these API requests are awaited, so the request handler releases the thread handling this request and goes on to handle other requests until the responses from those APIs come back. So let's add this constraint: our network card can handle only 1 connection, and that one is held open until the response for the request to HomePage() is ready. Therefore we cannot accept any other requests to the server, so what's the difference compared to doing the whole thing synchronously from the beginning? We cannot drop the connection for the first request to HomePage(), because then how would we ever send back the response for that request, and we cannot handle new requests because the connection is kept open.
I suspect that my problem is in how the response is sent back for top-level async routes.
Can anybody give a deep-dive explanation of how these requests are handled, such that the server can take more requests and still send back the response (because if it can send back a response, the connection HAS TO HAVE been kept alive)? Examples would be much appreciated.
So let's add this constraint: our network card can handle only 1 connection
That constraint cannot exist. Network cards handle packets, not connections. Connections are a virtual construct that exist in the host computer.
Can anybody give a deep-dive explanation of how these requests are handled, such that the server can take more requests and still send back the response (because if it can send back a response, the connection HAS TO HAVE been kept alive)?
Of course the connection is kept alive. The top-level async method will return the thread to the thread pool, where it is available to handle any other requests.
If you have some artificial constraint on your web app that prevents it from having more than one connection, then there won't be any other requests to handle, and the thread pool threads will do nothing.
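To make this concrete, here is a minimal sketch (a hypothetical ASP.NET Core minimal-API endpoint; the backend URL is made up). While the ten awaited calls are in flight, no thread is tied to the HomePage request; the caller's connection stays open at the socket level, and a pool thread only picks the request back up to write the response once the awaits complete.

using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

app.MapGet("/HomePage", async (IHttpClientFactory factory) =>
{
    var client = factory.CreateClient();

    // Ten outbound API calls, all started up front and awaited together.
    var calls = Enumerable.Range(1, 10)
        .Select(i => client.GetStringAsync($"https://api.example.com/data/{i}"));

    // No thread is blocked here; the thread returns to the pool and the
    // continuation runs when the underlying I/O completes.
    var results = await Task.WhenAll(calls);

    return Results.Ok(results);
});

app.Run();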

https and websocket handler with different deadlines

I have a toy proxy server that accepts connections on a port. I set some deadlines for read/write operations to avoid having too many idle connections from bad clients that fail to close properly.
The problem is that I would like to set a higher deadline for connections that go to websockets (wss in particular). For plain http requests I can see the 101 Switching Protocols response, but https/wss is trickier, since I mostly do an io.CopyBuffer from the src connection to the dst connection and I don't see anything "websocket related" in the initial proxy connection that would let me differentiate between https and wss and apply the proper deadline.
I've included a debug screenshot of such a request towards a wss:// demo server.
Any ideas?
One cannot reliably distinguish between "normal" HTTP traffic and Websockets just from looking at the encrypted data.
One can try some heuristics by looking at traffic patterns, i.e. how much data is transferred in each direction, over what time, and with what idle periods in between. Such heuristics can be based on the assumption that HTTP is a request + response protocol with typically small requests shortly followed by larger responses, while Websockets can show arbitrary traffic patterns.
But arbitrary traffic patterns also means that Websockets might be used in a request + response way too. And in some use cases the usage pattern for HTTP consists of mostly larger requests followed by small responses. So depending on the kind of application, such heuristics might succeed or they might fail.
It is always good practice to define a global Server timeout to make sure that resources are not locked forever. That timeout should be no smaller than the longest timeout used in any handler.
DefaultServer = &http.Server{
    Handler: http.TimeoutHandler(handler, wssTimeout, timeoutResponse),
    ...
}
In the handler that processes http and wss requests, we then set the timeout dynamically.
func (proxy *ProxyHttpServer) handleHttps(w http.ResponseWriter, r *http.Request) {
    // The request Context is cancelled if the client's connection closes, if the
    // request is canceled (with HTTP/2), or if the Server we created above times out.
    // All code down the stack should respect this ctx.
    ctx := r.Context()
    timeout := httpTimeout
    if itIsWSS(r) {
        timeout = wssTimeout
    }
    ctx, cancel := context.WithTimeout(ctx, timeout)
    defer cancel()
    // All code below should use ctx instead of context.Background()/TODO().
}
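The itIsWSS helper above is left undefined; here is one possible (heuristic) sketch of it, under the assumption that plain-HTTP WebSocket upgrades are detected from the Upgrade header, while for CONNECT tunnels (https/wss) you can only fall back to something like a host allow-list, since the tunnelled bytes are encrypted. The allow-list contents are made up.

import (
    "net/http"
    "strings"
)

// Hypothetical allow-list of hosts known to serve WebSocket endpoints.
var knownWSSHosts = map[string]bool{
    "ws.example.com:443": true,
}

// itIsWSS guesses whether a proxied request is (or will become) a WebSocket.
func itIsWSS(r *http.Request) bool {
    // Plain-HTTP upgrades are visible to the proxy.
    if strings.EqualFold(r.Header.Get("Upgrade"), "websocket") {
        return true
    }
    // For CONNECT we only see host:port, so this is a guess at best.
    if r.Method == http.MethodConnect {
        return knownWSSHosts[r.Host]
    }
    return false
}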

What does HttpClient do in the face of concurrent requests at exactly the same time?

As we know, it's best practice to use HttpClient as a shared (singleton) instance in .NET Core instead of creating and disposing of it for each request. We do this to prevent the port exhaustion problem (more info here).
My question is: what does HttpClient do in the face of concurrent requests sent at exactly the same time, when it is shared?
For example, assume that Request A and Request B are sent to a shared HttpClient object at the same time.
My first assumption about its behavior is that it keeps requests A and B in a queue and runs them one after another: first A, then B.
My second assumption is that it uses two different ports (with two internal threads) and answers both of them at the same time.
Is either assumption correct? If not, what actually happens?
It's safe to say that your first assumption is incorrect and the second is closer to the truth. Each HttpClient instance manages its own connection pool and executes requests concurrently, with int.MaxValue as the default limit on the number of concurrent connections per server.
That means HttpClient will execute all requests concurrently. Every time you send a new request, it will try to use an existing connection to the server (if one is not in use); otherwise it opens a new connection to execute the request.
There's an excellent article explaining this in details here:
https://www.stevejgordon.co.uk/httpclient-connection-pooling-in-dotnet-core
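As a rough illustration of that pooling behavior (the URLs and the limit of 100 here are made-up values), one shared HttpClient serves two requests started at the same time; if the first pooled connection is busy, the handler simply opens a second one, up to MaxConnectionsPerServer:

using System.Net.Http;
using System.Threading.Tasks;

public static class SharedHttp
{
    // Shared for the whole application to avoid port exhaustion.
    private static readonly HttpClient Client = new HttpClient(
        new SocketsHttpHandler { MaxConnectionsPerServer = 100 });

    public static async Task RunBothAsync()
    {
        // Request A and Request B start at the same time.
        var a = Client.GetStringAsync("https://api.example.com/a");
        var b = Client.GetStringAsync("https://api.example.com/b");

        // Each request grabs an idle pooled connection or opens a new one;
        // neither waits for the other.
        await Task.WhenAll(a, b);
    }
}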

Netty HTTP 1.1 Pipelining Support

I need to send multiple async requests to a REST server through the same connection and get them executed in FIFO order; I think HTTP 1.1 pipelining is perfect for this.
I found some related issues on Netty but I couldn't find much on their user guide and nothing on their test cases.
Is HTTP 1.1 pipelining supported on Netty? How would that be implemented?
An example would be greatly appreciated.
Related -unanswered- question: HTTP 1.1 pipelining vs HTTP 2 multiplexing
Since Netty is closer to the TCP layer than to the HTTP layer, sending multiple requests is easy: after setting up the pipeline, just write them.
HttpRequest request1 = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
request1.headers().set(HttpHeaderNames.HOST, host);
request1.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
request1.headers().set(HttpHeaderNames.ACCEPT_ENCODING, HttpHeaderValues.GZIP);
channel.writeAndFlush(request1);
HttpRequest request2 = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
request2.headers().set(HttpHeaderNames.HOST, host);
request2.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
request2.headers().set(HttpHeaderNames.ACCEPT_ENCODING, HttpHeaderValues.GZIP);
channel.writeAndFlush(request2);
Then, inside your channelRead method, read the responses in the same order you sent the requests.
To properly manage the queue of pending requests, you could use a solution like this, where you basically keep a queue so you know which callback to call after a request completes.
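As a sketch of that queue idea (the class and field names are mine, not from the linked solution, and an HttpObjectAggregator is assumed in the pipeline so that full responses arrive), every write enqueues a future and channelRead0 completes the oldest one, relying on the FIFO ordering of pipelined HTTP/1.1 responses:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;

import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;

public class PipelinedClientHandler extends SimpleChannelInboundHandler<FullHttpResponse> {

    // Responses come back in the same order the requests were written, so the
    // head of this queue always belongs to the oldest unanswered request.
    private final Queue<CompletableFuture<FullHttpResponse>> pending = new ConcurrentLinkedQueue<>();

    // Assumed to be called from a single thread (e.g. the channel's event loop)
    // so that queue order matches write order.
    public CompletableFuture<FullHttpResponse> send(ChannelHandlerContext ctx, FullHttpRequest request) {
        CompletableFuture<FullHttpResponse> future = new CompletableFuture<>();
        pending.add(future);
        ctx.writeAndFlush(request);
        return future;
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpResponse response) {
        CompletableFuture<FullHttpResponse> future = pending.poll();
        if (future != null) {
            // retain() because SimpleChannelInboundHandler releases the message
            // after this method returns; the consumer must release() it.
            future.complete(response.retain());
        }
    }
}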

Async Netty HttpServer and HttpClient

I have been exploring Netty for the past few days, as I am writing a quick and tight HTTP server that should receive lots of requests, and Netty's HTTP server implementation is quite simple and does the job.
My next step is that, as part of the request handling, I need to launch an HTTP request to an external web server. My intuition is to implement an asynchronous client that can send a lot of requests simultaneously, but I am a little confused about the right approach. My understanding is that a Netty server uses a worker thread for each incoming message, so that worker thread would not be freed to accept new messages until my handler finishes its work.
Here is the catch: even if I have an asynchronous HTTP client in hand, it won't matter if I need to wait for each response and process it back with my server handler; the same worker thread would remain blocked all this time. The alternative is to use the async nature of the client, returning a future object quickly to release the thread and attach a listener (meaning I have to return a 200 or 202 status to the client), and check my future object to know when the response is received so I can push it to the client.
Does this make sense? Am I way off with my assumptions? What is a good practice to implement such kind of Netty acceptor server + external client with high concurrency?
Thanks,
Assuming you're asking about Netty 4.
Netty configured with a ServerBootstrap will have a fixed number of worker threads that it uses to accept requests and execute the channel, like so:
Two threads accepting / processing requests:
bootstrap.group(new NioEventLoopGroup(2));
One thread accepting requests, two threads processing:
bootstrap.group(new NioEventLoopGroup(1), new NioEventLoopGroup(2));
In your case, your channel pipeline includes a bunch of HTTP codec decoding/encoding handlers plus your own handler, which itself makes an outgoing HTTP request. You're right that you don't want to block the server from accepting incoming requests or from decoding incoming HTTP messages, and there are two things you can do to mitigate that; you've already struck on the first.
Firstly, you want to use an async Netty client to make the outgoing requests, and have a listener write the response to the original request's channel when the outgoing request returns. This means you don't block and wait, so you can handle many more concurrent outgoing requests than the number of threads available to process them.
Secondly, you can have your custom handler run in its own EventExecutorGroup, which means it runs in a separate threadpool from the acceptor / http codec channel handlers, like so:
// Two separate threads to execute your outgoing requests..
EventExecutorGroup separateExecutorGroup = new DefaultEventExecutorGroup(2);

bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) {
        ChannelPipeline pipeline = ch.pipeline();
        // ... http codec stuff ...
        pipeline.addLast(separateExecutorGroup, customHandler);
    }
});
Meaning your outgoing requests don't hog the threads that would be used for accepting / processing incoming ones.
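As a sketch of the first point (to keep it short this uses the JDK's async HttpClient in place of a Netty HTTP client, and the backend URL is made up), the handler fires the outgoing call, returns immediately, and writes the response to the original channel from the completion callback:

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.CharsetUtil;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProxyingHandler extends SimpleChannelInboundHandler<FullHttpRequest> {

    private static final HttpClient BACKEND = HttpClient.newHttpClient();

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest incoming) {
        HttpRequest outgoing = HttpRequest
                .newBuilder(URI.create("https://backend.example.com/data"))
                .build();

        // Returns immediately; the event loop thread is free to serve other channels.
        BACKEND.sendAsync(outgoing, HttpResponse.BodyHandlers.ofString())
               .thenAccept(backendResponse -> {
                   FullHttpResponse reply = new DefaultFullHttpResponse(
                           HttpVersion.HTTP_1_1,
                           HttpResponseStatus.OK,
                           Unpooled.copiedBuffer(backendResponse.body(), CharsetUtil.UTF_8));
                   reply.headers().set(HttpHeaderNames.CONTENT_LENGTH, reply.content().readableBytes());
                   // Writing from another thread is safe: Netty hands the write
                   // over to the channel's event loop.
                   ctx.writeAndFlush(reply).addListener(ChannelFutureListener.CLOSE);
               });
    }
}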
