I need to send multiple async requests to a REST server over the same connection and have them executed in FIFO order; HTTP/1.1 pipelining seems perfect for this.
I found some related issues in Netty's issue tracker, but I couldn't find much in the user guide and nothing in the test cases.
Is HTTP 1.1 pipelining supported on Netty? How would that be implemented?
An example would be greatly appreciated.
Related (unanswered) question: HTTP 1.1 pipelining vs HTTP/2 multiplexing
Since Netty is closer to the TCP layer than to the HTTP layer, sending multiple requests is easy: after setting up the pipeline, just write them.
// First pipelined request
HttpRequest request1 = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
request1.headers().set(HttpHeaderNames.HOST, host);
request1.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
request1.headers().set(HttpHeaderNames.ACCEPT_ENCODING, HttpHeaderValues.GZIP);
channel.writeAndFlush(request1);

// Second pipelined request, written without waiting for the first response
HttpRequest request2 = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
request2.headers().set(HttpHeaderNames.HOST, host);
request2.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
request2.headers().set(HttpHeaderNames.ACCEPT_ENCODING, HttpHeaderValues.GZIP);
channel.writeAndFlush(request2);
Then, inside your channelRead method, read the responses in the same order you sent the requests.
To properly manage the queue of in-flight requests, you could use a solution like the following, where you basically keep a queue so you know the correct callback to call after a request completes.
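A minimal sketch of that queue idea (Netty 4). The class name PipelinedClientHandler and the send() helper are hypothetical, not part of Netty; it assumes an HttpObjectAggregator sits earlier in the pipeline so whole responses arrive, and it relies on HTTP/1.1 pipelining delivering responses in request order:

import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpRequest;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

public class PipelinedClientHandler extends SimpleChannelInboundHandler<FullHttpResponse> {
    // FIFO queue of callbacks, one per in-flight request.
    private final Queue<Consumer<FullHttpResponse>> callbacks = new ConcurrentLinkedQueue<>();

    // Use this instead of calling channel.writeAndFlush(request) directly.
    public void send(Channel channel, HttpRequest request, Consumer<FullHttpResponse> callback) {
        callbacks.offer(callback); // enqueue in write order
        channel.writeAndFlush(request);
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpResponse response) {
        // Responses arrive in request order, so the head of the queue
        // is the callback that belongs to this response.
        Consumer<FullHttpResponse> callback = callbacks.poll();
        if (callback != null) {
            callback.accept(response);
        }
    }
}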
Related
How can I abort the HTTP Request and not close the connection while implementing a netty HTTP client?
At the moment I am using NioSocketChannel, and I am unsure whether doClose is the only option.
Is there a more generic way of cancelling the request that would work with any kind of socket channel, e.g. KQueue?
Netty's ChannelFuture extends the Netty Future, which provides a cancel method: https://netty.io/4.0/api/io/netty/util/concurrent/Future.html#cancel-boolean-
I'm pretty sure that's what you're looking for.
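A minimal sketch, assuming you keep the ChannelFuture that writeAndFlush returns; note that cancellation only succeeds if the write hasn't already completed:

ChannelFuture writeFuture = channel.writeAndFlush(request);
// Attempt to cancel; returns false if the operation already completed or failed.
boolean cancelled = writeFuture.cancel(true); // true = may interrupt if running
if (!cancelled) {
    // Too late: the request (or part of it) already went out on the wire.
}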
doClose closes the channel where a channel is described as "A nexus to a network socket or a component which is capable of I/O operations such as read, write, connect, and bind." So that would close the connection, as you're seeing. It doesn't look like that's going to help you.
I'm a bit rusty on nuances of the HTTP protocol and I'm wondering if it can support publish/subscribe directly?
HTTP is a request/response protocol: the client sends a request and the server sends back a response.
In HTTP 1.0, a new connection was made for each request.
HTTP 1.1 improved on this by allowing the client to keep the connection open and make multiple requests over it.
I realise you can upgrade an HTTP connection to a WebSocket for fast two-way communication. What I'm curious about is whether this is strictly necessary.
For example if I request a resource "http://somewhere.com/fetch/me/slowly"
Is the server free to reply twice directly, first with a 202 Accepted and then shortly later with the content when it is ready, but without the client sending an additional request in between?
i.e.
Client: GET http://somewhere.com/fetch/me/slowly
Server: 202 "please wait..."
Server: 200 "here's your document"
Would it be correct to implement a publish/subscribe service this way?
For example:
Client: http://somewhere.com/subscribe
Server: item 1
...
Server: item 2
I get the impression that this 'might' work, because clients typically have an event loop watching the connection, but that it is technically wrong (a client following the protocol need not be implemented that way).
However, if you use chunked transfer encoding this would work.
HTTP/2 seems to allow this as well but I'm not clear whether something changed to make it possible.
I haven't seen much discussion of this in relation to pub/sub, so what, if anything, is wrong with using plain HTTP/1.1 with or without chunked encoding?
If this works why do you need things like RSS or ATOM?
An HTTP request can have multiple 'responses', but those responses all have status codes in the 1xx range, such as 102 Processing.
However, these interim responses consist only of headers, never bodies.
HTTP/1.1 (like 1.0 before it) is a request/response protocol; sending an unsolicited response is not allowed. HTTP/2 is a frame-based protocol that adds server push, which lets the server offer extra data and handle multiple requests in parallel, but it doesn't change the request/response nature.
It is possible to keep an HTTP connection open and keep sending more data, though; many (audio, video) streaming services do this.
However, that just looks like one continuous body that keeps on streaming, rather than many separate HTTP responses.
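As a rough illustration of that streaming approach, here is a minimal Netty 4.1 server-side sketch: a single chunked response stays open and each new item is written as one more chunk of the same body. The class and method names (StreamSketch, startStream, pushItem, endStream) are hypothetical, for illustration only:

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.DefaultHttpContent;
import io.netty.handler.codec.http.DefaultHttpResponse;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.util.CharsetUtil;

public final class StreamSketch {
    void startStream(ChannelHandlerContext ctx) {
        HttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        HttpUtil.setTransferEncodingChunked(response, true); // Transfer-Encoding: chunked
        ctx.writeAndFlush(response); // headers only; the body follows as chunks
    }

    void pushItem(ChannelHandlerContext ctx, String item) {
        // Each published item becomes one more chunk of the still-open response body.
        ctx.writeAndFlush(new DefaultHttpContent(Unpooled.copiedBuffer(item + "\n", CharsetUtil.UTF_8)));
    }

    void endStream(ChannelHandlerContext ctx) {
        ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT); // terminates the response
    }
}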
If this works, why do you need things like RSS or ATOM?
Because keeping a TCP connection open is not free.
As far as I know, in HTTP 1.1 you can use the same TCP/IP connection for multiple requests, but you can't execute more than one request at a time on that connection. In other words, it has to go like: Request, Response, Request, Response, Request, .... You can't do something like: Req1, Req2, Resp1, Req3, Resp3, Resp2. Maybe you can with HTTP/2; I don't know.
Anyway, my question is: what happens if you try to send multiple simultaneous requests with FlurlClient?
Like:
using (var client = new FlurlClient("https://api.com"))
{
var req1Task = client.Request("path1").PostJsonAsync(thing);
var req2Task = client.Request("path2").GetAsync();
await Task.WhenAll(req1Task, req2Task);
// Do something with responses.
}
I know the answer for HttpClient.
The answer is that if you try to start another request on HttpClient when a request is already pending, HttpClient will create a new TCP/IP connection. Whereas if you had waited until the first request was done, HttpClient would reuse the connection for the 2nd request.
My guess is that FlurlClient is the same.
Your assumption is correct about FlurlClient behaving the same as HttpClient in that regard. Flurl is just a thin abstraction layer on top of HttpClient, which is itself an abstraction on top of several more layers. Eventually you hit the platform-specific networking stack that actually implements the protocols.
It is valid and (usually) smart to have multiple calls happening concurrently, like you've done in your example. Once connection limits are hit (they're adjustable via ServicePointManager), requests will simply queue up until a connection is available. Just be sure that number doesn't get too high, or you'll likely start receiving errors from the server. Also, as with HttpClient, be sure you're reusing a FlurlClient instance as much as possible so you don't run into this problem.
I have been exploring Netty for the past few days, as I am writing a quick and tight HTTP server that should receive lots of requests, and Netty's HTTP server implementation is quite simple and does the job.
My next step is that, as part of the request handling, I need to launch an HTTP request to an external web server. My intuition is to implement an asynchronous client that can send a lot of requests simultaneously, but I am a little confused about what the right approach is. My understanding is that a Netty server uses a worker thread for each incoming message, so that worker thread would not be freed to accept new messages until my handler finishes its work.
Here is the catch: even if I have an asynchronous HTTP client in hand, it won't matter if I need to wait for each response and process it back with my server handler; the same worker thread would remain blocked all that time. The alternative is to use the async nature of the client, returning a future object quickly to release the thread and placing a listener (meaning I have to return a 200 or 202 status to the client), then checking my future object to know when the response is received so I can push it to the client.
Does this make sense? Am I way off with my assumptions? What is a good practice for implementing this kind of Netty acceptor server plus external client with high concurrency?
Thanks,
Assuming you're asking about Netty 4.
Netty configured with a ServerBootstrap will have a fixed number of worker threads that it uses to accept connections and run the channel handlers, like so:
Two threads accepting / processing requests:
bootstrap.group(new NioEventLoopGroup(2));
One thread accepting connections, two threads processing requests:
bootstrap.group(new NioEventLoopGroup(1), new NioEventLoopGroup(2));
In your case, your channel pipeline includes a bunch of HTTP codec decoding/encoding handlers plus your own handler, which itself makes an outgoing HTTP request. You're right that you don't want to block the server from accepting incoming requests or from decoding the incoming HTTP message, and there are two things you can do to mitigate that; you've struck on the first already.
Firstly, you want to use an async Netty client to make the outgoing requests, and have a listener write the response to the original request's channel when the outgoing request returns, as in the sketch below. This means you don't block and wait, so you can handle many more concurrent outgoing requests than the number of threads available to process them.
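A minimal sketch of that listener pattern, with some assumptions for illustration: callBackend() is a hypothetical method returning a Netty io.netty.util.concurrent.Future<FullHttpResponse>, and serverCtx is the ChannelHandlerContext of the original incoming request.

// Fire the outgoing request without blocking the server's worker thread.
Future<FullHttpResponse> backendFuture = callBackend(request); // hypothetical client call
backendFuture.addListener((Future<FullHttpResponse> f) -> {
    if (f.isSuccess()) {
        // Runs later, on an event loop: relay the backend response to the
        // original caller; no thread was ever parked waiting for it.
        serverCtx.writeAndFlush(f.getNow());
    } else {
        serverCtx.writeAndFlush(new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.BAD_GATEWAY));
    }
});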
Secondly, you can have your custom handler run in its own EventExecutorGroup, which means it runs in a separate thread pool from the acceptor / HTTP codec channel handlers, like so:
// Two separate threads to execute your outgoing requests
EventExecutorGroup separateExecutorGroup = new DefaultEventExecutorGroup(2);

bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) {
        ChannelPipeline pipeline = ch.pipeline();
        // ... HTTP codec handlers ...
        pipeline.addLast(separateExecutorGroup, customHandler);
    }
});
Meaning your outgoing requests don't hog the threads that would be used for accepting / processing incoming ones.
I am mainly using Netty for socket connections, but I also want to use Netty to handle some HTTP connections as well.
The problem is that the data in POST requests sent to the Netty HTTP server is very large, so Netty raises a TooLongFrameException.
Can anyone please tell me how to configure Netty to accept bigger POST bodies?
Thank you very much
I suspect you have an HttpChunkAggregator in the pipeline. Remove it and handle the HttpChunk messages yourself.
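A minimal sketch of what handling the chunks yourself could look like, assuming Netty 3.x (where HttpChunkAggregator and HttpChunk live); the class name LargePostHandler and its processBody/finishBody helpers are illustrative only:

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.http.HttpChunk;
import org.jboss.netty.handler.codec.http.HttpRequest;

public class LargePostHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        Object msg = e.getMessage();
        if (msg instanceof HttpRequest) {
            HttpRequest request = (HttpRequest) msg;
            if (!request.isChunked()) {
                // Small request: the whole body is already here.
                processBody(request.getContent());
            }
            // Otherwise the body arrives as HttpChunk messages below.
        } else if (msg instanceof HttpChunk) {
            HttpChunk chunk = (HttpChunk) msg;
            if (!chunk.isLast()) {
                processBody(chunk.getContent()); // consume each piece as it arrives
            } else {
                finishBody(); // last chunk: the body is complete
            }
        }
    }

    private void processBody(ChannelBuffer content) { /* consume incrementally */ }

    private void finishBody() { /* body complete */ }
}

(If you would rather keep working with aggregated messages, HttpChunkAggregator's constructor also takes a maxContentLength that you could raise instead.)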