I have implemented a channel handler for HTTP pipelining. My code is on GitHub: https://github.com/huntc/netty-http-pipelining
My question is about the approach I have taken and whether it is a reasonable one in the context of Netty's architecture.
When my HttpPipeliningHandler receives an upstream HttpRequest, it forms a new message event of type OrderedUpstreamMessageEvent. This event is also part of my package and retains information about the request that will be required when formulating reply messages.
When a channel handler further upstream receives the OrderedUpstreamMessageEvent, it forms a reply by generating an OrderedDownstreamMessageEvent, e.g.:
ctx.sendDownstream(new OrderedDownstreamMessageEvent(oue, somemessage));
where
ctx = ChannelHandlerContext instance
oue = OrderedUpstreamMessageEvent instance
somemessage = some message instance to be sent as an HTTP response
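To make the flow concrete, here is a simplified sketch of what such a responding handler can look like (the response construction here is purely illustrative; only the use of the ordered events mirrors my actual code):

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.http.DefaultHttpResponse;
import org.jboss.netty.handler.codec.http.HttpResponse;
import org.jboss.netty.handler.codec.http.HttpResponseStatus;
import org.jboss.netty.handler.codec.http.HttpVersion;

// OrderedUpstreamMessageEvent and OrderedDownstreamMessageEvent come from my
// pipelining package; the rest of this handler is just an illustration.
public class MyResponder extends SimpleChannelUpstreamHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        if (e instanceof OrderedUpstreamMessageEvent) {
            OrderedUpstreamMessageEvent oue = (OrderedUpstreamMessageEvent) e;
            HttpResponse somemessage =
                new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
            // The ordering information carried by oue lets HttpPipeliningHandler
            // write the responses back in the order the requests arrived.
            ctx.sendDownstream(new OrderedDownstreamMessageEvent(oue, somemessage));
        } else {
            super.messageReceived(ctx, e);
        }
    }
}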
You can also do more fun stuff like send chunked replies.
Does this approach look reasonable? It certainly works! Is it regular/acceptable to transform message events in an upstream handler? Obviously if the message event is transformed again then the pipelining functionality will not work.
I had a quick look... A few comments.
1) Your access to the PriorityQueue must be synchronized, as the downstream event may be fired by any thread.
2) The same needs to be done for nextRequiredSequence, or use an AtomicInteger, which would be better.
3) You want to use Channel.close().
The rest looks good
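For points 1 and 2, here is a minimal sketch of the kind of guarded ordering I mean (the class and field names are illustrative, not taken from your repo):

import java.util.PriorityQueue;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

// Illustrative only: buffers out-of-order responses and releases them in
// sequence. Names are made up and do not match the actual handler.
class OrderedResponseBuffer {

    static final class Response implements Comparable<Response> {
        final int sequence;
        final Object message;

        Response(int sequence, Object message) {
            this.sequence = sequence;
            this.message = message;
        }

        public int compareTo(Response other) {
            return Integer.compare(sequence, other.sequence);
        }
    }

    private final Queue<Response> holdingQueue = new PriorityQueue<Response>();
    // Point 2: an AtomicInteger keeps the counter consistent without extra locking.
    private final AtomicInteger nextRequiredSequence = new AtomicInteger(0);

    // May be called from any thread, hence the synchronization (point 1).
    void responseReceived(Response response, Consumer<Response> sendDownstream) {
        synchronized (holdingQueue) {
            holdingQueue.add(response);
            while (!holdingQueue.isEmpty()
                    && holdingQueue.peek().sequence == nextRequiredSequence.get()) {
                sendDownstream.accept(holdingQueue.poll());
                nextRequiredSequence.incrementAndGet();
            }
        }
    }
}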
I have built a test suite for a service orchestration with the Citrus framework.
In one case, when the previous request results in an "empty" response, the last orchestration step, an HTTP request, is skipped.
How can I test that this last request is NOT executed? I have not found a way to do this.
When I do not check this with an explicit expectation, the test is successful regardless of whether the request is executed or not.
I have an HTTP server simulation in place to respond according to the request. What I was looking for is something like
runner.http(action -> action.server(simulation)
.receiveNothingDuring(5000)
);
to wait for 5 seconds for a request to arrive and SUCCEED if nothing arrives. This is kind of the inverse of the normal receive assertion.
You can use the receiveTimeout action, which is exactly what you need:
runner.receiveTimeout(action -> action.endpoint(simulation)
.timeout(5000));
See also the description in the docs.
When I make an Ajax call to the server, the full page is not posted back; only a small amount of data goes to the server, and a response comes back.
But I am wondering about the processing: how does the server or the server code know whether the request is a normal call or an Ajax call?
I ask the experts to please clear up my doubt.
Thanks in advance.
How does the server or the server code know whether the request is a normal call or an Ajax call?
The server knows this only if your JavaScript code marks the HTTP request as such. E.g. in jQuery, the HTTP headers sent to the server include X-Requested-With, and ASP.NET uses this to determine whether an HTTP request is an Ajax call or not.
To learn more about HTTP requests, you can inspect the ones being sent either in an HTTP debugging proxy such as Fiddler or in a browser whose dev tools monitor traffic. In the latter case you can see this in e.g. Chrome DevTools by doing the following:
Open up Chrome Developer Tools with Ctrl+Shift+I (or Cmd+Opt+I on a Mac).
Select the Network tab (you may have to refresh the page to enable network monitoring)
Perform the Ajax call; the HTTP request made should show up in the list at the bottom.
Select the relevant request; you should now see "Headers", "Preview", "Response", "Cookies" and "Timing" tabs for the selected request.
Select the "Headers" tab
You may have to expand the Request Headers part. Among the headers should be X-Requested-With: XMLHttpRequest
Note that Ajax calls don't necessarily have to be asynchronous; they can be synchronous (blocking the JavaScript until the response is loaded) as well. Synchronous calls are sometimes necessary, e.g. popup blockers don't allow you to open a browser window inside an asynchronous Ajax callback.
How does the server or the server code know whether the request is a normal call or an Ajax call?
It doesn't. There is nothing about an HTTP request sent by Ajax that is any different from any other HTTP request.
The code that makes the request can do something to make it recognisable, e.g. by adding a query string, by changing the Accept header to something more suitable for the context (such as Accept: application/json), or by adding additional HTTP headers (some libraries add X-Requested-With: XMLHttpRequest).
None of those are guarantees, as someone could always make an HTTP request manually. They are fine for determining which view to return within your own application, but not if you are trying to implement any kind of security.
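Purely as an illustration of such a header check (shown here as a Java servlet; the same idea applies in any server stack, and remember the header is only a convention set by libraries like jQuery):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustration only: decide which kind of response to return based on the
// X-Requested-With convention. Do not rely on this for security.
public class ExampleServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        boolean looksLikeAjax = "XMLHttpRequest".equals(req.getHeader("X-Requested-With"));

        if (looksLikeAjax) {
            // Probably script-initiated: return a partial/JSON view.
            resp.setContentType("application/json");
            resp.getWriter().write("{\"ajax\": true}");
        } else {
            // Probably normal navigation: return the full page.
            resp.setContentType("text/html");
            resp.getWriter().write("<html><body>Full page</body></html>");
        }
    }
}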
Ajax calls are performed with an instance of XMLHttpRequest. The third argument of its .open() method is async (a boolean). So
xhr.open("GET", "http://example.com", true)
is asynchronous, while
xhr.open("GET", "http://example.com", false) is synchronous. (If the third argument is omitted, it defaults to true, i.e. asynchronous.)
jQuery's get(), post() and ajax() are async by default, and you have to pass the async parameter to make them synchronous. So the answer to your question: YOU tell the browser what kind of request you want.
So I understand the concept of server-sent events (EventSource):
A client connects to an endpoint via EventSource
Client just listens to messages sent from the endpoint
The thing I'm confused about is how it works on the server. I've had a look at different examples, but the one that comes to mind is Mozilla's: http://hacks.mozilla.org/2011/06/a-wall-powered-by-eventsource-and-server-sent-events/
Now this may be just a bad example, but it kinda makes sense how the server side would work, as I understand it:
Something changes in a datastore, such as a database
A server-side script polls the datastore every Nth second
If the polling script notices a change, a server-sent event is fired to the clients
Does that make sense? Is that really how it works from a barebones perspective?
The HTML5 doctor site has a great write-up on server-sent events, but I'll try to provide a (reasonably) short summary here as well.
Server-sent events are, at their core, a long-running HTTP connection, a special mime type (text/event-stream) and a user agent that provides the EventSource API. Together, these form the foundation of a unidirectional connection between a server and a client, over which messages can be sent from the server to the client.
On the server side, it's rather simple. All you really need to do is set the following HTTP headers:
Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive
Be sure to respond with the code 200 and not 204 or any other code, as anything else will cause compliant user agents to disconnect. Also, make sure not to end the connection on the server side. You are now free to start pushing messages down that connection. In nodejs (using express), this might look something like the following:
app.get("/my-stream", function(req, res) {
res.status(200)
.set({ "content-type" : "text/event-stream"
, "cache-control" : "no-cache"
, "connection" : "keep-alive"
})
res.write("data: Hello, world!\n\n")
})
On the client, you just use the EventSource API, as you noted:
var source = new EventSource("/my-stream")
source.addEventListener("message", function(message) {
console.log(message.data)
})
And that's it, basically.
Now, in practice, what actually happens here is that the connection is kept alive by the server and the client by means of a mutual contract. The server will keep the connection alive for as long as it sees fit. Should it want to, it may terminate the connection and respond with a 204 No Content next time the client tries to connect. This will cause the client to stop trying to reconnect. I'm not sure if there's a way to end the connection in a way that the client is told not to reconnect at all, thereby skipping the client trying to reconnect once.
As mentioned, the client will keep the connection alive as well, and will try to reconnect if it is dropped. The reconnection algorithm is specified in the spec and is fairly straightforward.
One super important bit that I've so far barely touched on, however, is the mime type. The mime type defines the format of the messages coming down the connection. Note, however, that it doesn't dictate the format of the contents of the messages, just the structure of the messages themselves. The mime type is extremely straightforward. Messages are essentially key/value pairs of information. The key must be one of a predefined set:
id - the id of the message
data - the actual data
event - the event type
retry - milliseconds the user agent should wait before retrying a failed connection
Any other keys should be ignored. Messages are then delimited by the use of two newline characters: \n\n
The following is a valid message (the trailing newline characters are written out explicitly for clarity):
data: Hello, world!
\n
The client will see this as: Hello, world!.
As is this:
data: Hello,
data: world!
\n
The client will see this as: Hello,\nworld!.
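Since the wire format is just lines of text, any server stack can produce it. Purely as an illustration (and not part of the node example above), the same endpoint sketched as a Java servlet might look like this; note that in a real servlet you would need the async API to keep the connection open after the method returns:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch only: the same headers and message format, emitted
// from a Java servlet instead of express.
public class EventStreamServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setStatus(200);
        resp.setContentType("text/event-stream");
        resp.setHeader("Cache-Control", "no-cache");
        resp.setHeader("Connection", "keep-alive");

        PrintWriter out = resp.getWriter();
        // One message: "key: value" lines terminated by a blank line.
        out.write("data: Hello, world!\n\n");
        out.flush();
        // To keep streaming after this method returns, the container must be
        // told not to complete the response (e.g. via req.startAsync()).
    }
}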
That pretty much sums up what server-sent events are: a long-running, non-cached HTTP connection, a mime type and a simple JavaScript API.
For more information, I strongly suggest reading the specification. It's small and describes things very well (although the requirements of the server side could possibly be summarized a bit better.) I highly suggest reading it for the expected behavior with certain http status codes, for instance.
You also need to make sure to call res.flushHeaders(), otherwise Node.js won't send the HTTP headers until you call res.end(). See this tutorial for a complete example.
So, I want to use node.js and HTTP request pipelining, but I want to use HTTP only as a transport, nothing else. I am interested in exploiting the request pipelining feature. However, one problem that I am running into is that until I send a response to a previous request, the next request's callback isn't fired by node. I want a way to be able to do this. I shall be handling the ordering of results in the application. Is there a way to do this?
The HTTP RFC mentions that the responses should be returned in order, but I don't see any reason for node.js to hold back the next callback until the 1st one has been responded to. The application can in theory send the response to the 2nd query as the response to the 1st one (as long as there is some way for the recipient to know that it is a response to the 2nd one).
The HTTP client in NodeJS does not support pipelining. (Slightly old post from Ryan, but I'm fairly sure it still holds.)
Is it possible to retrieve the URL of a 3rd party webservice calling my controller method?
Things like Request.Current.Url refer to my own URL (/someController/someAction/).
The 3rd party webservice is sending HTTP POST data to my /someController/someAction and I wish to send back an HTTP POST message to the caller of the method, the 3rd party webservice, without using
return Content("some response")
which will force me to exit the method. Since the answer is plain text, I would like to send it using an HTTP POST.
What I am actually trying to do is respond to the calling webservice without exiting my method (return Content() will exit), so I can call other methods to process the data sent to me by the webservice. I want to first tell the webservice that I received its data and then do the processing; that way, when a processing error occurs, the webservice at least will not resend old data. Is there another way to do this than building my own HTTP POST?
You can rely on Request.UrlReferrer, but your idea does not seem that good. The best solution would probably be to start a new thread for the data processing and stick to return Content.
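As a rough sketch of that "acknowledge first, process afterwards" idea (shown in Java purely to illustrate the pattern; in ASP.NET MVC the equivalent would be some form of background task, and all names here are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustration of the pattern only.
public class WebhookReceiver {

    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Called when the 3rd party webservice POSTs its data.
    public String handlePost(final String payload) {
        // Hand the heavy processing to a background worker...
        worker.submit(new Runnable() {
            public void run() {
                process(payload);
            }
        });
        // ...and acknowledge immediately (the "return Content" part), so a
        // later processing error does not make the caller resend old data.
        return "some response";
    }

    private void process(String payload) {
        // Long-running processing of the received data goes here.
    }
}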