I am using RestSharp to consume a REST web service and will be making a large volume of calls in a short time period.
The documentation for the API strongly recommends the use of persistent HTTP connections to do this, however I am struggling to get this working with RestSharp.
I have tried adding the "Connection: Keep-Alive" header to the request, but when I do this the request fails with the following error: "Keep-Alive and Close may not be set using this property."
Can I not use this header with RestSharp or is there something else I need to do to enable this?
Can anyone help? Thanks.
To get a good answer, you need to ask a good question. Where in the documentation does it say this? (Link/Reference?) How many requests is a "large volume"? Also, if you post your code for how you added Connection: Keep-Alive to your http headers, someone here may be able to comment on your technique and help you with the specific programming issue.
Also, Connection: Keep-Alive may already be present on the outgoing HttpRequests! Check it out using Fiddler or Wireshark. I've seen a few blog posts with Wireshark captures of RestSharp requests that had the Connection: Keep-Alive header present without any extra config. For example, while testing other MVC 3 functionality using RestSharp as a consumer, Jimmy Bogard captures his RestSharp requests with Fiddler, and they already have the Connection: Keep-Alive header.
Apparently it is also the default behavior for built-in .NET classes like System.Net.WebClient to use Connection: Keep-Alive. Reference: Does WebClient use KeepAlive?
I think that making use of keep-alive is going to be more about using RestSharp in an optimal manner than about configuring RestSharp itself. If you want to make sure your connection is reused, keep one RestClient instance in scope and reuse it across multiple requests against the same host.
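I don't have a RestSharp snippet handy, so the following is only a rough sketch of that reuse pattern in Java's built-in java.net.http.HttpClient (the URL and loop are made up); the point is simply that a single shared client instance lets the underlying connection pool keep connections to the host alive across requests, and the same idea applies to a single shared RestClient.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReuseExample {
    // One client for the whole application; its connection pool keeps
    // TCP connections to the same host alive between requests.
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 100; i++) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://api.example.com/items/" + i)) // placeholder URL
                    .GET()
                    .build();
            HttpResponse<String> response =
                    CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }
}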
Again, using Fiddler or Wireshark will help you capture some HttpRequests for analysis.
I'm investigating a problem where Tomcat (7.0.90, 7.0.92) returns a response with no HTTP headers very occasionally.
According to the packets captured by Wireshark, after Tomcat receives a request it just returns only a response body. It returns neither a status line nor HTTP response headers.
This makes the downstream Nginx instance produce the error "upstream sent no valid HTTP/1.0 header while reading response header from upstream", return a 502 error to the client, and close the corresponding HTTP connection between Nginx and Tomcat.
What can be a cause of this behavior? Is there any possibility that makes Tomcat behave this way? Or could there be something which strips the HTTP headers under some condition? Or did Wireshark fail to capture the frames which contain the HTTP headers? Any advice on how to narrow down where the problem lies would also be greatly appreciated.
This is a screenshot of Wireshark's "Follow HTTP Stream" showing the problematic response:
EDIT:
This is a screenshot of the "TCP Stream" view of the relevant part (response only). The chunks in the second-to-last response seem fine:
EDIT2:
I forwarded this question to the Tomcat users mailing list and got some suggestions for further investigation from the developers:
http://tomcat.10.x6.nabble.com/Tomcat-occasionally-returns-a-response-without-HTTP-headers-td5080623.html
But I haven't found any proper solution yet. I'm still looking for insights to tackle this problem.
The issues you experience stem from pipelining multiple requests over a single connection with the upstream, as explained by yesterday's answer here by Eugène Adell.
Whether this is a bug in nginx, tomcat, your application, or the interaction of any combination of the above, would probably be a discussion for another forum, but for now, let's consider what would be the best solution:
Can you post your nginx configuration? Specifically, whether you're using keepalive and a non-default value of proxy_http_version within nginx? – cnst 1 hour ago
@cnst I'm using proxy_http_version 1.1 and keepalive 100 – Kohei Nozaki 1 hour ago
As per an earlier answer to an unrelated question here on SO, yet sharing the configuration parameters as above, you might want to reconsider the reasons behind your use of the keepalive functionality between the front-end load-balancer (e.g., nginx) and the backend application server (e.g., tomcat).
As per a keepalive explanation on ServerFault in the context of nginx, the keepalive functionality in the upstream context of nginx wasn't even supported until relatively recently in nginx's development. Why? Because there are very few valid scenarios for using keepalive when it's basically faster to establish a new connection than to wait for an existing one to become available:
When the latency between the client and the server is on the order of 50ms+, keepalive makes it possible to reuse the TCP and SSL connections, resulting in a very significant speedup, because no extra round trips are required to get the connection ready for servicing the HTTP requests.
This is why you should never disable keepalive between the client and nginx (controlled through http://nginx.org/r/keepalive_timeout in http, server and location contexts).
But when the latency between the front-end proxy server and the backend application server is on the order of 1ms (0.001s), using keepalive is a recipe for chasing Heisenbugs without reaping any benefits, as the extra 1ms latency to establish a connection may well be less than the 100ms latency of waiting for an existing connection to become available. (This is a gross oversimplification of connection handling, but it shows just how insignificant any possible benefit of keepalive between the front-end load-balancer and the application server would be, provided both of them live in the same region.)
This is why using http://nginx.org/r/keepalive in the upstream context is rarely a good idea, unless you really do need it, and have specifically verified that it produces the results you desire, given the points as above.
(And, just to make it clear, these points hold irrespective of the actual software you're using, so, even if you weren't experiencing these problems with your combination of nginx and Tomcat, I'd still recommend you not use keepalive between the load-balancer and the application server, even if you decide to switch away from either or both of them.)
My suggestion?
The problem wouldn't be reproducible with the default values of http://nginx.org/r/proxy_http_version and http://nginx.org/r/keepalive.
If your backend is within 5ms of the front-end, you most certainly aren't getting any benefit from modifying these directives in the first place, so, unless chasing Heisenbugs is your path, you might as well keep these specific settings at their sensible defaults.
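For reference, a minimal sketch of what that "keep the defaults" suggestion looks like in nginx configuration (the upstream name and address are placeholders):

upstream tomcat_backend {
    server 127.0.0.1:8080;
    # no "keepalive" directive here, so nginx opens a fresh connection per proxied request
}

server {
    listen 80;
    location / {
        # proxy_http_version is left at its default of 1.0
        proxy_pass http://tomcat_backend;
    }
}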
We see that you are reusing an established connection to send the POST request and that, as you said, the response comes without the status-line and the headers.
after Tomcat receives a request it just returns only a response body.
Not exactly. It starts with 5d, which is probably a chunk size, and that means the latest "full" response (with status line and headers) received on this connection contained a "Transfer-Encoding: chunked" header. For some reason, your server still believes the previous response isn't finished by the time it starts sending this new response to your last request.
A missing last-chunk seems confirmed, as the screenshot doesn't show one (value = 0) ending the previous response. Note that the very last response does end with a last-chunk (the last byte shown is 0).
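For reference (the payload is made up), a well-formed chunked response looks like this on the wire: each chunk is preceded by its size in hexadecimal, and the body is terminated by a last-chunk of size 0 followed by an empty line.

HTTP/1.1 200 OK
Transfer-Encoding: chunked

5d
... 0x5d = 93 bytes of chunk data ...
5
hello
0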
What causes this? The previous response isn't technically considered fully answered. It could be a bug in Tomcat, in your web service library, or in your own code. It may even be that you're sending your request too early, before the previous one was completely answered.
Are some bytes missing if you compare the chunk sizes against what is actually sent to the client? Are all buffers flushed? Beware of the line endings (CRLF vs. LF only) too.
One last cause I can think of: if your response contains some kind of user input taken from the request, you could be facing HTTP response splitting.
Possible solutions.
It is worth trying to disable chunked encoding at the library level; for example, with Axis2, check the HTTP Transport settings.
When reusing a connection, check your client code to make sure that you aren't sending a request before you have read all of the previous response, to avoid overlapping (see the sketch below).
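A minimal sketch of that second point, assuming a plain HttpURLConnection client (the URL is a placeholder; the same idea applies to whichever HTTP library you actually use): read the previous response to the very end before issuing the next request, so the keep-alive connection is handed back in a clean state.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class DrainBeforeReuse {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 3; i++) {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://localhost:8080/service").openConnection();
            try (InputStream in = conn.getInputStream()) {
                byte[] buffer = new byte[8192];
                // Read until EOF so the whole response (including the terminating
                // last-chunk) is consumed before the connection is reused.
                while (in.read(buffer) != -1) {
                    // discard or process the body here
                }
            }
        }
    }
}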
Further reading
RFC 2616, section 3.6.1: Chunked Transfer Coding
It turned out that the "sjsxp" library which JAX-WS RI v2.1.3 uses makes Tomcat behave this way. I tried a different version of JAX-WS RI (v2.1.7) which doesn't use the "sjsxp" library anymore and it solved the issue.
A very similar issue posted on Metro mailing list: http://metro.1045641.n5.nabble.com/JAX-WS-RI-2-1-5-returning-malformed-response-tp1063518.html
I'm a bit rusty on nuances of the HTTP protocol and I'm wondering if it can support publish/subscribe directly?
HTTP is a request-response protocol: the client sends a request and the server sends back a response.
In HTTP 1.0 a new connection was made for each request.
Now HTTP 1.1 improved on HTTP 1.0 by allowing the client to keep the connection open and make multiple requests.
I realise you can upgrade an HTTP connection to a websocket for fast 2 way communications. What I'm curious about is whether this is strictly necessary?
For example if I request a resource "http://somewhere.com/fetch/me/slowly"
Is the server free to reply directly twice?
Such as first with a 202 Accepted
and then shortly later with the content when it is ready,
but without the client sending an additional request first?
i.e.
Client: GET http://somewhere.com/fetch/me/slowly
Server: 202 "please wait..."
Server: 200 "here's your document"
Would it be correct to implement a publish/subscribe service this way?
For example:
Client: http://somewhere.com/subscribe
Server: item 1
...
Server: item 2
I get the impression that this 'might' work because clients will typically have an event loop watching the connection, but it is technically wrong (because a client following the protocol need not be implemented that way).
However, if you use chunked transfer encoding, this would work.
HTTP/2 seems to allow this as well but I'm not clear whether something changed to make it possible.
I haven't seen much discussion of this in relation to pub/sub so what if anything is wrong with using plain HTTP/1.1 with or without chunked encoding?
If this works why do you need things like RSS or ATOM?
An HTTP request can have multiple 'responses', but those responses all have status codes in the 1xx range, such as 102 Processing.
However, these responses are only headers, never bodies.
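For example, a single request may legitimately produce an exchange like this (body made up), where the 102 is purely informational and only the final 200 carries a body:

HTTP/1.1 102 Processing

HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 13

Hello, world!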
HTTP/1.1 (like 1.0 before it) is a request/response protocol; sending an unsolicited response is not allowed. HTTP/2 is a frame-based protocol which adds server push, allowing the server to offer extra data and handle multiple requests in parallel, but it doesn't change the request/response nature.
It is possible to keep an HTTP connection open and keep sending more data, though. Many (audio, video) streaming services will use this.
However, this just looks like a continuous body that keeps on streaming, rather than many multiple HTTP responses.
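As a rough illustration of that "continuous body" style (a servlet sketch, not a complete pub/sub implementation; the path and payload are made up), each flush pushes another piece of the same, still-unfinished response to the subscriber:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/subscribe")
public class SubscribeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        try {
            for (int i = 1; i <= 10; i++) {
                out.println("item " + i);
                out.flush(); // each flush sends another chunk of the same response
                Thread.sleep(1000); // stand-in for waiting on the next published event
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // when the method returns, the response (and its body) finally ends
    }
}

To the client this is still a single response whose body happens to arrive in pieces, which is exactly why it isn't the same thing as receiving several separate HTTP responses.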
If this works why do you need things like RSS or ATOM
Because keeping a TCP connection open is not free.
I've heard that you can (in some cases) prevent timeouts by sending the HTTP headers back to the client before the whole HTTP body is prepared.
I know that this is impossible using gzip ... but is this possible using HTTPS?
I read in some posts that the secure part of HTTPS is done in the transport-layer (TLS/SSL) - therefore it should be possible, right?
Sorry for mixing gzip in here - it's a completely different level, I know ... and it may be more confusing than helpful as an example ;)
In HTTP 1.1 it's possible to send the response headers before preparation of the response body is complete. To do this one normally uses chunked encoding.
Some servers also stream the data as-is by not specifying the content length and indicating the end of the stream by closing the connection, but this is quite a brutal way to do things (chunked encoding was designed exactly for sending data before it's completely available).
As HTTPS is just HTTP running over an SSL/TLS channel, TLS doesn't affect the above behavior in any way.
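For instance, in a Java servlet (just a sketch; the path and the slow work are placeholders), an explicit flush commits the status line and headers immediately, and because no Content-Length is known the container falls back to chunked encoding for the body that follows; running this over HTTPS changes nothing.

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/slow")
public class EarlyHeadersServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.flushBuffer(); // the status line and headers are sent to the client right now
        String body = prepareBodySlowly(); // placeholder for the long-running work
        resp.getWriter().write(body);
    }

    private String prepareBodySlowly() {
        return "done";
    }
}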
Yes, you can do this. HTTPS is just HTTP over a TLS/SSL transport; the HTTP protocol itself is exactly the same.
I'm still starting out with Lua, and would like to write a (relatively) simple proxy using it.
This is what I would like to get to:
Listen on port.
Accept connection.
Since this is a proxy, I'm expecting HTTP (GET/POST, etc.)/HTTPS/FTP/whatever requests from my browser.
Inspect the request (Just to extract the host and port information?)
Create a new socket and connect to the host specified in the request.
Relay the exact request as it was received, with POST data and all.
Receive the response (header/body/anything else..) and respond to the initial request.
Close Connections? I suppose Keep-Alive shouldn't be respected?
I realize it's not supposed to be trivial, but I'm having a lot of trouble setting this up using LuaSocket or Copas --- how do I receive the entire request? Keep receiving until I see \r\n\r\n? Then how do I pull the POST data and the body? Or accept a "download" file? I read about "sinks", but admittedly didn't understand most of what that meant, so maybe I should read up more on that?
In case it matters, I'm working on a windows machine, using LuaForWindows and am still rather new to Lua. Loving it so far though, tables are simply amazing :)
I discovered lua-http, but it seems to have been merged into Xavante (and I didn't find any version for Lua 5.1 and LuaForWindows); not sure if it makes my life easier?
Thanks in advance for any tips, pointers, libraries/source I should be looking at etc :)
Not as easy as you may think. Requests to proxies and requests to servers are different. In RFC 2616 you can see that, when querying a proxy, a client includes the absolute URL of the requested document instead of the usual relative one.
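For example, a browser configured to use a proxy sends the absolute form:

GET http://example.com/index.html HTTP/1.1
Host: example.com

whereas the same request sent directly to the origin server uses the relative form:

GET /index.html HTTP/1.1
Host: example.com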
So, as a proxy, you have to parse incoming requests, modify them, query the appropriate servers, and return the responses.
Parsing incoming requests is quite complex, as the body length depends on various parameters (method, content encoding, etc.).
You may try to use lua-http-parser.
I have heard that HTTP is a nice basis for designing my own protocol. Although I could design a binary protocol, I would prefer to follow the HTTP standard.
Basically, the flow of the application is that the client sends some parameter strings to the server with the request, and the server sends a response string back to the application. This procedure continues several times before the connection terminates.
I am using Java servlets for the above.
How should the client send the HTTP parameters so that parsing is easy on the server?
GET /a HTTP/1.1
Host: localhost
??? what comes here
??? what comes here
Since that is a GET request, nothing.
I'd suggest using query string parameters; then you can access them using ServletRequest.getParameterNames(), getParameterValues(), getParameterMap(), etc.
So, your request line would take the form:
GET /a?x=1&y=1 HTTP/1.1
Since this is the standard way of passing parameter data, other clients, such as web browsers, will be able to use your service easily.
This assumes that the operation does not cause side-effects on the server. If it does then you should be using a POST, PUT or DELETE request depending on the exact nature of the operation.
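A minimal servlet sketch for reading those parameters on the server (the path and parameter names simply follow the example request above):

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/a")
public class ParameterServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // For "GET /a?x=1&y=1" these both return "1"; a missing parameter returns null
        String x = req.getParameter("x");
        String y = req.getParameter("y");
        resp.setContentType("text/plain");
        resp.getWriter().println("x=" + x + ", y=" + y);
    }
}

getParameter() also works for form-encoded POST bodies, so the same code keeps working if you later switch the operation to POST as suggested above.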
HTTP Made Really Easy is a useful document since, at least initially, the HTTP Spec can be a bit daunting.
Why not base your protocol on something that already exists, for example SOAP?
What you're designing is a data exchange format, not a protocol really.
So the question is, really, what sort of data do you want to send? To answer that, you need to consider who is receiving it. If it's yourself, then just keep it simple.