I was viewing my homepage in Chrome, with only one HTTP request, and I encountered this case.
What's wrong?
I can see that the DNS was not resolving when this happened, so it's not a server bug.
So what is Chrome doing during that time?
From: https://groups.google.com/forum/?fromgroups=#!topic/speedtracer/MlF_gTvMRD8
The 'Blocked' time is the delta between the 'requestTime' and the
'startTime'. Basically, this is a measure of the amount of time
between the UI thread initiating the request for the network resource
and the HTTP GET for the request bits getting onto the wire. In
between, there can be internal queuing/message passing delay, cache
lookup, dns resolution, and TCP connection setup.
Among the causes mentioned, DNS lookup sounds like the only one that would take that long. However, it does look like the DNS lookup eventually succeeded; otherwise the whole request would have failed. Here's one possible cause of DNS delay: http://www.bigthinka.net/support/web-browsers/google-chrome-slow-dns-resolving.html
If that doesn't solve it, try updating the question with more information about your DNS settings.
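If you want to sanity-check whether DNS resolution accounts for the blocked time, here is a minimal Java sketch (the hostname is a placeholder; the JVM's resolver and its caching differ from Chrome's, so treat this only as a rough cross-check):

import java.net.InetAddress;

public class DnsTiming {
    public static void main(String[] args) throws Exception {
        String host = "example.com"; // placeholder: use your own domain
        long start = System.nanoTime();
        InetAddress address = InetAddress.getByName(host); // blocking DNS lookup
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Repeat runs will usually be fast because of OS/JVM caching;
        // the first run after a cache flush is the interesting number.
        System.out.println(host + " -> " + address.getHostAddress() + " in " + elapsedMs + " ms");
    }
}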
Related
I’m investigating a problem where Tomcat (7.0.90, 7.0.92) very occasionally returns a response with no HTTP headers.
According to packets captured with Wireshark, after Tomcat receives a request it returns only a response body: neither a status line nor HTTP response headers.
This makes a downstream Nginx instance produce the error “upstream sent no valid HTTP/1.0 header while reading response header from upstream”, return a 502 error to the client, and close the corresponding HTTP connection between Nginx and Tomcat.
What could cause this behavior? Is there anything that could make Tomcat behave this way? Could something be stripping the HTTP headers under some condition? Or did Wireshark fail to capture the frames containing the HTTP headers? Any advice on narrowing down where the problem lies would also be greatly appreciated.
This is a screenshot of Wireshark's "Follow HTTP Stream" which is showing the problematic response:
EDIT:
This is a screenshot of the "TCP Stream" of the relevant part (response only). The chunks in the second-to-last response look fine:
EDIT2:
I forwarded this question to the Tomcat users mailing list and got some suggestions for further investigation from the developers:
http://tomcat.10.x6.nabble.com/Tomcat-occasionally-returns-a-response-without-HTTP-headers-td5080623.html
But I haven’t found a proper solution yet. I’m still looking for insights to tackle this problem.
The issues you are experiencing stem from pipelining multiple requests over a single connection with the upstream, as explained in yesterday's answer here by Eugène Adell.
Whether this is a bug in nginx, Tomcat, your application, or the interaction of any combination of the above would probably be a discussion for another forum; but for now, let's consider what the best solution would be:
Can you post your nginx configuration? Specifically, if you're using keepalive and a non-default value of proxy_http_version within nginx? – cnst 1 hour ago
@cnst I'm using proxy_http_version 1.1 and keepalive 100 – Kohei Nozaki 1 hour ago
As per an earlier answer to an unrelated question here on SO that shares the configuration parameters above, you might want to reconsider the reasons behind your use of the keepalive functionality between the front-end load balancer (e.g., nginx) and the backend application server (e.g., Tomcat).
As per a keepalive explanation on ServerFault in the context of nginx, the keepalive functionality in the upstream context of nginx wasn't even supported until fairly recently in nginx's development history. Why? Because there are very few valid scenarios for using keepalive when it's basically faster to establish a new connection than to wait for an existing one to become available:
When the latency between the client and the server is on the order of 50ms+, keepalive makes it possible to reuse the TCP and SSL credentials, resulting in a very significant speedup, because no extra roundtrips are required to get the connection ready for servicing the HTTP requests.
This is why you should never disable keepalive between the client and nginx (controlled through http://nginx.org/r/keepalive_timeout in http, server and location contexts).
But when the latency between the front-end proxy server and the backend application server is on the order of 1ms (0.001s), using keepalive is a recipe for chasing Heisenbugs without reaping any benefits, as the extra 1ms latency to establish a connection may well be less than the 100ms latency of waiting for an existing connection to become available. (This is a gross oversimplification of connection handling, but it shows just how insignificant any possible benefit of keepalive between the front-end load balancer and the application server would be, provided both of them live in the same region.)
This is why using http://nginx.org/r/keepalive in the upstream context is rarely a good idea, unless you really do need it, and have specifically verified that it produces the results you desire, given the points as above.
(And, just to make it clear, these points are irrespective of what actual software you're using, so even if you weren't experiencing these problems with your combination of nginx and Tomcat, I'd still recommend not using keepalive between the load balancer and the application server, even if you decide to switch away from either or both of nginx and Tomcat.)
My suggestion?
The problem wouldn't be reproducible with the default values of http://nginx.org/r/proxy_http_version (1.0, which precludes keepalive to the upstream) and http://nginx.org/r/keepalive (not set by default).
If your backend is within 5ms of the front-end, you most certainly aren't getting any benefit from modifying these directives in the first place, so unless chasing Heisenbugs is your path, you might as well keep these specific settings at their sensible defaults.
We see that you are reusing an established connection to send the POST request and that, as you said, the response comes without the status-line and the headers.
after Tomcat receives a request it returns only a response body.
Not exactly. It starts with 5d, which is probably a chunk size; this means that the last "full" response (with status line and headers) received on this connection contained a "Transfer-Encoding: chunked" header. For some reason, your server still believes the previous response isn't finished by the time it starts sending this new response to your last request.
A missing last chunk seems confirmed, as the screenshot doesn't show a last chunk (size = 0) ending the previous response. Note that the latest response does end with a last chunk (the last byte shown is 0).
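For reference, this is what well-formed chunked framing looks like on the wire. A minimal Java sketch of the framing described in RFC 2616 3.6.1 (the payload is invented for illustration):

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class ChunkedFraming {
    // Frame a payload as one HTTP/1.1 chunk followed by the last-chunk.
    static byte[] frame(byte[] payload) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // chunk-size in hex, CRLF, chunk data, CRLF
        out.write(Integer.toHexString(payload.length).getBytes(StandardCharsets.US_ASCII));
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
        out.write(payload);
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
        // last-chunk: size 0, CRLF, then the terminating CRLF (no trailers)
        out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] wire = frame("hello".getBytes(StandardCharsets.US_ASCII));
        // Prints 5, CRLF, hello, CRLF, 0, CRLF, CRLF. A leading "5d" in a
        // capture is a chunk-size meaning 0x5d = 93 bytes of data follow;
        // without a final size-0 chunk the response is never complete.
        System.out.println(new String(wire, StandardCharsets.US_ASCII));
    }
}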
What causes this? The previous response isn't technically considered fully answered. It could be a bug in Tomcat, your web service library, or your own code. You might even be sending your request too early, before the previous one was completely answered.
Are some bytes missing if you compare the chunk sizes against what is actually sent to the client? Are all buffers flushed? Beware of line endings (CRLF vs. LF only), too.
One last cause I can think of: if your response contains some kind of user input taken from the request, you could be facing HTTP response splitting.
Possible solutions.
It is worth trying to disable chunked encoding at the library level; with Axis2, for example, check the HTTP Transport settings.
When reusing a connection, check your client code to make sure that you aren't sending a request before you have read all of the previous response (to avoid overlapping); a sketch of what that means follows below.
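As an illustration of the second point, here is a minimal Java sketch (the URL is a placeholder) of reading all of the previous response; with HttpURLConnection, draining the stream to end-of-file is what allows the keep-alive connection to be reused safely:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class DrainBeforeReuse {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://backend.example.com/service"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            byte[] buffer = new byte[8192];
            // Read until EOF so the chunked body, including the last-chunk,
            // is fully consumed before the connection goes back to the pool.
            while (in.read(buffer) != -1) {
                // process or discard the bytes
            }
        }
        // Only now is it safe to send the next request over a kept-alive
        // connection without overlapping the previous response.
    }
}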
Further reading
RFC 2616 3.6.1 Chunked Transfer Coding
It turned out that the "sjsxp" library that JAX-WS RI v2.1.3 uses was making Tomcat behave this way. I tried a different version of JAX-WS RI (v2.1.7), which no longer uses the "sjsxp" library, and it solved the issue.
A very similar issue posted on Metro mailing list: http://metro.1045641.n5.nabble.com/JAX-WS-RI-2-1-5-returning-malformed-response-tp1063518.html
I am new to Apache Camel and Netty, and this is my first project. I am trying to use Camel with the Netty component to load-balance heavy traffic in a backend load-test scenario. This is the setup I have right now:
from("netty:tcp:\\this-ip:9445?defaultCodec=false&sync=true").loadBalance().roundRobin().to("netty:tcp:\\backend1:9445?defaultCodec=false&sync=true,netty:tcp:\\backend2:9445?defaultCodec=false&sync=true)
The issue is the unexpected buffer sizes I am receiving in the responses seen by the client system sending TCP traffic to Camel. When I send multiple requests one after the other, I see no issues and the buffer size is as expected. But when I run multiple users sending similar requests to Camel on the same port, I intermittently see unexpected buffer sizes, ranging from 0 bytes to more than the expected number of bytes. I tried playing around with several options mentioned on the Camel-Netty page, such as:
Increasing backlog
keepAlive
buffersizes
timeouts
poolSizes
workerCount
synchronous
stream caching (did not work)
disabled useOriginalMessage for performance
system-level TCP parameters, among others.
I have yet to resolve the issue, and I am not sure if I'm fundamentally missing something. I did take a look at the encoders/decoders and wondered whether they could be the issue. But I don't understand why a load balancer would need to encode/decode messages; I have worked with other load balancers that just require endpoint configurations, so I am assuming Camel does not require this. Am I right? Note that the issue is not with my client/backend: I ran a 2000-user load test from my client directly against the backend with less than 1% failures, but I see a large number of failures (not that there are no successes) with Camel. I have the following questions:
1. Is this a valid use case for Apache Camel with Netty? Should I be looking at Mina or others?
2. Can I route TCP traffic to JMS or other components and then finally to the TCP endpoint?
3. Do I need encoders/decoders, or should this configuration work as is?
4. Should I continue with this approach or try some other load balancer?
Please let me know if you have any other suggestions. TIA.
Edit1:
I also tried the same approach with the netty4 and mina components. The routes look similar to the netty one. The route with netty4 is as follows:
from("netty4:tcp:\\this-ip:9445?defaultCodec=false&sync=true").to("netty4:tcp:\\backend1:9445?defaultCodec=false&sync=true")
I read a few posts which had the same issue but did not find any solution relevant to my issue.
Edit2:
I increased the receive timeout at my client and immediately saw the buffer-length mismatches fall to less than 1%. However, the response time for each transaction when using Camel versus not using it is huge: almost 10 times higher. Can you help me reduce the response time for each transaction? The message received back at my client varies from 5000 to 20000 bytes. Here is my latest route:
from("netty:tcp://this-ip:9445?sync=true&allowDefaultCodec=false&workerCount=20&requestTimeout=30000")
.threads(20)
.loadBalance()
.roundRobin()
.to("netty:tcp://backend-1:9445?sync=true&allowDefaultCodec=false","netty:tcp://backend-2:9445?sync=true&allowDefaultCodec=false")
I also used certain performance enhancements like:
context.setAllowUseOriginalMessage(false); // don't keep a defensive copy of each original message
context.disableJMX(); // skip JMX management instrumentation
context.setMessageHistory(false); // don't record per-step message history
context.setLazyLoadTypeConverters(true); // load type converters on first use instead of at startup
Can you point me in the right direction about how I can reduce the individual transaction times?
For the netty4 component there is no parameter called defaultCodec; it is called allowDefaultCodec: http://camel.apache.org/netty4.html
Also, try something like this first.
from("netty4:tcp:\\this-ip:9445?textline=true&sync=true").to("netty4:tcp:\\backend1:9445?textline=true&sync=true")
The above assumes the data being sent is plain text. If you are sending bytes or anything else, you will need to provide decoders/encoders for Netty to handle the data, as sketched below.
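As a sketch of what that wiring could look like with camel-netty4, assuming Camel 2.x's ChannelHandlerFactories helper and a hypothetical 4-byte length prefix in front of every message (the bean name and frame-field numbers are assumptions; adjust them to your actual wire format):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.netty4.ChannelHandlerFactories;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;

public class NettyFrameDecoderExample {
    public static void main(String[] args) throws Exception {
        SimpleRegistry registry = new SimpleRegistry();
        // Hypothetical framing: 4-byte length field at offset 0, stripped
        // from the decoded message (maxFrameLength, lengthFieldOffset,
        // lengthFieldLength, lengthAdjustment, initialBytesToStrip).
        registry.put("frameDecoder",
                ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4));

        CamelContext context = new DefaultCamelContext(registry);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // The decoder reassembles whole frames, so each exchange
                // carries exactly one complete message instead of an
                // arbitrary slice of the TCP stream.
                from("netty4:tcp://this-ip:9445?sync=true&decoders=#frameDecoder")
                    .to("netty4:tcp://backend1:9445?sync=true");
            }
        });
        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the route running
    }
}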
And a side note: before running the Camel route, send test messages manually via a standard TCP tool like SocketTest to verify that everything works, then implement the same via Camel. You can find SocketTest at http://sockettest.sourceforge.net/.
I finally solved the issue with the same route settings as above. The problem was that the request and response delimiters were not configured properly, so the connection was either closed too early (leading to unexpected buffer sizes) or kept waiting even after the entire buffer had been received (leading to high response times).
My web app sits behind Nginx. Occasionally, loading my web page takes more than 10 seconds. I used Chrome DevTools to track the timing, and it looks like this:
The weird thing is that when the page loads slowly, the initial connection time is always around 11 seconds. And after this slow request, subsequent loads of the same page are very fast.
What could be causing this?
P.S. If this is caused by a resource limitation on my server, would I see errors/warnings in some system log?
The initial connection refers to the time taken to perform the TCP handshake and negotiate an SSL/TLS session (where applicable). The slowness could be caused by congestion, where the server has hit a limit and can't respond to new connections while existing ones are pending. You could look into some performance enhancements in your Nginx configuration.
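To see where the 11 seconds go, a small Java sketch (the host is a placeholder) can time the name lookup plus TCP connect, and then the TLS handshake, roughly mirroring what DevTools reports as Initial connection and SSL:

import java.net.InetSocketAddress;
import java.net.Socket;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class ConnectTiming {
    public static void main(String[] args) throws Exception {
        String host = "example.com"; // placeholder: use your own host
        int port = 443;

        long t0 = System.nanoTime();
        // Name resolution happens when the InetSocketAddress is built,
        // so tcpMs covers DNS plus the TCP three-way handshake.
        Socket tcp = new Socket();
        tcp.connect(new InetSocketAddress(host, port), 15000);
        long tcpMs = (System.nanoTime() - t0) / 1_000_000;

        long t1 = System.nanoTime();
        SSLSocket tls = (SSLSocket) ((SSLSocketFactory) SSLSocketFactory.getDefault())
                .createSocket(tcp, host, port, true);
        tls.startHandshake(); // TLS negotiation on the same connection
        long tlsMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("DNS+TCP connect: " + tcpMs + " ms, TLS handshake: " + tlsMs + " ms");
        tls.close();
    }
}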
Use the dig command to check your domain name resolution process (e.g., dig example.com).
If it returns multiple records in the answer section, check whether each of those IPs is valid.
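If you'd rather check from code which addresses your resolver hands back (browsers may try them in turn, and one stale entry can stall the first connection), here is a minimal Java sketch (the hostname is a placeholder):

import java.net.InetAddress;

public class ResolveAll {
    public static void main(String[] args) throws Exception {
        String host = "example.com"; // placeholder: use your own domain
        // Lists every address record the resolver returns for the name;
        // verify each one actually answers on the expected port.
        for (InetAddress address : InetAddress.getAllByName(host)) {
            System.out.println(address.getHostAddress());
        }
    }
}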
You should eliminate file references that point to non-existent files. I had the same issue at a customer site where a 404 image caused the problem, as it delayed the loading of other files.
I have an ASP.NET Web API application running behind a load balancer. Some clients keep a busy HTTP connection alive for too long, creating unnecessary affinity and causing high load on some server instances. To fix that, I want to gracefully close a connection that is making too many requests in a short period of time (thus forcing the client to reconnect and pick a different server instance), while keeping low-traffic connections alive indefinitely. Hence I cannot use a static configuration.
Is there some API I can call to flag a request as "answer this, then close the connection"? Or can I simply add the Connection: close HTTP header, which ASP.NET will see and close the connection for me?
It looks like a good solution for your situation would be the built-in IIS functionality called Dynamic IP Restrictions: "To provide this protection, the module temporarily blocks IP addresses of HTTP clients that make an unusually high number of concurrent requests or that make a large number of requests over a small period of time."
It is supported by Azure Web Apps:
https://azure.microsoft.com/en-us/blog/confirming-dynamic-ip-address-restrictions-in-windows-azure-web-sites/
I am not 100% sure this would work in your situation, but in the past I have had to block people coming from specific geographic IP ranges and from common proxies. I created an authorization attribute class following:
http://www.asp.net/web-api/overview/security/authentication-filters
It would reject the caller based on their IP address by returning an HttpStatusCode.BadRequest. On every request you would have to check a list of bad IPs in the database and go from there. Maybe you can handle the rest client-side, since those clients are going to get a ton of errors.
Write an action filter that returns a 302 Found response for the 'blocked' IP address. I would hope the client closes the current connection and tries again at the new location (which could just be the same URL as the original request).
This question is regarding a bot of mine whose primary focus is scraping.
The path is mapped out correctly and it does what it needs to do.
Rate limits have been tested, and I am certain they are not a factor; when and where rate limiting did apply, we received actual responses.
However, the webpage(s) I am trying to scrape seem to have built in a weird/unfamiliar kind of security measure, something I haven't come across before. And here I am, wondering how it works and how to deal with it appropriately.
While the scraper/bot is doing its thing, sending requests and getting responses, at random times it encounters what I suspect is a security measure: there are simply no responses back from the server, not a 4xx error or anything at all.
At first sight the proxies just appear dead, but that's not it, because they are not. The proxies work just fine, and I can manually browse the page through them with no issues.
The server just stops giving responses.
Now, to find a workaround for this, I would need to be able to tell the difference between a timeout (for my proxies) and no response at all. They appear the same, but are not.
Does anyone have insight into this problem? Maybe there is a clever way to tell these apart that I am not aware of.
Now, to find a workaround for this, I would need to be able to tell the difference between a timeout (for my proxies) and no response at all. They appear the same, but are not.
A timeout is when the server does not respond within a specific time. No response means that the server either closes the connection before the timeout occurs, or closes it after the timeout has occurred without ever sending anything back.
The first case is easy to detect: the connection is closed before the timeout. If instead you want to detect whether the server will close the connection without a response only after your current timeout, your only option is to extend the timeout. Nothing in the server's behavior will indicate that it is going to close the connection without a response at some future time.
And since your only connection is with the proxy, there is no real way to detect whether the problem is at the proxy or at the server. Your only hope might be to set your timeout waiting for the proxy larger than the timeout the proxy has waiting for the server. That way you may at least get a response from the proxy indicating that the connection to the server timed out.
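To make the two observable cases concrete, here is a minimal Java sketch (host, port, and timeouts are placeholders): an orderly close surfaces as end-of-stream, with read() returning -1, while a timeout surfaces as a SocketTimeoutException:

import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class CloseVsTimeout {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("proxy.example.com", 8080), 10000); // placeholders
            // Make the read timeout longer than the proxy's own upstream
            // timeout, so a proxy-generated error can still reach you.
            socket.setSoTimeout(30000);
            // ... write the HTTP request to socket.getOutputStream() here ...
            InputStream in = socket.getInputStream();
            int first = in.read();
            if (first == -1) {
                // The peer closed the connection cleanly without replying.
                System.out.println("connection closed, no response");
            } else {
                System.out.println("got data, first byte: " + first);
            }
        } catch (SocketTimeoutException e) {
            // Nothing arrived within the timeout; this alone cannot tell you
            // whether the proxy or the origin server stalled.
            System.out.println("read timed out");
        }
    }
}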
They appear the same, but are not.
They are the same. There is no difference. A read timeout means that data didn't arrive within the timeout period, for whatever reason. TCP doesn't know the reason and can't tell you. At the C level, recv() returned -1 with errno == EAGAIN/EWOULDBLOCK; that's all the information there is.
What you are asking is tantamount to 'data didn't arrive: where didn't it arrive from?' It's not a meaningful question.