We often run into interoperability issues on the Web. One of them, for browser vendors, is a misspelled Connection HTTP header. The most common errors take these two forms:
nnCoection:
Cneonction:
There have been a few articles about this, including Fun with HTTP headers. The misspellings often appear for a period of time and then disappear. It seems that some of them are created by load balancers, such as in this example: NetScaler Appliance.
Do you know any other instances of hardware or software that create these issues?
Update: Here is an example, among others, of a site which doesn't send back a correct Connection HTTP header.
curl -sI ehg-nokiafin.hitbox.com
HTTP/1.1 200 OK
Date: Tue, 25 Jan 2011 20:35:45 GMT
Server: Hitbox Gateway 9.3.6-rc1
P3P: policyref="/w3c/p3p.xml", CP="NOI DSP LAW NID PSA ADM OUR IND NAV COM"
Cneonction: close
Pragma: no-cache
Cache-Control: max-age=0, private, proxy-revalidate
Expires: Tue, 25 Jan 2011 20:35:46 GMT
Content-Type: text/plain
Content-Length: 23
Update 2011-01-26
On the Amazon AWS forum, there is a thread about nnCoection. A comment says:
FYI, the reason it misspells the word connection is so that the internet check-sum (a simple sum) still adds up, this way the change can occur at the packet level. If it completely removed the header, it would have to stall forwarding the response until the header was entirely read, so it could rewrite the headers, recompute the checksum and then send it along.
with
sum(ord(c) for c in "Connection")
and
sum(ord(c) for c in "nnCoection")
both giving 1040.
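A quick, purely illustrative check of that claim for both misspellings mentioned above (this snippet is mine, not part of the quoted comment):

# A simple byte sum is unchanged by any permutation of the same letters,
# which is why a middlebox can patch the header name in place.
for name in ("Connection", "nnCoection", "Cneonction"):
    print(name, sum(ord(c) for c in name))  # prints 1040 for all three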
Are you sure it's an actual issue? The linked article suggests that these sorts of headers are "misspelled on purpose" so that a load balancer, reverse proxy or other middlebox can defeat the server's wishes that the connection be kept alive, without having to track a delta in TCP stream position over the life of the connection. Something like this may actually be necessary to bring a downed and recovered server back into active duty, by forcing kept-alive connections to other servers to migrate to the one brought online.
If you have a protocol that's dependent on HTTP Connection: keep-alive to function (cough), you're probably doing it wrong.
Related
I'm trying to develop an HTTP server in C++ on Windows, and I respond to an HTTP request by using WSASend to send out
char response[] =
"HTTP/1.1 200 OK\r\n\
Date: Mon, 27 Jul 2009 12:28:53 GMT\n\r\
Server: Apache/2.2.14 (Win32)\n\r\
Last-Modified: Wed, 22 Jul 2009 19:15:56 GMT\n\r\
Content-Length: 88\n\r\
Content-Type: text/html\n\r\
Connection: Closed\n\r\n\r\
<html><body><h1>Hello, World!</h1></body></html>"
Although the browser did show Hello, World! when I typed in 127.0.0.1, it just keeps showing the loading sign as if the page hasn't finished loading, and the browser's console never shows the response message. Why?
Is there some format issue with my response message?
Content-Length: 88\n\r\
....
Connection: Closed\n\r\n\r\
There are several problems with your code. Throughout your code you use \n\r instead of \r\n, so the response is invalid HTTP. The Content-Length header must also reflect the actual length of the body: <html><body><h1>Hello, World!</h1></body></html> is 48 bytes, not 88 bytes as your code claims. Apart from that, it must be Connection: close instead of Connection: Closed.
Note that HTTP is way more complex than you think. If you really need to implement it yourself instead of using established libraries please study the actual standard (that's what standards are for!) instead of fiddling around until it seems to work. Otherwise it might work only within your specific environment and with a specific browser and you'll get strange problems later.
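For illustration only, here is a sketch in Python (not the original C++) of how building the response from its parts makes it easy to get the CRLF line endings and the Content-Length right; the body is the same one used in the question:

body = "<html><body><h1>Hello, World!</h1></body></html>"
headers = [
    "HTTP/1.1 200 OK",
    "Content-Type: text/html",
    "Content-Length: " + str(len(body)),  # 48, not 88
    "Connection: close",                  # "close", not "Closed"
]
# Header lines are separated by CRLF, and a blank line ends the header block.
response = "\r\n".join(headers) + "\r\n\r\n" + body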
I have an Azure Web Site configured to use multiple (2) instances.
I have a service bus that should pass messages (ie Cache Evict) between the instances. I need to test this mechanism.
In a conventional (on premise) system I would point a browser to instance 1 (ie http://myserver1.example.com), perform an action, then point my browser to the other instance (http://myserver2.example.com) to test.
However, in Azure I can't see a way to hit a specific instance. Is it possible? Or is there an alternative way to run through this test scenario (act on instance 1, ensure instance 2 behaves appropriately)?
Unfortunately, there isn't an official way of doing this. However, you can achieve that by setting a cookie called ARRAffinity on your request.
Try hitting your site from any client (Chrome, Firefox, curl, httpie, etc) and inspect the response headers that you are getting back.
For example in curl you would do
curl -I <siteName>.azurewebsites.net
You would get something like this:
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: text/html
Last-Modified: Wed, 17 Sep 2014 16:57:26 GMT
Accept-Ranges: bytes
ETag: "2ba0757598d2cf1:0"
Server: Microsoft-IIS/8.0
X-Powered-By: ASP.NET
Set-Cookie: ARRAffinity=<very long hash>; Path=/;Domain=<siteName>.azurewebsites.net
Date: Fri, 28 Nov 2014 03:13:07 GMT
What you are interested in is the ARRAffinity value. If you send a couple of requests, you will notice that the hash keeps changing between two values that represent your two instances.
Setting that cookie in the Cookie header of your request guarantees it goes to one of the instances and not the other.
curl --cookie ARRAffinity=<one of the hashes you got> <siteName>.azurewebsites.net
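If you prefer scripting the check, a minimal sketch using Python's requests library might look like this (the placeholders are the same as above; substitute your site name and a captured hash):

import requests

site = "https://<siteName>.azurewebsites.net"   # placeholder site name
affinity = "<one of the hashes you got>"        # captured ARRAffinity hash

# Sending the ARRAffinity cookie pins the request to a single instance.
resp = requests.get(site, cookies={"ARRAffinity": affinity})
print(resp.status_code, resp.headers.get("Set-Cookie"))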
Let's say I want to download a file called example.pdf from http://www.xxx.yyy/example.pdf
Presumably, I send a GET request like this:
GET /example.pdf HTTP/1.1␍␊
Host: www.xxx.yyy␍␊
␍␊
But what's next?
What does the exchange of HTTP headers look like?
I'm assuming you've read the Wikipedia article on the HTTP protocol. If you just need more examples I'd highly recommend you download Wireshark. Wireshark is an extremely powerful packet sniffer which will allow you to watch packet communications between you and any website. In addition it will actually break down the packets and tell you a little bit about their meanings in more "human terms". It has a bit of a learning curve but it can teach you a lot about a number of different protocols including HTTP.
http://www.wireshark.org/
I'm not sure what your ultimate goal is, but you can view real-time http header interaction with the Live HTTP Headers Firefox add-on. It's also possible in Chrome, but it's a little more work.
Check the HTTP 1.1 RFC.
You might want to look at http://www.w3.org/Protocols/rfc2616/rfc2616.html . But also, there is rarely a need to recreate the protocol.
To answer such a GET request, the server should send back a response with headers like the following:
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 6475593
Content-Type: application/x-msdownload
Etag: "qwfw473usll"
Last-Modified: Sun, 18 Jul 2021 12:02:31 GMT
Server: Caddy
Date: Sun, 18 Jul 2021 12:03:47 GMT
After the last header line, send an empty line (so the header block ends with two CRLF sequences) followed by the raw bytes of the file to be transmitted.
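If you want to watch the exchange yourself without a browser, here is a minimal sketch of a raw client over a plain TCP socket, using the placeholder host from the question:

import socket

request = (
    "GET /example.pdf HTTP/1.1\r\n"
    "Host: www.xxx.yyy\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# Open a TCP connection to port 80, send the request, read until the server closes.
with socket.create_connection(("www.xxx.yyy", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

# The status line and headers end at the first blank line (CRLF CRLF);
# everything after that is the body, i.e. the bytes of example.pdf.
head, _, body = response.partition(b"\r\n\r\n")
print(head.decode("iso-8859-1"))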
Q:
I have a web application which is published on a server. When I try the web application from another city, the performance is bad and everything is slow.
Should I make any enhancements to my code, or is this related mostly to network factors?
Any advice please.
Error/Status Code: 200
Start Offset: 0.194 s
Initial Connection: 193 ms
Time to First Byte: 286 ms
Content Download: 1286 ms
Bytes In (downloaded): 48.6 KB
Bytes Out (uploaded): 0.4 KB
Request Headers:
GET /sch/ScheduleForm.aspx HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:7.0.1) Gecko/20100101 Firefox/7.0.1 PTST/25
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Connection: keep-alive
Response Headers:
HTTP/1.1 200 OK
Date: Wed, 21 Dec 2011 15:17:00 GMT
Server: Microsoft-IIS/6.0
MicrosoftOfficeWebServer: 5.0_Pub
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Set-Cookie: ASP.NET_SessionId=ane2ncmyyoqwckjmv4bijq45; path=/; HttpOnly
Cache-Control: private
Content-Type: text/html; charset=utf-8
Content-Length: 4938
First of all, you must find out whether the delay is caused by the network, by the server itself, by the client computer, or by your page design.
It is rare to have bad performance only because you change city; maybe the other computer has a slow connection, a bad configuration, or a bad ISP, and that is why it looks slow.
Client Quick Check
The fastest way to do a quick check of the network response is to ping your site: open a command prompt window, run "ping www.yoursite.com -t", and look at the times. If your server is in the same country, they should be under 50 ms.
Network - Page design
Now, the second point: a global check. You can use this site
http://www.webpagetest.org/
to check the speed of your page globally and get very interesting results, such as the response time.
Server
There is always the case that you have placed your site on a shared server, with thousands of sites and a bad configuration, so at peak times the server performs badly. I have seen this happen a lot of times.
Your Web Application
If the live site is slow compared with the development computer, then it is either because the live site has a huge database that you have not checked, or because a hacker has found a back door and is attacking it, for example by creating empty accounts, or something similar. It is up to you to find out whether the delay comes from the calls inside your program.
And more reasons for that delay exist.
One more note: use the developer tools that exist in Chrome, Firefox, Opera and Safari to see the response time of your site when your page loads.
Hope this helps.
Your best starting point would be to find out where the speed issue is coming from. Try Firebug/Chrome debugging tools to see if the time is being taken on the server (ie sending you the first byte of the website) or if it's just plain old loading time (eg images take a long time).
If it's on the server, then you potentially have some architectural/coding issues; if it's just delivery time of content, then you have network/content issues (compress content with GZip, optimise PNGs with PNGOUT, that sort of thing).
Good luck :)
That depends heavily on the geographical locations of the two cities. If the two cities are nearby, you will most likely not notice any difference.
If the cities are on different continents, you will almost certainly notice latency issues.
If you have latency issues, you cannot solve them solely by adjusting your code. You need something like geographical load balancing, hosting servers at different locations.
Facebook, for example, has multiple data centres: one on the West Coast, one on the East Coast, and I think one in Europe as well. Depending on which location you come from, requests are forwarded to the nearest data centre.
I'm using Apache Abdera to POST atom multipart data to my server, and am having some odd problems that I can't pin down.
It looks like an issue with chunked transfer encoding, but I'm insufficiently experienced to be certain. The problem manifests as the server throwing an error indicating that the request I sent it contains only one mime part, not two as required. I attached Wireshark to the interface and captured the conversation, and it went like this:
POST /sss/col-uri/2ee98ea1-f9ad-4f01-9b1c-cfa3c4a6dc3c HTTP/1.1
Host: localhost
Expect: 100-continue
Transfer-Encoding: chunked
Content-Type: multipart/related; boundary="1306399868259";type="application/atom+xml;type=entry"
The server's response:
HTTP/1.1 100 Continue
My client continues:
198
--1306399868259
Content-Type: application/atom+xml;type=entry
Content-Disposition: attachment; name="atom"
<entry xmlns="http://www.w3.org/2005/Atom"><title xmlns="http://purl.org/dc/terms/">Richard Woz Ere</title><bibliographicCitation xmlns="http://purl.org/dc/terms/">this is my citation</bibliographicCitation><content type="application/zip" src="cid:48bd9436-e8b6-4f68-aa83-5c88eda52fd4" /></entry>
0
b0e9
--1306399868259
Content-Type: application/zip
Content-Disposition: attachment; name="payload"; filename="example.zip"
Content-ID: <48bd9436-e8b6-4f68-aa83-5c88eda52fd4>
Packaging: http://purl.org/net/sword/package/SimpleZip
And at this point the server responds with:
HTTP/1.1 400 Bad Request
Date: Thu, 26 May 2011 08:51:08 GMT
Server: Apache/2.2.17 (Unix) mod_ssl/2.2.17 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.3 Python/2.6.1
Connection: close
Transfer-Encoding: chunked
Content-Type: text/xml
Indicating the error (which is well understood). My client goes on to stream a pile of base64 encoded bits onto the output stream, but in the meantime the server is not listening; it has already decided that the request was erroneous.
Unfortunately, I'm not in charge of the HTTP layer - this is all handled by Abdera using Apache httpclient. My code that does this looks like this:
client.execute("POST", url.toString(), new SWORDMultipartRequestEntity(deposit), options);
Here, the SWORDMultipartRequestEntity is a copy of the standard Abdera MultipartRequestEntity class, with a few extra headers thrown in (see, for example, Packaging in the above snippet); the "deposit" argument is just an object holding the atom part and the inputstream.
When attaching a debugger I get to this line of code fine, and then it disappears into a rat hole and then I get this error back.
Any hints or tips? I've pretty much exhausted my angles of attack!
The only thing that stands out for me is that immediately after the atom:entry document, there is a newline with "0" on it alone, which appears to be chunked transfer encoding speak for "I'm finished". Not sure how it got there, or whether it really has any effect. Help much appreciated.
Cheers,
Richard
The lonely 0 may indeed be a problem. My uninformed guess is that it results from some call to flush(), which then writes the whole buffer as another HTTP chunk. Unfortunately, at the point where flush is called, the buffer has already been flushed and its size is therefore zero. So the HttpChunkedOutputFilter (or whatever it is called) should be taught that an empty buffer does not need to be flushed.
[update:] You should set a breakpoint in the ChunkedOutputStream class, especially the flush method. I just looked at its code and it seems to be ok, but maybe I missed something.
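For reference, this is how chunked framing works: each chunk is its size in hex, a CRLF, the data, and another CRLF, and a zero-length chunk terminates the body. The sketch below uses made-up stand-in data (not the real Abdera output) to show why an empty chunk in the middle of the stream is fatal:

def encode_chunk(data: bytes) -> bytes:
    # One chunk: "<size in hex>\r\n<data>\r\n"
    return ("%x\r\n" % len(data)).encode("ascii") + data + b"\r\n"

part1 = b"--1306399868259\r\n...atom entry part...\r\n"
part2 = b"--1306399868259\r\n...zip part...\r\n--1306399868259--\r\n"

# Correct: both MIME parts, then the terminating zero-length chunk.
good = encode_chunk(part1) + encode_chunk(part2) + b"0\r\n\r\n"

# What the capture shows: an empty chunk ("0") between the parts. The server
# treats it as end-of-body, so it only ever sees the first MIME part.
bad = encode_chunk(part1) + b"0\r\n\r\n" + encode_chunk(part2) + b"0\r\n\r\n"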