Testing specific Azure Web Site Instance - asp.net

I have an Azure Web Site configured to use multiple (2) instances.
I have a service bus that should pass messages (e.g. cache-evict notifications) between the instances. I need to test this mechanism.
In a conventional (on-premises) system I would point a browser to instance 1 (e.g. http://myserver1.example.com), perform an action, then point my browser to the other instance (http://myserver2.example.com) to test.
However, in Azure I can't see a way to hit a specific instance. Is it possible? Or is there an alternative way to run through this test scenario (act on instance 1, ensure instance 2 behaves appropriately)?

Unfortunately, there isn't an official way of doing this. However, you can achieve it by setting a cookie called ARRAffinity on your request.
Try hitting your site from any client (Chrome, Firefox, curl, httpie, etc.) and inspect the response headers that you get back.
For example, with curl you would run:
curl -I <siteName>.azurewebsites.net
and you would get back something like this:
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: text/html
Last-Modified: Wed, 17 Sep 2014 16:57:26 GMT
Accept-Ranges: bytes
ETag: "2ba0757598d2cf1:0"
Server: Microsoft-IIS/8.0
X-Powered-By: ASP.NET
Set-Cookie: ARRAffinity=<very long hash>; Path=/;Domain=<siteName>.azurewebsites.net
Date: Fri, 28 Nov 2014 03:13:07 GMT
The header you are interested in is ARRAffinity. If you send a couple of requests, you will notice that the hash keeps alternating between 2 values, which represent your 2 instances.
Setting that value in the Cookie header of your request guarantees that it is routed to one particular instance rather than the other:
curl --cookie "ARRAffinity=<one of the hashes you got>" <siteName>.azurewebsites.net
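For repeated testing you can script the same idea. Here is a minimal sketch using Python's requests library (the site URL is a placeholder; the flow simply automates the curl steps above):

import requests

SITE = "https://<siteName>.azurewebsites.net"  # placeholder: your site URL

# First request: let the load balancer pick an instance, then capture
# the ARRAffinity cookie that identifies it.
first = requests.get(SITE)
affinity = first.cookies.get("ARRAffinity")
print("pinned to instance:", affinity)

# Any request that sends this cookie back is routed to the same
# instance; substituting the other observed hash targets the other one.
pinned = requests.get(SITE, cookies={"ARRAffinity": affinity})
print(pinned.status_code)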

Related

How to use TCP to send out an HTTP response?

I am trying to use C++ to develop an HTTP server on Windows. I respond to an HTTP request by using WSASend to send out:
char response[] =
"HTTP/1.1 200 OK\r\n\
Date: Mon, 27 Jul 2009 12:28:53 GMT\n\r\
Server: Apache/2.2.14 (Win32)\n\r\
Last-Modified: Wed, 22 Jul 2009 19:15:56 GMT\n\r\
Content-Length: 88\n\r\
Content-Type: text/html\n\r\
Connection: Closed\n\r\n\r\
<html><body><h1>Hello, World!</h1></body></html>"
Although the browser did show Hello, World! when I typed in 127.0.0.1, it just kept showing the loading sign as if the page had not finished loading, and the browser's console never showed the response message. Why?
Is there some format issue with my response message?
Content-Length: 88\n\r\
....
Connection: Closed\n\r\n\r\
There are several problems with your code. All over your code you use \n\r instead of \r\n, so the response is invalid HTTP. The Content-Length header must also reflect the actual length of the body: <html><body><h1>Hello, World!</h1></body></html> is 48 bytes, not 88 bytes as your code claims. Apart from that, it must be Connection: close instead of Connection: Closed.
Note that HTTP is far more complex than you might think. If you really need to implement it yourself instead of using established libraries, please study the actual standard (that's what standards are for!) instead of fiddling around until it seems to work. Otherwise it might work only within your specific environment and with a specific browser, and you'll get strange problems later.
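To illustrate those fixes, here is a minimal sketch of a server sending a correctly formatted response, written in Python rather than C++ for brevity (the address and port are arbitrary); note the \r\n line endings, the computed Content-Length, and Connection: close:

import socket

# The body determines Content-Length: 48 bytes for this document.
body = b"<html><body><h1>Hello, World!</h1></body></html>"
response = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
    b"Connection: close\r\n"
    b"\r\n" + body
)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 8080))
srv.listen(1)
conn, _ = srv.accept()
conn.recv(4096)        # read (and discard) the browser's request
conn.sendall(response)
conn.close()           # closing the socket marks the end of the response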

Strange characters preceding and following HTML in HTTP request

Background
I am building a custom HTTP parser in C++/CX using sockets. As such, I have full control over the entire HTTP request and response.
Request
GET /posts/html-android-app?referrer=rss HTTP/1.1
Host: mixturatech.com
Connection: close
Response
HTTP/1.1 200 OK
Date: Thu, 30 Apr 2015 04:44:59 GMT
Server: Apache
X-Powered-By: PHP/5.2.17
Access-Control-Allow-Origin: *
Cache-Control: public
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
6a2f
<!DOCTYPE html>
[trimmed document content]
</html>
0
Additional Data
If I navigate to the webpage with Chrome, Wireshark captures the same data that I am seeing (with the extraneous characters), yet Chrome manages to trim that content out. (I am looking at Chrome's data in the Network tab in Developer Tools.)
I do not see this problem on every site I retrieve, but the problem, if it exists, seems to be sitewide.
Questions
What is up with the 6a2f and 0 preceding and following the document?
Is this an encoding issue?
Is there some way I can positively identify where the actual content lies, without hardcoding boundaries for the document such as "it must start with < and end with >"?
Will those characters, if they exist in a page, always be limited to length 4 and 1 respectively?
This is "chunked transfer encoding". Read http://greenbytes.de/tech/webdav/rfc7230.html#chunked.encoding.

How does HTTP download work?

Let's say I want to download a file called example.pdf from http://www.xxx.yyy/example.pdf
Presumably, I send a GET request like this:
GET /example.pdf HTTP/1.1␍␊
Host: www.xxx.yyy␍␊
␍␊
But what happens next?
What does the exchange of HTTP headers look like?
I'm assuming you've read the Wikipedia article on the HTTP protocol. If you just need more examples I'd highly recommend you download Wireshark. Wireshark is an extremely powerful packet sniffer which will allow you to watch packet communications between you and any website. In addition it will actually break down the packets and tell you a little bit about their meanings in more "human terms". It has a bit of a learning curve but it can teach you a lot about a number of different protocols including HTTP.
http://www.wireshark.org/
I'm not sure what your ultimate goal is, but you can view real-time HTTP header interaction with the Live HTTP Headers Firefox add-on. It's also possible in Chrome, but it's a little more work.
Check the HTTP 1.1 RFC.
You might want to look at http://www.w3.org/Protocols/rfc2616/rfc2616.html . But also, there is rarely a need to recreate the protocol.
To answer such a GET request, a response with the following headers should be sent back:
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 6475593
Content-Type: application/x-msdownload
Etag: "qwfw473usll"
Last-Modified: Sun, 18 Jul 2021 12:02:31 GMT
Server: Caddy
Date: Sun, 18 Jul 2021 12:03:47 GMT
After the last header line, you must send an empty line (so the headers end with two consecutive CRLFs), followed by the raw bytes of the file to be transmitted.
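As a concrete sketch of the full exchange in Python (the host and path are placeholders; a real client would also handle redirects, chunked encoding, and so on):

import socket

HOST = "www.example.com"   # placeholder host
request = (
    "GET /example.pdf HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()

sock = socket.create_connection((HOST, 80))
sock.sendall(request)

# With Connection: close, read until the server closes the socket.
raw = b""
while chunk := sock.recv(4096):
    raw += chunk
sock.close()

# Status line and headers are separated from the body by a blank line.
head, _, body = raw.partition(b"\r\n\r\n")
print(head.decode())
print(f"received {len(body)} body bytes")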

REST: What is a good Hypermedia and Resource Caching Strategy?

If I have a RESTful service that has discoverable resources via an endpoint such as:
Request:
GET http://acme.org/someInfo
Response:
HTTP/1.1 200 OK
Content-Length: ...
Content-Type: application/vnd.acme+xml
Date: Fri, 16 Dec 2012 12:40:00 GMT
Last-Modified: Tue, 1 Mar 2012 11:45:00 GMT
<someInfo xmlns="http://schemas.acme.org/someInfo" xmlns:dap="http://schemas.acme.org/dap">
<dap:link rel="http://relations.acme.org/someInfo" uri="htp://acme.org/someInfo/foo" />
<dap:link rel="http://relations.acme.org/someInfo" uri="htp://acme.org/someInfo/bar" />
<dap:link rel="http://relations.acme.org/someInfo" uri="htp://acme.org/someInfo/baz" />
</someInfo>
And then with this response, a client may then follow one of the hypermedia links:
Request:
GET http://acme.org/someInfo/foo
Response:
HTTP/1.1 200 OK
Content-Length: ...
Content-Type: application/vnd.acme+xml
Date: Fri, 16 Dec 2012 12:45:00 GMT
Last-Modified: Wed, 28 Sep 2012 11:45:00 GMT
<fooInfo xmlns="http://schemas.acme.org/fooInfo">
...
</fooInfo>
The first response may change less frequently (e.g. every several months), and the second one slightly more frequently (e.g. every month or so). What is a good HTTP caching strategy for this sort of scenario: by date, client ETag comparison, or something else?
EDIT: If the data is stale by a day or so, that is fine. Any more would probably be problematic.
This is a performance versus consistency issue that really can only be answered by the business.
For each resource you need to ask two questions:
1. If the resource changes and the users do not see that change for X hours, what is the business impact? Will reactors explode if the user does not see the temperature change?
2. How much does it cost to see a new version of that resource? Are you on a 1Gbps local network, or accessing it from a mobile phone in Siberia?
Once you know how valuable it is to have that data up-to-date and how much it costs to get that data then you can decide on the best caching strategy.
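As a concrete example of the ETag/Last-Modified option mentioned in the question, a client can revalidate a cached copy with a conditional GET. A sketch using Python's requests library (the URL is the placeholder resource from the question):

import requests

URL = "http://acme.org/someInfo"   # placeholder resource from the question

# First fetch: keep the body plus the validators the server returned.
resp = requests.get(URL)
etag = resp.headers.get("ETag")
last_modified = resp.headers.get("Last-Modified")

# Later revalidation: a conditional GET. If the resource is unchanged,
# the server replies 304 Not Modified with no body, so a day-old cache
# can be refreshed for the cost of one cheap round trip.
headers = {}
if etag:
    headers["If-None-Match"] = etag
if last_modified:
    headers["If-Modified-Since"] = last_modified

check = requests.get(URL, headers=headers)
if check.status_code == 304:
    print("cached copy is still valid")
else:
    print("resource changed; replace the cached copy")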

Cneonction and nnCoection HTTP headers

We often run into interoperability issues on the Web. One of these issues for browser vendors is the wrongly spelled Connection HTTP header. The two most common erroneous forms are:
nnCoection:
Cneonction:
There have been a few articles about this, including Fun with HTTP headers. Often it happens for a period of time, then disappears. It seems that some of these headers are created by load balancers, as in this example: NetScaler Appliance.
Do you know any other instances of hardware or software that create these issues?
Update: Here is one example, among others, of a site that doesn't send back a proper Connection HTTP header.
curl -sI ehg-nokiafin.hitbox.com
HTTP/1.1 200 OK
Date: Tue, 25 Jan 2011 20:35:45 GMT
Server: Hitbox Gateway 9.3.6-rc1
P3P: policyref="/w3c/p3p.xml", CP="NOI DSP LAW NID PSA ADM OUR IND NAV COM"
Cneonction: close
Pragma: no-cache
Cache-Control: max-age=0, private, proxy-revalidate
Expires: Tue, 25 Jan 2011 20:35:46 GMT
Content-Type: text/plain
Content-Length: 23
Update 2011-01-26
On the Amazon AWS forum, there is a thread about nnCoection. A comment says:
FYI, the reason it misspells the word connection is so that the internet check-sum (a simple sum) still adds up; this way the change can occur at the packet level. If it completely removed the header, it would have to stall forwarding the response until the header was entirely read, so it could rewrite the headers, recompute the checksum and then send it along.
Indeed, both sum(ord(c) for c in "Connection") and sum(ord(c) for c in "nnCoection") give 1040, since nnCoection is just a permutation of the same letters.
Are you sure it's an actual issue? The linked article suggests that these sorts of headers are "misspelled on purpose" so that a load balancer, reverse proxy or other middlebox can defeat the server's wishes that the connection be kept alive, without having to track a delta in TCP stream position over the life of the connection. Something like this may actually be necessary to bring a downed and recovered server back into active duty, by forcing kept-alive connections to other servers to migrate to the one brought online.
If you have a protocol that's dependent on HTTP Connection: keep-alive to function (cough), you're probably doing it wrong.
