HTTP request, strange socket behaviour

I'm experiencing strange behavior when making HTTP requests through sockets. Here is the request:
POST https://example.com:443/service/XMLSelect HTTP/1.1
Content-Length: 10926
Host: example.com
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
Authorization: Basic XXX
SOAPAction: http://example.com/SubmitXml
After the headers, the body of my request follows with the given content length.
After that, I receive something like:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: text/xml;charset=utf-8
Transfer-Encoding: chunked
Date: Tue, 30 Mar 2010 06:13:52 GMT
So everything seems to be fine here. I read all the content from the network stream and successfully receive the response. But the socket I'm polling switches its modes like this:
write (I write the headers and request here)
read (after the headers are sent, I begin to receive the response)
write (STRANGE BEHAVIOUR HERE. WHY? I really send nothing here)
read (here it switches back to read again)
The last two steps can repeat several times. So I want to ask: what causes the socket's mode to change? In this case it's not a big problem, but when I use gzip compression in my request (no idea how that's related) and ask the server to send a gzipped response to me like this:
POST https://example.com:443/service/XMLSelect HTTP/1.1
Content-Length: 1076
Accept-Encoding: gzip
Content-Encoding: gzip
Host: example.com
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
Authorization: Basic XXX
SOAPAction: http://example.com/SubmitXml
I receive a response like this:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Encoding: gzip
Content-Type: text/xml;charset=utf-8
Transfer-Encoding: chunked
Date: Tue, 30 Mar 2010 07:26:33 GMT
2000
�
I receive the chunk size and the gzip header; that's all okay. And here's what happens to my poor little socket meanwhile:
write (I write the headers and request here)
read (after the headers are sent, I begin to receive the response)
write (STRANGE BEHAVIOUR HERE. And this time it sits here forever, waiting for me to send something! But according to HTTP, I don't have to send anything more!)
What can this be related to? What does it want me to send? Is it the remote web server's problem, or am I missing something?
PS: All actual service references and logins/passwords have been replaced with fake ones :)

A socket becomes writable whenever there's space in the socket send buffer. The OS can't really know whether your application has more data to send; it only knows about its own internal structures, such as socket buffers. You have to explicitly add the socket to (and remove it from) the write fd_set for select(2), or enable/disable the EPOLLOUT event for epoll(4), depending on whether you actually have data queued to send. This is usually handled with a state machine, as in libevent. Also, polling works best with non-blocking sockets.
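To illustrate, here is a minimal C# sketch of such a poll loop (the names, queue, and buffer size are my own, not from the question). The key point is that the socket goes into the write list only while data is actually queued:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Net.Sockets;

class PollLoop
{
    static void Run(Socket socket, Queue<byte[]> sendQueue)
    {
        socket.Blocking = false; // polling works best with non-blocking sockets
        var buffer = new byte[8192];

        while (true)
        {
            var readList = new ArrayList { socket };
            // Include the socket in the write list ONLY while there is
            // pending data; otherwise select() reports it writable forever,
            // which is the "strange" write state described above.
            ArrayList writeList = sendQueue.Count > 0 ? new ArrayList { socket } : null;

            Socket.Select(readList, writeList, null, 100_000); // 100 ms timeout

            if (writeList != null && writeList.Count > 0 && sendQueue.Count > 0)
                socket.Send(sendQueue.Dequeue());

            if (readList.Count > 0)
            {
                int n = socket.Receive(buffer);
                if (n == 0) break; // peer closed the connection
                // ... hand the n received bytes to the HTTP/chunked parser ...
            }
        }
    }
}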
Hope this helps.

Related

SignalR server closes connection to Unity client right after SSE connection is established

I'm working on a prototype project connecting a self-hosted SignalR server running on Mono with C# clients (for testing) and Unity clients (representing the actual use-case scenario). The Unity client is using BestHTTP Pro as its SignalR library.
As the WebSocket transport method is not supported on Mono, I'm focusing on server-sent events, and observing very odd behavior there. Communication between server and C# clients is working just fine out of the box. With the Unity client though, the (supposedly) persistent connection is closed immediately after the initial response to the /signalr/connect request. No errors are reported anywhere; the response code is 200 in both cases.
Further investigation with Fiddler reveals that the Unity client is sending a Connection: Keep-Alive header that the C# client doesn't send, to which the server responds with a Connection: close header and then, well, closes the connection (in other words, exactly the opposite of what the client asked it to do).
Manually removing the keep-alive request header actually makes everything work with the Unity client. Since this feels more like an odd workaround than a correct solution, my question is: Is this strange server-side behavior a bug in the SignalR libraries? Or could Mono be to blame here (I suspect this might be the case)? How can I dig deeper into this, and ideally make the SSE transport work without client-side hacks?
Library versions used:
Microsoft ASP.NET SignalR 2.2.1
BestHTTP Pro 1.9.17
For reference, here are the full request/response headers; Unity/BestHTTP client:
GET /signalr/connect?tid=1&_=XXX&transport=serverSentEvents&clientProtocol=1.5&connectionToken=XXX&connectionData=XXX HTTP/1.1
Accept: text/event-stream
Cache-Control: no-cache
Accept-Encoding: identity
Host: XXX
Connection: Keep-Alive
Connection: Keep-Alive, TE
TE: identity
User-Agent: BestHTTP
HTTP/1.1 200 OK
X-Content-Type-Options: nosniff
Content-Type: text/event-stream
Server: Mono-HTTPAPI/1.0
Date: Wed, 08 Mar 2017 10:34:05 GMT
Connection: close
Content-Length: 73
C# client:
GET /signalr/connect?clientProtocol=1.4&transport=serverSentEvents&connectionData=XXX&connectionToken=XXX HTTP/1.1
User-Agent: SignalR.Client.NET45/2.2.1.0 (Microsoft Windows NT 6.2.9200.0)
Accept: text/event-stream
Host: XXX
HTTP/1.1 200 OK
X-Content-Type-Options: nosniff
Content-Type: text/event-stream
Server: Mono-HTTPAPI/1.0
Date: Wed, 08 Mar 2017 13:11:16 GMT
Transfer-Encoding: chunked
Keep-Alive: timeout=15,max=99
BestHTTP developer here.
First of all, the plugin can use WebSocket as a SignalR transport on every supported platform: while the Mono framework that Unity uses has no WebSocket implementation, the plugin ships its own.
The Server-Sent Events protocol gives no direct indication of what should be done in a case like this, but I have modified the plugin to work the same way as other clients do. You can wait for the next release on the Asset Store, or you can contact me for an updated package.
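In the meantime, the client-side workaround the question describes might look roughly like this, assuming BestHTTP exposes a per-request preparation hook (the RequestPreparator and RemoveHeader names below are assumptions, not confirmed API; check the plugin's documentation for the exact names):

using System;
using BestHTTP.SignalR;

public class SignalRWorkaround
{
    public static Connection Connect()
    {
        var connection = new Connection(new Uri("http://example-host/signalr"));

        // Strip the Connection: Keep-Alive header before each request goes
        // out, so Mono's HTTP listener doesn't answer with Connection: close.
        // RequestPreparator / RemoveHeader are assumptions about the plugin API.
        connection.RequestPreparator = (conn, request, type) =>
        {
            request.RemoveHeader("Connection");
        };

        connection.Open();
        return connection;
    }
}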

Multiple Protocols - Correct Implementation

I was inspecting some of the HTTP exchanges between my browser and Google and it triggered this question.
In short, my browser (Firefox 36.0.4) is making HTTP/1.1 requests and Google is responding with HTTP/2.0; there is no attempt to respond in the requested protocol. I am aware that much of the HTTP/2.0 spec has already been implemented in a haphazard way through SPDY, but this seems like poor negotiation with the client.
I thought that the purpose of declaring protocols in the header was so that a server could determine how it should respond to the client, in one of four ways:
1. The client has requested the server's preferred protocol, so the server continues with the request as normal.
2. The client has requested another protocol version that the server supports; the server responds in the requested protocol but includes an Upgrade header indicating its preferred protocol. The client MAY request an upgrade, at which point the server sends a 101 Switching Protocols response and switches to the preferred protocol.
3. The client has requested an unsupported or outdated protocol; the server sends a 426 Upgrade Required response with the supported protocols (in descending order of preference) in the Upgrade header, and the client must repeat the request with a supported protocol.
4. The client has requested a major protocol version that is wholly unsupported, e.g. HTTP/2.x while the server only supports HTTP/1.x; the server responds with 505 HTTP Version Not Supported.
The exchange with Google is not doing this; is this poor practice or am I missing something?
An example, selected at random:
https://plus.google.com/u/0/_/notifications/frame?querystring=blahblahblah
GET /u/0/_/notifications/frame?querystring=blahblahblah HTTP/1.1
Host: plus.google.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:36.0) Gecko/20100101 Firefox/36.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://www.google.co.uk/?gfe_rd=cr&ei=Lc8bVcXFOKbj8we_uIKYDg&gws_rd=ssl
Cookie: NID=67=iZxcMVTvg-6PsQIUpZ5tSPL-7-uJdls3vdci3afLmoLCpD5JOq0NfzhTnnpcCW9ymbXsn3GRGxfSgYlXGEk9XmnbUne0LCPrUc_ahhpc5wV6n-GZ8F7s-JS-JWgZWEwri-GaWXK1vgyRw7jMbqEiAUSRCzs1Fr1K6ZUIH0EpJdlwZD-K26MJNazpyHL_vZ5k4m8NrtFDkAoYPw; OTZ=2759671_52_56_123900_52_436380; SID=DQAAAP0AAAAqKgGz5aFNESd464Z_jUsmTi7JQfEKsuWkGZVJe8QvdbOPTZpL5ZNjKSsSSg9QvJglP-aMNLrgn2b7MsDF_4Z7Ebe1X347Cd3-j3ktLedgmq9nRO92hxEseqf974VNumrst-XqMj9Oq_xf-KDz-CDEJ1XiqWZYVHurV-IrXib5ei7x9dqlLF2NSPYLaCxlrwKdjCQX-FDDB03FWEuE7dIMYs3BQ-_NU5fG9os6I6r6ABy9mkiy84rraZFVthd38VJF5z2WYmgQ55QJPr9EDpSA5VKH1tbW6XyLjZLt5EEEj1xoqRF4EguRkIOiG8IiqRs49GnwqQSCpTw3ROW-jNDI; HSID=A7u8vyQI-v7jJSEbS; SSID=AOojY4hDLYgnSjUrK; APISID=z23KH1a0VsBukvMu/ARaOeOni08HfbGg6R; SAPISID=5iTgyxKDRPP7fNtF/AdiFbKNYN04h7n6cu; PREF=ID=cc54787f58f50d42:U=8e10581450dbe3b5:FF=0:LD=en:TM=1416091562:LM=1418086819:GM=1:S=0KVfl2hqkG8Psvwv; OGP=-5061451:-5061492:; OGPC=4061155-1:
Connection: keep-alive
HTTP/2.0 200 OK
Alternate-Protocol: 443:quic,p=0.5
Cache-Control: private, max-age=0
content-security-policy-report-only: script-src 'unsafe-inline' 'unsafe-eval' 'self' https://*.googleapis.com https://*.gstatic.com https://apis.google.com https://www.google-analytics.com https://www.googletagmanager.com https://*.talkgadget.google.com https://pagead2.googleadservices.com https://pagead2.googlesyndication.com https://tpc.googlesyndication.com https://s.ytimg.com https://www.youtube.com https://clients1.google.com https://www.google.com;report-uri /_/cspreport/es_oz_20150330.18_p0
Content-Type: text/html; charset=utf-8
Date: Wed, 01 Apr 2015 10:57:55 GMT
Expires: Wed, 01 Apr 2015 10:57:55 GMT
Server: GSE
x-content-type-options: nosniff
x-ua-compatible: IE=edge, chrome=1
X-XSS-Protection: 1; mode=block
X-Firefox-Spdy: h2-15
This is an HTTPS request. The client announced its support for HTTP/2.0 with the ALPN (formerly NPN) extension in the TLS handshake, so the server knows that the client can do HTTP/2.0. If this extension is not given, the server is not allowed to reply with a higher major HTTP version than the one in the client's request.
The HTTP version in the response line is an advertisement of the capabilities of the server, not the actual protocol version of the response.
The protocol version of the response is the one that was sent with the request.
In the past (and perhaps even nowadays) it was common for an old client to send an HTTP/1.0 request and have the server respond in this way:
GET / HTTP/1.0
User-Agent: Netscape/1.0
HTTP/1.1 200 OK
Content-Length: 0
<connection closed>
The server advertised that it was able to speak HTTP/1.1, but behaved as HTTP/1.0 in the response (by closing the connection).
The same is happening in your case: you make an HTTP/1.1 request, and the server advertises that it can speak HTTP/2.0 while responding in the HTTP/1.1 response format.
A smart client receiving that response could start speaking HTTP/2.0 to that server.
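If you want to observe the ALPN negotiation described above, here is a small self-contained C# sketch (modern .NET; the host is the one from the question) that offers both protocols and prints which one the server selected during the handshake, before any HTTP bytes are exchanged:

using System;
using System.Collections.Generic;
using System.Net.Security;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

class AlpnProbe
{
    static async Task Main()
    {
        using var tcp = new TcpClient();
        await tcp.ConnectAsync("plus.google.com", 443);

        using var tls = new SslStream(tcp.GetStream());
        await tls.AuthenticateAsClientAsync(new SslClientAuthenticationOptions
        {
            TargetHost = "plus.google.com",
            // Offer both protocols; the server picks one during the TLS
            // handshake, which is how it knows HTTP/2.0 is safe to use.
            ApplicationProtocols = new List<SslApplicationProtocol>
            {
                SslApplicationProtocol.Http2,
                SslApplicationProtocol.Http11,
            },
        }, CancellationToken.None);

        // Prints "h2" if the server selected HTTP/2 via ALPN.
        Console.WriteLine(tls.NegotiatedApplicationProtocol);
    }
}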

If-Modified-Since - HTTP protocol

If my browser uses a cache (a local cache), is it GUARANTEED that each HTTP request it sends contains an If-Modified-Since header line?
If not, how do I make sure that it will? And what if I configure a proxy server in the browser? Will the proxy add it automatically then?
Thanks in advance.
I was just working on this with my RESTful web service and ran a few tests for a particular resource. First of all, I was trying to control the browser cache from my web server by setting the following HTTP headers on the response for the resource:
Cache-Control: must-revalidate, max-age=30
Last-Modified: Mon May 19 11:21:05 GMT 2014
Expires: Mon May 19 11:51:05 GMT 2014
Then, from my web UI, a timer periodically (every 5 seconds) does a GET on the resource that I've declared cacheable. While the resource in the browser cache has not yet expired, the GET request for the resource is served from the browser cache; however, once the max-age has expired, the next GET request goes to the server, and the browser adds the If-Modified-Since header with the Last-Modified date as its value, like this:
[GET] - /cms_cm_web/api/notification
referer: http://localhost:8080/cms_ui/#/
accept: application/json, text/plain, */*
accept-language: en-us
ua-cpu: AMD64
accept-encoding: gzip, deflate
user-agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)
host: localhost:8080
if-modified-since: Mon, May 19 11:21:05 GMT 2014
connection: Keep-Alive
This came from an IE9 browser. I get the same from the latest Firefox and Chrome browsers as well.
From here the server can look for the If-Modified-Since header; if it determines that the resource has not been modified, it returns a 304 Not Modified response, otherwise it returns the resource representation with a 200 OK response.
So, according to the HTTP specification, you can control caching using the Expires and/or Cache-Control headers together with a Last-Modified header. This causes the browser to perform what's called a conditional GET request, as it includes the If-Modified-Since header.
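To make that concrete, here is a minimal server-side sketch in C# (ASP.NET Core minimal-API style; the route and timestamp come from the example above, the payload is invented) of the conditional-GET handling:

using System;
using System.Globalization;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var app = WebApplication.Create();

// Fixed last-modified time for the demo resource (from the example above).
var lastModified = new DateTime(2014, 5, 19, 11, 21, 5, DateTimeKind.Utc);

app.MapGet("/cms_cm_web/api/notification", (HttpContext ctx) =>
{
    // Tell the browser to cache for 30 s and then revalidate.
    ctx.Response.Headers["Cache-Control"] = "must-revalidate, max-age=30";
    ctx.Response.Headers["Last-Modified"] = lastModified.ToString("R");

    // Conditional GET: if the client's cached copy is still current,
    // answer 304 with no body instead of the full representation.
    string ims = ctx.Request.Headers["If-Modified-Since"];
    if (DateTime.TryParse(ims, CultureInfo.InvariantCulture,
            DateTimeStyles.AdjustToUniversal, out var since)
        && lastModified <= since)
    {
        return Results.StatusCode(StatusCodes.Status304NotModified);
    }

    return Results.Json(new { message = "fresh representation" });
});

app.Run();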

HTTP Status Code 413

I have a page with a normal Ajax UpdatePanel. There is a submit button which sends the user's selection to the server. If the user waits for a minute or two, the response from the server is HTTP 413 (Request Entity Too Large). This only happens when I try to resubmit after waiting a minute or two; if I land on the page and submit the form right away, the server is able to process it.
I have modified uploadReadAheadSize (as mentioned in http://forums.asp.net/t/1574804.aspx) and set it to 200,000,000, but the problem persists.
Http Request
POST https://server/somepage HTTP/1.1
Accept: */*
Accept-Language: en-US,zh-Hans;q=0.9,zh-CN;q=0.8,zh-SG;q=0.7,zh-Hant;q=0.6,zh-HK;q=0.4,zh-MO;q=0.3,zh-TW;q=0.2,zh;q=0.1
Referer: https://server/somepage
x-requested-with: XMLHttpRequest
x-microsoftajax: Delta=true
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Cache-Control: no-cache
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; InfoPath.3; .NET4.0E)
Host: some-server
Content-Length: 86124
Connection: Keep-Alive
Form-Data...........
The request is over SSL.
I also tried editing the httpRuntime configuration in web.config:
<httpRuntime executionTimeout="3600" maxRequestLength="1902400" />
That solved the error. Correct me if I haven't described the problem correctly:
SSL opens a secure tunnel for some time, so whenever I posted data within that time frame, everything went fine. But once the tunnel closed, the server had to preload the request entity before the client renegotiation, and because the preload maximum length was too small, the request failed.
I tried setting uploadReadAheadSize to 120,000, which is greater than the request entity size of about 86,000. The request still failed (weird?).
It was fine once I set it to a value of approximately 10 MB.
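For reference, the combination that eventually worked looks roughly like this in web.config (a sketch assuming IIS 7 or later; the serverRuntime section may need to be unlocked in applicationHost.config before a site-level override is allowed):

<configuration>
  <system.web>
    <httpRuntime executionTimeout="3600" maxRequestLength="1902400" />
  </system.web>
  <system.webServer>
    <!-- Preload up to ~10 MB of the request entity before SSL renegotiation. -->
    <serverRuntime uploadReadAheadSize="10485760" />
  </system.webServer>
</configuration>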

Getting error #2032 when calling WCF Service even though call succeeds

We are developing an application using Adobe AIR for the client GUI with a mix of WCF and REST on the backend. One of the requirements of this application is that it must work offline. So, when the user clicks save, the application stores the data in a local SQLite database. Every 15 seconds, the application checks whether it is online and, if so, sends any pending requests out. If a call succeeds, it updates the local database so it won't try to send that case again.
For this particular operation, OpenMedicalCase, the app sends out the request but can't decode the response. I have verified that the WCF side of things is working correctly and that the response message is well formed. The network monitor in Flash Builder says I am receiving 100 Continue:
POST /services/medicalcase.svc HTTP/1.1
Referer: app:/AWC_MRDS.swf
Accept: text/xml, application/xml, application/xhtml+xml, text/html;q=0.9, text/plain;q=0.8, text/css, image/png, image/jpeg, image/gif;q=0.8, application/x-shockwave-flash, video/mp4;q=0.9, flv-application/octet-stream;q=0.8, video/x-flv;q=0.7, audio/mp4, application/futuresplash, /;q=0.5
x-flash-version: 10,1,53,64
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://tempuri.org/IMedicalCaseService/OpenMedicalCase"
Content-Length: 8534
User-Agent: Mozilla/5.0 (Windows; U; en-US) AppleWebKit/531.9 (KHTML, like Gecko) AdobeAIR/2.0.2
Host: localhost:11934
Cookie: RememberMe=1147670691^1#3435272784175716681
HTTP/1.1 100 Continue
Server: ASP.NET Development Server/10.0.0.0
Date: Thu, 23 Dec 2010 21:23:49 GMT
Content-Length: 0
Calls to other operations at the same endpoint return 200 OK as expected. So what ends up happening is that Flex thinks the call did not succeed and sends it over and over again. AFAIK, Flex is not sending Expect: 100-Continue in the headers either.
Update: I attached debuggers to the WCF service AND the GUI, setting a breakpoint right before the server sends a response. Flex receives 100 Continue before the service code returns anything. Please note that I am only testing this with the ASP.NET Development Server. Is there some property or configuration option I need to change on the Flex side? In WCF?
