Is there an HTTP header field / hack to tell the browser NOT to pipeline its requests? - http

I am implementing a minimalistic web server application on a microcontroller. When I have several images (or CSS/JS) on the web page, the browser creates several connections and fetches them. But the microcontroller cannot keep up with this. Is there a way to tell the browser to stop pipelining and fetch them one by one?
Note :: "Connection: close" is already in place.

I think "Connection: close" is exactly the wrong message. When the browser creates multiple connections, it is precisely not pipelining its requests - so ISTM that you want the browser to pipeline, instead of creating parallel connections.
So one step towards that would be to use HTTP/1.1 and keep the connection open. The browser would then reuse the TCP connection for further requests, which should allow the microcontroller to keep up.
Now, the browser might still try to create additional, parallel connections. The best reaction to that is to not accept any of these connections. So limit the number of parallel connections that you are serving (independent of client), and only read new requests when you are done reading the previous ones. In doing so, prefer to read from established connections over accepting new connections.
If you have access to the TCP stack of the controller, you might be able to tell what host a connection comes from before completing the handshake, so you can accept connections from other browsers while limiting the number of connections from the same browser (something you cannot do with the regular socket API, which only tells you the peer address once the connection has been accepted).
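
A minimal sketch of that accept policy, using standard BSD sockets rather than a raw MCU stack (MAX_CLIENTS and handle_request() are placeholders I made up for illustration):

    /* Sketch: serve at most MAX_CLIENTS connections; read established
       connections before accepting new ones; while full, don't watch the
       listening socket at all, so extra parallel connections simply wait
       in the listen backlog. */
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MAX_CLIENTS 2              /* tune to what the MCU can handle */

    int handle_request(int fd);        /* hypothetical; returns 0 when done */

    void serve_once(int listen_fd, int clients[], int *nclients)
    {
        fd_set rfds;
        int maxfd = listen_fd, i;

        FD_ZERO(&rfds);
        if (*nclients < MAX_CLIENTS)   /* only listen when there is room */
            FD_SET(listen_fd, &rfds);
        for (i = 0; i < *nclients; i++) {
            FD_SET(clients[i], &rfds);
            if (clients[i] > maxfd)
                maxfd = clients[i];
        }
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
            return;

        /* Prefer established connections ... */
        for (i = 0; i < *nclients; ) {
            if (FD_ISSET(clients[i], &rfds) && !handle_request(clients[i])) {
                close(clients[i]);
                clients[i] = clients[--*nclients];  /* drop finished client */
            } else {
                i++;
            }
        }

        /* ... and only then accept a new one. */
        if (FD_ISSET(listen_fd, &rfds) && *nclients < MAX_CLIENTS) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd >= 0)
                clients[(*nclients)++] = fd;
        }
    }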

"Pipelining" is something else; it means that the user agent sends additional requests on the same connection although the first one didn't complete yet (see http://greenbytes.de/tech/webdav/rfc2616.html#pipelining).
"Connection: close" doesn't seem to be relevant; that being said: is there a reason why you don't want the connection reused?
With respect to your question: no, I don't think you can prevent clients from doing that. Did you try limiting the maximum number of open connections on your server?

Same problem here... However, Firefox loads my site very fast, unlike Opera. I have not come up with anything better than rejecting connections at the initial stage, the SYN: I simply answer with the RST flag. But apparently that doesn't suit Opera.
My device supports only two simultaneous connections.

Related

How to limit HTTPS to one TCP connection?

I'm using uIP along with mbed TLS to run a simple web server on a microcontroller, and host an HTTPS page.
The problem is: my chip only has enough RAM to handle one TLS connection at a time, but Firefox (and Chrome) tries to open multiple connections at once to load the images on the page. If I tell uIP to abort or close additional connections, Firefox assumes an error and gives up loading the rest of the page.
I can tell uIP to limit the total connections to 1, and in that case it just drops new SYN packets if there is already a connection. This actually works, as Firefox will wait and try again until the page is fully loaded. I can't use this as a solution, however, since I do need to allow more than 1 TCP connection in total in order to handle other types of connections (I can serve a regular HTTP web page at the same time, for example). If I could tell uIP to limit connections on a specific port to 1 at a time, that might solve the problem, but I don't think uIP has that capability. I also don't see a way to force uIP to drop certain packets.
I've looked all over the web, but I can't find any information on running a web server using just one TCP connection at a time.
Does anyone have any ideas?
Thanks!
Marlon
Just ignore the SSL connection until you are ready to process it. Browsers should tolerate this.
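
One way to do that "ignoring", sketched with standard BSD sockets rather than uIP (with uIP you would need the equivalent behavior at the stack level): just don't accept() on the HTTPS port while a session is in progress, so a new SYN sits in the listen backlog and the browser waits or retries instead of seeing an error. serve_tls_session() is a hypothetical handler:

    /* Sketch: one TLS session at a time. Connection attempts that arrive
       during a session are neither accepted nor reset; they wait in the
       (tiny) listen backlog until we loop back to accept(). */
    #include <sys/socket.h>
    #include <unistd.h>

    void serve_tls_session(int fd);    /* hypothetical: handshake + HTTP */

    void tls_loop(int listen_fd)       /* listen_fd: bound to port 443 */
    {
        listen(listen_fd, 1);          /* keep the pending queue minimal */
        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd < 0)
                continue;
            serve_tls_session(fd);     /* nothing else accepted meanwhile */
            close(fd);
        }
    }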

Why do we need a half-close socket?

According to this blog, it seems half-open connections are what we want to avoid.
So why does Java still provide the facility to half-close a socket?
According to this blog, it seems half-open connections are what we want to avoid.
The author of the blog explicitly notes that he is not talking about deliberately half-closed connections, but about half-open connections, which are caused by intermediate devices like routers that drop connection state after some timeout.
So why does Java still provide the facility to half-close a socket?
Because they are useful. A half-close just means that no more data will be sent on the socket, while it can still receive data. This behavior is useful in situations where the client sends only a request and receives a response, because the half-close can be used to signal the end of the request to the peer.
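
For illustration, the same pattern in C (Java's Socket.shutdownOutput() corresponds to shutdown() with SHUT_WR); the request buffer and response handling are placeholders:

    /* Sketch: half-close to signal end-of-request, then keep reading.
       Assumes fd is a connected TCP socket and req/len hold the request. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void request_response(int fd, const char *req, size_t len)
    {
        char buf[512];
        ssize_t n;

        send(fd, req, len, 0);
        shutdown(fd, SHUT_WR);         /* peer sees EOF: request is complete */

        /* The read side is still open, so the response arrives normally. */
        while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
            ;                          /* process the response here */
        close(fd);
    }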

Under what circumstances will my browser attempt to re-use a TCP connection for multiple requests?

I am using Firefox, but I'd like to know how browsers decide this in general.
It seems that when I access the same URL twice in a short amount of time, my browser tries to re-use the same TCP connection for both requests (this is called keep-alive). However, when I access two different URLs (still served by the same server), the browser sometimes decides to open up a new connection for each request. Obviously, the browser does not use a one-connection-per-URL policy.
I am asking this because I am trying to implement a web service that uses long polling. I can imagine that a user might want to open this service in multiple tabs of the same browser. However, with keep-alive, the second long-poll request does not get sent until the first one completes (at least in Firefox), because the browser is trying to shove both of them into the same socket, which I did not expect when I designed the service. Even if the browser implements pipelining, there is no way I can respond to the second request before I respond to the first, because HTTP mandates that I complete the responses in order.
When using HTTP/1.1, TCP connections are left open for reuse by default. This gives better performance than opening a new connection per request. The connection can be reused, but either party may close it at any time.
You should read the HTTP/1.1 spec, in particular the section on persistent connections.
In your case the browser is not even using HTTP pipelining (which is not broadly supported), because the next request is only sent after the response to the first.
Browsers keep a connection pool and reuse connections per hostname. Generally speaking, a browser should not reuse a single connection for multiple hostnames, even if those hostnames actually resolve to the same IP address.
Most browsers allow the user to configure or override the number of persistent connections per server; most modern browsers default to six. If Firefox is truly blocking the second request because there's already a connection active, this is a bug in Firefox and should be filed in their bug tracking system. But if such a bug existed, I think you'd see many sites broken.
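
To see why the second long poll stalls, it helps to look at what two requests on one persistent connection look like at the socket level. A sketch (example.com and the path are made-up placeholders):

    /* Sketch: two HTTP/1.1 requests queued on one keep-alive connection.
       The server must answer them in order (RFC 2616, section 8.1.2.2),
       so the second response cannot overtake the first - exactly the
       behavior that blocks a second long poll on the same connection. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    void two_requests(int fd)          /* fd: already connected to port 80 */
    {
        const char *req = "GET /poll HTTP/1.1\r\nHost: example.com\r\n\r\n";
        char buf[4096];
        ssize_t n;

        send(fd, req, strlen(req), 0);
        send(fd, req, strlen(req), 0); /* queued behind the first request */

        while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);   /* responses, in order */
    }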

Using NetConnection and URLStream to send/receive data at high frequency

I'm writing a Comet-like app using Flex on the client and my own hand-written server.
I need to be able to send short bursts of data from the client at quite a high frequency (e.g. of the order of 10ms between sends).
I also need the server to push short bursts of data at a similarly high frequency.
I'm using NetConnection.call() to send the data to the server, and URLStream (with chunked encoding) to push the data from the server to the client.
What I've found is that the data isn't being sent/received as soon as it's available. For example, in IE, it seems the data is sent every 200ms rather than as soon as NetConnection.call() is called. Similarly, URLStream isn't making the data available as soon as the server is sending it.
Judging by the difference in behaviour between the browsers, it seems as though the Flash Player (version 10) is relying on the host browser to do all the comms. Can anyone confirm this? Update: This is very likely as only the host browser would know about the proxy settings that might be set.
I've tried using the Socket class and there's no problem with speed there: it works perfectly. However, I'd like to be able to use HTTP-based (port 80) connections so that my app can run in heavily firewalled environments (I tried using a Socket over port 80, but that has its problems).
Incidentally, all development/testing has been done on an internal LAN, so bandwidth/latency is not an issue.
Update: The data being sent/received is in small packets and doesn't need to be in any particular format. For example, I might need to send a short array of Numbers, and this could either be encoded in AMF (e.g. via NetConnection.call()) or could be put into GET parameters (e.g. using sendToURL()). The main point of my question is really to see whether anyone else has experienced the same problem in calling NetConnection/URLStream frequently, and whether there is a workaround (it's also possible that the fault lies with my server code of course, rather than Flash).
Thanks.
Turns out the problem had nothing to do with Flash/Flex or any of the host browsers. The problem was in my server code (written in C++ on Linux), and without access to my source code the cause is hard to find (so I couldn't have hoped for an answer from this forum).
Still - thank you everyone who chipped in.
It was only after looking carefully at the output shown in Wireshark that I noticed the problem, which was twofold:
Nagle's algorithm
I was sending replies in multiple packets by calling write() multiple times (e.g. once for the HTTP response header, and again for the HTTP response body). Because of Nagle's algorithm, the server's TCP/IP stack was waiting for an ACK of the first packet before sending the second; and because of TCP delayed ACK, the client was waiting up to 200ms before sending that ACK, so the server took at least 200ms to send the full HTTP response.
The solution is to call send() with the MSG_MORE flag until all the logically connected blocks are written. I could also have used writev(), or setsockopt() with TCP_CORK, but using send() suited my existing code better.
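
For illustration, a sketch of the MSG_MORE approach (MSG_MORE is Linux-specific; the header and body arguments are placeholders):

    /* Sketch: coalesce header and body into one TCP segment with MSG_MORE
       instead of two writes that Nagle + delayed ACK would serialize. */
    #include <string.h>
    #include <sys/socket.h>

    void send_response(int fd, const char *hdr, const char *body)
    {
        send(fd, hdr, strlen(hdr), MSG_MORE);  /* "more coming, hold it" */
        send(fd, body, strlen(body), 0);       /* flushes header + body */
    }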
Chunk-encoded streams
I'm using a never-ending HTTP response with chunked encoding to push data back to the client. Nagle's algorithm needs to be turned off here because even if each chunk is written as one packet (using MSG_MORE), the client's TCP/IP stack will still delay its ACK by up to 200ms (delayed ACK again), and with Nagle enabled the server can't push the next chunk until it gets that ACK.
The solution here is to tell the server's TCP stack not to wait for an ACK of each sent packet before sending the next one, which is done by setting the TCP_NODELAY socket option with setsockopt().
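
A sketch of that option (this is the standard way to disable Nagle's algorithm on a connected socket):

    /* Sketch: disable Nagle on the accepted server socket so each chunk
       goes out immediately instead of waiting for the client's delayed ACK. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int disable_nagle(int fd)
    {
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
    }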
Of the above, MSG_MORE and TCP_CORK are Linux-specific (TCP_NODELAY itself is standard), but that isn't a problem for me.
I'm almost 100% sure the player relies on the browser for such communications. I can't find an official page stating so at the moment, but check this out, for example:
"Applications hosting the Flash Player ActiveX control or Flash Player plug-in can use the EnforceLocalSecurity and DisableLocalSecurity API calls to control security settings."
Which I think somehow implies the idea. Also, I've suffered some network-related bugs on FF/IE only, which again points to the player using each browser for networking (otherwise there wouldn't be such differences).
And regarding your latency problem, I think that if speed is critical, your best bet is sockets. You have some work to do, but it seems possible; check out the docs again:
"This error occurs in SWF content. Dispatched if a call to Socket.connect() attempts to connect either to a server outside the caller's security sandbox or to a port lower than 1024. You can work around either problem by using a cross-domain policy file on the server."
HTH,
Juan

Are socket connections faster than http on Blackberry?

I'm writing an app for Blackberry that was originally implemented in standard J2ME. The network connection was done using Connector.open("socket://...:80/...") instead of http://
Now, I've implemented the connection using both methods, and it seems that sometimes the socket method is more responsive, and sometimes it doesn't work at all. Is there a significant difference between the two? Mostly what I'm trying to achieve is responsiveness from the connection, to get a smooth progress bar.
BlackBerry's implementations of http and https provide more options for connecting to the target server than socket, and, of course, implement all the HTTP protocol handling for you. I've not benchmarked them, but it makes a certain amount of sense that direct TCP via socket would be quicker in some cases, especially if whatever is listening on port 80 isn't an HTTP server (no protocol overhead).
I've had difficulty in the past with different network providers, some requiring deviceside=true, others deviceside=false, and no real way to know until the first support call for that network came in.
Mostly what I'm trying to achieve is responsiveness from the connection to get a smooth progress bar.
Pardon my saying so, but a "smooth progress bar" is "gilding the lily" - nice to have and look at, but not critical to the application's function, reliability or robustness. Go with what is more robust and reduces code size - likely http in this case.
Since both operate over a network I don't think you can guarantee a smooth progress bar. You might have more chance of that if you remind the person to stay in one place so you have a chance of a consistent connection ;)
There is less overhead with a socket connection than an HTTP one. In fact, HTTP connections run over socket connections. You can take advantage of the reduced overhead of the socket connection to appear more responsive, but you will likely have more work to do than you would with HTTP. The API is lower-level, so coding is more complex.
One difference between a socket and an HTTP connection on the BlackBerry is that HTTP connections may be transparently routed via an HTTP proxy in the case of BES and BIS connections.
In theory sockets will be faster, but then you're responsible for managing the overhead of rolling your own protocol (depending on complexity). Though sockets are more lightweight, I've found that HTTP and all that comes with it greatly reduces the headache.