I am reading that Keep-Alive is meant for performance - so that connections don't need to be recreated but the existing ones can be reused. What if there is a traffic spike - will new connections be created?
Additionally, if I don't turn on Keep-Alive in a high-traffic environment, will the client eventually run out of connections/socket ports, since a new connection has to be created for each HTTP/web request?
HTTP is a stateless protocol.
In HTTP/1.0, each request meant opening a new TCP connection.
That caused performance issues (e.g. the 3-way handshake has to be redone for each GET or POST), so the Keep-Alive header was added to maintain the connection across requests; in HTTP/1.1, persistent connections are the default.
This means that the connection is reused across requests.
I am not really familiar with IIS, but if there is a configuration that closes the connection after each HTTP response, it will have an impact on performance.
Concerning running out of sockets/ports on the client side: that could occur if the client fires a huge number of requests and a new TCP connection must be opened per HTTP request. After a while the ephemeral ports will be depleted.
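To make the reuse concrete, here is a minimal sketch using Python's http.client (the hostname and paths are examples only); http.client speaks HTTP/1.1, so the connection is persistent by default:

    import http.client

    # One TCP connection, several requests: one connect/close pair
    # instead of three. Hostname and paths are examples only.
    conn = http.client.HTTPConnection("example.com")
    for path in ("/", "/about", "/contact"):
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()   # the body must be consumed before the next request
        print(path, resp.status)
    conn.close()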
I'm implementing an HTTP-over-TLS proxy server (SNI proxy) that makes two socket connections:
Client to ProxyServer
ProxyServer to TargetServer
and transfers data between the Client and the TargetServer (the TargetServer is detected using the server_name extension in the ClientHello).
The problem is that the client doesn't close the connection after the response has been received, so the proxy server keeps waiting for data to transfer and consumes resources even though the request is done.
What is the best practice for implementing this project?
The client behavior is perfectly normal - HTTP keep-alive inside the TLS connection, or maybe even a WebSocket connection. Given that the proxy does transparent forwarding of the encrypted traffic, it is not possible to look at the HTTP traffic in order to determine exactly when the connection can be closed. A good approach is therefore to keep the connection open as long as resources allow, and on resource shortage close the connections that were idle (no traffic) the longest.
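One way to implement that policy is to track a last-activity timestamp per connection pair and reap the longest-idle pairs when a resource budget is exceeded. A rough sketch in Python (the names and the MAX_PAIRS budget are made up for illustration, not a real API):

    import time

    MAX_PAIRS = 1000   # arbitrary resource budget (illustrative)
    pairs = {}         # (client_sock, target_sock) -> last activity time

    def touch(pair):
        # Call whenever data is forwarded in either direction.
        pairs[pair] = time.monotonic()

    def reap_if_needed():
        # On resource shortage, close the longest-idle pair first.
        while len(pairs) > MAX_PAIRS:
            idlest = min(pairs, key=pairs.get)  # oldest timestamp = longest idle
            for sock in idlest:
                sock.close()
            del pairs[idlest]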
I'm building an HTTP client (for embedded devices) and I was wondering:
If I receive an HTTP 3xx response, and the Location header contains a hostname different from the one in my request, should I disconnect the TCP connection and reconnect to the new host, or can I just send a new request with a new Host header over the old TCP connection?
Thank you
It doesn't make sense to reuse the original TCP connection if you're being redirected elsewhere. If my webserver only hosts example.com and I redirect you to elsewhere.net, my webserver will probably not respond to a request for elsewhere.net.
Worse, this also potentially sets you up for a great man-in-the-middle attack if my server redirects you to http://bank.com and you reuse the same TCP connection when sending a request to bank.com. My server can maliciously respond to requests with Host: bank.com, which isn't something you want to happen.
You can't assume the original connection can be reused unless the redirect is to the same host with the same protocol.
Persistent HTTP connections are a little tricky given the number of client/server combinations. You can avoid the complexity by simply paying the time cost of closing and re-establishing each connection:
If you're implementing an HTTP/1.0 client, Connection: keep-alive isn't something you have to implement. Compliant servers should just close the connection after every request if you don't negotiate that you support persistent connections.
If you're implementing an HTTP/1.1 client and don't want to persist the connection, just send Connection: close with your request and an HTTP/1.1 server should close the connection.
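A hedged sketch of that decision in Python's http.client (URLs and hostnames are examples): reuse the connection only when the redirect stays on the same scheme and host, otherwise reconnect:

    import http.client
    from urllib.parse import urlsplit

    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/old-page")
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused

    if 300 <= resp.status < 400:
        target = urlsplit(resp.getheader("Location"))
        if target.scheme == "http" and target.netloc == "example.com":
            # Same protocol and host: safe to reuse the connection.
            conn.request("GET", target.path or "/")
        else:
            # Different host or scheme: tear down and reconnect.
            conn.close()
            conn = http.client.HTTPConnection(target.netloc)
            conn.request("GET", target.path or "/")
        resp = conn.getresponse()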
Suppose when we request a resource over HTTP, we get a response as shown below:
GET / HTTP/1.1
Host: www.google.co.in
HTTP/1.1 200 OK
Date: Thu, 20 Apr 2017 10:03:16 GMT
...
But when a browser requests many resources at a time, how can it identify which request got which response?
when a browser requests many resources at a time, how can it identify which request got which response?
A browser can open one or more connections to a web server in order to request resources. For each of those connections the rules regarding HTTP keep-alive are the same and apply to both HTTP 1.0 and 1.1:
If HTTP keep-alive is off, the request is sent by the client, the response is sent by the server, the connection is closed:
Connection 1: [Open][Request1][Response1][Close]
If HTTP keep-alive is on, one "persistent" connection can be reused for succeeding requests. The requests are still issued serially over the same connection, so:
Connection 1: [Open][Request1][Response1][Request3][Response3][Close]
Connection 2: [Open][Request2][Response2][Request4][Response4][Close]
With HTTP pipelining, introduced in HTTP/1.1, browsers can issue requests one after another without waiting for the responses, but the responses are still returned in the same order as they were requested (on most browsers pipelining is disabled by default, because of buggy servers).
This can happen simultaneously over multiple (persistent) connections:
Connection 1: [Open][Request1][Request2][Response1][Response2][Close]
Connection 2: [Open][Request3][Request4][Response3][Response4][Close]
Both approaches (keep-alive and pipelining) still utilize the default "request-response" mechanism of HTTP: each response will arrive in the order of the requests on that same connection. They also have the "head of line blocking" problem: if [Response1] is slow and/or big, it holds up all responses that follow on that connection.
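To make the ordering concrete, here is a minimal pipelining sketch over a raw socket in Python (assuming a server that tolerates pipelined requests; the host and paths are examples). Both requests go out back to back, and the response bytes arrive strictly in request order:

    import socket

    sock = socket.create_connection(("example.com", 80))
    # Two requests written back to back on one TCP connection.
    sock.sendall(
        b"GET /first HTTP/1.1\r\nHost: example.com\r\n\r\n"
        b"GET /second HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    )
    # All of Response1 arrives, then all of Response2 - in order,
    # with no interleaving (unlike HTTP/2 multiplexing).
    data = b""
    while chunk := sock.recv(4096):
        data += chunk
    sock.close()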
Enter HTTP/2 multiplexing (see: What is the difference between HTTP/1.1 pipelining and HTTP/2 multiplexing?). Here, a response can be fragmented, allowing a single TCP connection to transmit fragments of different requests and responses intermingled:
Connection 1: [Open][Rq1][Rq2][Resp1P1][Resp2P1][Resp2P2][Resp1P2][Close]
It does this by giving each fragment an identifier to indicate to which request-response pair it belongs, so the receiver can recompose the message.
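Conceptually (this is not the real HTTP/2 framing layer, just an illustration of the identifier idea in Python), recomposition works like this:

    # Each fragment carries a stream id, so interleaved fragments can be
    # reassembled per request-response pair. Toy data, not real frames.
    frames = [
        (1, b"Resp1-part1 "), (2, b"Resp2-part1 "),
        (2, b"Resp2-part2"),  (1, b"Resp1-part2"),
    ]

    streams = {}
    for stream_id, fragment in frames:
        streams[stream_id] = streams.get(stream_id, b"") + fragment

    print(streams[1])  # b'Resp1-part1 Resp1-part2'
    print(streams[2])  # b'Resp2-part1 Resp2-part2'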
I think you are really asking about HTTP pipelining here. This is a technique introduced in HTTP/1.1, through which all requests are sent out by the client in order and answered by the server in that very same order. All the gory details are in RFC 7230, sec. 6.3.2.
HTTP/1.0 had (or has) a comparable mechanism known as Keep-Alive. This allows a client to issue a new request right after the previous one has been answered. The benefit of this approach is that client and server no longer need to go through another TCP handshake for each new request/response cycle.
The important part is that in both methods the order of the responses matches the order of the issued requests on one connection. Therefore, responses can be uniquely mapped to the issuing requests by the order in which the client receives them: the first response matches the first request, the second response matches the second request, and so forth.
I think the answer you are looking for is TCP.
HTTP is a protocol that relies on TCP to establish a connection between the client and the host.
In HTTP/1.0, a different TCP connection is created for each request/response pair.
HTTP/1.1 introduced pipelining, which allowed multiple request/response pairs to reuse a single TCP connection to boost performance (it didn't work very well).
So the request and the corresponding response are linked by the TCP connection they rely on.
It's then easy to associate a specific request with the response it produced.
PS: HTTP is not bound to use TCP forever. For example, Google is experimenting with other transport protocols, like QUIC, that might end up being more efficient than TCP for the needs of HTTP.
In addition to the explanations above, consider that a browser can open many parallel connections to the same server, usually up to 6. For each connection it uses a different socket, and for each request-response on each socket it is easy to determine the correlation.
In each TCP connection, request and response are sequential. A TCP connection can be re-used after finishing a request-response cycle.
With HTTP pipelining, a single connection can carry multiple overlapping requests.
In theory, there can be any number[*1] of simultaneous TCP connections, enabling parallel requests and responses.
In practice, the number of simultaneous connections is usually limited on the browser and often on the server as well.
[*1] The number of simultaneous connections is limited by the number of ephemeral TCP ports the browser can allocate on a given system. Depending on the operating system, ephemeral ports start at 1024 (RFC 6056), 49152 (IANA), or 32768 (some Linux versions).
So, that may allow up to 65,535 - 1023 = 64,512 TCP source ports for an application. A TCP socket connection is defined by its local port number, the local IP address, the remote port number and the remote IP address. Assuming the server uses a single IP address and port number, the limit is the number of local ports you can use.
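On Linux you can check the range the kernel actually uses (this file path is Linux-specific):

    # Read the ephemeral (local) port range used for outgoing connections.
    with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
        low, high = map(int, f.read().split())
    print(f"{high - low + 1} ephemeral ports available ({low}-{high})")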
I am trying to understand what HTTP pipelining and HTTP keep-alive connections are, and trying to relate these two topics to Server-Sent Events technology.
As far as I understand,
an HTTP keep-alive connection (the default way of using TCP in HTTP/1.1) means that the TCP connection, once established, is used for sending several HTTP requests one after another;
HTTP pipelining is the ability of the client to send requests over the same TCP connection while responses to previous requests have not yet been received; it is generally not enabled by default in browsers.
My questions:
1) If it is possible to send several requests to the server one after another over one TCP connection - how can the client distinguish between the responses? I guess the client relies on the FIFO order in which the server sends responses?
2) Why shouldn't non-idempotent requests, such as POST requests, be pipelined (according to Wikipedia)?
3) What about the limitations of the web server: is the number of possible open TCP connections limited? If yes, then if some number of clients hold keep-alive connections, others cannot establish connections, and this can be a problem, right?
4) Server-Sent Events use a keep-alive connection but, as far as I understand, SSE does not use pipelining. Instead it manages to deliver several responses to one request, or maybe it just sends another request when the next response with an event arrives. Which guess is correct?
5) Does one TCP connection mean one socket? Does one socket mean one TCP connection? Does closing/opening a socket mean closing/opening a TCP connection?
1) Yes, FIFO. TCP/IP guarantees in-order delivery, so responses can't arrive in a different order (if the server/proxy is buggy and sends responses in the wrong order, then you're totally screwed).
2) I don't recall any reason in the HTTP spec. It may be just caution, because pipelining is poorly implemented in some proxies/servers.
3) The HTTP spec suggests 2 connections per server; browsers have settled on 6-8 connections per server, but there is no fixed limit. Running out of connections is a real problem for Apache, and for high-load situations it's recommended to disable KeepAlive in Apache and use a proxy (e.g. HAProxy) that can cheaply provide keep-alive functionality to clients.
The benefit of a proxy is that one proxy can distribute connections to several servers (which helps scaling), or can modify the traffic (e.g. gzip-compress everything even if the server-side software doesn't).
4) SSE doesn't rely on keep-alive, and it's not using multiple responses. It's a single response that takes forever to "download", so pipelining and keep-alive are irrelevant for SSE. The TCP/IP connection cannot return any other responses while the SSE response is being sent.
SSE will keep the server busy as long as the connection is open (so typically all the time, for every user). That's why it's best to use SSE with Node.js/Tornado, which can handle hundreds of thousands of connections, rather than PHP/Apache, which is designed for a few connections at a time.
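To illustrate point 4, here is a toy SSE server in Python (not production code; the port and event payloads are examples): one request comes in, and a single response is held open while events are written into its body:

    import socket, time

    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8000))
    srv.listen(1)

    conn, _ = srv.accept()
    conn.recv(4096)  # read (and ignore) the single HTTP request

    # One response that never really ends; events go into its body.
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/event-stream\r\n"
        b"Cache-Control: no-cache\r\n\r\n"
    )
    for i in range(3):  # a real server would loop as long as the client stays
        conn.sendall(f"data: event {i}\n\n".encode())
        time.sleep(1)
    conn.close()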
5) Sockets are the programming interface for TCP/IP connections. Generally yes, one socket is one connection.
I just went through the HTTP/1.1 specification at http://www.w3.org/Protocols/rfc2616/rfc2616.html and came across a section about connections (http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8) that says
" A significant difference between HTTP/1.1 and earlier versions of HTTP is that persistent connections are the default behavior of any HTTP connection. That is, unless otherwise indicated, the client SHOULD assume that the server will maintain a persistent connection, even after error responses from the server.
Persistent connections provide a mechanism by which a client and a server can signal the close of a TCP connection. This signaling takes place using the Connection header field (section 14.10). Once a close has been signaled, the client MUST NOT send any more requests on that connection. "
Then I also went through a section on HTTP state management at https://www.rfc-editor.org/rfc/rfc2965, which says in its section 2 that
"Currently, HTTP servers respond to each client request without relating that request to previous or subsequent requests;"
A section in RFC 2616 about the need for persistent connections also says that, prior to persistent connections, every time a client wished to fetch a URL it had to establish a new TCP connection for each and every new request.
Now my question is: if we have persistent connections in HTTP/1.1, then, as mentioned above, a client does not need to make a new connection for every new request - it can send multiple requests over the same connection. So if the server knows that every subsequent request is coming over the same connection, wouldn't it be obvious that the requests are from the same client? Wouldn't this suffice to maintain state, and be enough for the server to understand that the request came from the same client? In that case, why is a separate state management mechanism required at all?
Basically, yes, it would make sense, but HTTP persistent connections are used to eliminate the administrative TCP/IP overhead of connection handling (e.g. connect/disconnect/reconnect). They are not meant to say anything about the state of the data moving across the connection, which is what you're talking about.
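That separate mechanism is cookies (the HTTP State Management Mechanism of RFC 2965 and its successors). A toy Python sketch of a server that tracks state via a cookie rather than via the connection (the session-id scheme is made up for illustration):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import uuid

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            session = self.headers.get("Cookie")  # state travels in headers...
            self.send_response(200)
            if session is None:
                # ...not in the TCP connection: hand out an id the client
                # will echo back on every later request, on any connection.
                self.send_header("Set-Cookie", f"sid={uuid.uuid4()}")
            self.send_header("Content-Length", "0")
            self.end_headers()

    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()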
No. For instance, there might be an intermediary (such as a proxy or a reverse proxy) in the request path that aggregates requests from multiple TCP connections.
See http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p1-messaging-21.html#intermediaries.