Setup
180.87.13.77 ===Router=========GRE TUNNEL=========ROUTER=====Internet===Webserver
The client (.77) cannot open a few websites.
We did a packet capture and noticed that in all the TCP sessions, after the HTTP GET is sent, the server responds with a RST/ACK, which terminates the session.
We have no idea what is happening. (Wireshark capture attached.)
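To isolate those resets in a capture, the Wireshark display filter tcp.flags.reset == 1 matches them; a minimal Python/scapy sketch does the same on a live capture (this is illustrative and not from the original question; capturing usually needs root/admin privileges):

# pip install scapy
from scapy.all import sniff

# BPF filter matching only segments with the RST flag set, i.e. the RST/ACKs
# that kill the sessions right after the HTTP GET described above.
sniff(filter="tcp[tcpflags] & tcp-rst != 0",
      prn=lambda pkt: print(pkt.summary()),
      store=False)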
Related
I'm implementing an HTTP-over-TLS proxy server (SNI proxy) that makes two socket connections:
Client to ProxyServer
ProxyServer to TargetServer
and transfers data between the Client and the TargetServer (the TargetServer is determined from the server_name extension in the ClientHello).
The problem is that the client doesn't close the connection after the response has been received, so the proxy server keeps waiting for data to transfer and consumes resources even though the request is already done.
What is the best practice for implementing this project?
The client behavior is perfectly normal: HTTP keep-alive inside the TLS connection, or maybe even a WebSocket connection. Since the proxy transparently forwards the encrypted traffic, it cannot inspect the HTTP traffic to determine exactly when the connection can be closed. A good approach is therefore to keep connections open for as long as resources allow and, on resource shortage, to close the connections that have been idle (no traffic) the longest.
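For illustration, a minimal asyncio sketch of that policy, using a simple per-connection idle timeout rather than a global "close the longest-idle first" list (the names pipe, relay_pair and IDLE_LIMIT are made up for this sketch):

import asyncio

IDLE_LIMIT = 300  # close a tunnel after 5 minutes without traffic (illustrative value)

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Copy bytes one way; give up when the peer stays silent for IDLE_LIMIT seconds."""
    try:
        while True:
            # wait_for turns "no data for a while" into a TimeoutError we can act on
            data = await asyncio.wait_for(reader.read(65536), timeout=IDLE_LIMIT)
            if not data:  # peer closed its side
                break
            writer.write(data)
            await writer.drain()
    except asyncio.TimeoutError:
        pass  # idle too long: fall through and close
    finally:
        writer.close()

async def relay_pair(client_r, client_w, target_r, target_w) -> None:
    """Forward traffic in both directions; the tunnel ends when either side closes or goes idle."""
    await asyncio.gather(
        pipe(client_r, target_w),
        pipe(target_r, client_w),
        return_exceptions=True,
    )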
Story: In a client-server system I use a long-lived connection (an HTTP(S) request from client to server with a long timeout) to notify the client to perform some actions (most of the data transfer is from client to server, but some commands are sent to the client in the response to this HTTP(S) request).
Problem: If the client cancels the connection, the server can detect that, but if the client's internet connection is lost (e.g. the LAN cable is unplugged, or the WLAN/GPRS signal drops), neither the client nor the server notices. The connection remains open until (much later) somebody writes something into it, which is too late.
PS: 0) I googled the keywords ACK/NACK, keep-alive, ping-pong and heartbeat for HTTP(S) requests and could not find a protocol that periodically checks the status of the request.
1) Here you can find an argument for the curl command that sets the interval at which keep-alive probes are sent (I also monitored this with Wireshark). But still, if you unplug the cable, neither the curl command nor the server detects that the connection is lost.
curl -k --keepalive-time 5 https://exampel.com/v1/v/f9a64e73/notification
2) Also, here it is explained that there is an HTTP header which is used to reuse a connection multiple times.
On the server side, with the nginx web server, we can enable TCP keep-alive probes with so_keepalive=on as an argument to the listen directive. Find more information in this link.
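For a client written directly on sockets, the same TCP keep-alive probes can be enabled in code. A minimal Python sketch, assuming Linux (TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are Linux-specific socket options; the host and timing values are illustrative):

import socket

# Connect and turn on TCP keep-alive so a dead peer is eventually detected
sock = socket.create_connection(("example.com", 443), timeout=10)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: first probe after 5 s idle, then every 5 s, give up after 3 failed probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 5)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 5)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)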
I understand that HTTP/2 uses one TCP connection to serve multiple requests. For example, if I request index.html, which references a.css and a.js, these three requests will be made over one TCP connection.
What happens if the user clicks index2.html? Does this request still use the same TCP connection? If so, will the browser keep the connection open until the user closes the browser? And on the server side, does the server keep many connections open all the time?
When using HTTP/2, browsers typically open only one connection per domain.
In your example, index2.html will be sent on the same TCP connection that was used for index.html, a.css and a.js.
In HTTP/2 requests are multiplexed on the same TCP connection, so that the browser can send them concurrently, without waiting for a previous request to be responded to.
Both browsers and servers have an idle timeout for TCP connections.
If the connection is idle for long enough, it will be closed by either party - the one that has the shorter idle timeout, to save resources.
For example, you may open a connection to wikipedia.org, perform a few requests, and then leave that tab and work on something else.
After a while (typically 30 seconds) the browser will close the TCP connection to wikipedia.org.
On the server side, the server will keep the connections from various clients open until they are either closed by the clients or until the server-side idle timeout fires, at which point it is the server that initiates the close of the TCP connection.
With HTTP/2, the number of connections that a server has to maintain is vastly less than it was with HTTP/1.1.
With HTTP/2, a server has to maintain just 1 TCP connection per client; with HTTP/1.1, the server had to maintain typically 2-8 TCP connections per client.
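If you want to observe this reuse from code rather than from the browser, here is a small sketch with the Python httpx client (http2=True requires the optional http2 extra; the host and paths are illustrative, so some requests may simply return 404):

# pip install "httpx[http2]"
import httpx

# One client = one connection pool; with http2=True, sequential requests to the
# same host are multiplexed over a single TCP connection when the server negotiates HTTP/2.
with httpx.Client(http2=True) as client:
    for path in ("/index.html", "/a.css", "/a.js", "/index2.html"):
        r = client.get("https://example.org" + path)
        print(path, r.status_code, r.http_version)  # prints "HTTP/2" if it was negotiated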
What happens if the user clicks index2.html? Does this request still use the same TCP connection?
Yes. On top of that, multiple browser tabs/windows also share a single HTTP/2 connection.
If so, will the browser keep the connection open until the user closes the browser?
Below is from the HTTP/2 RFC (RFC 7540), on connection management:
For best performance, it is expected that clients will not close
connections until it is determined that no further communication with
a server is necessary (for example, when a user navigates away from a
particular web page) or until the server closes the connection.
Clients SHOULD NOT open more than one HTTP/2 connection to a given
host and port pair.
And on the server side, does the server keep many connections open all the time?
Servers are encouraged to maintain open connections for as long as
possible but are permitted to terminate idle connections if necessary.
When either endpoint chooses to close the transport-layer TCP
connection, the terminating endpoint SHOULD first send a GOAWAY
(Section 6.8) frame so that both endpoints can reliably determine
whether previously sent frames have been processed and gracefully
complete or terminate any necessary remaining tasks.
More info on connection errors below, from the RFC's connection error handling section:
A connection error is any error that prevents further processing of
the frame layer or corrupts any connection state. An endpoint that
encounters a connection error SHOULD first send a GOAWAY frame with
the stream identifier of the last stream that it successfully received
from its peer. The GOAWAY frame includes an error code that indicates
why the connection is terminating. After sending the GOAWAY frame for
an error condition, the endpoint MUST close the TCP connection. It is
possible that the GOAWAY will not be reliably received by the
receiving endpoint. In the event of a connection error, GOAWAY only
provides a best-effort attempt to communicate with the peer about why
the connection is being terminated.
An endpoint can end a connection at any time. In particular, an
endpoint MAY choose to treat a stream error as a connection error.
Endpoints SHOULD send a GOAWAY frame when ending a connection,
providing that circumstances permit it.
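To make the GOAWAY exchange concrete, here is a minimal sketch with the Python h2 library (the hostname is illustrative and the request-sending part is elided; a GOAWAY received from the peer would surface as an h2.events.ConnectionTerminated event):

import socket
import ssl

import h2.config
import h2.connection

# Open a TLS connection and negotiate HTTP/2 ("h2") via ALPN
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2"])
sock = ctx.wrap_socket(socket.create_connection(("example.org", 443)),
                       server_hostname="example.org")

conn = h2.connection.H2Connection(config=h2.config.H2Configuration(client_side=True))
conn.initiate_connection()
sock.sendall(conn.data_to_send())

# ... send requests on streams here ...

# Graceful shutdown: queue a GOAWAY frame, flush it, then close the TCP connection,
# mirroring the "SHOULD first send a GOAWAY" requirement quoted above.
conn.close_connection(error_code=0)
sock.sendall(conn.data_to_send())
sock.close()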
I am new to SignalR and I have a question about SignalR communication when we introduce a load balancer.
Let's assume we want to execute a void method on the server side which receives some data as a parameter from the client. The server takes that data and processes it further. Let's say that after processing for a while, it determines that it has to send a notification back to the client.
Case 1 (between client and server): The client calls the void method on the server side (Hub), passing some data. The connection gets disconnected. The server processes the client data further. When it determines that it has to push the notification back to the client, the connection is re-established and the data is pushed back to the client.
Case 2 (between client and server with a load balancer in between): How does the above scenario (Case 1) work here? When the server sends the push notification back through the load balancer after processing the client data, how does it know which client it has to send the notification back to?
You should read the scaleout docs. Short version: messages get sent to all servers, so if the client reconnects (it's not the server that establishes the connection!) before the connection times out, it will get the message.
Quote from the docs:
The cursor mechanism works even if a client is routed to a different
server on reconnect. The backplane is aware of all the servers, and it
doesn’t matter which server a client connects to.
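As a purely conceptual sketch (plain Python, not SignalR or a real backplane), this is the "send to all servers; only the one holding the client's connection delivers it" idea from the quote above:

class Backplane:
    """Fans every published message out to every server (stand-in for the real backplane)."""
    def __init__(self):
        self.servers = []

    def publish(self, client_id, message):
        for server in self.servers:       # every server sees every message...
            server.deliver(client_id, message)

class Server:
    def __init__(self, backplane):
        self.connections = {}             # client_id -> live connection on this server
        backplane.servers.append(self)

    def deliver(self, client_id, message):
        conn = self.connections.get(client_id)
        if conn is not None:              # ...but only the server holding the connection sends it
            conn.send(message)

class FakeConnection:
    def send(self, message):
        print("delivered:", message)

bp = Backplane()
s1, s2 = Server(bp), Server(bp)
s2.connections["client-42"] = FakeConnection()  # the client happens to be connected to s2
bp.publish("client-42", "processing finished")  # s1 ignores it, s2 delivers it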
I want to know a few basic things about connection establishment between client and server.
Suppose my web page has a left menu with some links; clicking those links opens child pages on the right side of the master page. Each link requests a new web page from the server, and each web page calls 5-6 web services asynchronously to get its data. So when I click a left-menu link, a connection is established between client and server, from (client IP and port) to (server IP and port). But if I click another menu link before the response comes back, how does the server know that the old connection is terminated and a new connection is established? Next: when I click a link, the request goes to the server and the server processes it, but if the connection is terminated from the client side before the response is sent, what happens to that response? Does the server discard the response and take up the new request for processing?
Actually I have a lot of confusion, so if anyone can explain the full client-server round-trip process, that would be really helpful.
Thanks in advance
The server will discard the response and work on subsequent requests. Reading about the Hypertext Transfer Protocol will help you understand more. You can search on the internet; one article is here.
The request and response are made over TCP, which is a connection-oriented protocol; when the connection breaks, IIS will know that the client is not accessible. If you try http://www.google.com.pk:80 it will take you to http://www.google.com.pk, since we can omit the default port, i.e. 80. Try http://www.google.com.pk:82/ and it will not open www.google.com, as a TCP connection cannot be made on port 82.
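You can reproduce that port 80 vs. port 82 difference at the TCP level; a small Python sketch (host and ports taken from the answer, the timeout value is illustrative):

import socket

def try_connect(host, port, timeout=5):
    """Attempt the TCP handshake that precedes every HTTP request/response."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            print(f"{host}:{port} - TCP connection established")
    except OSError as exc:  # refused, timed out, unreachable, ...
        print(f"{host}:{port} - could not connect ({exc})")

try_connect("www.google.com.pk", 80)  # default HTTP port: the handshake succeeds
try_connect("www.google.com.pk", 82)  # nothing listening there: connect fails or times out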