How is SSE (Server-Sent Events) `retry` option supposed to work?

When sending an SSE message from a server, there is a `retry` field that can be put into the message (https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#fields). MDN describes it as follows:
The reconnection time to use when attempting to send the event. This must be an integer, specifying the reconnection time in milliseconds. If a non-integer value is specified, the field is ignored.
But I can't figure out what it is about. Can someone clarify what this parameter is for and how it is supposed to work? I don't understand who (the server or the client?) is supposed to use it, and for what.

It's a client instruction: the browser will wait the specified number of milliseconds after it detects a broken connection (perhaps because the server closes the connection after each message) before re-establishing a connection to the SSE resource URL.

Maybe you are missing a newline after the retry field. This should work (PHP):
echo "retry: 10000\n\n"; // wait for 10 seconds

Related

What exactly happens when you cancel a network request? [duplicate]

This question was closed as a duplicate of: FIN vs RST in TCP connections
I am using iOS, but I am asking about networking in general. What does it mean to cancel a network request? Is a message sent to the server, or does the server just notice that the socket has disconnected?
Since you mention using NSURLSessionTask for your requests: cancel() means that urlSession(_:task:didCompleteWithError:) will be sent to the task's delegate, passing the error code NSURLErrorCancelled (-999) in the NSURLErrorDomain.
It is possible that the cancellation reaches the task only after the request has already been processed completely. So it's up to you to act accordingly once your delegate receives NSURLErrorCancelled, which marks your intention to cancel; typically you would throw away any data received since the request was made.
The server possibly gets a complete request, but your client no longer receives the answers. Or the request sequence is incomplete, so the server cannot correctly recognise what was intended and works through the request until it fails because of incomplete or wrongly formatted request data.
Once your receive callback is down due to cancelling, you simply don't parse any answer from the server; if you could still parse server data, that would mean your task was still running. Any result arriving after cancel() should be treated as possibly incomplete, misleading, or invalid. This is why NSURLErrorCancelled is reported in the NSURLErrorDomain: you want to know the status before you assume any received data is of value to you.
By the way, NSURLErrorCancelled is also produced when NSURLSessionAuthChallengeCancelAuthenticationChallenge marks a server as untrusted. So it's really the same procedure: you decide whether any received data is something you want to trust.
If a socket is disconnected, there is no connection at all: no data passing through, nothing to receive, nothing to request from. Errors can no longer be exchanged in either direction; server and client are simply disconnected.
Cancelling a request does not imply that a socket stops working.
It just means that the data received since the last request is to be handled as invalid.
Why is this?
Because you can construct your own sockets, with a completely different request pattern, ignoring the error-domain machinery entirely.
It also means that if the client errors, crashes, or cancels, nothing is necessarily sent to the other side; you simply stop accepting answers as valid, even if they were still delivered through the socket.
For these reasons there are protocols that define what a message should look like and what should happen when it arrives incomplete or needs validation against a given pattern: TCP, UDP, WebSockets with their handshake and ongoing data flow, OSC, and many other protocols.
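To make the "cancellation is purely local" point concrete outside iOS, here is a sketch using Java's java.net.http.HttpClient (not NSURLSession; the URL is made up). Cancelling completes the client-side future, but the server may still have processed the request in full.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletableFuture;

public class CancelDemo {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/slow")).build();

        CompletableFuture<HttpResponse<String>> future =
                client.sendAsync(request, HttpResponse.BodyHandlers.ofString());

        future.whenComplete((response, error) -> {
            if (error instanceof CancellationException) {
                // Analogue of NSURLErrorCancelled: we decided to cancel,
                // so any partially received data must be thrown away.
            }
        });

        // A purely client-side decision: the server may already have
        // handled the request completely by the time this runs.
        future.cancel(true);
    }
}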

jmeter tcp response assertion

My device and server communicate over a TCP socket. Now I want to load test my server, so I am trying to use JMeter.
The server and the device keep the connection alive. I need to send a login message before sending any other message, and messages have no end-of-line character; instead, a few bytes at the start of each message define how long it is.
Now when I send my login message, the server responds with a success code. Because the connection is kept open and there is no end-of-line character, JMeter doesn't know when it has received the full response, so it waits until it times out. I even tried a Response Assertion using "contains", but it still doesn't work.
My question is: what should I do in this case, so that when JMeter receives certain bytes, for example the word 'SUCCESS' from the server, it understands the request has passed and keeps the connection open for the next request?
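One approach: because the protocol itself says how long each message is, read exactly that many bytes instead of waiting for an end-of-line or for the connection to close. JMeter's TCP Sampler lets you plug in a TCPClient implementation for this (the bundled LengthPrefixedBinaryTCPClientImpl handles a binary length prefix, if your framing matches its format). The Java sketch below shows the underlying read logic, assuming a 2-byte big-endian length prefix; adjust to your protocol.

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public final class LengthPrefixedReader {
    // Reads one message framed as: [2-byte length][body].
    static byte[] readMessage(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int length = din.readUnsignedShort(); // blocks only until the header arrives
        byte[] body = new byte[length];
        din.readFully(body);                  // then reads exactly `length` bytes
        return body;                          // the connection stays open for reuse
    }
}

A Response Assertion containing "SUCCESS" then works, because the sampler returns as soon as the framed message is complete instead of waiting for a read timeout.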

TCP keep-alive to determine if client disconnected in netty

I'm trying to determine from Netty whether a client has closed a socket connection. Is there a way to do this?
In the usual case, where a client closes the socket via close() and the TCP closing handshake finishes successfully, a channelInactive() event (channelClosed() in Netty 3) will be triggered.
However, in an unusual case, such as the client machine going offline due to a power outage or an unplugged LAN cable, it can take a long time before you discover that the connection is actually down. To detect this situation, you have to send some message to the client periodically and expect a response within a certain amount of time. It's like a ping: you should define periodic ping and pong messages in your protocol that do practically nothing except check the health of the connection.
Alternatively, you can enable SO_KEEPALIVE, but the keepalive interval of this option is usually OS-dependent and I would not recommend using it.
To help a user implement this sort of behavior relatively easily, Netty provides ReadTimeoutHandler. Configure your pipeline so that ReadTimeoutHandler raises an exception when there's no inbound traffic for a certain amount of time, and close the connection on the exception in your exceptionCaught() handler method. If you are the party who is supposed to send a periodic ping message, use a timer (or IdleStateHandler) to send it.
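As a sketch of both suggestions in Netty 4, here is one way to wire it with IdleStateHandler (the timeout values and the PING payload are placeholders for your own protocol): close when the peer has been silent too long, and send a ping when we have been silent too long.

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;
import io.netty.util.CharsetUtil;

public class HealthCheckInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // Fires READER_IDLE after 30s without reads, WRITER_IDLE after 10s without writes.
        ch.pipeline().addLast(new IdleStateHandler(30, 10, 0));
        ch.pipeline().addLast(new ChannelDuplexHandler() {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
                if (evt instanceof IdleStateEvent) {
                    IdleState state = ((IdleStateEvent) evt).state();
                    if (state == IdleState.READER_IDLE) {
                        ctx.close(); // peer silent too long: assume it is gone
                    } else if (state == IdleState.WRITER_IDLE) {
                        // Keep traffic flowing so silence stays meaningful.
                        ctx.writeAndFlush(Unpooled.copiedBuffer("PING\n", CharsetUtil.US_ASCII));
                    }
                }
            }
        });
    }
}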
If you are writing a server, and netty is your client, then your server can detect a disconnect by calling select() or equivalent to detect when the socket is readable and then call recv(). If recv() returns 0 then the socket was closed gracefully by the client. If recv() returns -1 then check errno or equivalent for the actual error (with few exceptions, most errors should be treated as an ungraceful disconnect). The thing about unexpected disconnects is that they can take a long time for the OS to detect, so you would have to either enable TCP keep-alives, or require the client to send data to the server on a regular basis. If nothing is received from the client for a period of time then just assume the client is gone and close your end of the connection. If the client wants to, it can then reconnect.
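For completeness, the same server-side logic as a blocking-read sketch in Java: read() returning -1 is the graceful close that recv() == 0 represents above, and an IOException is the errno path.

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public final class BlockingReader {
    static void serve(Socket client) {
        try (InputStream in = client.getInputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) { // -1: peer closed gracefully
                // handle n bytes of request data here
            }
        } catch (IOException e) {
            // reset, timeout, etc. -- treat as an ungraceful disconnect
        }
    }
}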
If you read from a connection that has been closed by the peer, you will get an end-of-stream indication of some kind, depending on the API. If you write to such a connection, you will get an IOException: 'connection reset'. TCP doesn't provide any other way of detecting a closed connection.
TCP keep-alive (a) is off by default and (b) only probes every two hours by default when enabled. This probably isn't what you want. If you use it and you read or write after it has detected that the connection is broken, you will get the reset error above.
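For reference, enabling it from Java is a one-liner; the probe timing itself is an OS-wide setting on most platforms (the two-hour default above), not something you choose per socket:

import java.io.IOException;
import java.net.Socket;

public final class KeepAliveSocket {
    static Socket connect(String host, int port) throws IOException {
        Socket s = new Socket(host, port);
        s.setKeepAlive(true); // SO_KEEPALIVE; probes begin only after a long OS-defined idle period
        return s;
    }
}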
It depends on the protocol you use on top of netty. If you design it to support ping-like messages, you can simply send those messages. Beyond that, netty is only a fairly thin wrapper around TCP.
Also see this SO post, which describes isOpen() and related methods. This, however, does not solve the keep-alive problem.

Does asynchronous receive guarantee the detection of connection failure?

From what I know, a blocking receive on a TCP socket does not always detect a connection error (due either to a network failure or to a remote-endpoint failure) by returning -1 or raising an IO exception: sometimes it can just hang indefinitely.
One way to manage this problem is to set a timeout on the blocking receive. If an upper bound on the reception time is known, it can be used as the timeout, and the connection can be considered lost as soon as the timeout expires. When no such upper bound is known a priori, for example in a pub-sub system where a connection stays open to receive publications, the timeout is somewhat arbitrary, but its expiration can trigger a ping/pong exchange to verify that the connection (and the endpoint) is still up.
I wonder whether using an asynchronous receive also manages the problem of detecting a connection failure. In boost::asio I would call socket::async_read_some(), registering a handler to be called asynchronously, while in java.nio I would configure the channel as non-blocking and register it with a selector with an OP_READ interest flag. I imagine that correct connection-failure detection would mean that, in the first case, the handler is called with a non-zero error_code, while in the second case the selector selects the faulty channel but a subsequent read() on the channel either returns -1 or throws an IOException.
Is this behaviour guaranteed with asynchronous receive, or could there be scenarios where, after a connection failure, the handler in boost::asio is never called, or the selector in java.nio never selects the channel?
Thank you very much.
I believe you're referring to the TCP half-open connection problem (in the RFC 793 sense of the term). In this scenario, the receiving OS never receives any indication of the lost connection, so it can never notify the app. Whether the app is reading synchronously or asynchronously doesn't enter into it.
The problem occurs when the transmitting side of the connection is somehow no longer aware of the network connection. This can happen, for example, when:
the transmitting OS abruptly terminates/restarts (power outage, OS failure/BSOD, etc.);
the transmitting side closes its side while there is a network disruption between the two sides and cleans up its side: e.g. the transmitting OS reboots cleanly during the disruption, or a transmitting Windows machine is unplugged from the network.
When this happens, the receiving side may be waiting for data or a FIN that will never come. Unless the receiving side sends a message, there's no way for it to realize the transmitting side is no longer aware of the receiving side.
Your solution (a timeout) is one way to address the issue, but it should include sending a message to the transmitting side. Again, it doesn't matter whether the read is synchronous or asynchronous, only that it doesn't wait indefinitely for data or a FIN. Another solution is the TCP keepalive feature supported by some TCP stacks. But the hard part of any generalized solution is usually determining a proper timeout, since the right value depends heavily on the characteristics of the specific application.
Because of how TCP works, you will typically have to send data in order to notice a hard connection failure, by discovering that no ACK packet will ever come back. Some protocols identify this condition by periodically sending a keep-alive or ping packet: if one side does not receive such a packet within X time (and perhaps after sending one itself and getting no reply), it can consider the connection dead.
To answer your question: blocking and non-blocking receives behave identically except for the act of blocking itself, so both suffer from this same issue. To make sure you can detect a silent failure of the remote host, you have to use some form of keep-alive as described above.
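A java.nio sketch of the behaviour described above: a graceful close or a reset does wake the selector (the subsequent read reports -1 or throws), but a silent half-open failure produces no event at all, which is why the select timeout below is needed to drive ping/timeout logic.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public final class SelectorLoop {
    static void pump(Selector selector) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            // Returns 0 on timeout: nothing happened, so run ping/timeout checks (not shown).
            selector.select(30_000);
            for (SelectionKey key : selector.selectedKeys()) {
                if (!key.isValid() || !key.isReadable()) continue;
                SocketChannel ch = (SocketChannel) key.channel();
                buf.clear();
                int n;
                try {
                    n = ch.read(buf);   // -1 on orderly close (FIN received)
                } catch (IOException e) {
                    n = -1;             // e.g. connection reset (RST)
                }
                if (n == -1) {
                    key.cancel();
                    ch.close();         // failure detected and cleaned up
                }
            }
            selector.selectedKeys().clear();
            // A half-open peer never triggers any of the above; only
            // application-level pings (or TCP keepalive) reveal it.
        }
    }
}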

Detecting HTTP close using inet

In my mochiweb application, I am using a long-held HTTP request. I want to detect when the connection with the user dies, and I figured out how to do that with:
Socket = Req:get(socket),
inet:setopts(Socket, [{active, once}]),
receive
    {tcp_closed, Socket} ->
        ok; % handle clean up here
    Data ->
        ok  % do something with Data here
end.
This works when the user closes their tab or browser, or refreshes the page. However, when the internet connection dies suddenly (say, the wifi signal is lost all of a sudden), or when the browser crashes abnormally, I am not able to detect a TCP close.
Am I missing something, or is there any other way to achieve this?
There is a TCP keepalive protocol and it can be enabled with inet:setopts/2 under the option {keepalive, Boolean}.
I would suggest that you don't use it. The keep-alive timeout and maximum retries tend to be system-wide, and keep-alive is optional after all. Using timeouts at the protocol level is better.
The HTTP protocol has the status code 408 Request Timeout, which you can send to the client if it seems dead.
Check out the after clause in receive blocks, which you can use to time out waiting for data; or use the timer module; or use erlang:start_timer/3. They all have different performance characteristics and resource costs.
There is no keep-alive enabled by default over TCP (though one can be enabled where supported): if the connection fails while no data is being exchanged, the result is a "silent failure". You need to account for this type of failure yourself, e.g. by implementing some form of connection probing.
How does this affect HTTP? HTTP is a stateless protocol: every request is independent of every other. The "keep alive" functionality of HTTP doesn't change that, i.e. a "silent failure" can still occur.
The condition can only be detected when data is exchanged (or when TCP keep-alive is enabled).
I would suggest sending application-level keep-alive messages over HTTP chunked encoding. Make your client/server smart enough to understand the keep-alive messages, ignore them when they arrive on time, and otherwise close and re-establish the connection.
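A server-side sketch of that suggestion in Java (a hypothetical servlet; the container switches to chunked transfer encoding because no Content-Length is set and the response is flushed repeatedly):

import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HeartbeatServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws java.io.IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        try {
            // checkError() flushes and reports whether a write has failed,
            // which is how a dead client eventually becomes visible here.
            while (!out.checkError()) {
                out.println("KEEPALIVE"); // clients ignore these lines
                Thread.sleep(15_000);     // ties up this thread; fine for a sketch
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // checkError() returned true: the client is most likely gone.
    }
}

The client mirrors this: if no chunk (heartbeat or real data) arrives within the agreed interval, it assumes the connection is dead, closes it, and reconnects.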
