Using a KillSwitch in an Akka HTTP streaming request

I'm using Akka's HTTP client to make a connection to an infinitely streaming HTTP endpoint. I am having difficulty getting the client to close the upstream connection to the HTTP server.
Here's my code (StreamRequest().stream returns a Source[T, Any]. It's generated by Http().outgoingConnectionHttps and then a Flow[HttpResponse, T, NotUsed] to convert HttpResponse to a stream of T):
val (killSwitch, tFuture) = StreamRequest()
  .stream
  .takeWithin(timeToStreamFor)
  .take(toPull)
  .viaMat(KillSwitches.single)(Keep.right)
  .toMat(Sink.seq)(Keep.both)
  .run()
Then I have
tFuture.onComplete { _ =>
  info(s"Shutting down the connection")
  killSwitch.shutdown()
}
When I run the code I see the 'Shutting down the connection' log message but the server tells me that I'm still connected. It disconnects only when the JVM exits.
Any ideas what I'm doing wrong or what I should be doing differently here?
Thanks!

I suspect you should invoke Http().shutdownAllConnectionPools() when tFuture completes. The pool does not close connections when a stream completes, because they can be reused by different stream materialisations. The closed connection you see in the log may be the idle timeout triggering for one of the connections.
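A minimal sketch of that shutdown hook (assuming the implicit ActorSystem is available as system and its dispatcher serves as the ExecutionContext; Http().shutdownAllConnectionPools() returns a Future[Unit]):

import akka.http.scaladsl.Http
import system.dispatcher // ExecutionContext for the Future callbacks

tFuture.onComplete { _ =>
  info("Shutting down the connection")
  killSwitch.shutdown()
  // Release the pooled connections held by the Http extension; the returned
  // Future[Unit] completes once every pool has been shut down.
  Http().shutdownAllConnectionPools().onComplete { _ =>
    info("All connection pools shut down")
  }
}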

Related

gRPC Server-side non-streaming request

I have a gRPC service, and I would like to have a message initiated from the server to get order states from the client. I would like this server=>client request to be synchronous, and the client must initiate the connection because of firewall constraints.
I do not see a way to accomplish this with gRPC messages, but I came up with two approaches that may work.
message OrderStates {
  repeated OrderState order_state = 1;
}
Option 1 - Non-streaming request + Streaming response
service < existing service > {
  rpc OrderStatuses(OrderStates) returns (stream google.protobuf.Empty);
}
With this approach, the client sends OrderStates when it starts up. Each time the server wants the current states from the client, it sends an Empty message on the response stream.
Option 2 - Streaming request + Streaming response
service < existing service > {
  rpc OrderStatuses(stream OrderStates) returns (stream google.protobuf.Empty);
}
This is the same as Option 1, but the client sends the initial request as a streaming request.
Any advice would be helpful.
Your approach is the right way to accomplish this. Without your constraint, the natural design would be for the server to act as a gRPC client and initiate a connection to the client acting as a gRPC server; since the client must initiate the connection, that option is ruled out.
Because of the constraint that the client must initiate the connection, the only solution is to hold the connection open (with a stream) so that the server may send messages to the client unbidden.
I would go with option #2, the semantics of the RPC being "Hey server, ping me when you want OrderStates." The request must be a stream so that the client can keep sending updates.
An unstated optimization may be that, if the client remains alive but does not send an update in response to the server's ping within some timeframe, then the server assumes that there is no update.
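A rough sketch of the client side under option #2 (Java; OrderServiceGrpc is a hypothetical generated stub class, and currentOrderStates() is a hypothetical snapshot of local state):

import com.google.protobuf.Empty;
import io.grpc.stub.StreamObserver;

// Client side of option #2: open the bidirectional stream at startup and
// answer each server ping (Empty) with the current OrderStates.
final class OrderStatePusher {
  private final OrderServiceGrpc.OrderServiceStub stub; // hypothetical generated stub
  private StreamObserver<OrderStates> requestObserver;

  OrderStatePusher(OrderServiceGrpc.OrderServiceStub stub) { this.stub = stub; }

  void start() {
    requestObserver = stub.orderStatuses(new StreamObserver<Empty>() {
      @Override public void onNext(Empty ping) {
        // The server asked for an update; push the latest snapshot.
        requestObserver.onNext(currentOrderStates());
      }
      @Override public void onError(Throwable t) { /* schedule a reconnect */ }
      @Override public void onCompleted()        { /* schedule a reconnect */ }
    });
    requestObserver.onNext(currentOrderStates()); // initial snapshot on startup
  }

  private OrderStates currentOrderStates() {
    return OrderStates.newBuilder().build(); // placeholder: snapshot real state here
  }
}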

SignalR queue limit per user

I have this code to test asynchronous programming in SignalR. It sends the text back to the client after 10 seconds.
public class TestHub : Hub
{
    public async Task BroadcastMessage(string text)
    {
        await DelayResponse(text);
    }

    async Task DelayResponse(string text)
    {
        await Task.Delay(10000);
        Clients.All.displayText(text);
    }
}
This code works fine, but there is an unexpected behavior: when 5 messages are sent in less than 10 seconds, the client can't send more messages until the previous "DelayResponse" calls end. It happens per connection, and if I close and reopen the connection before the 10 seconds are up, the client can send 5 messages again. I tested it with Chrome, Firefox and IE.
Did I make some mistake, or is it a SignalR limitation?
You are most likely hitting a browser limit. When using the longPolling and serverSentEvents transports, each send is a separate HTTP request. Since you are delaying the response, these requests are long-running, and browsers limit how many concurrent connections can be open to the same host. Once you reach the limit, a new connection will not be opened until one of the previous requests completes.
More details on the concurrent request limit:
Max parallel http connections in a browser?
Waiting for a "long running" task like this is not really what SignalR is meant for; SignalR supports a server-push mechanism instead.
So if you have something that needs more time, trigger it from the client.
Once the calculation is finished, send a message from the server to the client.
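A sketch of that push pattern for a SignalR 2.x-style hub (the StartWork name and the fire-and-forget Task.Run are illustrative; a real implementation would track failures of the background work):

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class TestHub : Hub
{
    // Returns immediately; the result is pushed to the caller when ready.
    public void StartWork(string text)
    {
        string connectionId = Context.ConnectionId;
        Task.Run(async () =>
        {
            await Task.Delay(10000); // stand-in for the long-running work
            // Hub instances are transient, so resolve a hub context to push
            // the result after this method has already returned.
            var hub = GlobalHost.ConnectionManager.GetHubContext<TestHub>();
            hub.Clients.Client(connectionId).displayText(text);
        });
    }
}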

Does SignalR provide message integrity mechanisms which ensure that no messages are lost during client reconnect

Abstract
Hi, I was pondering whether it is possible to lose a message with SignalR. Suppose a client disconnects but eventually reconnects in a short amount of time, for example 3 seconds. Will the client get all of the messages that were sent to it while it was disconnected?
For example, let's consider the LongPolling transport. As far as I'm aware, long polling is a simple http request issued in advance by the client in order to wait for a server event.
As soon as a server event occurs, the data gets published on that http request, which closes it. After that, the client issues a new http request, and the whole loop repeats.
The problem
Suppose two events happened on the server, first A then B (nearly instantly). The client gets message A, which closes the http connection. Now, to get message B, the client has to issue a second http request.
Question
Suppose the B event happened while the client was disconnected from the server and trying to reconnect.
Will the client get the B message automatically, or do I have to invent some sort of mechanism to ensure message integrity?
The question applies not only to long polling but to the general situation of client reconnection.
P.S.
I'm using SignalR Hubs on the server side.
EDIT:
I've found out that the order of messages is not guaranteed, but I was not able to make SignalR lose messages.
The answer to this question lies in the EnqueueOperation method here...
https://github.com/SignalR/SignalR/blob/master/src/Microsoft.AspNet.SignalR.Core/Transports/TransportDisconnectBase.cs
protected virtual internal Task EnqueueOperation(Func<object, Task> writeAsync, object state)
{
    if (!IsAlive)
    {
        return TaskAsyncHelper.Empty;
    }

    // Only enqueue new writes if the connection is alive
    Task writeTask = WriteQueue.Enqueue(writeAsync, state);
    _lastWriteTask = writeTask;

    return writeTask;
}
When the server sends a message to a client, it calls this method. In your example above, the server would enqueue 2 messages to be sent; the client would reconnect after receiving the first, and then the second message would be sent.
If the server queues and sends the first message and the client reconnects, there is a small window in which the second message could be enqueued while the connection is not alive, and the message would be dropped on the server end. After the reconnect, the client wouldn't get the second message.
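If that window matters for your application, a common mitigation (not something SignalR provides out of the box) is to stamp messages with a sequence number and let the client ask for anything it missed after a reconnect. A hypothetical sketch:

using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using Microsoft.AspNet.SignalR;

public class ReliableHub : Hub
{
    // Hypothetical in-memory history; in practice use a bounded, per-topic store.
    static readonly ConcurrentDictionary<long, string> History =
        new ConcurrentDictionary<long, string>();
    static long _seq;

    public void Broadcast(string text)
    {
        long seq = Interlocked.Increment(ref _seq);
        History[seq] = text;
        Clients.All.receive(seq, text); // client remembers the last seq it saw
    }

    // Called by the client after a reconnect with the last sequence it received.
    public void Resync(long lastSeen)
    {
        foreach (var kv in History.Where(p => p.Key > lastSeen).OrderBy(p => p.Key))
            Clients.Caller.receive(kv.Key, kv.Value);
    }
}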
Hope this helps

SignalR long polling transport

I'm using SignalR 0.5.3 with hubs, and I'm explicitly setting the transport to long polling like this:
$.connection.hub.start({ transport: 'longPolling' }, function () {
    console.log('connected');
});
with configuration like this (in global.asax.cs Application_Start method):
GlobalHost.DependencyResolver.UseRedis(server, port, password, pubsubDB, "FooBar");
GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(2);
GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(15);
However, long polling doesn't seem to be working on either the development (IIS Express) or the production (IIS 7.5) environment. The connection seems to be made properly, but the long-poll request always times out (after ~2 minutes) and a reconnect happens afterwards. Logs from IIS are here. Response from the first timed-out request:
{"MessageId":"3636","Messages":[],"Disconnect":false,"TimedOut":true,"TransportData":{"Groups":["NotificationHub.56DDB6692001Ex"],"LongPollDelay":0}}
Timed-out reconnect responses look like this:
{"MessageId":"3641","Messages":[],"Disconnect":false,"TimedOut":true,"TransportData":{"Groups":["NotificationHub.56DDB6692001Ex"],"LongPollDelay":0}}
I would appreciate any help regarding this issue. Thanks.
Edit
If reconnect means the beginning of a new long-poll cycle, why is it initiated after ~2 minutes when the KeepAlive setting in global.asax.cs is set to 15 seconds? The problem with this is that I have a reverse proxy in front of IIS which times out keep-alive requests after 25 seconds, so I get a 504 response whenever the reverse-proxy timeout is reached.
Take a look at this post: How SignalR works internally. The way long polling works is that after a set time the connection either times out or receives a response, and then it re-polls (reconnects).
Keep-alive functionality is disabled for long polling; it seems that ConnectionTimeout is used instead. This setting represents the amount of time to leave a transport connection open and waiting for a response before closing it and opening a new connection. The default value is 110 seconds.
https://learn.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/handling-connection-lifetime-events#timeoutkeepalive
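Given the 25-second reverse-proxy limit, one workaround is to lower the long-poll timeout below it, so SignalR re-polls before the proxy kills the request. A sketch against the GlobalHost configuration used above (the exact property may differ across SignalR versions):

// In Application_Start, next to the existing configuration: close idle
// long-poll requests after 20 seconds, below the proxy's 25-second limit,
// so the client re-polls instead of receiving a 504 from the proxy.
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(20);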
If the request times out while the server is not sending any data, even though you expect it to send, it may be an issue on the server side that you don't see yet.

What happens in an application server (Tomcat etc.) when a client request is cancelled and the server is still working (writing to its output)?

If a client cancels its request, the application server is supposed to throw the following error:
java.net.SocketException: Connection reset by peer: socket write error
But what exactly is happening?
Let's say I'm doing a very expensive operation on the server side, and I'm writing some data to the output stream every time my service gets a new result (a kind of streaming).
In the middle of this operation, the client cancels the request. What happens?
Does the operation stop because the socket throws this error when the connection is closed? If it is not stopped, what happens to the data flushed to the output stream after that?
Thanks
I can't tell what Tomcat is doing, but here is what happens at the socket level; either:
- the client closed the socket gracefully: the server is notified about the close and closes its side of the connection too, in which case any buffered data still waiting to be sent is lost; or
- the client cut the socket brutally: the server is NOT notified, and it will detect the connection loss only after a timeout or at the first attempt to send data, which will fail.
So, if your streaming is "constant", the server is always 'protected' against undetected lost connections (the first send attempt cleans things up).
If the streaming is not constant, you should make room for a timeout, or use TCP keep-alives to make sure the connection state is tested on a regular basis.
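A minimal servlet sketch of the "constant streaming" case (the StreamingServlet class and nextResult() helper are hypothetical; the point is that the container surfaces the lost connection on a write attempt):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class StreamingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        PrintWriter out = resp.getWriter();
        String result;
        while ((result = nextResult()) != null) { // the expensive, incremental work
            out.println(result);
            out.flush(); // push to the socket; fails once the client is gone
            if (out.checkError()) {
                // PrintWriter swallows IOExceptions; checkError() reports them.
                // The client cancelled: stop the expensive operation here.
                return;
            }
        }
    }

    private String nextResult() { return null; } // placeholder for real work
}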
Hope it helps.
