Asynchronous Message Queue on Cluster - asp.net

I have a site which in some cases uses a message queue asynchronously.
The method that sends the message returns its id. Then I make an AJAX call to get the response for the message with the saved id.
This works great, but now the site is going to run on a cluster, and that is where my problem starts. I can't ensure that the AJAX call will be received by the same server that sent the message. Is there any known solution to this problem? Any suggestions?
Thanks, Diego

Three solutions come to my mind:
Make the client aware of the server
Dispatch the request on the server based on client input
Dispatch the request on the server by remembering which request was processed earlier by which server
Some time ago I developed a web application like this, and I dispatched the request based on a client input parameter; I think that's the better solution.
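A rough sketch of what that could look like in ASP.NET Web API, assuming the send endpoint returns the name of the node that queued the message and the AJAX poll carries that name back; all route, type, and store names here are illustrative, not an existing API:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

// Illustrative stand-in for wherever each node keeps the responses to its queued messages.
public static class PendingMessages
{
    public static readonly ConcurrentDictionary<string, string> Responses =
        new ConcurrentDictionary<string, string>();
}

public class QueueController : ApiController
{
    private static readonly HttpClient Http = new HttpClient();

    // POST api/queue/send - returns the message id plus the node that owns it.
    [HttpPost]
    public IHttpActionResult Send([FromBody] string payload)
    {
        string messageId = Guid.NewGuid().ToString();   // the real code would queue 'payload' here
        PendingMessages.Responses[messageId] = null;     // response not ready yet
        return Ok(new { messageId, node = Environment.MachineName });
    }

    // GET api/queue/response?messageId=...&node=... - the AJAX poll carries the node name back.
    [HttpGet]
    public async Task<IHttpActionResult> Response(string messageId, string node)
    {
        if (string.Equals(node, Environment.MachineName, StringComparison.OrdinalIgnoreCase))
        {
            PendingMessages.Responses.TryGetValue(messageId, out string response);
            return Ok(new { messageId, response });
        }

        // Another node queued this message: forward the poll to it.
        string forwarded = await Http.GetStringAsync(
            $"http://{node}/api/queue/response?messageId={messageId}&node={node}");
        return Ok(forwarded);
    }
}
```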

Clusters typically have one name and IP address combination, even though more than one node makes up the cluster. If you use the clustered application's name or IP address, you should be directed to the active node of the cluster.


How top-level async web requests handle the response

I have a fundamental question about how async requests work at the top level.
Imagine we have a top-level route called HomePage(). This route is async, and within it we call 10 different APIs before sending the response (imagine it takes about 5 seconds; remember this is an example to understand the concept, and the numbers are for learning purposes). All of these API requests are awaited, so the request handler just releases the thread handling this request and goes off to handle other requests until the responses for these APIs come back.
So let's add a constraint: our network card can handle only one connection, and that one is held open until the response to the HomePage request is ready. Therefore we cannot make any other requests to the server, so what is the difference compared to this whole thing being synchronous from the beginning? We cannot drop the connection for the first request to HomePage, because then how would we ever send back the response for that request - and we cannot handle new requests because the connection is kept open.
I suspect that my problem is how the response is sent back for top-level async routes.
Can anybody give a deep-dive explanation of how these requests are handled such that the server can take more requests and still send back the response (because if it can send back a response, the connection has to have been kept alive)? Examples would be much appreciated.
So let's add a constraint: our network card can handle only one connection
That constraint cannot exist. Network cards handle packets, not connections. Connections are a virtual construct that exists in the host computer.
Can anybody give a deep-dive explanation of how these requests are handled such that the server can take more requests and still send back the response (because if it can send back a response, the connection has to have been kept alive)?
Of course the connection is kept alive. The top-level async method will return the thread to the thread pool, where it is available to handle any other requests.
If you have some artificial constraint on your web app that prevents it from having more than one connection, then there simply won't be any other requests to handle, and the thread-pool threads will sit idle.
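For illustration, a minimal sketch of such a top-level async route (ASP.NET Core style; the downstream URL is made up). While the awaits are pending, no thread is parked on this request, yet the client connection stays open; when the last call completes, a thread-pool thread picks up the continuation and writes the response back over that same connection:

```csharp
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    private static readonly HttpClient Http = new HttpClient();

    [HttpGet("/")]
    public async Task<IActionResult> HomePage()
    {
        // Start all 10 downstream calls; nothing blocks while they are in flight.
        var calls = Enumerable.Range(1, 10)
            .Select(i => Http.GetStringAsync($"https://api.example.com/data/{i}"));

        // The handling thread is released here and returns to the pool.
        string[] results = await Task.WhenAll(calls);

        // A thread-pool thread resumes here and the response goes out on the
        // connection that was kept open the whole time.
        return Ok(string.Concat(results));
    }
}
```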

Python ZeroMQ: connecting two different clients together in a ROUTER and a REP configuration

I have a configuration with the following server and clients:
One server with two bound sockets, a REP and a ROUTER
A client (we will call it a worker) that stays connected to the ROUTER socket
Another (real) client that connects to the REP socket
I want the server to be able to tell the real client to connect (directly, or somehow through the server) to a websocket opened on the worker client. But it seems I cannot retrieve the worker's IP address from a ZeroMQ socket.
How could I achieve this without some dirty IP-address retrieval hacks?
How could I achieve this without some dirty IP-address retrieval hacks?
The best approach would be an explicitly communicated IP address: a handshake dialogue between the server and the worker that takes place during their setup/initialisation, in which the worker advises the server of these configuration details when asked to provide them.
Given that, the "new" real client .connect()-s its REQ to the server's REP and asks the server where to go next; the server can answer this, and the "new" real client thereby receives a legitimate IP address:port (plus any additional details needed) for establishing and using the additional TCP/IP service.
That simple :o)
Design-side epilogue: because each type of ZeroMQ socket access point has further design-side implications hardwired into it, it may prove more appropriate to serve a separate REP access point on the server side for this purpose, so that no "new" real client becomes dependent on events outside the control of both the server and that client, and both such REQ/REP endpoints can instead negotiate nothing but their own temporally (semi-)private details independently.

ASP MVC: Can I drop a client connection programmatically?

I have an ASP.NET Web API application running behind a load balancer. Some clients keep a busy HTTP connection alive for too long, creating unnecessary affinity and causing high load on some server instances. In order to fix that, I wish to gracefully close a connection that is making too many requests in a short period of time (thus forcing the client to reconnect and pick a different server instance), while at the same time keeping low-traffic connections alive indefinitely. Hence I cannot use a static configuration.
Is there some API that I can call to flag a request as "answer this, then close the connection"? Or can I simply add the Connection: close HTTP header, which ASP.NET will see and close the connection for me?
It looks like a good solution for your situation would be the built-in IIS feature called Dynamic IP Restrictions: "To provide this protection, the module temporarily blocks IP addresses of HTTP clients that make an unusually high number of concurrent requests or that make a large number of requests over small period of time."
It is supported by Azure Web Apps:
https://azure.microsoft.com/en-us/blog/confirming-dynamic-ip-address-restrictions-in-windows-azure-web-sites/
I am not 100% sure this would work in your situation, but in the past I have had to block people coming from specific IP addresses based on geography, as well as people coming from common proxies. I created an authorization attribute class following:
http://www.asp.net/web-api/overview/security/authentication-filters
It would dump the person out based on their IP address by returning HttpStatusCode.BadRequest. On every request you would have to check a list of bad IPs in the database and go from there. Maybe you can handle the rest client-side, because they are going to get a ton of errors.
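A rough sketch of that kind of filter, assuming ASP.NET Web API 2; BlockedIpStore is a made-up placeholder for the database-backed list, and how you read the client IP depends on how you host:

```csharp
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Illustrative stand-in for the database-backed list of blocked IPs.
public static class BlockedIpStore
{
    private static readonly HashSet<string> Blocked = new HashSet<string> { "203.0.113.7" };
    public static bool IsBlocked(string ip) => Blocked.Contains(ip);
}

public class BlockAbusiveClientsAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        string clientIp = GetClientIp(actionContext.Request);

        if (BlockedIpStore.IsBlocked(clientIp))
        {
            // Short-circuit the request with a 400 before the controller runs.
            actionContext.Response = actionContext.Request.CreateResponse(
                HttpStatusCode.BadRequest, "Too many requests from this address.");
        }
    }

    private static string GetClientIp(HttpRequestMessage request)
    {
        // When Web API is hosted on IIS the HttpContext is tucked into the request
        // properties; other hosts (OWIN self-host, behind a load balancer with
        // forwarded headers) expose the client address differently.
        return request.Properties.TryGetValue("MS_HttpContext", out var ctx)
            ? ((System.Web.HttpContextBase)ctx).Request.UserHostAddress
            : "unknown";
    }
}
```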
Write an action filter that returns a 302 Found response for the "blocked" IP address. I would hope the client would close the current connection and try again at the new location (which could just be the same URL as the original request).
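Alternatively, here is a hedged sketch of the "answer this, then close the connection" idea from the question itself: an action filter that sets Connection: close on the response for over-active clients, so they have to reconnect and can be re-balanced. Whether the header is honoured depends on the host, and RequestRateTracker is only an illustrative stand-in for a real sliding-window counter keyed by client IP:

```csharp
using System.Collections.Concurrent;
using System.Net.Http;
using System.Web.Http.Filters;

// Illustrative per-client request counter; a real one would key on the client IP
// and reset within a sliding time window.
public static class RequestRateTracker
{
    private static readonly ConcurrentDictionary<string, int> Counts =
        new ConcurrentDictionary<string, int>();

    public static bool TooManyRecentRequests(HttpRequestMessage request)
    {
        string key = request.RequestUri.Host;   // placeholder key; use the client IP in practice
        int count = Counts.AddOrUpdate(key, 1, (_, c) => c + 1);
        return count > 100;
    }
}

public class CloseBusyConnectionsAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        if (actionExecutedContext.Response != null &&
            RequestRateTracker.TooManyRecentRequests(actionExecutedContext.Request))
        {
            // Ask the HTTP stack to close the connection after this response is sent,
            // forcing the client to open a new connection (and be re-balanced).
            actionExecutedContext.Response.Headers.ConnectionClose = true;
        }
    }
}
```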

Retrieve dynamically assigned TCP port from Akka.NET Remote

My job is to write a distributed client/server application with some concurrent tasks, so I decided to use Akka.NET for the concurrency issues. Akka.Remote is used to implement the IPC between server and client. For various reasons there may be more than one client of the same type running on a workstation, so I configured these clients for dynamic assignment of a TCP port. This worked fine for sending messages to the server.
My problem is pushing information to the clients. To accomplish this task an actor exists on the client, and the server creates a reference to this actor; for that it needs the port the client is listening on. My idea is to send the TCP port the client uses to the server in some sort of connection procedure, using an actor on the server.
After searching for some hours I didn't find any hint of where to find the dynamically assigned TCP port. So how would the client get the assigned TCP port?
OK, I could use Akka.Cluster, but using Akka.Cluster for this seems like breaking a butterfly on a wheel. And whether it even solves my issue remains to be seen.
Two suggestions, assuming that it is your client that makes the first contact with the server.
I'd have the server keep track of which clients are connected. I'd probably have a heartbeat message sent once every few seconds from each client system. This way you can store an IActorRef for each live client and send messages back without needing to find the port. IActorRefs are preferable wherever possible for location transparency.
If you actually need to explicitly find the port, you may be able to extract it from the Path property of the IActorRef of one of the actors on the client system.
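A rough sketch of suggestion 2, assuming the client sends some hello message right after connecting (the message and actor names are made up); on the server side, the sender's path carries the remote system's host and its dynamically assigned port:

```csharp
using System;
using Akka.Actor;

// Illustrative handshake message sent by each client right after it connects.
public class Hello { }

public class ClientRegistry : ReceiveActor
{
    public ClientRegistry()
    {
        Receive<Hello>(_ =>
        {
            // For a remote sender the path address looks like
            // akka.tcp://client-system@host:port/user/... - host and port are on it.
            Address remote = Sender.Path.Address;
            string host = remote.Host;
            int? port = remote.Port;   // the dynamically assigned TCP port

            Console.WriteLine($"Client registered from {host}:{port}");

            // Keeping the IActorRef itself is usually enough to push messages back;
            // the explicit host/port is only needed outside of Akka.
            Sender.Tell("registered");
        });
    }
}
```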
Thanks to Patrick's suggestions my issue is solved.
The solution is to extract the needed information from the sender's path, which is available while handling the hello message. With this information the server is able to maintain a list of all connected clients and their network addresses.
Thanks a lot @patrick.
Regards, Gregor

Rebus HTTP gateway and MSMQ health state

Let's say we have
Client node with HTTP gateway outbound service
Server node with HTTP gateway inbound service
I am considering the situation where MSMQ itself stops for some reason on the client node. In the current implementation the Rebus HTTP gateway will catch the exception.
What do you think about the idea that, instead of just being caught, the MessageQueueException could also be sent to the server node and put in the error queue? (The name of the error queue could be gathered from the headers.)
That way, without additional infrastructure, the server would know that the client has a problem, so someone could react.
UPDATE:
I guessed the problems described in the answer would be raised; I should have explained my scenario in more depth :) Sorry about that. Here it is:
I'm going to modify the HTTP gateway so that the InboundService can do both - send and receive messages. The OutboundService would then be the only one to initiate the connection (periodically, e.g. once per 5 minutes) in order to get new messages from the server and send its own messages to the server. That is because the client node is not considered a server but one of many clients sitting behind NAT.
Indeed, the server itself is not interested in client health, but I thought that instead of creating a separate alerting service on the client side which would reuse the HTTP gateway code, the HTTP gateway itself could do this, since having both sides running is quite within the business of the HTTP gateway.
What if the client can't reach the server at all?
Since MSMQ would be dead, I thought about using an in-process, standalone, persistent queue object like this one: http://ayende.com/blog/4540/building-a-managed-persistent-transactional-queue (just an example implementation; I'm not sure what kind of license it has) to aggregate exceptions on the client side until the server is reachable.
And how often will the client notify the server that it has experienced an error?
I'm not sure about that part - I thought it could be tied to the scheduled time of message synchronization, like once per 5 minutes, but what if there is no scheduled time, just like in the current implementation (a while(true) loop)? Maybe it could simply be set by config?
I like to have a consistent strategy for handling errors, which usually involves plain old NLog logging
Since the client nodes will be on the Internet behind NAT, standard monitoring techniques won't work. I thought about using a queue as the NLog transport, but since MSMQ would be dead, that wouldn't work either.
I also thought about using HTTP as the NLog transport, but on the server side it would require a queue (not really, but I would like to store it in a queue), so we are back to Rebus and the HTTP gateway... that kind of NLog transport would de facto be a clone of the HTTP gateway.
UPDATE 2: HTTP as an NLog transport (by transport I mean target) would also require a client-side queue, as I described in the "What if the client can't reach the server at all?" section. It would be a clone of the HTTP gateway embedded into NLog. Madness :)
The whole point is that the client is unreliable, so I want to have all the information about the client on the server side and log it there.
UPDATE 3
An alternative solution could be to create a separate service which would nevertheless be part of the HTTP gateway (e.g. an OutboundAlertService). Then three goals would be fulfilled:
shared sending-loop code
no additional server infrastructure required
no negative impact on the OutboundService (no complexity from adding an in-process queue to it)
It wouldn't take exceptions from the OutboundService; instead it would check MSMQ periodically itself.
Yet another alternative would be simply using a queue other than MSMQ as the NLog target, but that's ugly overkill.
Regarding your scenario, my initial thought is that it should never be the server's problem that a client has a problem, so I probably wouldn't send a message to the server when the client fails.
As I see it, there would be multiple problems/obstacles/challenges with that approach - e.g. what if the client can't reach the server at all? And how often will the client notify the server that it has experienced an error?
Of course I don't know the details of your setup, so it's hard to give specific advice, but in general I like to have a consistent strategy for handling errors, which usually involves plain old NLog logging and configuring WARN and ERROR levels to go to the Windows Event Log.
This allows for setting up various tools (e.g. System Center Operations Manager or similar) to monitor all of your machines' event logs and raise error flags when something goes wrong.
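A minimal sketch of that strategy with programmatic NLog setup (the same rules can live in NLog.config; the target names and event-log source are made up):

```csharp
using NLog;
using NLog.Config;
using NLog.Targets;

public static class LoggingSetup
{
    public static void Configure()
    {
        var config = new LoggingConfiguration();

        var file = new FileTarget { Name = "file", FileName = "${basedir}/logs/client.log" };
        var eventLog = new EventLogTarget { Name = "eventlog", Source = "MyRebusClient" };

        config.AddRule(LogLevel.Debug, LogLevel.Fatal, file);      // everything to the file
        config.AddRule(LogLevel.Warn, LogLevel.Fatal, eventLog);   // WARN and above to the Event Log

        LogManager.Configuration = config;
    }
}

// Usage: anything logged at WARN/ERROR now shows up where the monitoring tool looks.
// var log = LogManager.GetCurrentClassLogger();
// log.Error("MSMQ on this node appears to be down");
```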
I hope I've said something you can use :)
UPDATE
After thinking about it some more, I think I'm beginning to understand your problem, and I think I would prefer a solution where the client lets the HTTP listener at the other end know that it's having a problem, and the HTTP listener at the other end could then (maybe?) log that as an error.
Another option is that the HTTP listener at the other end could have an event, ReceivedClientError or something, that one could attach to and then do whatever is right in the given situation.
In your case, you might put a message in an error queue. I would just avoid putting anything in the error queue as a general solution, because I think it confuses the purpose of the error queue - the "thing" in the error queue wouldn't be a message, and as such it would not be retryable etc.
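Just to illustrate the shape of that idea - this is not an existing Rebus API, only a sketch of what such a hook on the inbound side might look like:

```csharp
using System;

// Hypothetical event args describing a problem reported by a client node.
public class ReceivedClientErrorEventArgs : EventArgs
{
    public string ClientId { get; set; }
    public string ErrorDetails { get; set; }
}

// Sketch of the suggested hook on the inbound HTTP gateway service.
public class InboundService
{
    public event EventHandler<ReceivedClientErrorEventArgs> ReceivedClientError;

    // Called when an incoming request turns out to be a client error report
    // rather than an ordinary message.
    protected virtual void OnReceivedClientError(string clientId, string details)
    {
        ReceivedClientError?.Invoke(this,
            new ReceivedClientErrorEventArgs { ClientId = clientId, ErrorDetails = details });
    }
}
```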
