SignalR.RabbitMq scaleout issue

I have a SignalR hub hosted in IIS on two load-balanced servers, with RabbitMQ as the backplane. RabbitMQ is set up on Server 1, and in the Startup class I register the RabbitMQ backplane pointing to Server 1's IP. Following the example (https://github.com/mdevilliers/SignalR.RabbitMq), I raise a message from the server to all clients every 10 seconds. The issue is that when I run only one client, connected to Server 1, that client starts receiving messages from Server 2 as well, even though it is not physically connected to Server 2.
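For reference, the registration and the 10-second broadcast described above would look roughly like this. This is a sketch only, based on the RabbitMqScaleoutConfiguration / UseRabbitMq API shown in the SignalR.RabbitMq README; the exchange name, hub name, method name and credentials are placeholders, not values from the question.

// Startup.cs - sketch; assumes the SignalR.RabbitMq package's API and using directive.
using System;
using Microsoft.AspNet.SignalR;
using Owin;
using RabbitMQ.Client;

public class MyHub : Hub { }   // placeholder hub

public class Startup
{
    private static System.Threading.Timer _timer;   // keep a reference so the timer is not collected

    public void Configuration(IAppBuilder app)
    {
        var connectionFactory = new ConnectionFactory
        {
            HostName = "server1-ip",   // RabbitMQ running on Server 1
            UserName = "guest",        // placeholder credentials
            Password = "guest"
        };

        // Register the RabbitMQ backplane before mapping SignalR, so both
        // servers publish and consume through the same exchange.
        GlobalHost.DependencyResolver.UseRabbitMq(
            new RabbitMqScaleoutConfiguration(connectionFactory, "SignalRExchange"));

        app.MapSignalR();

        // Broadcast to all clients every 10 seconds via the hub context.
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
        _timer = new System.Threading.Timer(
            _ => hubContext.Clients.All.receiveMessage("tick from " + Environment.MachineName),
            null, TimeSpan.Zero, TimeSpan.FromSeconds(10));
    }
}

Note that with a backplane, a message published by either server goes to the exchange and every server delivers it to its own connected clients (see the backplane description quoted further down this page), so a single client on Server 1 would also see Server 2's timer messages.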
From what I understand, if Client 1 is connected to Server 1 and Client 2 is connected to Server 2, then when a message is sent from Server 1 it reaches Server 2 over the backplane, and the clients connected to Server 2 receive Server 1's message. But in my case, with only one client connected to Server 1, that client also receives messages originating from Server 2.
However, when I tried the sample given in the project (https://github.com/mdevilliers/SignalR.RabbitMq/tree/master/SignalR.RabbitMQ.Example) in the same way, running a timer to send messages to clients with IIS Express on two different ports, the issue was not there: Client 1 received messages only from Server 1, not from Server 2 (even though it was up and running).
Let me know if anything is wrong here.
Thanks

Related

Sharing data between web server(s) and worker server(s)

I'm currently developing an application from the ground up that requires some form of communication between the user-facing web servers and the backend (hidden) worker servers that communicate with endpoint devices. I've diagrammed the whole environment below:
My question is: what is the best way (the industry way, perhaps?) to send requests from the web servers to the worker servers with a request-response structure? My current implementation (as recommended by other answers) uses a Redis message broker (although RabbitMQ and other solutions seem identical in how I am using them) to achieve this communication. The downside is that it's not request-response oriented; a sketch of one request-response pattern over a broker follows the scenario below.
Here is a sample scenario:
User A clicks a "PING Device E" button on the website.
Web Server 1 receives this as an HTTP request. Web Server 1 knows Device E is connected to Worker Server 2 (knowing this is not a problem), and thus sends a message to Worker Server 2, telling it to send the PING request to Device E.
Device E responds to Worker Server 2 with "PONG". Worker Server 2 then completes the request from Web Server 1, telling it that it received "PONG".
Web Server 1 completes the initial HTTP request, telling the user the device responded with "PONG".
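One common way to layer request-response on top of a broker such as RabbitMQ is the correlation-id / reply-queue (RPC) pattern. Below is a minimal C# sketch of the web-server side only; the broker host, queue names and the 10-second timeout are placeholders invented for illustration, not values from the question.

// Sketch of the correlation-id / reply-queue pattern with RabbitMQ.Client 6.x.
using System;
using System.Text;
using System.Threading;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class DevicePingRpc
{
    // Called by Web Server 1 when the user clicks "PING Device E".
    public static string Ping(string deviceId)
    {
        var factory = new ConnectionFactory { HostName = "broker-host" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Server-named, exclusive reply queue for this request.
            var replyQueue = channel.QueueDeclare().QueueName;
            var correlationId = Guid.NewGuid().ToString();

            var props = channel.CreateBasicProperties();
            props.CorrelationId = correlationId;
            props.ReplyTo = replyQueue;

            // "worker2.requests" stands in for the queue Worker Server 2 consumes.
            channel.BasicPublish(exchange: "",
                                 routingKey: "worker2.requests",
                                 basicProperties: props,
                                 body: Encoding.UTF8.GetBytes("PING " + deviceId));

            // Worker Server 2 is expected to ping Device E and publish "PONG"
            // to props.ReplyTo, echoing the same CorrelationId.
            string reply = null;
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                if (ea.BasicProperties.CorrelationId == correlationId)
                    reply = Encoding.UTF8.GetString(ea.Body.ToArray());
            };
            channel.BasicConsume(queue: replyQueue, autoAck: true, consumer: consumer);

            // Crude wait for the sketch; a real implementation would use a
            // TaskCompletionSource with a timeout instead of polling.
            var deadline = DateTime.UtcNow.AddSeconds(10);
            while (reply == null && DateTime.UtcNow < deadline)
                Thread.Sleep(50);

            return reply; // "PONG", or null if the worker did not answer in time
        }
    }
}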

BizTalk 2016 sFTP WinSCP - No more messages can be received

Our BizTalk 2016 environment consists of two application servers running in a group, with CU5 and FP3.
We have 24 applications deployed. Across these applications we have 27 receive locations configured with the new sFTP WinSCP adapter. For all sFTP receive applications the "Connection Limit" is set to 5. We connect to 6 different sFTP servers.
After approximately 2 hours, we get the following event log warnings and the receive locations stop working:
"The adapter "SFTP" raised an error message. Details "The WCF service
host at address "sftp://..." has faulted and as a result no more
messages can be received on the corresponding receive location. To fix
the issue, BizTalk Server will automatically attempt to restart the
service host."
Contrary to the event log message, the service host does not restart automatically.
Does anyone have an idea how to fix this issue?
Try out CU7 as it includes a couple of SFTP fixes.
The latest version of BizTalk Health Monitor comes up with the following Important Warning
The host instances of 'hostinstancename' need more worker threads per cpu to run correctly SFTP Receive Locations. Increase so the "Maximum Worker Threads" property to 500 for these host instances and be sure they are dedicated for this SFTP Receive Locations
So things to look at are
Have a dedicated host for receive locations using SFTP
Increase the "Maximum Worker Threads" setting to 500
Check how frequently you poll (the default is 5 seconds)
Put a schedule on to only poll during the periods you need.
Disable message body tracking if it is not needed.

Published ASP.NET website not accessible from other systems on the same network

I have developed an ASP.NET website and published it on a server. Everything works perfectly on that server, both when I browse the site from IIS Manager and directly on the server (the published system). But I also need to access the same published application from other systems connected to the same network. How is that possible?
NOTE:
1. Using IIS 7.
When trying to connect from another system on the same network, I get the following error:
The socket connection to 172.31.7.243 failed.
ErrorCode: 10060. A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 172.31.7.243:90
If you are accessing your application on the hosted server like this:
[localhost]/MyProjectName
then just replace localhost with the IP address of the hosted server to access it over the network, like this:
[IPaddress_of_HostedServer]/MyProjectName
It is possible that incoming HTTP traffic on port 80 from the local network is blocked.
You should check whether ISA Server is installed on your server and, if so, whether HTTP traffic on port 80 is allowed. If it's blocked, try to allow it for the local network IP range.
Edit: Or port 90 as your error code states, but why are you working on port 90?
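If the error persists, it can help to confirm from another machine whether the port is reachable at all before digging further into IIS. A minimal C# probe follows; the IP and port are taken from the error message above, and this only tells you whether something answers on that port, not whether IIS serves the site.

// Reachability probe to run from another machine on the network.
using System;
using System.Net.Sockets;

class PortProbe
{
    static void Main()
    {
        using (var client = new TcpClient())
        {
            var asyncResult = client.BeginConnect("172.31.7.243", 90, null, null);
            var connected = asyncResult.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(5))
                            && client.Connected;

            Console.WriteLine(connected
                ? "Port reachable - check the site binding in IIS and the request itself."
                : "Port not reachable - check Windows Firewall (or ISA Server) rules for this port.");
        }
    }
}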

Are all clients in a group assured to receive SignalR calls when SignalR is scaled out behind a load balancer?

I've been looking into a SignalR implementation incorporated with a load balancer, and have a few basic (if not simple-sounding) questions.
I must preface this by saying I've got zero (0) experience with load balancers.
We will have 2 servers sitting behind a load balancer.
The client is an ASP.NET application.
We've been told that the load balancer maintains session affinity.
Consider the following scenario:
Client1 & Client2 -- connect to GroupA--> Server1
Client3 & Client4 -- connect to GroupA--> Server2
1) Server1 makes a client call to GroupA - this assumes that Clients 1-4 will get the notification, correct?
2) How does the processing occur on this?
3) Is it a function of SignalR itself, or the load balancer?
4) When sending messages at the group level, do messages only get delivered to the client apps associated with the group on that specific server, or will messages get forwarded to all clients of that group?
Does anyone have any thoughts on this?
Thanks,
JB
I believe the scenario you're looking at requires a SignalR backplane to be set up.
Here's a relevant selection from the article, but you'll want to read the full thing to answer your specific questions:
Each server instance connects to the backplane through the bus. When a message is sent, it goes to the backplane, and the backplane sends it to every server. When a server gets a message from the backplane, it puts the message in its local cache. The server then delivers messages to clients from its local cache.
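In code terms, group joins and group sends go through the normal hub API, and the backplane handles fanning group messages out across servers. A minimal sketch follows; the hub, method and group names are placeholders.

// Minimal hub sketch for group membership and group sends.
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class NotificationHub : Hub
{
    public Task JoinGroupA()
    {
        // Group membership is tracked per connection on the server the client
        // is actually connected to.
        return Groups.Add(Context.ConnectionId, "GroupA");
    }

    public void NotifyGroupA(string message)
    {
        // With a backplane registered, this send is published to the bus and
        // each server delivers it to its own members of GroupA - Clients 1-4
        // in the scenario above, regardless of which server they sit behind.
        Clients.Group("GroupA").notify(message);
    }
}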

Server-Sent Events queries

We have a requirement wherein the server needs to push data to various clients, so we went ahead with SSE (Server-Sent Events). I went through the documentation but am still not clear on the concept. I have the following queries:
Scenario 1: Suppose there are 10 clients. All 10 clients send the initial request to the server, and 10 connections are established. When data enters the server, a message is pushed from the server to the clients.
Query 1: Will the server maintain the IP addresses of all the clients? If yes, is there an API to check this?
Query 2: What will happen if all 10 client windows are closed? Will the server abort all the connections after a period of time?
Query 3: What will happen if the server is unable to send messages to a client because the client is unavailable (for example, the machine is shut down)? Will the server abort the connections for those clients after a period of time?
Please clarify.
This depends on how you implement the server.
If using PHP, as an Apache module, then each SSE connection creates a new PHP instance running in memory. Each "server" is only serving one client at a time. Q1: yes, but that's not your problem: you just echo messages to stdout. Q2/Q3: If the client closes the connection, for any reason, the PHP process will shut down when it detects this.
If you are using a multi-threaded server, e.g. using http in node.js: Q1: the client IP is part of the socket abstraction, and you just send messages to the response object. Q2/Q3: as each client connection closes the socket, the request process that was handling it will end. Once all 10 have closed, your server will still be running, but not sending data to any clients.
One key idea to realize with SSE is that each client is a dedicated socket. It is not a broadcast protocol, where you push out one message and all clients get exactly the same message. Instead, you have to send the data to each client, individually. But that also means you are free to send customized data to each client.
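The answer above uses PHP and node.js as its examples; to make the "dedicated socket per client" point and the wire format concrete in .NET terms, here is a minimal HttpListener sketch. The URL prefix and payloads are invented for illustration.

// Minimal SSE endpoint using HttpListener.
using System;
using System.IO;
using System.Net;
using System.Text;
using System.Threading.Tasks;

class SseServer
{
    static async Task Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/events/");
        listener.Start();

        while (true)
        {
            // One HttpListenerContext per client connection; each client gets
            // its own long-lived response stream (its "dedicated socket").
            var context = await listener.GetContextAsync();
            _ = Task.Run(() => ServeClientAsync(context));
        }
    }

    static async Task ServeClientAsync(HttpListenerContext context)
    {
        var response = context.Response;
        response.ContentType = "text/event-stream";
        response.AppendHeader("Cache-Control", "no-cache");

        try
        {
            for (var i = 0; ; i++)
            {
                // SSE wire format: "data: <payload>\n\n". Because each client has
                // its own stream, the payload can be customized per client.
                var payload = Encoding.UTF8.GetBytes(
                    $"data: tick {i} for {context.Request.RemoteEndPoint}\n\n");
                await response.OutputStream.WriteAsync(payload, 0, payload.Length);
                await response.OutputStream.FlushAsync();
                await Task.Delay(TimeSpan.FromSeconds(1));
            }
        }
        catch (Exception ex) when (ex is HttpListenerException || ex is IOException)
        {
            // The client went away (window closed, machine shut down, ...): the next
            // write fails and this per-client task ends, releasing the connection.
        }
        finally
        {
            response.Close();
        }
    }
}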
