Azure Redis Cache backplane logging / SignalR logging

I have a single-page Azure web app that uses SignalR ("Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha1-final") to broadcast events (login, logout, department creation, etc.) to connected clients.
I also scale the application out to several instances at peak times, and I use the Redis backplane ("Microsoft.AspNetCore.SignalR.Redis" Version="1.0.0-alpha2-final") to distribute broadcast messages to all connected clients.
The front end is Angular ("@aspnet/signalr-client": "^1.0.0-alpha2-final").
On Azure, I enabled diagnostic logs to capture information and error messages.
The above works fine, but when I scale out the application it becomes difficult to trace information or error messages, because I have to look through up to 10 instances' application logs to find them.
My questions: How do I ensure Redis logs error and information messages on all available instances, rather than only on the instances where a client is connected? How do I know whether a client has missed an event broadcast message? And how do I ensure the SignalR server/hub logs all messages in every instance's application log?
Thank you in advance.

The best way to do this might be to use redis-cli, run the MONITOR command, and pipe the output to a file (or somewhere else that can store the logs).
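A minimal sketch of that approach (the host name, port, and key below are placeholders for your Azure Redis instance; also note that Azure Cache for Redis restricts some commands, so first verify MONITOR is available on your tier):

```
redis-cli -h your-cache.redis.cache.windows.net -p 6379 -a <access-key> MONITOR >> signalr-backplane.log
```

Because every instance's backplane traffic goes through the same Redis server, this gives you one consolidated stream of publish/subscribe activity instead of ten separate instance logs. Bear in mind that MONITOR adds noticeable overhead on a busy cache, so it is better suited to debugging sessions than to permanent logging.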

TCP connection in a web application

I really need your advice on this.
I have many TCP client devices. The web application will be accessed by many users after authentication.
The problems I need to solve are:
1. Create a TCP listener for these client machines that can be accessed by every user.
My idea for a solution: create a TCP connection on a page, so every user opens a new TCP connection from their device (localhost) once the page loads. This is possible because every user's PC is different, so each would be an entirely separate connection. But this solution does not address problem 2.
2. The machine broadcasts data every 30 seconds, so my application should be able to catch that data and update it on the page.
This, I think, is the main problem.
I know live data updates on a web page can be done using SignalR, but SignalR does not connect to a TCP client machine directly. So here is what I tried:
First, I tried making a WCF service act as the TCP listener. The WCF service would get data from the machine and save it into a database, from where SignalR would do its job. But whenever I use the WCF service my system hangs, so I don't know whether that is the right approach.
Second, I tried creating a Windows service as the TCP listener. But I don't think that will work with a web application.
To be frank, I am not able to figure out what I should do for this functionality.
From my side: I just want a TCP connection at the application level that is persistent, independent of any user, and does not close on page reloads.
Whenever that connection receives data, it should be pushed to every user's web browser without any reload.
I cannot use a timer; it should be real time.
From what I have found and understood, the TCP connection is with a device, so SignalR cannot be used directly. We need something else (like a service) in between to make it work with SignalR.
So, in the end, what should I do?
I hope I have stated my problem clearly enough.
I just want to discuss this so that my doubts are cleared up and I can arrive at the desired result.
EDIT 1:
In WebSockets and SignalR, "client" normally refers to the web browser.
Can a device (a multimedia control device) be a SignalR client in the same way a web browser is?
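For what it's worth, here is a rough sketch of the "service in between" approach described above, as a single ASP.NET Core app: a background service owns the persistent, application-level TCP connection and pushes whatever it receives to all browsers through a SignalR hub. The hub name, device address, port, and message framing are all illustrative assumptions, not a known-good implementation:

```csharp
using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;

public class DeviceHub : Hub { }  // browsers connect to this hub

// One app-level TCP connection: persistent, independent of users,
// unaffected by page reloads.
public class DeviceListener : BackgroundService
{
    private readonly IHubContext<DeviceHub> _hub;
    public DeviceListener(IHubContext<DeviceHub> hub) => _hub = hub;

    protected override async Task ExecuteAsync(CancellationToken token)
    {
        using var client = new TcpClient();
        await client.ConnectAsync("device-host", 9000);   // assumed device endpoint
        var stream = client.GetStream();
        var buffer = new byte[4096];

        while (!token.IsCancellationRequested)
        {
            int read = await stream.ReadAsync(buffer, 0, buffer.Length, token);
            if (read == 0) break;  // device closed the connection
            string payload = Encoding.UTF8.GetString(buffer, 0, read);
            // Push to every connected browser: no reload, no timer.
            await _hub.Clients.All.SendAsync("deviceData", payload, token);
        }
    }
}

// Registration (in Program/Startup):
//   services.AddSignalR();
//   services.AddHostedService<DeviceListener>();
```

On the EDIT 1 question: a device can be a SignalR client only if it runs a SignalR client library and speaks the SignalR protocol; a device that only talks raw TCP cannot, which is why a bridge like the sketch above is needed.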

WebSockets with Redis backplane scaleout - multiple Redis channels per user or one Redis channel for all users

I am connecting clients to our servers using SignalR (the same idea as socket.io WebSockets) so I can send them notifications about activity in the system. It is NOT a chat application; each message, when sent, is for one particular user only.
These clients are connected across multiple web servers, and those servers are subscribed to a Redis backplane, as described in this article: http://www.asp.net/signalr/overview/performance/scaleout-in-signalr
My question: for this kind of notification system, should the Redis pub/sub backplane have multiple channels, one per user, with the app server listening on each connected user's notification channel? Or one channel for all notifications, with each app server parsing every message, figuring out whether it has that user id connected, and sending the message on to that user?
Based on the little I know about the details of your application, I think you should create channels/lists in the backplane/Redis on a per-client basis. This is cheap in Redis, and it gives the server-side process handling a specific client only the notifications that client is supposed to have.
This saves your application from iterating over or handling irrelevant data, which could have performance implications at scale; and if security is at all a concern (I don't know the domain or application), it is best never to retrieve or receive information that wasn't intended for a particular client.
I will pose a final question, and some thoughts, which I think support my opinion. If you don't do this on a client-by-client basis, how will you handle the case where the user is not present to receive a message? You would either have to throw that message away, or have the application server handle that unreceived message for every single client, every time it polls or otherwise receives information from Redis. That could really add up. Although, without knowing the details of the application, I'm not sure whether this paragraph is relevant.
At the end of the day, though approaches and opinions may vary depending on the application, I would think about the architecture in terms of the entities you outlined. You have clients, and they send and receive messages. Those messages should be associated with each of the parties involved somehow, and they should be stored in a manner that is efficient for lookup and that helps define the structure of the application.
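As a rough illustration of the per-user channel option, here is a sketch using StackExchange.Redis; the connection string, channel naming convention, and user id are assumptions, not part of the question:

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

class PerUserChannels
{
    static async Task Main()
    {
        var redis = await ConnectionMultiplexer.ConnectAsync("localhost:6379"); // placeholder
        var sub = redis.GetSubscriber();

        // Each app server subscribes only for the users currently connected to it.
        string userId = "12345"; // hypothetical user
        await sub.SubscribeAsync($"notifications:{userId}", (channel, message) =>
        {
            // Forward to that user's SignalR connection(s) here.
            Console.WriteLine($"{channel}: {message}");
        });

        // Publisher side: target exactly that one user's channel.
        await sub.PublishAsync($"notifications:{userId}", "Your report is ready.");
    }
}
```

For the "user not present" case above, one option is to have the publisher also push into a per-user Redis list, so undelivered notifications can be drained when the user reconnects.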
Hope my 2c helps!

Is it possible to improve this ZMQ architecture?

Intro:
In the architecture below, there are three key components:
Users - machines where the user application is running.
Applications - processes running inside the remote server.
Gateway/Broker - required for isolation between user devices and server applications.
Message flow between a user device and a server application should happen as follows:
A user shall transmit a message to the remote server, to be consumed by one or more server applications.
An application shall broadcast/publish a message to all connected users.
An application shall send a message to a particular user device (unicast).
In addition, one or more users may connect to or disconnect from the server arbitrarily, and one or more applications may be spawned or terminated arbitrarily.
For the above problem statement, I have designed the ZMQ architecture below.
The Gateway/Broker handles the arbitrary arrival and departure of users and applications, and it provides the required isolation. It publishes user messages to all applications, and it aggregates all messages that applications need to send to users via a SUB socket.
An application sends a two-part message: the first part is the user identity and the second part is the actual payload. The Gateway/Broker forwards the message to the user with that identity. A special identity is reserved for broadcast; if the gateway receives the broadcast identity, it publishes the message to all users via its PUB socket.
A user connects to both the gateway's ROUTER and PUB sockets and fair-queues the data received from both. When sending, a user sends only to the gateway's ROUTER socket, never the PUB socket.
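For concreteness, here is a minimal sketch of that gateway in C# with NetMQ; the port numbers, the "BCAST" broadcast identity, and the exact socket layout are my assumptions from the description above:

```csharp
using System.Linq;
using System.Text;
using NetMQ;
using NetMQ.Sockets;

// Gateway/Broker sketch. NetMQ address convention: "@" binds, ">" connects.
using var userRouter = new RouterSocket("@tcp://*:5550");      // users talk here
using var userPub    = new PublisherSocket("@tcp://*:5551");   // broadcasts to users
using var appPub     = new PublisherSocket("@tcp://*:5560");   // user msgs out to apps
using var appSub     = new SubscriberSocket("@tcp://*:5561");  // app -> user traffic in
appSub.Subscribe("");                                          // receive everything

byte[] broadcastId = Encoding.ASCII.GetBytes("BCAST");         // assumed broadcast identity

userRouter.ReceiveReady += (_, e) =>
{
    var m = e.Socket.ReceiveMultipartMessage();                // [identity, payload]
    appPub.SendFrame(m[1].ToByteArray());                      // publish to all applications
};

appSub.ReceiveReady += (_, e) =>
{
    var m = e.Socket.ReceiveMultipartMessage();                // [identity, payload]
    if (m[0].ToByteArray().SequenceEqual(broadcastId))
        userPub.SendFrame(m[1].ToByteArray());                 // broadcast to all users
    else
        userRouter.SendMoreFrame(m[0].ToByteArray())           // unicast by identity
                  .SendFrame(m[1].ToByteArray());
};

using var poller = new NetMQPoller { userRouter, appSub };
poller.Run();
```

Two caveats relative to the metrics below: PUB/SUB silently drops messages for absent or slow subscribers once the high-water mark is reached, so the lossless requirement still needs an ACK/retransmit layer on top; and a ROUTER silently discards messages for unknown identities unless Options.RouterMandatory is set.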
Questions:
Q1: Are there any flaws in the above architecture?
Q2: Is it possible to improve it further?
Metrics assumed for Q2:
The users and applications are dynamic in nature; they connect and disconnect on their own, and the design should withstand that.
A user reports its status to the server periodically; the design should keep latency under 333 ms (a user connects to the server over the internet, and the WAN connectivity between user and server has latency well under 333 ms).
Lossless transmission between server and users (ACKing at the backend, retransmission if lost).
You can try Malamute, which gives you what you need and more, such as credit-based flow control, keep-alives, and message tracking.
Malamute is a small broker based on ZeroMQ and is part of the ZeroMQ community. You can run Malamute as a component inside your application; you don't need a dedicated service or daemon for it.
If you are using C or C++, it is a no-brainer, as it integrates naturally; it also has bindings for many more languages.
https://github.com/zeromq/malamute

SQL to resume suspended messages in order

We have an upcoming deployment for a system that processes a lot of messages through BizTalk. Since those messages are cumulative updates, they need to be queued up during the deployment outage and then processed in order when the deployment is finished. Since there may be a large number of them, it's difficult to do this manually.
One possible solution is to leave the send port stopped and let the messages suspend; we can then resume them in order when the deployment is complete.
Is it possible to run a SQL script (or a tool) against the BizTalk MessageBox database that will resume suspended messages, for a specific port, in order of receipt?
If you have an ordered-delivery requirement (you either do or you don't), then the send port should be marked for Ordered Delivery.
If so, then when you start a stopped send port, the messages will be processed in the same order in which they were submitted.
If you stop the port (but leave it subscribed) and start it again afterwards, it should resume the messages itself; if not, it is simple enough to go into the Administration Console and batch-resume them.
However, if the send port's response messages are subscribed to by running orchestrations, you will not be able to undeploy those orchestrations until they have all completed, so stopping the send port would not work in that scenario.
Another option, if the initiating port is a one-way receive, is sometimes to stop the receive location and let everything complete. You can then stop the application, redeploy, and restart it, and the send port will pick up all the waiting messages to process.
If the above is not possible, you may want to look at a side-by-side deployment, where you increment the version numbers of all the assemblies in the solution so that both versions can be deployed at the same time; the old version can then finish running while the new version processes any new messages.
The better option is to send the messages to MSMQ; usually no extra coding is required for this. You can route messages to MSMQ using the MSMQ adapter and then, after the deployment, receive them in order, since the MSMQ adapter supports ordered receive. Just make sure you run a small test in your QA environment before doing it in production.
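In BizTalk itself this is adapter configuration rather than code, but for anyone staging messages outside BizTalk, it is the transactional queue that preserves order. A minimal sketch with the classic System.Messaging API (.NET Framework; the queue path and message bodies are placeholders):

```csharp
using System.Messaging;  // .NET Framework assembly (System.Messaging.dll)

class StageToMsmq
{
    static void Main()
    {
        const string path = @".\private$\pending-updates";    // placeholder queue name
        if (!MessageQueue.Exists(path))
            MessageQueue.Create(path, transactional: true);   // transactional => order preserved

        using (var queue = new MessageQueue(path))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send("cumulative update #1", tx);           // sent inside one transaction,
            queue.Send("cumulative update #2", tx);           // so they arrive in order
            tx.Commit();
        }
    }
}
```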

Reliable WCF service with MSMQ + order-processing web application: one-way call delivery

I am trying to implement a reliable WCF service with MSMQ based on this architecture (http://www.devx.com/enterprise/Article/39015).
A message may be lost if the queue is not available (even a cluster doesn't provide zero downtime).
Take a look at this simple order-processing workflow:
1. A user enters credit card details and makes a payment.
2. The application receives a success result from the payment gateway.
3. The application sends a message as a "fire and forget"/"one-way" call to a backend service over the WCF MSMQ binding.
4. The user is redirected to the "success" page.
5. The message is stored in a REMOTE transactional queue (Windows cluster).
6. The backend service dequeues and processes the message, completes the complex order-processing workflow and, as a result, sends an email confirmation to the user.
Everything works as expected.
What I cannot understand is how we can guarantee that all "one-way" calls are delivered to the queue.
Duplex communication is not an option, because the user should be redirected to the result page ASAP.
Imagine the case where a user receives the "success" page with wording like "…Your payment was made, your order has started processing, and you will receive email notifications later…" but the message itself is lost.
How can durability be implemented for step 3?
One possible solution I can see is:
3a. Create a database record with the transaction details, marked as uncompleted, just to have some record of the transaction. This record can be used as a starting point for reprocessing in case the message never makes it into the queue.
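A rough sketch of 3a (all types and member names here are hypothetical; the key point is that the database insert and the queued send share one distributed transaction):

```csharp
using System.ServiceModel;
using System.Transactions;

public class Order { /* payment and order details */ }

[ServiceContract]
public interface IOrderService
{
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(Order order);
}

public class OrderPlacement
{
    private readonly IOrderService _backendProxy;          // generated netMsmqBinding proxy
    public OrderPlacement(IOrderService backendProxy) => _backendProxy = backendProxy;

    private void SavePendingOrder(Order order) { /* hypothetical ADO.NET INSERT */ }

    // Write the "uncompleted" record and enqueue the message in one distributed
    // transaction, so a committed send always has a matching record, and a
    // failed send leaves a 'Pending' row behind as evidence.
    public void PlaceOrder(Order order)
    {
        using (var scope = new TransactionScope())
        {
            SavePendingOrder(order);          // INSERT ... Status = 'Uncompleted'
            _backendProxy.SubmitOrder(order); // one-way call over netMsmqBinding
            scope.Complete();                 // DTC commits DB insert + MSMQ send together
        }
    }
}
```

A scheduled job can then re-submit, or alert on, any row still 'Uncompleted' after the backend's expected processing window; the backend marks the row 'Completed' when it finishes.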
I read this post:
"The main thing to understand about transactional MSMQ is that there are three distinct transactions involved in a transactional send to a remote queue.
1. The sender writes the message to a local queue.
2. The queue manager on the sender's machine transmits the message across the wire to the queue manager on the recipient machine.
3. The receiver service processes the queued message and then removes it from the queue."
But that doesn't solve the issue described above; as far as I know, WCF netMsmqBinding doesn't use a local queue to send messages to a remote one.

"But it doesn't solve the described issue - as I know, WCF netMsmqBinding doesn't use a local queue to send messages to a remote one."
Actually, this is not correct. MSMQ always sends to a remote queue via a local queue, regardless of whether you are using WCF or not.
If you send a message to a remote queue and then look at Message Queuing in Computer Management, you will see under Outgoing Queues that a queue has been created with the address of the remote queue. This is a temporary queue which is created for you automatically. If the remote queue is for some reason unavailable, the message sits in the local queue until the remote queue becomes available, and then it is transmitted.
So durability is provided by the three-phase commit:
1. transactionally write the message locally
2. transactionally transmit the message
3. transactionally receive and process the message
There are instances where you may drop messages, for example if your message processing happens outside the scope of the dequeue transaction, and instances where it is not possible to know whether processing succeeded (e.g., a call to a back-end web service times out), and of course you could have a badly formed message which will never process successfully; but in all of these cases it should be possible to design for the failure.
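To make that first point concrete, the usual WCF pattern for keeping processing inside the dequeue transaction looks like this (reusing the illustrative IOrderService contract from the sketch in the question):

```csharp
using System.ServiceModel;

public class OrderService : IOrderService
{
    // TransactionScopeRequired enlists this method in the same transaction that
    // dequeued the MSMQ message; TransactionAutoComplete commits it when the
    // method returns. If processing throws, the message rolls back onto the
    // queue instead of being lost.
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void SubmitOrder(Order order)
    {
        // Process the order here: normal return == message consumed,
        // exception == message returned to the queue for retry.
    }
}
```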
If you're using public queues in a clustered environment, then I think there may be more scope for failure, as clustering MSMQ introduces complexity (I have not really used it, so I don't know), so try to avoid that if possible.
