ASP.NET chat application using a database for message queue

I have developed a chat web application which uses a SQL Server database for exchanging messages.
All clients poll every x seconds to check for new messages.
It is obvious that this approach consumes a lot of resources, and I was wondering if there is a "cheaper" way of doing it.
I use the same approach for "presence": checking who is online.

Without using a browser plugin/extension like Flash or a Java applet, the browser is essentially a one-way communication tool. The request has to be initiated by the browser to fetch data; you cannot 'push' data to the browser.
Many web apps use Ajax polling to simulate a server 'push'. The trick is to balance the polling frequency and data size against your bandwidth and server resources.
I just did a simple observation of Gmail. It does an HTTP POST poll every 5 seconds. If there's no 'state' change, the response is only a few bytes (not including the HTTP headers). Of course Google has huge server resources and bandwidth, which is why I mention finding a good balance.
That is "improving user experience vs. server resources". You might need to come up with a creative polling strategy, instead of straightforwardly polling every x seconds.
E.g. if there is no activity from party A, poll every 5 seconds; while party A is typing, poll every 3 seconds. This is just an illustration; you can play around with the numbers, or come up with a more efficient scheme.
Lastly, the data exchange. The challenge is to find a way to pass minimum data sizes to convey the same info.
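Putting those two ideas together, here is a minimal sketch of the server side of such a poller, assuming an ASP.NET HTTP handler; the ChatRepository helper, the parameter names, and the intervals are all illustrative, not an existing API:

```csharp
using System.Collections.Generic;
using System.Web;
using System.Web.Script.Serialization;

public class ChatPollHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        long since = long.Parse(context.Request["since"] ?? "0");
        List<string> messages = ChatRepository.GetMessagesSince(since);

        // Adaptive polling: ask busy clients to come back sooner than idle ones.
        int nextPollMs = messages.Count > 0 ? 3000 : 5000;

        // Empty payloads stay tiny: just the interval hint and an empty array.
        context.Response.ContentType = "application/json";
        context.Response.Write(new JavaScriptSerializer().Serialize(
            new { messages = messages, nextPollMs = nextPollMs }));
    }

    public bool IsReusable { get { return true; } }
}

// Stubbed data access; in reality a narrow SELECT ... WHERE MessageId > @since.
static class ChatRepository
{
    public static List<string> GetMessagesSince(long since) { return new List<string>(); }
}
```

The client-side script would read nextPollMs from each response and schedule its next request accordingly.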
my 2 cents :)

For something like a real-time chat app, I'd recommend a distributed cache with a SQL backing. I happen to like memcached with the Enyim .NET provider, so I'd do something like the following:
User posts message
System writes message to database
System writes message to cache
All users poll cache periodically for new messages
The database backing allows you to preload the cache in the event the cache is cleared or the application restarts, but the functional bits rely on in-memory cache, rather than polling the database.
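For illustration, here is a rough sketch of that flow using the real Enyim.Caching client; the ChatMessage type, the key naming, and the stubbed database helpers are assumptions:

```csharp
using System;
using System.Collections.Generic;
using Enyim.Caching;
using Enyim.Caching.Memcached;

[Serializable] // Enyim's default transcoder binary-serializes stored objects
public class ChatMessage
{
    public int RoomId;
    public string From;
    public string Text;
}

public class ChatService
{
    // Enyim reads its memcached server list from app.config/web.config.
    private static readonly MemcachedClient Cache = new MemcachedClient();

    public void PostMessage(ChatMessage message)
    {
        SaveToDatabase(message); // durable copy first (stubbed below)

        string key = "room:" + message.RoomId;
        var recent = Cache.Get<List<ChatMessage>>(key)
                     ?? LoadRecentFromDatabase(message.RoomId);
        recent.Add(message);
        // Pollers read this key instead of hitting SQL. A production version
        // would use a check-and-set (Cache.Cas) to avoid lost concurrent updates.
        Cache.Store(StoreMode.Set, key, recent);
    }

    public List<ChatMessage> GetRecentMessages(int roomId)
    {
        // If the cache was flushed or the app restarted, fall back to SQL.
        return Cache.Get<List<ChatMessage>>("room:" + roomId)
               ?? LoadRecentFromDatabase(roomId);
    }

    private void SaveToDatabase(ChatMessage message) { /* INSERT via ADO.NET/LINQ */ }

    private List<ChatMessage> LoadRecentFromDatabase(int roomId)
    {
        /* SELECT the latest N rows for the room */
        return new List<ChatMessage>();
    }
}
```

Writing to the database first means a cache flush only costs a reload from SQL, not lost messages.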

If you are using SQL Server 2005 you can look at Notification Services. Granted, this would lock you into SQL Server 2005, as Notification Services was removed in SQL Server 2008. It was designed to allow SQL Server to notify client applications of changes to the database.
If you want something a little more scalable, you can put a couple of bit flags on the Users record. When a message for the user comes in, set the new-messages bit to 1; when the user reads their messages, set it back to 0. Do the same for when people sign on and off. That way you are reading a very small field that has a damn good chance of already being in cache.
So the workflow would be: read the bit. If it's 1, go get the messages from the message table. If it's 0, do nothing.
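A minimal sketch of that read, assuming a HasNewMessages bit column on a Users table (names are illustrative):

```csharp
using System.Data.SqlClient;

public static class PresenceCheck
{
    // Returns true only when the Messages table actually needs to be queried.
    public static bool HasNewMessages(string connectionString, int userId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT HasNewMessages FROM dbo.Users WHERE UserId = @UserId", conn))
        {
            cmd.Parameters.AddWithValue("@UserId", userId);
            conn.Open();
            // A one-bit read against a row that is very likely already in cache.
            return (bool)cmd.ExecuteScalar();
        }
    }
}
```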

In ASP.NET 4.0 you can use the Observer pattern with JavaScript objects and arrays, i.e. AJAX JSON calls with jQuery and/or PageMethods.
You are always going to have to hit the database to determine whether there is any data to return. The trick is to make those calls small and to return data only when needed.
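As a sketch, a page method the client can call via jQuery or PageMethods, returning data only when there is something newer than the client's last-seen ID; the page class, the helper, and the schema are assumptions:

```csharp
using System.Collections.Generic;
using System.Web.Services;
using System.Web.UI;

public partial class ChatPage : Page
{
    // Callable from script as PageMethods.Poll(lastSeenId, callback) once a
    // ScriptManager with EnablePageMethods="true" is on the page.
    [WebMethod]
    public static List<string> Poll(long lastSeenId)
    {
        // Keep the round trip small: query only rows newer than lastSeenId and
        // return an empty list (a few bytes) when nothing has changed.
        return GetMessagesSince(lastSeenId);
    }

    private static List<string> GetMessagesSince(long lastSeenId)
    {
        /* SELECT Body FROM dbo.Messages WHERE MessageId > @lastSeenId */
        return new List<string>();
    }
}
```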

There are two related solutions built-in to SQL Server 2005 and still available in SQL Server 2008:
1) Service Broker, which allows subscribers to post blocking reads on queues (the RECEIVE command with WAITFOR). In your case you would send your messages through the database using Service Broker services fronting these queues, where they can be picked up by the waiting clients. There's no polling; the waiting clients just get activated when a message is received.
2) Query Notifications, which allow a subscriber to define a query and then receive notifications when the result set of that query would change. Built on Service Broker, Query Notifications are somewhat easier to use, but may also be somewhat less efficient. (Note that Query Notifications and their sibling, Event Notifications, are frequently mistaken for Notification Services (NS), which causes concern because NS was discontinued in SQL Server 2008; however, Query and Event Notifications are still fully available and even enhanced in SQL Server 2008.)
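Query Notifications are exposed to .NET through the SqlDependency class; here is a minimal subscription sketch (the table and column names are illustrative):

```csharp
using System.Data.SqlClient;

public static class MessageWatcher
{
    public static void Watch(string connectionString)
    {
        SqlDependency.Start(connectionString); // once per app domain

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            // Notification queries need two-part table names and explicit columns.
            "SELECT MessageId, Body FROM dbo.Messages", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once per subscription: re-run the query and re-subscribe here.
            };

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // Executing the command is what registers the subscription.
                while (reader.Read()) { /* consume the current result set */ }
            }
        }
    }
}
```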

Related

How to handle client view synchronization with SignalR when a client goes offline for a short period of time and some messages are lost?

I am using SignalR in my web api to provide real-time functionality to my client apps (mobile and web). Everything works ok but there is something that worries me a bit:
The clients get updated when different things happen in the backend. For example, when one of the clients does a CRUD operation on a resource, the others are notified via SignalR. But what happens when something happens on the client side, let's say the mobile app, and the device's data connection drops?
It could happen that another client performs an action on a resource, and when SignalR broadcasts the message it doesn't arrive at the disconnected client. So that client will have a stale view state.
From what I have read, it seems there's no way to know whether a message has been sent and received OK by all the clients. So, besides checking the network state and doing a full reload of the resource list when this happens, is there any way to be sure message synchronization has been accomplished correctly on all the clients?
As you've suggested, ASP.NET Core SignalR places the responsibility on the application for managing message buffering, if that's required.
If an eventually consistent view is an issue (because order of operations is important, for example) and the full reload proves to be an expensive operation, you could manage some persistent queue of message events as far back as it makes sense to do so (until a full reload would be preferable) and take a page from message buses and event sourcing, with an onus on the client in a "dumb broker/smart consumer"-style approach.
It's not an exact match for your case, but credit where credit is due: there's a well-thought-out example of queuing up SignalR events here: https://stackoverflow.com/a/56984518/13374279 You'd have to adapt it somewhat and give a numerical order to the queued events.
The initial state load and any subsequent events could have an aggregate version attached to them. Any time the client receives an event from SignalR, it can compare its currently known state against what was received and determine whether it has missed events, whether from a disconnection or from a delay in the hub connection starting up after the initial fetch. If the client's version is out of date and within the depth of your queue, you can issue a request to the server to replay the events out to that connection to bring the client back up to sync.
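As a sketch of that versioning idea in ASP.NET Core SignalR terms; the IEventStore abstraction, the method names, and the gap-handling policy are assumptions, not SignalR features:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Assumed persistent, ordered event log; not part of SignalR.
public interface IEventStore
{
    long OldestBufferedVersion { get; }
    IEnumerable<VersionedEvent> Since(long version);
}

public class VersionedEvent
{
    public long Version { get; set; }
    public object Payload { get; set; }
}

public class ResourceHub : Hub
{
    private readonly IEventStore _events;
    public ResourceHub(IEventStore events) { _events = events; }

    // The client calls this when an incoming event's version skips ahead of the
    // version it last applied (i.e. it detects a gap).
    public async Task ResyncFrom(long clientVersion)
    {
        if (_events.OldestBufferedVersion > clientVersion + 1)
        {
            // The gap is deeper than the buffer: fall back to a full reload.
            await Clients.Caller.SendAsync("FullReloadRequired");
            return;
        }

        // Replay the missed events, in order, to this connection only.
        foreach (var evt in _events.Since(clientVersion).OrderBy(e => e.Version))
            await Clients.Caller.SendAsync("ResourceEvent", evt);
    }
}
```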
Some reading into immediate consistency vs eventual consistency may be helpful to come up with a plan. Hope this helps!

Asynchronous processing .NET SQL Server?

After many years of programming, I need to do something asynchronously for the very first time (because it takes several minutes and the web page times out -- don't want the user waiting that long anyway). This action is done by only a few people but could be done a few times per day (for each of them).
From a "Save" click on an ASP.NET web page using LINQ, I'm inserting a record into a SQL Server table. That then triggers an SSIS package to push that record out to several other databases around the country.
So..
How can I (hopefully simply) make this asynchronous so that the user can get on with other things?
Should this be set up on the .NET side or on the SQL side?
Is there a way (minutes later) for the user to know that the process completed successfully? Maybe an email? Not sure how else the user would know it finished fine.
I read some threads on this site about it but they were from 2009 so not sure if much different now with Visual Studio 2012/.NET Framework 4.5 (we're still using SQL Server 2008 R2).
It is generally a bad idea to perform long-running tasks in ASP.NET. For one thing, if the application pool is recycled before the task completes, the work would be lost.
I would suggest writing the request to a database table, and using a separate Windows Service to do the long-running work. It could update a status column in the database table that could be checked at a later time to see if the task completed or not, and if there was an error.
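A rough sketch of that hand-off, assuming a Requests table with a Status column (all names illustrative); the Windows Service would host something like this worker loop:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

public class RequestWorker
{
    private readonly string _connectionString;
    public RequestWorker(string connectionString) { _connectionString = connectionString; }

    // Called from the Windows Service's worker thread.
    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            using (var conn = new SqlConnection(_connectionString))
            {
                conn.Open();
                // Claim one pending request; the status change is visible to the UI.
                var claim = new SqlCommand(
                    @"UPDATE TOP (1) dbo.Requests SET Status = 'Processing'
                      OUTPUT inserted.RequestId
                      WHERE Status = 'Pending'", conn);
                object id = claim.ExecuteScalar();
                if (id != null)
                    Process(conn, (int)id);
            }
            Thread.Sleep(TimeSpan.FromSeconds(30)); // poll interval
        }
    }

    private void Process(SqlConnection conn, int requestId)
    {
        // Kick off the SSIS package, then set Status = 'Done' or 'Error'
        // (and send the notification email) on the same row.
    }
}
```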
You could use Service Broker on the SQL side; it's a SQL Server implementation of message queueing.
Good examples here and here
What you do is create a Service Broker service and define some scaffolding (queues, message types, etc).
Then you create a service "Activation" procedure, which is basically a stored procedure that consumes messages from the queue. This SP would receive, for example, a message with the ID of a record in a table, and would then go on and do whatever needs to be done to it, perhaps sending an email when it's done.
So from your code-behind, you'd call a simple stored procedure which would insert the user's data into a table, send a message to the queue with, e.g., the ID of the new record, and then immediately return. I suppose you should tell the user upfront that this could take a few minutes and that they'll receive an email, etc.
The great thing about Service Broker is message delivery is pretty much guaranteed - even if your SQL Server falls over right after the message is queued, when you bring it back up the activation SP will just kick off again, so it's very robust.
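For illustration, the code-behind side might look like the sketch below; usp_SaveAndQueueRequest is an assumed procedure that performs the INSERT and the Service Broker SEND in one transaction, and the page controls are placeholders:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class SavePage : Page
{
    protected TextBox PayloadTextBox; // normally declared in the designer file
    protected Label StatusLabel;

    protected void Save_Click(object sender, EventArgs e)
    {
        string connectionString = "..."; // from web.config in practice

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.usp_SaveAndQueueRequest", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Payload", PayloadTextBox.Text);
            conn.Open();
            cmd.ExecuteNonQuery(); // returns in milliseconds; the activation SP does the rest
        }

        StatusLabel.Text = "Saved. You'll receive an email when processing completes.";
    }
}
```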

Disconnected meteor application

I am interested in creating an application using the Meteor framework that will be disconnected from the network for long periods of time (multiple hours). I believe Meteor stores local data in RAM in a mini-MongoDB JS structure. If the user closes the browser, or refreshes the page, all local changes are lost. It would be nice if local changes were persisted to disk (localStorage? IndexedDB?). Any chance that's coming soon for Meteor?
Related question... how does Meteor deal with document conflicts? In other words, if 2 users edit the same MongoDB JSON doc, how is that conflict resolved? Optimistic locking?
Conflict resolution is "last writer wins".
More specifically, each MongoDB insert/update/remove operation on a client maps to an RPC. RPCs from a given client always play back in order. RPCs from different clients are interleaved on the server without any particular ordering guarantee.
If a client tries to issue RPCs while disconnected, those RPCs queue up until the client reconnects, and then play back to the server in order. When multiple clients are executing offline RPCs, the order they finally run on the server is highly dependent on exactly when each client reconnects.
For some offline mutations like MongoDB's $inc and $addToSet, this model works pretty well as is. But many common modifiers like $set won't behave very well across long disconnects, because the mutation will likely conflict with intervening changes from other clients.
So building "offline" apps is more than persisting the local database. You also need to define RPCs that implement some type of conflict resolution. Eventually we hope to have turnkey packages that implement various resolution schemes.

Pattern for long running tasks invoked through ASP.NET

I need to invoke a long running task from an ASP.NET page, and allow the user to view the task's progress as it executes.
In my current case I want to import data from a series of data files into a database, but this involves a fair amount of processing. I would like the user to see how far through the files the task is, and any problems encountered along the way.
Due to limited processing resources I would like to queue the requests for this service.
I have recently looked at Windows Workflow and wondered if it might offer a solution?
I am thinking of a solution that might look like:
ASP.NET AJAX page -> WCF Service -> MSMQ -> Workflow Service *or* Windows Service
Does anyone have any ideas, experience or have done this sort of thing before?
I've got a book that covers explicitly how to integrate WF (Workflow Foundation) and WCF. It's too much to post here, obviously; your question deserves a longer answer than can readily be given in full on this forum, but Microsoft offers some guidance.
And a Google search for "WCF and WF" turns up plenty of results.
I did have an app under development where we used a similar process using MSMQ. The idea was to deliver emergency messages to all of our stores in case of product recalls, or known issues that affect a large number of stores. It was developed and tested OK.
We ended up not using MSMQ because of a business requirement - we needed to know if a message was not received immediately so that we could call the store, rather than just letting the store get it when their PC was able to pick up the message from the queue. However, it did work very well.
The article I linked to above is a good place to start.
Our current design, the one we went live with, does exactly what you asked about, using a Windows service.
We have a web page to enter messages and pick distribution lists; these are saved in a database.
We have a separate Windows service (we call it the AlertSender) that polls the database and checks for new messages.
The store level PCs have a Windows service that hosts a WCF client that listens for messages (the AlertListener)
When the AlertSender finds messages that need to go out, it sends them to the AlertListener, which is responsible for displaying the message to the stores and playing an alert sound.
As the messages are sent, the AlertSender updates the status of the message in the database.
As stores receive the message, a co-worker enters their employee # and clicks a button to acknowledge that they've received the message. (Critical business requirement for us because if all stores don't get the message we may need to physically call them to have them remove tainted product from shelves, etc.)
Finally, our administrative piece has a report (ASP.NET) tied to an AlertId that shows all of the pending messages, and their status.
You could have the back-end import process write status records to the database as it completes sections of the task, and the web-app could simply poll the database at arbitrary intervals, and update a progress-bar or otherwise tick off tasks as they're completed, whatever is appropriate in the UI.
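A minimal sketch of that status-record idea, assuming an ImportJobs table (names are illustrative); the import process calls Report after each file, and the page polls the same row to drive its progress bar:

```csharp
using System.Data.SqlClient;

public static class ImportProgress
{
    // Called by the back-end import process as it completes each file.
    public static void Report(string cs, int jobId, int filesDone, int filesTotal, string note)
    {
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(
            @"UPDATE dbo.ImportJobs
              SET FilesDone = @Done, FilesTotal = @Total, LastMessage = @Note
              WHERE JobId = @JobId", conn))
        {
            cmd.Parameters.AddWithValue("@Done", filesDone);
            cmd.Parameters.AddWithValue("@Total", filesTotal);
            cmd.Parameters.AddWithValue("@Note", note ?? "");
            cmd.Parameters.AddWithValue("@JobId", jobId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```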

sending an email, but not now

I'm writing an application where the user will create an appointment, and instantly get an email confirming their appointment. I'd also like to send an email the day of their appointment, to remind them to actually show up.
I'm in ASP.NET (2.0) on MS SQL Server. The immediate email is no problem, but I'm not sure about the best way to address the reminder email. Basically, I can think of three approaches:
Set up a SQL job that runs every night, kicking off SQL emails to people that have appointments that day.
Somehow send the email with a "do not deliver before" flag, although this seems like something I might be inventing.
Write another application that runs at a certain time every night.
Am I missing something obvious? How can I accomplish this?
Choice #1 would be the best option: create a table of emails to send, and update the table as you send each email. It's also best not to delete each entry but to mark it as sent; you never know when you'll have a problem one day and want to resend emails. I've seen this happen many times in similar setups.
One caution - tightly coupling the transmission of the initial email in the web application can result in a brittle architecture (e.g. SMTP server not available) - and lost messages.
You can introduce an abstraction layer via MSMQ for both the initial and the reminder email, and have a service sweeping the queue on a scheduled basis. The initial message can be flagged with an attribute that means "SEND NOW"; the reminder message can be flagged as "SCHEDULED"; and the sweeper simply needs to send any messages it finds that are "SEND NOW", or that are "SCHEDULED" and have a toBeSentDate <= the current date. Once the message is successfully sent, the unit of work can be concluded by deleting the message from the queue.
This approach ensures messages are not lost - and enables the distribution of load to off-peak hours by adjusting the service polling interval.
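For illustration, the enqueue side of that design using the System.Messaging API; the EmailTask type, the queue path, and the labels are assumptions:

```csharp
using System;
using System.Messaging;

public class EmailTask
{
    public string To;
    public string Subject;
    public string Body;
    public DateTime ToBeSentUtc; // now for "SEND NOW", appointment day for "SCHEDULED"
}

public static class EmailQueue
{
    private const string Path = @".\private$\emailTasks";

    public static void Enqueue(EmailTask task, bool immediate)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);

        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(EmailTask) });
            // The sweeper service sends "SEND NOW" messages right away and holds
            // "SCHEDULED" ones until ToBeSentUtc has passed.
            queue.Send(task, immediate ? "SEND NOW" : "SCHEDULED");
        }
    }
}
```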
As Rob Williams points out, my suggestion of MSMQ is a bit of overkill for this specific question... but it is a viable approach to keep in mind when you start looking at problems of scale and you want (or need) to minimize database read/write activity (especially during peak processing periods).
Hat tip to Rob.
For every larger project I usually also create a service which performs regular or periodical tasks.
The service updates its status and time of last execution somewhere in the database, so that the information is available for applications.
For example, the application posts commands to a command queue, and the service processes them at the scheduled time.
I find this solution easier to handle than SQL Server Tasks or Jobs, since it's only a single service that you need to install, rather than ensuring all required Jobs are set up correctly.
Also, as the service is written in C#, I have a more powerful programming language (plus libraries) at hand than T-SQL.
If it's really pure T-SQL stuff that needs to be handled, there will be an Execute_Daily stored procedure that the service calls on date change.
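As a sketch, the date-change check inside such a service might look like this; the timer plumbing and the persistence of the last-run date are assumed:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class DailyTaskRunner
{
    private DateTime _lastRunDate = DateTime.MinValue;

    // Called periodically from the service's timer.
    public void Tick(string connectionString)
    {
        if (DateTime.Today == _lastRunDate) return; // already ran today

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.Execute_Daily", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            cmd.ExecuteNonQuery();
        }

        _lastRunDate = DateTime.Today; // also record in the DB, per the status idea above
    }
}
```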
Create a separate batch service, as others have suggested, but use it to send ALL of the emails.
The web app should record the need to send notifications in a database table, both for the immediate notice and for the reminder notice, with both records annotated with the desired send date/time.
Using MSMQ is overkill--you already have a database and a simple application. As the complexity grows, MSMQ or something similar might help with that complexity and scalability.
The service should periodically (every few minutes to a few hours) scan the database table for notifications (emails) to send in the near future, send them, and mark them as sent if successful. You could eventually leverage this to also send text messages (SMS) or instant messages (IMs), etc.
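A rough sketch of that sweep, assuming a Notifications table with SendAfter and SentAt columns (names and the sender address are illustrative) and the standard System.Net.Mail client:

```csharp
using System;
using System.Data.SqlClient;
using System.Net.Mail;

public static class NotificationSweeper
{
    public static void Sweep(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var due = new SqlCommand(
                @"SELECT NotificationId, Recipient, Subject, Body
                  FROM dbo.Notifications
                  WHERE SentAt IS NULL AND SendAfter <= GETUTCDATE()", conn);

            using (var reader = due.ExecuteReader())
            using (var smtp = new SmtpClient()) // host/port come from config
            {
                while (reader.Read())
                {
                    smtp.Send("noreply@example.com", reader.GetString(1),
                              reader.GetString(2), reader.GetString(3));
                    // Mark as sent rather than delete, so sends can be audited or replayed.
                    MarkSent(connectionString, reader.GetInt32(0));
                }
            }
        }
    }

    private static void MarkSent(string cs, int id)
    {
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(
            "UPDATE dbo.Notifications SET SentAt = GETUTCDATE() WHERE NotificationId = @Id", conn))
        {
            cmd.Parameters.AddWithValue("@Id", id);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```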
While you are at it, you should consider using the Command design pattern, and implement this service as a reusable Command executor. I have done this recently with a web application that needs to keep real estate listing (MLS) data synchronized with a third-party provider.
Your option 2 certainly seems like something you are inventing. I know that my mail system won't hold messages for future delivery if you were to send me something like that.
I don't think you're missing anything obvious. You will need something that runs the day of the appointment to send emails. Whether that might be better as a SQL job or as a separate application would be up to your application architecture.
I would recommend the first option: use a SQL job or another application that runs automatically every day to send the e-mails. It's simple, and it works.
Microsoft Office has a delivery delay feature, but I think that is an Outlook thing rather than an Exchange/Mail Server thing, so you're going to have to go with option 1 or 3. Or option 4 would be to write a service. That way you won't have to worry about scheduled tasks to get the option 3 application to run.
If you are planning on having this app hosted at a cheap hosting service (like GoDaddy), then what I'd recommend is to spin off a worker thread in Global.asax at Application_Start and having it sleep, wake-up, send emails, sleep...
Because you won't be able to run something on the SQL Server machine, and you won't be able to install your own service.
I do this, and it works fine.
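For illustration, a minimal sketch of that Global.asax worker thread; SendDueReminderEmails is an assumed helper that scans the table, sends due emails, and marks them sent. Note the caveat from the answers above still applies: an app-pool recycle kills the thread until the next request restarts the application:

```csharp
using System;
using System.Threading;

public class Global : System.Web.HttpApplication
{
    private static Thread _emailWorker;

    protected void Application_Start(object sender, EventArgs e)
    {
        _emailWorker = new Thread(() =>
        {
            while (true)
            {
                try
                {
                    SendDueReminderEmails(); // scan table, send, mark as sent
                }
                catch (Exception) { /* log and keep the loop alive */ }
                Thread.Sleep(TimeSpan.FromMinutes(15)); // sleep, wake up, repeat
            }
        });
        _emailWorker.IsBackground = true; // don't block app shutdown
        _emailWorker.Start();
    }

    private static void SendDueReminderEmails() { /* see the sweeper sketch above */ }
}
```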
