Choosing between a Windows service and a web app - ASP.NET

We have an ASP.NET website where users add items to a database.
There are several sites on the same server, each with its own database.
I need to implement a mechanism that checks the database for the state of each item.
If an item is unprocessed, it should be submitted to a third-party web service.
I see two options:
put the code in a web app
put the code in a Windows service
The first option has the advantage that the code knows which database to connect to.
With a Windows service, the code has to be aware of all the databases, so it's harder to maintain. Also, if I have only one Windows service, it will have to use threads to process items in each database in parallel.
Maybe there's another way besides these two?
What are the other issues, and what would you recommend?
Please explain your choice.

This sounds like a good place for a message queue to be involved. Each item would be wrapped in a message and placed on the queue. The "item processor" (a service?) would subscribe to the queue and perform some work on each item as it arrives. How the messages get placed on the queue is up to you, but as an example, you could have each site publish a "new item" message to the queue.
Queues can be a bit of an intimidating concept at first, but frameworks such as MassTransit can help. Well worth learning.
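To make that a bit more concrete, here is a minimal sketch of what the consumer side could look like with a recent version of MassTransit (the NewItemAdded contract, the consumer, and the SubmitToThirdPartyAsync helper are all invented for the example):

    using System;
    using System.Threading.Tasks;
    using MassTransit;

    // Hypothetical message contract shared by the web sites and the processor.
    public class NewItemAdded
    {
        public Guid ItemId { get; set; }
        public string SiteDatabase { get; set; }   // tells the processor which site's database owns the item
    }

    // The "item processor": hosted in a Windows service or console app, not in the web sites.
    public class NewItemConsumer : IConsumer<NewItemAdded>
    {
        public async Task Consume(ConsumeContext<NewItemAdded> context)
        {
            // Load the item from the indicated database and submit it to the third-party web service.
            await SubmitToThirdPartyAsync(context.Message.SiteDatabase, context.Message.ItemId);
        }

        private Task SubmitToThirdPartyAsync(string database, Guid itemId)
        {
            // ... call the third party, then mark the item as processed in that database ...
            return Task.CompletedTask;
        }
    }

Each site would publish a NewItemAdded message (e.g. bus.Publish(...)) right after inserting the row, so the processor never has to poll any of the databases.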

I believe a Windows service is the better option of the two, mainly because a web app would have to be triggered manually by someone, while a Windows service can be running at all times, checking for updates.
There's another option if you have access to each of the existing sites' code: why not write a web service that submits data to your third-party web service? Then, in each of the existing web sites, modify the logic that stores changes in the database so it also posts the changes to your custom web service (or even skip the custom web service and call the third party directly).
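A rough sketch of that second idea, assuming the sites post over plain HTTP (the URL, payload format, and the ItemSubmitter name are placeholders):

    using System.Net;

    public static class ItemSubmitter
    {
        // Called from each site's save logic right after the item is written to its database.
        public static void NotifySubmissionService(int itemId)
        {
            using (var client = new WebClient())
            {
                client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
                client.UploadString("https://example.com/submission/SubmitItem", "itemId=" + itemId);
            }
        }
    }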

Related

How to Design a Database Monitoring Application

I'm designing a database monitoring application. Basically, the database will be hosted in the cloud, and record-level access to it will be provided via custom-written clients for Windows, iOS, Android, etc. The basic scenario can be implemented via web services (ASP.NET Web API). For example, the client will make a GET request to the web service to fetch an entry. However, one of the requirements is that the client should automatically refresh its UI if another user (using a different instance of the client) updates the same record, AND the auto-refresh needs to happen within a second of the record being updated - so that the info is always up to date.
Polling could be an option, but the active clients could number in the hundreds of thousands, so I'm looking for a more robust solution that is lightweight on the server. I'm versed in .NET and C++/Windows, and I could roll out a complete solution in C++/Windows using IO Completion Ports, but that feels like overkill and would require too much development time. I looked into ASP.NET Web API, but its limitation is that it cannot push notifications out to clients. Are there any frameworks/technologies in the Windows ecosystem that can address this scenario and scale easily as well? Any good options outside the Windows ecosystem, e.g. node.js?
You did not specify which database can be used, so if you are able to use MS SQL Server, you may want to look up the SqlDependency feature. If configured and used correctly, you will be notified of any changes in the database.
Pair this with SignalR or any real-time front-end framework of your choice and you'll have real-time updates as you described.
One catch, though, is that SqlDependency only tells you that something changed; you are responsible for tracking down which record it was. That adds an extra layer of difficulty, but it is still much better than polling.
You may want to search through the sqldependency tag here at SO to go from here to where you want your app to be.
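For reference, a bare-bones SqlDependency subscription looks roughly like this (the table and column names are made up; the query must follow the query-notification rules - an explicit column list, two-part table names, no SELECT * - and Service Broker must be enabled on the database):

    using System.Data.SqlClient;

    public class RecordChangeListener
    {
        private readonly string _connectionString;

        public RecordChangeListener(string connectionString)
        {
            _connectionString = connectionString;
            SqlDependency.Start(_connectionString);   // once per application
            Subscribe();
        }

        private void Subscribe()
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand("SELECT Id, Status FROM dbo.Records", connection))
            {
                var dependency = new SqlDependency(command);
                dependency.OnChange += OnChange;

                connection.Open();
                using (command.ExecuteReader()) { }   // executing the command registers the notification
            }
        }

        private void OnChange(object sender, SqlNotificationEventArgs e)
        {
            // The notification fires once and only says that *something* changed, so re-query
            // to find out what, push the result to clients (e.g. through a SignalR hub),
            // and re-subscribe for the next change.
            Subscribe();
        }
    }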
My first thought was to have a web service call that "stays alive", or to use the HTML5 protocol called WebSockets. You can maintain lots of connections, but hundreds of thousands seems too large. Therefore the server needs a way to contact the clients over stateless connections, for example by building a web service into the client that the server can call back - though that may be a problem because of firewalls.
If firewalls are not an issue, then you may not need a web service in the client; you can instead implement a server socket on the client.
For mobile clients, if implementing a server socket is not a possibility then use push notifications. Perhaps look at https://stackoverflow.com/a/6676586/4350148 for a similar issue.
Finally you may want to consider a content delivery network.
One last point is that hopefully you don't need to contact all 100,000 users within one second. I am assuming that with so many users you have quite a few servers.
Take a look at Maximum concurrent Socket.IO connections regarding the maximum number of open WebSocket connections.
Also consider whether your estimate of on the order of 100,000 simultaneous users is accurate.

ASP.NET sync long process w/ Requirements

I am working with an e-commerce platform, and I have a task to synchronize with some remote accounting software. The task requires syncing orders, products, inventory, etc. With large amounts of data being synced, the process can take a while, so I don't think an ASP.NET application would be the best place to handle this. The requirements are:
To be able to schedule this process to run overnight
To be able to manually fire off this process and pass into it some variables like order numbers to export.
Possibly get back status info when fired off manually.
Has to work on .NET 3.5
Issues: I can't use a Windows service because the site is hosted remotely on a shared hosting service, and the host won't allow a service.
Ideas: I'm having a really hard time finding a way to handle this outside ASP.NET that fits all the requirements, but I do have access to their FTP, and I thought that a console app hosting a web service might work; I could put the Quartz scheduler in the global (Global.asax) file to fire off the service from the site.
Anyway, please offer some thoughts and experiences if you have them on which methods have worked for you.
I can't use a Windows service because the site is hosted remotely on a shared hosting service, and the host won't allow a service.
That might be a problem. Does this hosting service provide any other kind of scheduling functionality? If not, then you may need to consider changing hosting services.
You're correct in that ASP.NET is not the tool you'd use for scheduling tasks. A web application is a request/response system (and is very much at the mercy of the hosting process, IIS usually for ASP.NET). So you need some way to schedule the task to execute at regular intervals. Windows Services, Windows Task Scheduler, or some other task scheduling tool.
As for the requirement to be able to invoke the process manually, that's a simple matter of separating the invocation of the logic from the logic itself. Picture the following components:
A module which performs the logic, not bound to any UI or any way of invoking it. Basically a Class Library project (or part of one).
A Windows Service or Console Application which references the Class Library and invokes the logic.
A Web Application which references the Class Library and invokes the logic.
Once you've sorted out how to schedule the Console Application, just schedule it and it's all set. If the process returns some information then the Console Application can also perform any notifications necessary to inform people of that information.
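As a rough illustration of that separation (the AccountingSync name and its Run signature are invented for the example; everything here works on .NET 3.5):

    using System;

    // Class Library: holds the sync logic itself, with no UI or scheduling concerns.
    public class AccountingSync
    {
        // Pass specific order numbers for a manual run, or an empty array for the full overnight sync.
        public string Run(string[] orderNumbers)
        {
            // ... sync orders, products, inventory with the accounting software ...
            return "Sync finished";
        }
    }

    // Console Application: a thin wrapper that the task scheduler (or Quartz) invokes,
    // with order numbers passed on the command line for manual runs.
    public static class Program
    {
        public static void Main(string[] args)
        {
            string status = new AccountingSync().Run(args);
            Console.WriteLine(status);   // or e-mail/log the status to satisfy the reporting requirement
        }
    }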
The Web Application can then also have an interface somewhere to invoke the process manually. Since the process "can take a while" then of course you won't want the interface to wait for it to complete. This can result in timeouts and leave the system in an unknown state. Instead you'd want to return the UI to the user indicating that the process has started (or been queued) and that they will be notified with the results when it completes. There are a couple of options for this...
You can use a BackgroundWorker to actually invoke the process. When the process completes, send a notification to the user who invoked it.
You can write a record to a database table to "queue" the process and have something like a Windows Service or scheduled Console Application (same scenario as above) which regularly polls that table for queued tasks, performs the task, and sends the notification. (Of course updating the status in the table along the way so it doesn't perform it twice.)
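A rough sketch of that second, queue-table option (the QueuedSyncTasks table and its columns are assumptions made for the example):

    using System.Data.SqlClient;

    public static class SyncQueueWorker
    {
        // Called by the scheduled console app (or Windows service) on each run.
        public static void ProcessQueuedTasks(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Claim one queued task and flip its status so a second run can't pick it up too.
                var claim = new SqlCommand(
                    @"UPDATE TOP (1) dbo.QueuedSyncTasks
                      SET Status = 'Processing'
                      OUTPUT inserted.Id, inserted.OrderNumbers
                      WHERE Status = 'Queued'", connection);

                int taskId;
                string orderNumbers;
                using (var reader = claim.ExecuteReader())
                {
                    if (!reader.Read()) return;           // nothing queued right now
                    taskId = reader.GetInt32(0);
                    orderNumbers = reader.GetString(1);
                }

                // ... invoke the sync logic from the class library, passing orderNumbers ...

                var finish = new SqlCommand(
                    "UPDATE dbo.QueuedSyncTasks SET Status = 'Done' WHERE Id = @id", connection);
                finish.Parameters.AddWithValue("@id", taskId);
                finish.ExecuteNonQuery();

                // ... notify the user who queued the task of the outcome ...
            }
        }
    }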
There are pros and cons either way; it's really up to you how you'd like to proceed. Ultimately you're looking at two main things here:
Separate the logic itself from the scheduling/invocation of the logic.
Utilize a scheduling system to schedule tasks. (If your hosting provider doesn't have one, find one that does.)

Can an ASP.NET application handle NServiceBus events?

Most if not all of the NSB examples for ASP.NET (or MVC) have the web application sending a message using Bus.Send and possibly registering for a simple callback, which is essentially how I'm using it in my application.
What I'm wondering is if it's possible and/or makes any sense to handle messages in the same ASP.NET application.
The main reason I'm asking is caching. The process might go something like this:
User initiates a request from the web app.
Web app sends a message to a standalone app server, and logs the change in a local database.
On future page requests from the same user, the web app is aware of the change and lists it in a "pending" status.
A bunch of stuff happens on the back-end, and eventually the request gets approved or rejected. An event is published referencing the original request.
At this point, the web app should start displaying the most recent information.
Now, in a real web app, it's almost a sure thing that this pending request is going to be cached, quite possibly for a long period of time, because otherwise the app has to query the database for pending changes every time the user asks for the current info.
So when the request finally completes on the back-end - which might take a minute or a day - the web app needs, at a minimum, to invalidate this cache entry and do another DB lookup.
Now I realize that this can be managed with SqlDependency objects and so on, but let's assume that they aren't available - perhaps it's not a SQL Server back-end or perhaps the current-info query goes to a web service, whatever. The question is, how does the web app become aware of the change in status?
If it is possible to handle NServiceBus messages in an ASP.NET application, what is the context of the handler? In other words, the IoC container is going to have to inject a bunch of dependencies, but what is their scope? Does this all execute in the context of an HTTP request? Or does everything need to be static/singleton for the message handler?
Is there a better/recommended approach to this type of problem?
I've wondered the same thing myself - what's an appropriate level of coupling for a web app with the NServiceBus infrastructure? In my domain, I have a similar problem to solve involving the use of SignalR in place of a cache. Like you, I've not found a lot of documentation about this particular pattern. However, I think it's possible to reason through some of the implications of following it, then decide if it makes sense in your environment.
In short, I would say that I believe it is entirely possible to have a web application subscribe to NServiceBus events. I don't think there would be any technical roadblocks, though I have to confess I have not actually tried it - if you have the time, by all means give it a shot. I just get the strong feeling that if one starts needing to do this, then there is probably a better overall design waiting to be discovered. Here's why I think this is so:
A relevant question to ask relates to your cache implementation. If it's a distributed or centralized model (think SQL, MongoDB, Memcached, etc.), then the approach that @Adam Fyles suggests sounds like a good idea. You wouldn't need to notify every web application - updating your cache can be done by a single NServiceBus endpoint that's not part of your web application. In other words, every instance of your web application and the "cache-update" endpoint would access the same shared cache. If your cache is in-process, however, like Microsoft's Web Cache, then of course you are left with a much trickier problem to solve unless you can lean on Eventual Consistency as was suggested.
If your web app subscribes to a particular NServiceBus event, then it becomes necessary for you to have a unique input queue for each instance of your web app. Since it's best practice to consider scale-out of your web app using a load balancer, that means that you could end up with N queues and at least N subscriptions, which is more to worry about than a constant number of subscriptions. Again, not a technical roadblock, just something that would make me raise an eyebrow.
The David Boike article that was linked raises an interesting point about app pools and how their lifetimes might be uncertain. Also, if you have multiple app pools running simultaneously for the same application on a server (a common scenario), they will all be trying to read from the same message queue, and there's no good way to determine which one will actually handle the message. More often than not, that will matter. Sending commands, in contrast, does not require an input queue according to this post by Udi Dahan. This is why I think one-way commands sent by web apps are much more commonly seen in practice.
There's a lot to be said for the Single Responsibility Principle here. In general, I would say that if you can delegate the "expertise" of sending and receiving messages to an NServiceBus Host as much as possible, your overall architecture will be cleaner and more manageable. Through experience, I've found that if I treat my web farm as a single entity, i.e. strip away all acknowledgement of individual web server identity, I tend to have less to worry about. Having each web server be an endpoint on the bus kind of breaks that notion, because now "which server" comes up again in the form of message queues.
Does this help clarify things?
An endpoint(NSB) can be created to subscribe to the published event and update the cache. The event shouldn't be published until the actual update is made so you don't get out of sync. The web app would continue to pull data from the cache on the next request, or you can build in some kind of delay.
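For illustration, such a cache-updating endpoint might look roughly like this, using the classic IHandleMessages<T> style from the same NServiceBus generation as Bus.Send (the event, the handler, and the SharedCache helper are all invented names):

    using System;
    using NServiceBus;

    // Published by the back-end app server once the request is actually approved or rejected.
    public class RequestStatusChanged : IEvent
    {
        public Guid RequestId { get; set; }
        public string NewStatus { get; set; }
    }

    // Hosted in a separate NServiceBus endpoint rather than in the web app, so the web farm
    // keeps a single subscription and a single input queue.
    public class RequestStatusChangedHandler : IHandleMessages<RequestStatusChanged>
    {
        public void Handle(RequestStatusChanged message)
        {
            // Invalidate (or update) the entry in the shared, out-of-process cache; the next
            // page request falls through to the database and sees the new status.
            SharedCache.Remove("request:" + message.RequestId);
        }
    }

    // Stand-in for whatever distributed cache the web farm shares (invented for the sketch).
    public static class SharedCache
    {
        public static void Remove(string key) { /* e.g. delegate to a Memcached/Redis client */ }
    }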

Client queue filled by server queue over the internet with ASP.Net and WPF client

I am trying to find the right approach for an application, that I am trying to develop.
Situation:
ASP.NET website. A user can make a request on a page. The request must result in an item in a queue on the server. The targeted queue is specific to each customer.
WPF client at the customer site. The WPF client has a local queue, which gets filled either by polling the queue on the web server or by receiving a message from the web server. The WPF client uses the queue to display items as specified in the queue.
Each WPF client user has their own account and can only access the queue that is meant for them.
I don't have any constraints yet as to which solution to use, as long as it is .NET technology and the customer only needs my deployment package and the .NET Framework. I can't hassle customers into installing something like MSMQ.
I think a database on the web server containing all the requests could do the trick, but I am wondering if there are any other slick methods that could be better.
Cheers, Momoski
You are going to want to have your clients pull from the web server/service rather than try to push updates out to your clients. There is way too much complexity in a push solution unless you have complete control over all the systems involved (network, firewalls, etc.).
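A minimal sketch of that pull model from the WPF side (the URL, the credentials, and the 30-second interval are placeholders; the server would expose a per-customer endpoint backed by the request table):

    using System.Net;
    using System.Timers;

    // Runs inside the WPF client and fills the local queue by polling the web server.
    public class QueuePoller
    {
        private readonly Timer _timer = new Timer(30000);   // poll every 30 seconds

        public QueuePoller()
        {
            _timer.Elapsed += delegate { Poll(); };
        }

        public void Start()
        {
            _timer.Start();
        }

        private void Poll()
        {
            using (var client = new WebClient())
            {
                // The account credentials ensure the server returns only this customer's queue.
                client.Credentials = new NetworkCredential("customerUser", "secret");
                string response = client.DownloadString("https://example.com/queue/pending");

                // ... deserialize the response and append new items to the local queue that
                //     drives the WPF display (marshal back to the UI thread before updating) ...
            }
        }
    }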

Best architecture for an emergency alert system

I'm developing a software system that receives information (which is saved to a database); when any information is received (a new insert in a specific table), an alert should be shown on the screen in the information center so proper action can be taken.
I'm writing the application with ASP.NET MVC, SQL Server 2008 Express, a free SQL Agent alternative for that version of SQL Server, Entity Framework 4, Visual Studio 2010, etc.
Right now, I've set up the database and a SQL job that monitors the table every minute; if there are new records, an email is sent to some addresses. My problem is... what then? Which would be the best architecture to follow?
A couple of options I thought about are:
1) In the job, connect to a web service, and that web service can open a popup
2) The web page could be polling the database table to know if there are new records
Is there any way to push to the web page instead of having the page poll the database server?
I know a Windows application might fit better here, but right now I must stick with ASP.NET MVC, as I've already started and don't want to create another application.
Thanks! Daniel
Is there any way to push to the web page instead of having the page poll the database server?
HTML5 WebSockets. It's a draft, pretty new, the specification is still subject to change, and not all browsers implement it. You will need a WebSocket server. If you go that route, make sure you read this guy's blog. He is behind Laharsub, which is a must-try server.
I'm pretty sure you are able to use Silverlight to push data down to the client. Here is a pretty good overview that I read a while back. HTML5 might be a better way to go, but with such limited support it's almost not worth it at this point. Granted, a Silverlight application might be out of reach too, but it's still a possibility.
I would suggest that you look into (complex) event processing, or stream processing -- at least to get a feeling for the architecture of these systems.
The idea is to capture a stream of events before they reach the database, route (process) them within the event processor, and put them in the DB from there -- treating the DB as only one of the event destinations (subscribers).
Take a look at Streambase, ruleCore, and many others.
These were all developed for the type of scenario you described.
Try to see the problem from the other angle: develop a web client that reads the database every minute and compares the results to the last pull ...
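A small sketch of that polling approach on the ASP.NET MVC side (plain ADO.NET is used here for brevity even though the question mentions EF 4; the table, column, and connection-string names are assumptions): the page remembers the timestamp of its last pull, calls this action on a timer, and only renders an alert when something comes back.

    using System;
    using System.Collections.Generic;
    using System.Configuration;
    using System.Data.SqlClient;
    using System.Web.Mvc;

    public class AlertsController : Controller
    {
        // Polled from the page every minute (e.g. by a JavaScript timer),
        // passing the timestamp of the previous pull.
        public JsonResult NewSince(DateTime lastPull)
        {
            var alerts = new List<object>();
            string connectionString =
                ConfigurationManager.ConnectionStrings["Monitoring"].ConnectionString;   // assumed name

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Id, Message, ReceivedOn FROM dbo.IncomingRecords " +
                "WHERE ReceivedOn > @lastPull ORDER BY ReceivedOn", connection))
            {
                command.Parameters.AddWithValue("@lastPull", lastPull);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        alerts.Add(new
                        {
                            Id = reader.GetInt32(0),
                            Message = reader.GetString(1),
                            ReceivedOn = reader.GetDateTime(2)
                        });
                    }
                }
            }

            return Json(alerts, JsonRequestBehavior.AllowGet);
        }
    }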
