Performing bulk processing in an ASP.NET page

We need the ability to send out automatic emails when certain dates occur or when certain business conditions are met. We are setting up this system to work with an existing ASP.NET website. I've had a chat with one of the other devs here and discussed some of the issues.
Things to note:
All the information we need is already modelled in the ASP.NET website
There is some business-logic that is required for the email generation which is also in the website already
We decided that the ideal solution was to have a separate executable that is scheduled to run overnight and do the processing and emailing. This solution has 2 main problems:
If the website was updated (business logic or model) but the executable was accidentally missed then the executable could stop sending emails, or worse, be sending them based on outdated logic.
We are hoping to use something like this to template the emails with UserControls, which I don't believe is possible outside of an ASP.NET website.
The first problem could have been avoided with build and deployment scripts (which we're looking into at the moment anyway), but I don't think we can get around the second problem.
So the solution we decided on is to have an ASP.NET page that is called regularly by SSIS and to have that do a set amount of processing (say 30 seconds) and then return. I know an ASP.NET page is not the ideal place to be doing this kind of processing but this seems to best meet our requirements. We considered spawning a new thread (not from the worker pool) to do the processing but decided that if we did that we couldn't use the page returned to signify a success or failure. By processing within the page's life-cycle we can use the page content to give an indication of how the processing went.
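For illustration, here is a minimal sketch of what such a time-boxed processing page could look like. The queue and send helpers below are hypothetical stand-ins for the site's existing model and business logic, not anything from the actual application; the page body doubles as the status report for the SSIS caller.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Web.UI;

    public partial class ProcessEmails : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            TimeSpan budget = TimeSpan.FromSeconds(30);   // the agreed processing window
            Stopwatch stopwatch = Stopwatch.StartNew();
            int sent = 0, failed = 0;

            foreach (PendingEmail item in GetPendingEmails())
            {
                if (stopwatch.Elapsed > budget) break;    // stop once the budget is used up
                try { Send(item); sent++; }
                catch (Exception) { failed++; }
            }

            // The response body doubles as the success/failure report for the caller.
            Response.ContentType = "text/plain";
            Response.Write(string.Format("OK sent={0} failed={1}", sent, failed));
        }

        // Hypothetical placeholders for the site's existing model and business logic.
        private IEnumerable<PendingEmail> GetPendingEmails() { yield break; }
        private void Send(PendingEmail item) { }
    }

    public class PendingEmail { public string To; public string Body; }

The SSIS package could then treat anything other than an "OK ..." body (or a non-200 response) as a failure.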
So the question is:
Are there any technical problems we might have with this set-up?
Obviously if you have tried something like this any reports of success/failure will be appreciated. As will suggestions of alternative set-ups.
Cheers,

Don't use the asp.net thread to do this. If the site is generating some information that you need in order to create or trigger the email-send then have the site write some information to a file or database.
Create a Windows service or scheduled process that collects the information it needs from that file or database and runs the email-sending process on a completely separate process/thread.
What you want to avoid is crashing your site or crashing your emailer due to limitations within the process handler. Based on your use of the word "bulk" in the question title, the two need to be independent of each other.

I think you should be fine. We have used a similar approach in our company for several years and haven't had many problems. Sometimes it takes over an hour to finish the process. Recently we moved the second thread (as you said) to a separate server.

Having the emailer and the website coupled together can work, but it isn't really a good design and will be more maintenance for you in the long run. You can get around the problems you state by doing a few things.
Move the common business logic to a web service or common library. Both your website and your executable/WCF service can consume it, and it centralizes the logic. If you're copying and pasting code, you know there's something wrong ;)
If you need a templated mailer, it is possible to invoke ASP.NET classes to create pages for you dynamically (see the BuildManager class, and blog posts like this one). If the mailer doesn't rely on Page events (which it doesn't seem to), there shouldn't be any problem for your executable to load a Page class from your website assembly, build it dynamically, and fill in the content.
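As a rough illustration of that idea, here is a minimal sketch of rendering a UserControl to a string from inside the web application; the ITemplateData interface and the way the model is handed over are assumptions for illustration. Hosting this outside IIS, as suggested above, would additionally need the BuildManager/ApplicationHost plumbing described in the linked posts, which is not shown here.

    using System.IO;
    using System.Web;
    using System.Web.UI;

    public static class EmailTemplateRenderer
    {
        public static string Render(string controlVirtualPath, object data)
        {
            var host = new Page();
            var control = (UserControl)host.LoadControl(controlVirtualPath);

            // Hand the model to the control however your templates expect it,
            // e.g. via a shared interface or a public property (assumption).
            var templated = control as ITemplateData;
            if (templated != null) templated.Data = data;

            host.Controls.Add(control);

            using (var writer = new StringWriter())
            {
                // Runs the host page through the pipeline and captures the rendered HTML.
                HttpContext.Current.Server.Execute(host, writer, false);
                return writer.ToString();
            }
        }
    }

    public interface ITemplateData { object Data { set; } }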
This obviously represents a significant amount of work, but would lead to a more scalable solution for you.

Sounds like you should be creating a worker thread to do that job.

Maybe you should look at something like https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
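For reference, the trick in that post boils down to something like the following sketch in Global.asax (the names, interval and work method are illustrative):

    using System;
    using System.Web;
    using System.Web.Caching;

    public class Global : HttpApplication
    {
        private const string TaskKey = "BackgroundTask";
        private const int IntervalSeconds = 60;

        protected void Application_Start(object sender, EventArgs e)
        {
            RegisterTask();
        }

        private void RegisterTask()
        {
            // A dummy cache item whose expiration callback acts as a crude timer.
            HttpRuntime.Cache.Insert(
                TaskKey,
                DateTime.Now,
                null,
                DateTime.Now.AddSeconds(IntervalSeconds),
                Cache.NoSlidingExpiration,
                CacheItemPriority.NotRemovable,
                OnCacheRemove);
        }

        private void OnCacheRemove(string key, object value, CacheItemRemovedReason reason)
        {
            DoBackgroundWork();   // hypothetical: the actual processing/emailing
            RegisterTask();       // re-register so the callback keeps firing
        }

        private void DoBackgroundWork()
        {
            // Placeholder for the real work.
        }
    }

Bear in mind the caveat from the post: the callback dies with the app pool, so it is a convenience, not a guarantee.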

You can and should build your message body (the templated message body) within your domain logic (that is, in your ASP.NET application) when the business conditions are met, and then hand it to an external service whose only job is to send the messages. That way every message is built with up-to-date information.
For the "when certain dates occur" scenario you can use a simple solution for background tasks (see Craig's answer) and do the same as above: parse the template, build the message and hand it off to the sending service.
Of course you should do this safely, so that app pool restarts do not break your tasks.

Related

How to test webservice simulating several concurrent requests

I've got this situation.
I need to test a web application that was created with asp.net and c#.
This application has one special process that does the heaviest and most important work in the whole application. This process is part of the webservice that I need to test.
Now I've been asked to come up with some kind of program to simulate a specific number of simultaneous requests to that process, to see how it reacts to a certain number of users trying to call and use the same process.
I've never done anything like this before, but some ideas have come to mind, like maybe creating a little program in Visual Studio using the BackgroundWorker class, having the background worker call the webservice, and then launching that background worker the number of times specified by the user.
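If you do roll your own, a minimal sketch of the idea (using plain threads rather than BackgroundWorker; the URL and user count are placeholders) could look like this:

    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Threading;

    class SimpleLoadTest
    {
        static void Main()
        {
            const string serviceUrl = "http://localhost/MyService.asmx/HeavyMethod"; // placeholder
            const int concurrentUsers = 20;
            var threads = new Thread[concurrentUsers];

            for (int i = 0; i < concurrentUsers; i++)
            {
                int userId = i;
                threads[i] = new Thread(delegate()
                {
                    var stopwatch = Stopwatch.StartNew();
                    try
                    {
                        using (var client = new WebClient())
                        {
                            client.DownloadString(serviceUrl);
                        }
                        Console.WriteLine("User {0} finished in {1} ms", userId, stopwatch.ElapsedMilliseconds);
                    }
                    catch (WebException ex)
                    {
                        Console.WriteLine("User {0} failed: {1}", userId, ex.Status);
                    }
                });
                threads[i].Start();
            }

            foreach (Thread t in threads) t.Join();   // wait for every simulated user
            Console.WriteLine("All requests completed.");
        }
    }

That said, the advice below still stands: a proper load testing tool will give you ramp-up, timings and reporting for free.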
As I mentioned, I've never done anything similar, so I'm open to suggestions.
What would you do if you had do something similar?
Don't write this yourself, go straight to a web load testing tool.
See also
Performing a Stress Test on Web Application?

Web App architecture questions

Background:
I am an intermediate web app developer working on the .Net Platform. Most of my work has been defined pretty well for me by my peers or superiors and I have no problem following instructions and getting the job done.
The task at hand:
I was recently asked by an old friend to redo his web app from scratch. His app is extremely antiquated and he is getting overwhelmed by it breaking all the time. The app in question is an inventory / CRM application and currently each customer requires a new install of the app (usually accomplished by deploying it on a different domain on the same server and pointing to a new database).
Currently, if any client wants modifications to the forms, such as additional fields, new features, etc., my friend goes in and manually adds those fields to the forms, scripts, database, etc. As a result, all installs of this application are unique. There is no single source repository and no single version of this app. Generally, new features are rolled into the other sites over time, but this is still done on an individual, site-by-site basis.
I will be approaching this on a very modular basis. Initially I will be coding a module that will query an external web service for some data, display and store it, and periodically update it automatically. The next module will likely be for storing and displaying inventory data. This way I want to duplicate the current feature set of his app 100% over time, but do it incrementally.
The Million Dollar Questions
1. I want to make the app have user-configurable form fields. The user should be able to go to an admin page, create a new forms page of a certain category, and then specify what fields he wants in there. He could say "create a new text field called Item # and make it a requirement" and that will get stored somewhere. All forms will be dynamically rendered to screen based on what the user has configured. Is this a good way to go about the problem of having no idea what a customer could want in a form, and thus being able to store and display form data of any sort? What sort of design pattern should I follow here? (A rough sketch of the dynamic-rendering idea follows after these questions.)
2. I am familiar with ASP.NET and the .NET framework in general and have decent knowledge of JavaScript, HTML, Silverlight, jQuery, C#, etc. I can work my way around web apps, but I am not sure what sort of framework or tech I should use to accomplish this task. Would ASP.NET 3.5 WebForms be the way to go, or should I look into ASP.NET MVC? Do I use jQuery and AJAX for complete decoupling of frontend and backend, or will a normal ASP.NET page with some spattering of AJAX thrown in, working with a code-behind, be the order of the day?
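To make question 1 concrete, here is a very rough WebForms sketch of the dynamic-rendering idea; the FieldDefinition type, the LoadFieldDefinitions call and the FieldsPlaceholder control are hypothetical, standing in for whatever the admin page stores.

    using System;
    using System.Collections.Generic;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class DynamicFormPage : Page
    {
        // Normally declared in the .aspx markup; declared here so the sketch is self-contained.
        protected PlaceHolder FieldsPlaceholder = new PlaceHolder();

        // Hypothetical metadata describing one user-configured field.
        public class FieldDefinition
        {
            public string Name;
            public string Label;
            public string FieldType;   // "text", "checkbox", ...
            public bool Required;
        }

        protected void Page_Init(object sender, EventArgs e)
        {
            // Recreate the configured controls on every request so ViewState lines up.
            foreach (FieldDefinition field in LoadFieldDefinitions("Inventory"))
            {
                FieldsPlaceholder.Controls.Add(new Label { Text = field.Label, AssociatedControlID = field.Name });

                WebControl input = field.FieldType == "checkbox"
                    ? (WebControl)new CheckBox { ID = field.Name }
                    : new TextBox { ID = field.Name };
                FieldsPlaceholder.Controls.Add(input);

                if (field.Required && input is TextBox)
                {
                    FieldsPlaceholder.Controls.Add(new RequiredFieldValidator
                    {
                        ControlToValidate = field.Name,
                        ErrorMessage = field.Label + " is required",
                        Display = ValidatorDisplay.Dynamic
                    });
                }
            }
        }

        // Hypothetical data access: reads the admin-configured field list for a category.
        private IEnumerable<FieldDefinition> LoadFieldDefinitions(string category)
        {
            yield break;
        }
    }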
Just looking for general advice before I start.
I am currently thinking of using ASP.NET 3.5 WebForms, jQuery for client-side animation, UI, manipulation and data validation, and SQL Server plus a .NET or WCF web service for the backend.
Your advice is much appreciated as always.
I've recently implemented a white-label ecommerce system for an insurance company that allowed each partner to choose their own set of input fields and screens, and to re-order the flow of the application to suit their individual needs.
Although it wasn't rocket science, it added complexity and increased development time.
Consider the user configuration aspect very carefully. In hindsight, both my client and their clients in turn would have been happy with a more rigid system.
As for the tech side of your question, I developed my project in VS2005, using ASP.NET WebForms and web services with a SQL Server back end, so the stack that you're looking at is definitely capable of delivering a working product. ASP.NET MVC will almost certainly help as far as testability goes.
The biggest thing I would change now if I was going to start again would be to replace the intermediate webservices with message based services using nServiceBus, MassTransit or the like. While the webservices worked fine, message based communication should be quicker and more reliable.
Finally, before you start to code, make sure that you understand the current system's functionality inside and out. If the new system doesn't do something that the old system did, it will be pretty obvious to the end users straight away.

Should I use a Windows Service or an ASP.NET Background Thread?

I am writing a web application in ASP.NET 3.5 that takes care of some basic data entry scenarios. There is also a component to the application that needs to continuously poll some data and perform actions based on business logic.
What is the best way to implement the "polling" component? It needs to run and check the data every couple of minutes or so.
I have seen a couple of different options in the past:
The web application starts a background thread that will always run while the web application does. (The implementation I saw started the thread in the Application_Start event.)
Create a windows service that is always running
What are the benefits to either of these options? Are there additional options?
I am leaning toward a Windows service because it is separated and can run on a different server (more scalable), and there is more control over when it is started/stopped, etc. However, I feel like the compactness of having the "background" logic running in the process of the web application might make the entire solution more understandable.
I'd go for the separate Windows service primarily for the reasons you give:
You can run it on a different server if necessary.
You can start and stop it independently of the web site.
I'd also add that it could well have some impact on the performance of the web site itself - something you want to avoid.
The buzz-word here is "separation of concerns". The web site is concerned with presenting the data to the user, the service with checking the integrity of the data.
You can also update the web site and service independently of each other should you need to.
I was going to suggest that you look at a scheduled task and let Windows control when the process runs, but I re-read your question and noted that you wanted the checks to run every couple of minutes. The overhead of starting the process might be too great in this case - though some experimentation would probably prove this one way or the other.
If you use a scheduled task there's also the possibility that you could start the next check before the current one has finished - something you can code for if you're in complete control.
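For what it's worth, a minimal sketch of the service approach (the names and interval are illustrative, and CheckData is a placeholder for the actual business logic) looks something like this; a simple flag guards against overlapping runs:

    using System.ServiceProcess;
    using System.Threading;
    using Timer = System.Timers.Timer;

    public class PollingService : ServiceBase
    {
        private readonly Timer _timer = new Timer(2 * 60 * 1000);   // every two minutes
        private int _running;                                       // 0 = idle, 1 = busy

        public PollingService()
        {
            ServiceName = "DataPollingService";   // hypothetical
            _timer.Elapsed += delegate { Poll(); };
        }

        protected override void OnStart(string[] args) { _timer.Start(); }
        protected override void OnStop() { _timer.Stop(); }

        private void Poll()
        {
            // Skip this tick if the previous check is still running.
            if (Interlocked.CompareExchange(ref _running, 1, 0) != 0) return;
            try
            {
                CheckData();   // hypothetical: query the data, apply the business logic
            }
            finally
            {
                Interlocked.Exchange(ref _running, 0);
            }
        }

        private void CheckData() { }

        public static void Main()
        {
            ServiceBase.Run(new PollingService());
        }
    }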
Why not just use a console app that has no UI? It can do all that the Windows service can and is much easier to debug and maintain. I would not use a Windows service unless you absolutely have to.
You might find the SQL Server job scheduler sufficient for what you want.
A console application does not do well in this case. I wrote a TAPI application which had to stay in the background and intercept incoming calls, but it only did so once, because the TAPI manager got GCed and was never available for the second incoming call.

When to use a page method versus creating a web service?

Our team is trying to figure out some guidelines for using page methods vs. creating an actual asmx web service. It seems to me that page methods are primarily for one-off calls that are specific to the page, whereas asmx services are intended to represent a more reusable set of operations and services. Does this sound correct?
Yes. If you intend to have something that's going to be used by multiple applications, it is wise to create it as a separate service, so you are not repeating code between applications, and if it has to change, you change it in a single place.
A simple example:
Say you have an authentication need, and you have two apps, one web and one Windows.
If the user base is going to be the same, it does not make sense to go into the web app and create the authentication code/page, then go to your Windows app and do the same all over again. The reason is: what if you have to change the hashing code, for example? You would have to go to the web app and change it, then go to the Windows app and change it, and also redeploy the Windows app.
If you have a service instead, you go to the service and change it, and everything now works with the new model; as a big plus, you don't have to redeploy the Windows app.
That's all, folks...
Even if you're only working on one page and the functionality in question is only used on that one page, sometimes it's better to move the functionality to a separate web service for performance. I recently worked on a page that would make hundreds of calls to a single page method. I noticed a huge increase in performance when I moved it off to a web service, simply because you're not dealing with the entire lifecycle of the page. If you're doing something small, though, use page methods to keep everything simple.
Update: ArmedMonkey is correct and page methods do NOT go through the page life cycle.
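For reference, a minimal sketch of the two flavors side by side (types, names and the namespace are purely illustrative):

    using System.Web.Services;
    using System.Web.UI;

    // 1. A page method: a static [WebMethod] on the page's code-behind,
    //    callable from client script on that page (requires a ScriptManager
    //    with EnablePageMethods="true" in the markup).
    public partial class OrderPage : Page
    {
        [WebMethod]
        public static string GetOrderStatus(int orderId)
        {
            // Placeholder for the real lookup.
            return string.Format("Status for order {0}", orderId);
        }
    }

    // 2. An .asmx web service: a separate, reusable endpoint that any page
    //    or application can call.
    [WebService(Namespace = "http://example.com/orders")]
    public class OrderService : WebService
    {
        [WebMethod]
        public string GetOrderStatus(int orderId)
        {
            // Placeholder for the real lookup.
            return string.Format("Status for order {0}", orderId);
        }
    }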

Single ASP.net site with Multiple Instances & web.configs

We have a legacy ASP.NET-powered site running on an IIS server. The site was developed by a central team and is used by multiple customers. Each customer, however, has their own copy of the site's aspx files plus a web.config file. This is causing problems, as changes made by well-meaning support engineers to the copies of the source aspx files are not being folded back into the central source, so our code base is diverging. Our current folder structure looks something like:
OurApp/Source aspx & default web.config
Customer1/Source aspx & web.config
Customer2/Source aspx & web.config
Customer3/Source aspx & web.config
Customer4/Source aspx & web.config
...
This is something I'd like to change to each customer having just a customised web.config file and all the customers sharing a common set of source files. So something like:
OurApp/Source aspx & default web.config
Customer1/web.config
Customer2/web.config
Customer3/web.config
Customer4/web.config
...
So my question is, how do I set this up? I'm new to ASP.NET and IIS as I usually use PHP and Apache at home, but we use ASP.NET and IIS here at work.
Source control is used and I intend to retrain the support engineers but is there any way to avoid having multiple copies of the source aspx files? I hate that sort of duplication!
If you're dead-set on the single app instance, you can accomplish what you're after using a custom ConfigurationSection in your single web.config. For the basics, see:
http://haacked.com/archive/2007/03/12/custom-configuration-sections-in-3-easy-steps.aspx
http://msdn.microsoft.com/en-us/library/2tw134k3.aspx
Example XML might be:
<YourCustomConfigSection>
  <Customers>
    <Customer Name="Customer1" SomeSetting="A" Another="1" />
    <Customer Name="Customer2" SomeSetting="B" Another="2" />
    <Customer Name="Customer3" SomeSetting="C" Another="3" />
  </Customers>
</YourCustomConfigSection>
Now in your ConfigSection Properties, expose Name, SomeSetting, and Another. When the Property is accessed or set, use a condition (request domain or something else that uniquely identifies the Customer) to decide which to use.
With the proper implementation, the app developers don't need to be aware of what's going on behind the scenes. They just use CustomSettings.Settings.SomeSetting and don't worry about which Customer is accessing the app.
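As a rough sketch of what that section handler might look like (type names mirror the sample XML; the host-based lookup in Current is just one possible way to identify the customer):

    using System;
    using System.Configuration;
    using System.Web;

    public class YourCustomConfigSection : ConfigurationSection
    {
        [ConfigurationProperty("Customers")]
        public CustomerCollection Customers
        {
            get { return (CustomerCollection)this["Customers"]; }
        }

        // Pick the <Customer> entry for the current request, keyed here on host name (assumption).
        public CustomerElement Current
        {
            get
            {
                string host = HttpContext.Current.Request.Url.Host;
                foreach (CustomerElement customer in Customers)
                {
                    if (host.StartsWith(customer.Name, StringComparison.OrdinalIgnoreCase))
                        return customer;
                }
                return null;
            }
        }
    }

    [ConfigurationCollection(typeof(CustomerElement), AddItemName = "Customer")]
    public class CustomerCollection : ConfigurationElementCollection
    {
        protected override ConfigurationElement CreateNewElement() { return new CustomerElement(); }
        protected override object GetElementKey(ConfigurationElement element)
        {
            return ((CustomerElement)element).Name;
        }
    }

    public class CustomerElement : ConfigurationElement
    {
        [ConfigurationProperty("Name", IsKey = true, IsRequired = true)]
        public string Name { get { return (string)this["Name"]; } }

        [ConfigurationProperty("SomeSetting")]
        public string SomeSetting { get { return (string)this["SomeSetting"]; } }

        [ConfigurationProperty("Another")]
        public string Another { get { return (string)this["Another"]; } }
    }

Register the section under <configSections> in web.config and read it with ConfigurationManager.GetSection("YourCustomConfigSection").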
I know it might seem annoying, but the duplication is actually a good thing. The problem here is with your process, not with the way the systems are set up.
Keeping the sites separate is actually a good thing. Whilst it looks like "duplication" it's actually not. It's separation. Making changes in the production code by your support engineers should be actively discouraged.
You should be looking at changing your process to change once deploy everywhere. This will make everything a lot easier for you in the long run.
To actually answer your question, the answer is no, you can't do it. The reason is that web.config isn't designed to store user-level settings; it's designed to store per-application-instance settings. In your case, you need an application instance per user, which means separate config files.
For your system to work, you need to be able to preemptively tell the application which config file to use, which isn't possible without some sort of input from the user.
Use an external source control application and keep rolling out updates as required.
It isn't really a good idea to let your live site be updated by support engineers in real time anyway.
Depending on what is actually in the web config, and what settings differ between customers, you could opt to use a single web config, and store other customer specific configuration options in a database or some other custom xml/text file. As long as the specific customer settings in the web.config don't have to do anything with how IIS operates, and you are just using it to store values, then this solution might work out well for you.
Thank you all again for your answers. After reading through them and having a think, what I will do is leave the multiple instances alone for now and try to improve our update process first. Then I will develop a new version of the application that keeps the user configuration information in the database layer and picks the customer based on the request domain or URL, as someone suggested. That way I can have a single application instance supporting multiple different client configurations cleanly.
Most of the client configuration data is really presentation or data-source related, nothing complicated. I think we ended up with multiple application instances mostly because the original programmer hadn't been expecting multiple customers and didn't design for that, so when someone came along later and added a second customer they just duplicated the application, which is wasteful as each instance is about 99.99% identical to the original.
I am implementing this as we speak.
In the main web.config, I have one item per installation. It points me toward the custom config file I built for each client (and toward the custom master page, CSS, images, etc.).
Using WebConfigurationManager.OpenWebConfiguration, I open the per-client web.configs in their subdirectories. I determine which one to use from System.Web.HttpContext.Current.Request.Url.OriginalString, i.e. the URL that called me. Based on that URL, I know which web.config to use.
From that point forward the clients all use the same codebase. They have their own databases too.
The idea of having to update 30-40 installations when we make an update scares me to death. We do not want to support 30-40 codebases, so there won't be customization beyond the master page, CSS, and images.
I wrote a custom class lib that knows how to switch to the proper webconfig, and read the custom section I built with all our settings.
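A minimal sketch of that kind of helper, assuming each client's config lives under a /Clients/<name> subdirectory and the client name is the first part of the host name (both of those are assumptions, not how the poster's library necessarily works):

    using System.Configuration;
    using System.Web;
    using System.Web.Configuration;

    public static class ClientConfig
    {
        // e.g. http://client1.example.com/... -> the web.config in /Clients/client1
        public static Configuration Open()
        {
            string host = HttpContext.Current.Request.Url.Host;
            string clientName = host.Split('.')[0];
            return WebConfigurationManager.OpenWebConfiguration("/Clients/" + clientName);
        }

        public static string AppSetting(string key)
        {
            KeyValueConfigurationElement element = Open().AppSettings.Settings[key];
            return element != null ? element.Value : null;
        }
    }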
The only issue I have now is the FormsAuthentication cookie. I need to be able to switch that as well. Unfortunately, the property for the name is read-only.
If I understand correctly, it sounds like you have multiple deployments (one for each client) where the only difference is the web.config, right?
First off, although I don't know your unique situation, I would generally urge you to stay with separate installs. It usually allows much more flexibility. Off the top of my head: are you ever going to have customizations, or different clients running different versions? Are you sure? The easiest way to stay flexible here is to keep going with separate installs.
In my opinion, it isn't ugly at all if your practices are aligned properly. Based on some things you mentioned, you have trouble in that area - obviously, possible source control buy-in/training issues. But you are aware of that. I would also take a hard look at your deployment procedures and so on. I have a feeling you might have further issues in that area, and I mean absolutely no offense.
That said, let's say you want to move forward with this.
You didn't say whether all the clients share a single common database, but I'm thinking no, since designing that type of system is often not worth the extra complexity (which can be severe in systems of any size) so people often opt to keep them separate.
What that means is that you have to store your connection string somewhere. Usually that would be web.config... so that seems to break our plan.
Really, the apparent elegance of this situation is almost always wildly offset by the challenges it introduces. If I thought about it hard enough, I could maybe find a way around this by introducing another database that intelligently manages connection strings or maybe delving into keeping all your login info directly in web.config (which is possible but... not ideal), however my gut says the work will be wasted because some day you will end up going back to how you're doing it now.
Also: changing code directly in production is obviously not the best practice here. But if you are on a monolithic shared platform with any amount of traffic, that can never, ever happen. Food for thought.
Let me know if I'm missing something!
