Doing a database write after the response - ASP.NET

I have a web service that receives requests from users and returns some JSON. I need to save the JSON string in the database, so at the moment the write query occurs before the response is sent back.
Is there a way to send the response first and then do the write query, after the response has left the web service?
Thanks.

There are a couple of different options here - they all have trade-offs, though, and would be pretty esoteric. You don't mention why you want to do this, so I'm guessing performance. If that's the case, I think you're barking up the wrong tree - a simple write is almost certainly not your performance problem.
So, off the top of my head:
1. Queuing, as Ragesh mentions, would be a nice approach. This gets you semantics similar to a transaction while offloading the write. You still have to write to the queue, though, which may be about the same overhead as writing to the DB.
2. You could spawn a new thread (using either the ThreadPool or System.Threading.Thread - there is some debate about which is preferable in ASP.NET) to handle the write. This can generally work, but you may have issues with unhandled exceptions, app domain restarts, etc.
3. You could store the JSON data in a static or Application variable, then use a Timer to periodically write it to the DB. This will be multithreaded code, so you will need to synchronize reads and writes to the collection.
4. Similar to #3, store the JSON data in Cache and use the invalidation callback to write to the DB (see the sketch after this list).
5. Lots of variations on store somewhere (memory, disk, flat DB table, etc.), process later (ASP.NET, scheduled task, Windows Service, Sql Agent, etc.).
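As a minimal sketch of option 4, assuming ASP.NET's HttpRuntime.Cache; SaveJsonToDatabase is a hypothetical helper, not part of the original answer:

using System;
using System.Web;
using System.Web.Caching;

public static class DeferredJsonWriter
{
    // Buffer the JSON in cache; when the entry expires, the removal
    // callback fires on a background thread and performs the DB write.
    public static void Defer(string key, string json)
    {
        HttpRuntime.Cache.Insert(
            key, json, null,
            DateTime.UtcNow.AddSeconds(10),    // flush roughly every 10 seconds
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,    // don't let memory pressure drop the data
            (k, value, reason) => SaveJsonToDatabase((string)value));
    }

    static void SaveJsonToDatabase(string json) { /* run the INSERT/UPDATE here */ }
}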
@frenchie says: a response starts by reading the JSON string from the DB and ends with writing it back. In other words, when the user sends a request, the JSON string that gets read must be the one that was written by the previous response.
That complicates things, since inherent in async work is not knowing when something is done. If you require the async portion (writing back to the DB) to be done before handling the next request, you'll have to execute a wait to make sure it actually completed. To do that, you'll need to keep server-side state for the client - not exactly a best practice as far as services go (though it sounds like you're already doing that with these JSON request/response pairs).
Given the complications, I would make sure that you've done your profiling and determined that this is indeed a performance problem.

You can schedule the query work like this:
// queue the update on a thread-pool thread
ThreadPool.QueueUserWorkItem(state => AsynchronousExecuteReference());

// and run
static void AsynchronousExecuteReference()
{
    // run your SQL update here
}
Here is another example, using a Thread inside a class so you can pass parameters to it.
public class RunThreadProcess
{
    // some parameters
    public int cProductID;

    // my thread
    private Thread t = null;

    // start it
    public Thread Start()
    {
        t = new Thread(new ThreadStart(this.work));
        t.IsBackground = true;
        t.SetApartmentState(ApartmentState.MTA);
        t.Start();
        return t;
    }

    // the actual work
    private void work()
    {
        // do the thread work; all parameters (e.g. cProductID) are available here
    }
}
And here is how I run it:
var OneAction = new RunThreadProcess();
OneAction.cProductID = 100;
OneAction.Start();
Do not worry about memory: the GC knows the object is in use until the thread ends. I have checked this, and the GC does not collect it but waits for the thread to end.

You should look at using message queues like MSMQ, ActiveMQ or RabbitMQ to do this. When you receive your request, you'll put the relevant data into the queue and send your response to the client. At the other end of the queue, you'll have some process that reads from the queue and inserts the data into your database.
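As a rough sketch of the producer side with MSMQ (the queue path is an assumption, and the queue must already exist):

using System.Messaging;

// enqueue the JSON and return to the client immediately;
// a separate process drains the queue and writes to the database
const string queuePath = @".\Private$\JsonWrites";  // assumed queue name
using (var queue = new MessageQueue(queuePath))
{
    queue.Send(jsonString, "json-write");           // jsonString: the response payload; label is arbitrary
}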

This is missing the point of request/response - unless you want to get into async commands like a service bus, but that's pub/sub, not request/response. The point of request/response is to do the work on the server after receiving the request and before sending the response, even if that work is just sending an async message to a service bus.

You could try moving your web service URL to an ASPX page, where the page lifecycle comes into play.
In the code-behind, call the routine that does the main portion of the work in Page_Load or Page_PreRender (or whenever is appropriate prior to the response being sent), and then do your DB work in the Page_Unload event, which occurs after the response has been sent (http://msdn.microsoft.com/en-us/library/ie/ms178472.aspx).
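A minimal sketch of that idea; BuildJsonResponse and SaveJsonToDatabase are hypothetical helpers, not part of the original answer:

using System;

public partial class OrderService : System.Web.UI.Page
{
    private string json;

    protected void Page_Load(object sender, EventArgs e)
    {
        json = BuildJsonResponse();       // hypothetical: produce the JSON
        Response.ContentType = "application/json";
        Response.Write(json);
    }

    protected void Page_Unload(object sender, EventArgs e)
    {
        // The response has already been sent at this point; the Response
        // object is no longer usable, but a database write still works.
        SaveJsonToDatabase(json);         // hypothetical: the deferred write
    }
}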

Related

ASP.NET Web API + Entity Framework: multiple requests cause data conflict

I'm developing an app with VS2013, using EF 6.02 and Web API 2. I'm using the ASP.NET SPA template and creating a RESTful API against an Entity Framework data source backed by SQL Server. (In development, this resides on the SQL Server local instance.)
I've got two API methods so far (one that just reads data, one that writes data), and I'm testing them by calling them in the JavaScript. When I only call a single method in my script, either one works perfectly. But if I call both in script (without waiting for either's callback to fire), I get bad results and different exceptions in the debugger. Some exceptions state that the save can't be completed because there are pending transactions. Another exception said something about a conflict with other threads. And sometimes the read operation fails with a null pointer exception when trying to read a result set.
"New transaction is not allowed because there are other threads running in the session."
This makes me question whether I'm correctly getting a new DbContext per request. My code for this looks like:
static Startup()
{
    context = new Data.SqlServer.AppDbContext();
    ...
}
and then whenever I instantiate a unit of work, I access Startup.context.
I've tried to implement the unit of work pattern, and each request shares a single UOW object, which has a single DbContext object.
My question: do I have additional responsibility to ensure that web requests "play nicely" with each other? I hope this is a problem that others have already dealt with. Perhaps the errors I'm seeing are legitimate, in the sense that if one user's data is being touched, it is temporarily in an invalid state, and requests that come in at that exact moment will indeed fail (and I should code anticipating these failures). I guess that even if each request has its own DbContext, they still share the same underlying SQL data source, so perhaps that's causing issues.
I can try to put together a test case, but I get differing behavior depending on where I put breakpoints and how long I spend on them, reaffirming to me that this is timing related.
Thanks for any help or suggestions...
-Ben
Your problem is where you are setting your context. The Startup method runs when the entire application starts, so every request will use the same context. This is a per-application setup, not a per-request one. As for why you are getting the errors: Entity Framework is NOT thread-safe, and since IIS spawns many threads to handle concurrent requests, your single context is being used across multiple threads.
As for a solution, you can look into:
- Dependency injection frameworks (such as Ninject or Unity)
- Placing a using statement in your UnitOfWork classes:
using (var context = new Data.SqlServer.AppDbContext()) { /* do stuff */ }
- Or, I have seen people create a class that gets the context for the current request and stores it in the HttpContext.Items[] collection (under a unique key so you can retrieve it easily in another class), so that the same context is reused within a request. Something like this:
public AppDbContext GetDbContext()
{
    var httpContext = HttpContext.Current;
    if (httpContext == null) return new AppDbContext();

    const string contextTypeKey = "AppDbContext";
    if (httpContext.Items[contextTypeKey] == null)
    {
        // first use in this request: create and stash the context
        httpContext.Items.Add(contextTypeKey, new AppDbContext());
    }
    return httpContext.Items[contextTypeKey] as AppDbContext;
}
To use the above method, make a simple call: var context = GetDbContext();
Note
We use all of the above methods, but this note applies specifically to the third one. It seems to work well, with two caveats. First, do not create the context in a using statement, or it will not be available to other classes during the scope of the request (you will have disposed it). Second, make sure you have code in Application_EndRequest that actually disposes of it. We saw these little buggers hanging around in memory after the request ended, causing a huge spike in memory usage.
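A minimal sketch of that cleanup in Global.asax, assuming the same "AppDbContext" key used above:

protected void Application_EndRequest(object sender, EventArgs e)
{
    // dispose the per-request context so it doesn't linger in memory
    var context = HttpContext.Current.Items["AppDbContext"] as AppDbContext;
    if (context != null)
    {
        context.Dispose();
        HttpContext.Current.Items.Remove("AppDbContext");
    }
}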

Handling XMLHttpRequest abort on ASP.NET

I use an asynchronous XMLHttpRequest to call a function in an ASP.NET web service.
When I call the abort method on the XMLHttpRequest after the server has received the request and is processing it, the server continues processing the request.
Is there a way to stop the request processing on the server?
Generally speaking, no, you can't stop the request being processed by the server once it has started. After all, how would the server know when a request has been aborted?
It's like if you navigated to a web page but browsed to another one before the first one had loaded. That initial request will, at least to some extent (any client-side work will of course not take place), be fulfilled.
If you do wish to stop a long-running operation on the server, the service that is being invoked will need to be architected to support being interrupted. Some pseudo code:
void MyLongRunningMethod(string opId, WorkArgs args)
{
    var work = GetWork(args);
    foreach (var workItem in work)
    {
        DoWork(workItem);

        // Has this invocation been aborted?
        if (LookUpSet.Contains(opId))
        {
            LookUpSet.Remove(opId);
            return;
        }

        // Or: bail out if the client has disconnected
        if (!HttpContext.Current.Response.IsClientConnected)
        {
            HttpContext.Current.Response.End();
            return;
        }
    }
}

void AbortOperation(string opId)
{
    LookUpSet.Add(opId);
}
So the idea here is that MyLongRunningMethod periodically checks whether it has been aborted, returning if so. opId is intended to be unique, so you could generate it based on the session id of the client appended with the current time or something (in JavaScript, new Date().getTime() will get you the number of milliseconds since the epoch).
With this sort of approach, the server must maintain state (the LookUpSet in my example), so you will need some way of doing that, such as a database or just storing it in memory. The service will also need to be architected such that calling abort does not leave things in a non-working state, which of course depends very heavily on what it does.
The other really important requirement is that the data can be split up and worked on in chunks. This is what allows the service to be interruptable.
Finally, if an operation is to be aborted, then AbortOperation must be called - simply aborting the XMLHttpRequest invocation won't help, as the operation will continue until completion.
Edit
From this question: ASP.Net: How to stop page execution when browser disconnects?
You could also check the Response.IsClientConnected property to try to determine whether the invocation has been aborted.
Generally speaking, the server isn't going to know that a client has disconnected until it attempts to send data to it. See Best practice to detect a client disconnection in .NET? and Instantly detect client disconnection from server socket.
As nick_w wrote, you can't stop the request being processed by the server once it has started. But it is possible to implement a solution that gives you the ability to cancel a server task. Dino Esposito has several great articles about how such things can be implemented:
Canceling Server Tasks with ASP.NET AJAX
And in the following articles, Dino Esposito describes how to use the SignalR library to implement polling to the server:
Build a Progress Bar with SignalR;
Long Polling and SignalR
So if you really need to cancel a task on the server, these articles can be used as a starting point for implementing the required solution.

Starting a thread in an ASP.NET WebService

I have an IIS hosted WCF webservice.
It has a method on it (let's call it "ConfirmOrder"). When this method is called, I want to:
1. Do some quick stuff to the database, resulting in an OrderId
2. Start a new thread that will do some slow work (e.g. generate an email and send it)
3. Return the OrderId from 1. synchronously to the client.
4. Eventually, when it's finished, the new thread created in 2. will have done all the rest of the processing and sent the email.
Questions:
(1) I did have code like:
// do printing and other tasks
OrderConfirmedThreadHelper helper = new OrderConfirmedThreadHelper(userSession, result);
// some things first (like generating barcodes) in this thread
Logger.Write(basket.SessionId, String.Format("Before ConfirmOrderSync"), LogCategoryEnum.Sales, System.Diagnostics.TraceEventType.Verbose);
helper.ConfirmOrderSync();
Logger.Write(basket.SessionId, String.Format("After ConfirmOrderSync"), LogCategoryEnum.Sales, System.Diagnostics.TraceEventType.Verbose);
// slower things (like rendering, sending email) in a separate thread
Thread helperThread = new Thread(new ThreadStart(helper.ConfirmOrderAsync));
helperThread.Start();
return result;
but it seemed to cause problems; at least, the service kept locking up. Is this a bad thing to do?
(2) I tried changing it to
// slower things (like rendering, sending email) in a separate thread
ThreadPool.QueueUserWorkItem(new WaitCallback(helper.ConfirmOrderAsync));
but the ThreadPool thread seems to be killed as soon as the main thread has finished, because it's a background thread.
Is there a better way of doing this - short of writing a whole new windows service to communicate with?
If the second thread finishes after the request thread (the one that comes from the browser), you're in trouble, since it'll get reclaimed by the runtime and terminated.
If you can afford to wait (if it's only going to send an email, it'll be a couple of seconds), you can use a ManualResetEvent to make one thread wait for the other to finish and clean up gracefully.
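A minimal sketch of that synchronization; SendConfirmationEmail is a hypothetical stand-in for the slow work:

using System.Threading;

// inside ConfirmOrder, after the quick DB work has produced orderId/result
var done = new ManualResetEvent(false);
ThreadPool.QueueUserWorkItem(_ =>
{
    try { SendConfirmationEmail(orderId); }  // hypothetical slow work
    finally { done.Set(); }                  // signal completion, even on failure
});

// ... any remaining quick work on the request thread ...

done.WaitOne();   // block until the worker has finished
return result;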
If you can't wait, the best choices for the mail process are one of the following:
A Windows Service.
An .ashx you can call from your client code with a jQuery AJAX call, passing the necessary data to send the mail.
A batch job (a scheduled task, a SQL Server job, etc.) that reads pending mails from the DB and sends them. It would run every X minutes, so you wouldn't have to worry about it.
Hope that helps!

.NET page caching but still receive query string

Is it possible to cache a page render on an IIS web server but still receive and write query string values (that don't affect output) to the database, so that the page render does not have to wait for the database trip in order to serve the page? If possible, how do I implement this?
For example, we track various affiliate and search marketing data via query strings, and in the master page code-behind we write the given query string data to the database. The output of the page doesn't change at all for the user (although we may set a cookie based on the qs parameter).
My understanding is that the page render has to wait for the database trip to fully execute in order to render the page. Is that even true?
Yes, in general, though it can depend on how one handles the caching.
First, you should move that tracking stuff to where it belongs: an HttpModule. The page need not concern itself with it. Second, what you probably want to look into is some sort of fire-and-forget service call or message queueing. That makes the database write a non-blocking operation rather than a blocking one.
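A minimal sketch of the HttpModule part; the module name, query string parameter, and TrackingWriter.QueueWrite helper are illustrative assumptions:

using System;
using System.Web;

public class TrackingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            // capture the affiliate/search parameters without involving the page
            var qs = app.Context.Request.QueryString["affiliate"]; // assumed parameter
            if (!String.IsNullOrEmpty(qs))
            {
                TrackingWriter.QueueWrite(qs); // hypothetical non-blocking write
            }
        };
    }

    public void Dispose() { }
}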
Some options for making the operation non-blocking:
If you are actually writing to a web service, there is an underappreciated [OperationContract(IsOneWay = true)] decoration. It tells the generated proxy to fire and forget the call; it will not wait for a response.
Another option would be to use the asynchronous ADO.NET bits, especially BeginExecuteNonQuery. If you don't handle the callback, this should just execute off your thread (see the sketch after this list).
You could always just spawn a thread and deal with it in a non-blocking manner yourself. Just be really careful about handling errors on that thread - unhandled exceptions will take out the app domain.
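A minimal sketch of the BeginExecuteNonQuery option; the connection string and Tracking table are assumptions, and older .NET versions need Asynchronous Processing=true in the connection string:

using System.Data.SqlClient;

// fire-and-forget: kick off the INSERT and return without blocking the page
var conn = new SqlConnection(connectionString);               // assumed connection string
var cmd = new SqlCommand(
    "INSERT INTO Tracking (QueryString) VALUES (@qs)", conn); // assumed table
cmd.Parameters.AddWithValue("@qs", Request.QueryString.ToString());
conn.Open();
cmd.BeginExecuteNonQuery(ar =>
{
    try { cmd.EndExecuteNonQuery(ar); }     // must be called to complete the operation
    catch (SqlException) { /* best-effort tracking: log and move on */ }
    finally { conn.Dispose(); }
}, null);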

Passing HTTP authenticated principal onto another worker thread

We have a web front end on our business layer server.
Certain pages in our web application instantiate very long-running tasks (up to 10+ minutes). These requests are handled like so:
(on the HTTP request thread)
we make a connection to the business server.
we create a new thread to make the long-running call, passing in the connection object.
The HTTP request then completes, passing a handle back to the browser,
and the browser periodically polls the web server to get updates on the long-running task's progress.
All requests to the business server are authenticated - the connection's user principal must have permission to call the method on the business server.
This mechanism works fine as long as our web application is running in Classic mode.
When we run in pipeline mode, we get ObjectDisposedExceptions when the browser polls.
System.ObjectDisposedException: Safe handle has been closed
at System.StubHelpers.StubHelpers.SafeHandleC2NHelper(Object pThis, IntPtr CleanupWorkList)
at Microsoft.Win32.Win32Native.GetTokenInformation(SafeTokenHandle TokenHandle, UInt32 TokenInformationClass, SafeLocalAllocHandle TokenInformation, UInt32 TokenInformationLength, ref UInt32 ReturnLength)
at System.Security.Principal.WindowsIdentity.GetTokenInformation(SafeTokenHandle tokenHandle, TokenInformationClass tokenInformationClass, ref UInt32 dwLength)
at System.Security.Principal.WindowsIdentity.get_User()
at System.Security.Principal.WindowsIdentity.GetName()
at System.Security.Principal.WindowsIdentity.get_Name()
The problem appears to be that the Windows principal used to make the connection is disposed when the original request ends (which is understandable - in fact, I am surprised that the code worked at all!).
As a way around this problem, I was wondering whether it is possible either to create a duplicate of the HTTP request principal and use that to create the connection (disposing of it when the long-running task completes), or to impersonate the HTTP request principal on the worker thread even after the original principal is disposed?
Update
(My comment under Aliostad's question was incorrect: the test page did fail. I managed to confuse myself sufficiently that I wrote my test page so that it did not exercise the same code path as the real (faulting) code. Nevermind!)
I have written a "workaround" for this problem:
I am in the fortunate position of knowing what roles/groups the business server logic will query for before the call to the business server is made. So my workaround is to create a new generic principal based upon the request principal's membership of these roles. The long-running task runs using the generic principal.
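A minimal sketch of that workaround; the role names and CallBusinessServer are illustrative assumptions:

using System.Linq;
using System.Security.Principal;
using System.Threading;
using System.Web;

// capture role membership while the request principal is still alive
var requestPrincipal = HttpContext.Current.User;
var roles = new[] { "OrderProcessors", "Managers" }   // roles the business server checks (assumed)
    .Where(requestPrincipal.IsInRole)
    .ToArray();
var workerPrincipal = new GenericPrincipal(
    new GenericIdentity(requestPrincipal.Identity.Name), roles);

ThreadPool.QueueUserWorkItem(_ =>
{
    Thread.CurrentPrincipal = workerPrincipal;  // the long-running call sees the copy
    CallBusinessServer();                       // hypothetical long-running call
});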
I am not 100% happy with this workaround because it is very much a "hack" - i.e. I can see that it would easily fall down if some logic did the (eminently sensible) check of verifying that the principal's identity is authenticated.
So I would still very much appreciate any help / insight into this issue.
Thanks
OK, here is my take on this.
First of all, if you create a thread, all of the current thread's security context will be copied to the new thread by default. This operation is heavy but much needed (as you can imagine, most things would not work without it). In case you do not need the copying of the context and want to prevent it, there is a way to do it, which is explained in Richter's CLR via C#. Luckily, he has shared this very bit of the book here; basically, you call a static method to prevent the context from being flowed:
ExecutionContext.SuppressFlow();
I cannot see this being called in WCF, although using Reflector I found a single use of it here:
[SecuritySafeCritical]
private IAsyncResult BeginGetContext(bool startListening)
{
    Exception exception;
    do
    {
        exception = null;
        try
        {
            try
            {
                if (ExecutionContext.IsFlowSuppressed())
                {
                    return this.listener.BeginGetContext(this.onGetContext, null);
                }
                using (ExecutionContext.SuppressFlow())
                {
                    return this.listener.BeginGetContext(this.onGetContext, null);
                }
            }
            // .... the rest
Interestingly enough, this is used in three places, one of them in SharedHttpTransportManager.
Now all this might look like we have found the issue and it is a bug, but I very much doubt it.
My hunch is that a process recycle is happening in between and the context is lost. The way to prove or disprove this would be to use perfmon to register all process recycles and find out whether any occurred in between.
My solution is basically - and you might not like it! - to simply insert an item into a queue (MSMQ or a simple database queue) and have a Windows service read it. With this operation being so important, I would never trust IIS to carry it out to the finish.
Hope this is useful to you.
