Make a final call to the Database when user leaves website (ASPX)? - asp.net

I have a system set up to lock certain content in a database table so only one user can edit that content at a time. Easy enough and that part is working fine. But now I'm at a road block of how to send a request to "unlock" the content. I have the stored procedure to unlock the content, but how/where would I call it when the user just closes their browser?

You can't know when the user closes the browser, any more than you can know when they turn off their computer. You have to do it the other way around.
Require that the lock be renewed periodically. Only the web site would do the periodic renewal. If the user stops using the web site, then the lock expires.
Otherwise, require the user to explicitly unlock the content. Other users who want to edit the content can then go yell at the first user when they can't do their jobs. Not a technological solution, but still a good one. Shame works.
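A minimal sketch of the expiring-lock idea, assuming a hypothetical ContentLocks table with LockedBy and LockExpiresAt columns (none of these names are from the original question):
using System.Data.SqlClient;

static class ContentLockHelper
{
    // Try to acquire (or renew) a two-minute lock. Returns false if another
    // user currently holds a live, unexpired lock on the content.
    public static bool TryAcquireLock(string connectionString, int contentId, int userId)
    {
        const string sql =
            @"UPDATE ContentLocks
              SET LockedBy = @userId, LockExpiresAt = DATEADD(MINUTE, 2, GETUTCDATE())
              WHERE ContentId = @contentId
                AND (LockedBy IS NULL OR LockedBy = @userId OR LockExpiresAt < GETUTCDATE());";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@userId", userId);
            cmd.Parameters.AddWithValue("@contentId", contentId);
            conn.Open();
            // One affected row means we now hold the lock; the editor keeps
            // renewing it, and it silently expires if the user disappears.
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}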

The best thing you can really do is add something to your Session_End in your global.asax. Unfortunately, this won't fire until the session times out.
When the user clicks the "X" in their browser, there isn't any way to guarantee the browser will send you anything back.

A quick note on the Session_End approaches. If you use this method, then you have to ensure:
That session state is InProc, e.g. add something like this to your Web.config:
<sessionState mode="InProc" timeout="timeout_in_minutes"/>
That you've set up IIS so it does not recycle worker processes during normal operation (see for instance this blog post).
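If you go the Session_End route, a rough sketch of the handler in Global.asax might look like this (the stored procedure name, session key, and connection string name are placeholders, not from the original question):
// Global.asax.cs -- only fires for InProc session state, when the session times out.
void Session_End(object sender, EventArgs e)
{
    object contentId = Session["LockedContentId"];   // placeholder session key
    if (contentId == null) return;

    using (var conn = new System.Data.SqlClient.SqlConnection(
        System.Configuration.ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
    using (var cmd = new System.Data.SqlClient.SqlCommand("dbo.UnlockContent", conn)) // placeholder proc name
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@ContentId", (int)contentId);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}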
Edit:
Not directly answering the question, but another approach would be to use optimistic concurrency control on the data in question.

There is no such event as "user closes browser".
Nevertheless, I can think of two workarounds:
1) Use JavaScript/Ajax to call a method in your page periodically (let's say every 10 seconds), and store the DateTime of the last call somewhere. Then write a Windows service that checks every second which sessions have timed out, and perform your custom action there (a server-side sketch follows this list).
2) Use the global.asax Session_End() event. (It cannot be used with every SessionState mode; look up which ones support it.)
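A minimal server-side sketch for workaround 1, assuming a hypothetical ContentLocks table with a LastSeenAt column; a JavaScript timer on the edit page would call this handler every 10 seconds or so:
// KeepAlive.ashx.cs -- illustrative only; table and column names are assumptions.
public class KeepAliveHandler : System.Web.IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(System.Web.HttpContext context)
    {
        int contentId = int.Parse(context.Request["contentId"]);

        using (var conn = new System.Data.SqlClient.SqlConnection(
            System.Configuration.ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
        using (var cmd = new System.Data.SqlClient.SqlCommand(
            "UPDATE ContentLocks SET LastSeenAt = GETUTCDATE() WHERE ContentId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", contentId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        // The Windows service mentioned above would then release any lock whose
        // LastSeenAt is older than, say, 30 seconds.
    }
}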

Trying to leave a stackoverflow answer page pops up an "are you sure" dialog. Perhaps during the on-page-leave event that SO uses (or however SO does this), you can send a final request with an XmlHttpRequest object. This won't cover the case where the browser process closes unexpectedly (use Session_OnEnd for that), but it will at least send the "I'm closed" event earlier.

I think your one stored procedure can do the locking and unlocking (used with "Select @strNewMax As NewMax")...
Here is an example from a system I have:
Declare @strNewMax Char
Select @strNewMax = 'N'
BEGIN TRANSACTION
/* Lock only the rows for this Item ID, and hold those locks throughout the transaction. */
If @BidAmount > (Select Max(AB_Bid_AMT) from AuctionBid With(updlock, holdlock) Where AB_AI_ID = @AuctionItemId)
Begin
    Insert Into AuctionBid (AB_AI_ID, AB_Bid_AMT, AB_Emp_ID, AB_Entry_DTM)
    Select @AuctionItemId, @BidAmount, @EmployeeId, GetDate()
    Select @strNewMax = 'Y'
End
COMMIT TRANSACTION
Select @strNewMax As NewMax
This will insert a record as the next highest bid, while holding locks on the relevant AuctionBid rows so no other bid for the same item is processed at the same time. It will return either a 'Y' or 'N' depending on whether it worked or not.
Maybe you can take this and adjust it to fit your application.

Related

Session compression. Negative and positive sides

In web.config you can enable session compression.
<sessionState mode="InProc" customProvider="DefaultSessionProvider" compressionEnabled="true" />
What are positive and negative sides of this action?
Well, on the positive side, you need less space.
On the negative side, it needs time to compress, so it's slower.
Let me add that, in my opinion, if you use sessions at all, you've made an architectural mistake (exceptions may apply to this rule, but very, very rarely).
It's not a good idea, because if a page writes something into the session, it gets overwritten if I simultaneously open the same page in another browser window (it's the same session).
And because InProc sessions expire when you change something in the web.config file, you can create an unlimited number of bugs for EVERY currently active user...
Plus, you lose InProc sessions if the VM gets moved to another server (cloud environments, failover, dynamic scale-out).
Also, the InProc provider doesn't require objects to be marked as serializable.
If you change to, for example, an SQL session provider, you'll get exceptions in all places where you put an object that hasn't been marked as serializable into the session.
For example, when you need to query all the locations a user may access (according to portfolio rights in T_SYS_LocationRights):
You get the UserID from the formsAuth-cookie, then use it as the parameter:
DECLARE @userID integer
SET @userID = 12435
SELECT * FROM T_Locations
WHERE (1=1)
AND
(
    (
        SELECT ISNULL(MAX(CAST(T_SYS_LocationRights.LR_IsRead AS integer)), 0)
        FROM T_SYS_LocationRights
        INNER JOIN T_User_Groups
            ON T_User_Groups.USRGRP_GRP = T_SYS_LocationRights.LR_GRANTEE_ID
        WHERE T_SYS_LocationRights.LR_LC_UID = T_Locations.LC_UID
        AND T_User_Groups.USRGRP_USR = @userID
    ) = 1
)
Don't just query something according to the maxim "if you'll ever need it, it's already there". Designing a web application (which is multi-threaded by design) around that maxim is a very bad idea.
If you don't need it, don't query it.
If you need it, query it.
If you needed it before, don't store it in the session; it's better to query it again if necessary.
You gain much more time by executing all database operations at once: get all the data you need into a System.Data.DataSet (one query operation, one connection open-and-close), and then use that. When the page reloads, you can always reload the data (in fact, you even should).
Don't use the session as a cache. It's not a cache.
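A minimal sketch of that approach, filling one DataSet from the two tables used above in a single connection open-and-close (the exact SQL and connection string handling are illustrative):
using System.Data;
using System.Data.SqlClient;

// Fetch everything the page needs in one round trip instead of caching it in the session.
static DataSet LoadPageData(string connectionString, int userId)
{
    const string sql =
        @"SELECT * FROM T_Locations;
          SELECT * FROM T_SYS_LocationRights
          WHERE LR_GRANTEE_ID IN (SELECT USRGRP_GRP FROM T_User_Groups WHERE USRGRP_USR = @userID);";

    var pageData = new DataSet();
    using (var conn = new SqlConnection(connectionString))
    using (var adapter = new SqlDataAdapter(sql, conn))
    {
        adapter.SelectCommand.Parameters.AddWithValue("@userID", userId);
        adapter.Fill(pageData); // Tables[0] = locations, Tables[1] = rights
    }
    return pageData;
}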

Asynchronous validation in QWizard

I'm writing a wizard UI based on the QWizard Qt object. There's one particular situation where I want the user to log in to a service using host, username, and password. The rest of the wizard then manipulates this service to do various setup tasks. The login may take a while, especially in error cases where the DNS name takes a long time to resolve -- or perhaps it may not even resolve at all.
So my idea is to make all three fields mandatory using the registerField mechanism, and when the user hits Next, we show a little throbber on the wizard page saying "Connecting to server, please wait..." while we try to connect in the background. If the connection succeeds, we advance to the next page. If not, we highlight the offending field and ask the user to try again.
However, I'm at a loss for how to accomplish this. The options I've thought of:
1) Override validatePage and have it start a thread in the background. Enter a wait inside validatePage() that pumps the Qt event loop until the thread finishes. You'd think this was the ugliest solution, but...
2) Hide the real Next button and add a custom Next button that, when clicked, dispatches my long running function in a thread and waits for a 'validation complete' signal to be raised by something. When that happens, we manually call QWizard::next() (and we completely bypass the real validation logic from validatePage and friends.) This is even uglier, but moves the ugliness to a different level that may make development easier.
Surely there's a better way?
It's not as visually appealing, but you could add a connecting page and move to that page. If the connection succeeds, call next() on the wizard; if the connection fails, call back() and highlight the appropriate fields. It has the advantage of being relatively straightforward to code.
My final choice was #2 (override the Next button, simulate its behavior, but capture its click events manually and do the things I want to with it.) Writing the glue to define the Next button's behavior was minimal, and I was able to subclass QWizardPage with a number of hooks that let me run my task ON the same page, instead of having to switch to an interstitial page and worry about whether to go forwards or backwards. Thanks Caleb for your answer though.
I know this has already been answered (a long time ago!), but in case anyone else is facing the same challenge: another method is to create a QLineEdit, initialise it as empty, and set it as a mandatory registered field. This will mean that "Next" is not enabled until it is filled with some text.
Run your connection task as normal and, when it completes, use setText to update the QLineEdit to "True" or "Logged in" or anything other than empty. The built-in isComplete check will then pass, since the previously missing mandatory field is now complete. If you never add it to the layout, it won't be seen and the user won't be able to interact with it.
As an example ...
self.validated_field = QLineEdit("")
self.registerField('validated*', self.validated_field)
and then when your login process completes successfully
self.validated_field.setText("True")
This should do it and is very lightweight. Be sure, though, to consider the scenario where a user goes back to that page and whether you need to reset the field to blank. If that's the case, just set it back to blank in the initializePage() function:
self.validated_field.setText("")
Thinking about it you could also add the line edit to the display and disable it so that a user cannot update it and then give it a meaningful completion message to act as a status update...
self.validated_field = QLineEdit("")
self.validated_field.setDisabled(True)
self.validated_field.setStyleSheet("border:0;background-color:none")
self.main_layout.addWidget(self.validated_field)
self.registerField('validated*', self.validated_field)
and then when you update it..
self.validated_field.setText("Logged in")

Flush separate Castle ActiveRecord Transaction, and refresh object in another Transaction

I've got all of my ASP.NET requests wrapped in a Session and a Transaction that gets commited only at the very end of the request.
At some point during execution of the request, I would like to insert an object and make it visible to other potential threads - i.e. split the insertion into a new transaction, commit that transaction, and move on.
The reason is that the request in question hits an API which then calls back into another one of my pages (near-synchronously) to let me know that it processed; this double-submits a transaction record, because the original request had not yet finished and thus had not committed its transaction record.
So I've tried wrapping the insertion code with a new SessionScope, TransactionScope(TransactionMode.New), combination of both, flushing everything manually, etc.
However, when I call Refresh on the object I'm still getting the old object state.
Here's some code sample for what I'm seeing:
Post outsidePost = Post.Find(id); // status of this post is Status.Old
using (TransactionScope transaction = new TransactionScope(TransactionMode.New))
{
    Post p = Post.Find(id);
    p.Status = Status.New; // new status set here
    p.Update();
    SessionScope.Current.Flush();
    transaction.Flush();
    transaction.VoteCommit();
}
outsidePost.Refresh();
// refresh doesn't get the new status; status is still Status.Old
Any suggestions, ideas, and comments are appreciated!
I've had a similar problem before, related to isolation levels. The default isolation level was set to "Snapshot", and when running on SQL Server, this meant that the first transaction would not see anything that changed since it started. Maybe it's your isolation level?
If not, try creating a brand-new TransactionScope straight after disposing the one above, and see if you can read it back as New. If you can't, it's probably nothing to do with the outside transaction.
Hope that helps.
Marcus
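As a quick diagnostic sketch of that suggestion, reusing the types from the question's own sample (illustrative only, not a guaranteed fix):
// After disposing the first TransactionScope, open a brand-new one and re-read the row.
using (TransactionScope check = new TransactionScope(TransactionMode.New))
{
    Post reread = Post.Find(id);
    bool committedVisible = (reread.Status == Status.New);
    check.VoteCommit();
    // If committedVisible is false here, the outer request-level transaction is
    // not the culprit; the inner commit itself isn't reaching the database.
}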

Need suggestion for ASP.Net in-memory queue

I have a requirement to create an HttpHandler that will serve an image file (a simple static file) and also insert a record into a SQL Server table (e.g. http://site/some.img, where some.img is an HttpHandler). I need an in-memory object (like a generic List) that I can add items to on each request (I also have to handle a few hundred or thousand requests per second), and I should be able to unload this in-memory object into the SQL table using SqlBulkCopy.
List --> DataTable --> SqlBulkCopy
I thought of using the Cache object: create a generic List, save it in HttpContext.Cache, and add a new item to it on every request. This will NOT work, as the CacheItemRemovedCallback fires right away when the HttpHandler tries to add a new item. I can't use the Cache object as an in-memory queue.
Can anybody suggest anything? Would I be able to scale in the future if the load increases?
Why would CacheItemRemovedCallback fire when you ADD something to the queue? That doesn't make sense to me... Even if it does fire, there's no requirement to do anything here. Perhaps I am misunderstanding your requirements?
I have quite successfully used the Cache object in precisely this manner. That is what it's designed for and it scales pretty well. I stored a Hashtable which was accessed on every app page request and updated/cleared as needed.
Option two... do you really need the queue? SQL Server will scale pretty well also if you just want to write directly into the DB. Use a shared connection object and/or connection pooling.
How about just using a generic List to store requests and a different thread to do the SqlBulkCopy (see the sketch below)?
That way, storing requests in the list won't block the response for too long, and the background thread can update SQL on its own schedule, say every 5 minutes or so.
You can even base the background thread on the Cache mechanism by performing the work in a CacheItemRemovedCallback:
just insert some object with a removal time of 5 minutes and reinsert it at the end of the processing work.
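A minimal sketch of the list-plus-background-thread idea; the destination table, column, and the 5-minute interval are assumptions, not from the question:
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

public static class HitQueue
{
    private static readonly object _sync = new object();
    private static List<string> _pending = new List<string>();

    // Called from the HttpHandler on every request.
    public static void Add(string imageName)
    {
        lock (_sync) { _pending.Add(imageName); }
    }

    // Started once (e.g. from Application_Start) on a background thread.
    public static void FlushLoop(string connectionString)
    {
        while (true)
        {
            Thread.Sleep(TimeSpan.FromMinutes(5)); // assumed interval

            List<string> batch;
            lock (_sync)
            {
                batch = _pending;
                _pending = new List<string>(); // swap so requests keep appending cheaply
            }
            if (batch.Count == 0) continue;

            var table = new DataTable();
            table.Columns.Add("ImageName", typeof(string));
            foreach (var name in batch) table.Rows.Add(name);

            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "dbo.ImageHits"; // assumed table
                bulk.ColumnMappings.Add("ImageName", "ImageName");
                bulk.WriteToServer(table);
            }
        }
    }
}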
Thanks Alex & Bryan for your suggestions.
Bryan: When I try to replace the List object in the Cache for the second request (now the count should be 2), the CacheItemRemovedCallback gets fired, because I'm replacing the current Cache object with a new one. Initially, I also thought this was weird behavior, so I'll have to look deeper into it.
Also, for the second suggestion, I will try inserting records directly (with the cached SqlConnection object) and see what performance I get under the stress test. I doubt I'll get fantastic numbers, as it's an I/O operation.
I'll keep digging on my side for an optimal solution meanwhile with your suggestions.
You can create a conditional requirement within the callback to ensure you are working on a cache entry that has been hit from an expiration instead of a remove/replace (in VB since I had it handy):
Private Shared Sub CacheRemovalCallbackFunction(ByVal cacheKey As String, ByVal cacheObject As Object, ByVal removalReason As Web.Caching.CacheItemRemovedReason)
    Select Case removalReason
        Case Web.Caching.CacheItemRemovedReason.Expired, Web.Caching.CacheItemRemovedReason.DependencyChanged, Web.Caching.CacheItemRemovedReason.Underused
            ' By leaving off Web.Caching.CacheItemRemovedReason.Removed, this will exclude items that are replaced or removed explicitly (Cache.Remove) '
    End Select
End Sub
Edit Here it is in C# if you need it:
private static void CacheRemovalCallbackFunction(string cacheKey, object cacheObject, System.Web.Caching.CacheItemRemovedReason removalReason)
{
    switch (removalReason)
    {
        case System.Web.Caching.CacheItemRemovedReason.DependencyChanged:
        case System.Web.Caching.CacheItemRemovedReason.Expired:
        case System.Web.Caching.CacheItemRemovedReason.Underused:
            // This excludes System.Web.Caching.CacheItemRemovedReason.Removed, which is triggered when you overwrite a cache item or remove it explicitly (e.g., HttpRuntime.Cache.Remove(key))
            break;
    }
}
To expand on my previous comment... I get the picture you are thinking about the cache incorrectly. If you have an object stored in the Cache, say a Hashtable, any update/storage into that Hashtable will be persisted without you explicitly modifying the contents of the Cache. You only need to add the Hashtable to the Cache once, either at application startup or on the first request.
If you are worried about the bulk copy and page-request updates happening simultaneously, then I suggest you simply have TWO cached lists. Have one be the list which is updated as page requests come in, and one list for the bulk copy operation. When one bulk copy is finished, swap the lists and repeat. This is similar to double-buffering video RAM for video games or video apps.

Page View Counter like on StackOverFlow

What is the best way to implement the page view counter like the ones they have here on the site where each question has a "Views" counter?
Factoring in Performance and Scalability issues.
I've made two observations on the stackoverflow views counter:
There's a link element in the header that handles triggering the count update. For this question, the markup looks like this:
<link href="/questions/246919/increment-view-count" type="text/css" rel="stylesheet" />
I imagine you could hit that url to update the viewcount without ever actually viewing the page, but I haven't tried it.
I had a uservoice ticket, where the response from Jeff indicated that views are not incremented from the same ip twice in a row.
The counter I optimized works like this:
UPDATE page_views SET counter = counter + 1 WHERE page_id = x
if (affected_rows == 0) {
    INSERT INTO page_views (page_id, counter) VALUES (x, 1)
}
This way you run two queries for the first view; later views require only one query.
An efficient way may be:
Store your counters in the Application object; you can persist them to a file/DB periodically and on application close.
Instead of making a database call every time the page is hit, I would increment a counter in a cache object and, depending on how many visits your site gets each day, send the page hits to the database on every 100th hit. This is way faster than updating the database on every single hit (a sketch follows below).
Or another solution is analyzing the IIS log file and updating hits every 30 minutes through a Windows service. This is what I have implemented and it works wonders.
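A rough sketch of the every-100th-hit batching; the dictionary, flush threshold, and connection handling are assumptions, while the UPDATE statement mirrors the one suggested in the last answer below:
using System.Collections.Generic;
using System.Data.SqlClient;

public static class ViewCounter
{
    private const int FlushEvery = 100;                   // assumed threshold
    private static readonly object _sync = new object();
    private static readonly Dictionary<int, int> _pending = new Dictionary<int, int>();

    public static void RecordView(int questionId, string connectionString)
    {
        int batched = 0;
        lock (_sync)
        {
            int current;
            _pending.TryGetValue(questionId, out current);
            current++;
            if (current >= FlushEvery)
            {
                batched = current;              // time to flush this question's batch
                _pending.Remove(questionId);
            }
            else
            {
                _pending[questionId] = current; // cheap in-memory increment
            }
        }
        if (batched == 0) return;

        // Every 100th hit for a question, push the whole batch in one UPDATE.
        // Counts still below the threshold are lost if the app recycles,
        // which is usually acceptable for a view counter.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Questions SET views = views + @n WHERE QuestionID = @id", conn))
        {
            cmd.Parameters.AddWithValue("@n", batched);
            cmd.Parameters.AddWithValue("@id", questionId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}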
You can implement an IHttpHandler to do that.
I'm a fan of @Guillaume's style of implementation. I use a transparent GIF handler and in-memory queues to batch up sets of changes that are then periodically flushed using a separate thread created in global.asax.
The handler implements IHttpHandler, processes the request parameters (e.g. page id, language, etc.), updates the queue, then Response.Writes the transparent GIF.
By moving persistent changes to a separate thread from the user request, you also deal much better with potential serialization issues from running multiple servers, etc.
Of course you could just pay someone else to do the work too e.g. with transparent gifs.
For me the best way is to have a field in the question table and update it when the question is accessed
UPDATE Questions SET views = views + 1 WHERE QuestionID = x
Application object: IMO this is not scalable, because it may end up consuming lots of memory as more questions are accessed.
Page_views table: no need for it; you'd have to do a costly join afterwards.
