I have a web method exposed in a web service in ASP.NET. On a request I fetch a record from the database and start a transaction; within the transaction I update the record, perform other operations, and commit. Now another request comes in to the web method and reads the same record while the first transaction is still running.
I am using dirty reads with (NOLOCK); if I remove NOLOCK, the request times out. I am using ASP.NET with VB and SQL Server 2008 R2.
Try to lock the record only when you are ready to update, and keep the lock time to a minimum. If you need to detect whether the record was updated between the read and the write, grab a timestamp when reading the record and check whether it has changed when you are ready to write. If the timestamp is not the same, some other thread updated the record and your changes are no longer valid.
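In SQL Server this is commonly done with a rowversion (timestamp) column. A minimal sketch, assuming a hypothetical dbo.Orders table (the @-parameters would come from your application):

    -- Hypothetical table: dbo.Orders(OrderId INT PRIMARY KEY, Status VARCHAR(20), RowVer ROWVERSION)

    -- 1. Read the record together with its current version stamp.
    SELECT OrderId, Status, RowVer
    FROM dbo.Orders
    WHERE OrderId = @OrderId;

    -- 2. When writing, update only if nobody changed the row in the meantime.
    UPDATE dbo.Orders
    SET Status = @NewStatus
    WHERE OrderId = @OrderId
      AND RowVer = @RowVerFromRead;  -- the value captured at read time

    IF @@ROWCOUNT = 0
        RAISERROR('Record was modified by another session; reload and retry.', 16, 1);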
AFAIK, Memcached does not support synchronization with a database (at least not SQL Server or Oracle). We are planning to use Memcached (it is free) with our OLTP database.
In some business processes we do heavy validations that require a lot of data from the database. We cannot keep a static copy of this data because we don't know whether it has been modified, so we fetch it every time, which slows the process down.
One possible solution could be
Write triggers on the database to create/update prefixed-postfixed (table-PK1-PK2-PK3-column) files whenever records change
Monitor these file changes using FileSystemWatcher and expire the key (table-PK1-PK2-PK3-column) so updated data is fetched
Problem: There would be around 100,000 users using any combination of data for 10 hours. So we will end up having a lot of files e.g. categ1-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-78-data250, categ2-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-33-data100, etc.
I am expecting 5 million files at least. Now it looks like a pathetic solution :(
Other possibilities are
Call a web service asynchronously from the trigger, passing the key to be expired
Call an exe from the trigger without waiting for it to finish, and have this exe expire the key. (I have had some success with this approach on SQL Server, using xp_cmdshell to call an exe; calling an exe from an Oracle trigger looks a bit difficult.)
Still sounds pathetic, doesn't it?
Any intelligent suggestions, please?
It's not clear (to me) whether the use of Memcached is mandatory. I would personally avoid it and instead use SqlDependency and OracleDependency. Both allow you to pass a DB command and get notified when the data that the command would return changes.
If Memcached is mandatory, you can still use these two classes to trigger the invalidation.
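One thing worth noting if you try SqlDependency: query notifications are delivered through Service Broker, so it must be enabled on the database first. A sketch, with MyDb and MyAppUser as placeholder names:

    -- Enable Service Broker so SqlDependency query notifications can be delivered.
    ALTER DATABASE MyDb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

    -- The account the application connects as also needs notification permissions, e.g.:
    GRANT SUBSCRIBE QUERY NOTIFICATIONS TO [MyAppUser];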
MS SQL Server has a "Change Tracking" feature that may be of use to you. You enable the database for change tracking and configure which tables you wish to track. SQL Server then creates change records on every update, insert, and delete on a table, and lets you query for changes made since the last time you checked. This is very useful for syncing changes, and it is more efficient than using triggers. It's also easier to manage than making your own tracking tables. This has been a feature since SQL Server 2008.
How to: Use SQL Server Change Tracking
Change tracking only captures the primary keys of the tables and lets you query which columns might have been modified. You can then join the tables on those keys to get the current data. If you want to capture the data as well, you can use Change Data Capture, but it requires more overhead and at least SQL Server 2008 Enterprise edition.
Change Data Capture
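For what it's worth, the change tracking flow described above looks roughly like this in T-SQL (MyDb and dbo.Products are placeholder names):

    -- Enable change tracking at the database level (retention settings are illustrative).
    ALTER DATABASE MyDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

    -- Enable tracking for a table, including which columns were touched.
    ALTER TABLE dbo.Products ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

    -- Later: fetch rows changed since the version recorded on the previous sync.
    DECLARE @last_sync_version BIGINT;
    -- @last_sync_version should hold the value of CHANGE_TRACKING_CURRENT_VERSION()
    -- saved at the end of the previous sync.
    SELECT ct.ProductId, ct.SYS_CHANGE_OPERATION, p.Name, p.Price
    FROM CHANGETABLE(CHANGES dbo.Products, @last_sync_version) AS ct
    LEFT JOIN dbo.Products AS p ON p.ProductId = ct.ProductId;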
I have no experience with Oracle, but I believe it also has some tracking functionality. This article might get you started:
20 Using Oracle Streams to Record Table Changes
If I have a very long-running UPDATE query that takes hours, and I happen to cancel it in the middle of its run, I get the message below:
"User requested cancel of current operation"
Will Oracle automatically roll back the transactions?
Will DB lock be still acquired if I cancel the query? If so, how to unlock?
How to check which Update query is locking the database?
Thanks.
It depends.
Assuming that whatever client application you're using properly implemented a query timeout and that the error indicates that the timeout was exceeded, then Oracle will begin rolling back the transaction when the error is thrown. Once the transaction finishes rolling back, the locks will be released. Be aware, though, that it can easily take at least as long to roll back the query as it took to run. So it will likely be a while before the locks are released.
If, on the other hand, the client application hasn't implemented the cancellation properly, the client may not have notified Oracle to cancel the transaction so it will continue. Depending on the Oracle configuration and exactly what the client does, the database may detect some time later that the application was not responding and terminate the connection (going through the same rollback process discussed above). Or Oracle may end up continuing to process the query.
You can see what sessions are holding locks and which are waiting on locks by querying dba_waiters and dba_blockers.
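For example, something along these lines (run as a suitably privileged user) shows who is blocking whom; exact columns may vary by Oracle version:

    -- Sessions holding locks that other sessions are waiting on.
    SELECT w.holding_session  AS blocker_sid,
           w.waiting_session  AS waiter_sid,
           w.lock_type,
           w.mode_held,
           w.mode_requested
    FROM   dba_waiters w;

    -- Cross-reference with v$session to see the user and the SQL being run.
    SELECT s.sid, s.serial#, s.username, s.sql_id
    FROM   v$session s
    WHERE  s.sid IN (SELECT holding_session FROM dba_waiters);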
After many years of programming, I need to do something asynchronously for the very first time (because it takes several minutes and the web page times out -- don't want the user waiting that long anyway). This action is done by only a few people but could be done a few times per day (for each of them).
From a "Save" click on an ASP.NET web page using LINQ, I'm inserting a record into a SQL Server table. That then triggers an SSIS package to push that record out to several other databases around the country.
So...
How can I (hopefully simply) make this asynchronous so that the user can get on with other things?
Should this be set up on the .NET side or on the SQL side?
Is there a way (minutes later) for the user to know that the process has completed successfully? Maybe an email? I'm not sure how else the user can know it finished fine.
I read some threads on this site about this, but they were from 2009, so I'm not sure whether much has changed with Visual Studio 2012/.NET Framework 4.5 (we're still using SQL Server 2008 R2).
It is generally a bad idea to perform long-running tasks in ASP.Net. For one thing, if the application pool is recycled before the task completes, it would be lost.
I would suggest writing the request to a database table, and using a separate Windows Service to do the long-running work. It could update a status column in the database table that could be checked at a later time to see if the task completed or not, and if there was an error.
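One possible shape for such a table, plus the claim step the service could run when polling (all names are illustrative):

    -- Hypothetical work-queue table polled by the Windows Service.
    CREATE TABLE dbo.PushRequests
    (
        RequestId    INT IDENTITY(1,1) PRIMARY KEY,
        RecordId     INT            NOT NULL,                    -- record to push out
        Status       VARCHAR(20)    NOT NULL DEFAULT 'Pending',  -- Pending / Running / Done / Failed
        ErrorMessage NVARCHAR(2000) NULL,
        CreatedAt    DATETIME       NOT NULL DEFAULT GETUTCDATE(),
        CompletedAt  DATETIME       NULL
    );

    -- The service claims one pending request at a time; READPAST skips rows
    -- another worker has already locked, avoiding double-processing.
    UPDATE TOP (1) dbo.PushRequests WITH (ROWLOCK, READPAST)
    SET Status = 'Running'
    OUTPUT inserted.RequestId, inserted.RecordId
    WHERE Status = 'Pending';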
You could use Service Broker on the SQL side; it's a SQL Server implementation of message queueing.
Good examples here and here
What you do is create a Service Broker service and define some scaffolding (queues, message types, etc).
Then you create an "activation" procedure, which is basically a stored procedure that consumes messages from the queue. This SP would receive, for example, a message with the ID of a record in a table, and would then go on and do whatever needs to be done to it, perhaps sending an email when it's done, etc.
So from your code-behind, you'd call a simple stored procedure that inserts the user's data into a table, sends a message to the queue with, e.g., the ID of the new record, and then returns immediately. I suppose you should tell the user upfront that this could take a few minutes and that they'll receive an email, etc.
The great thing about Service Broker is message delivery is pretty much guaranteed - even if your SQL Server falls over right after the message is queued, when you bring it back up the activation SP will just kick off again, so it's very robust.
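To make this concrete, here is a condensed sketch of the scaffolding; all names are placeholders, and error/poison-message handling is omitted:

    -- Message type, contract, queues and services.
    CREATE MESSAGE TYPE [//Demo/RecordMsg] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//Demo/RecordContract] ([//Demo/RecordMsg] SENT BY INITIATOR);

    CREATE QUEUE dbo.RecordSendQueue;
    CREATE QUEUE dbo.RecordProcessQueue;

    CREATE SERVICE [//Demo/SendService]    ON QUEUE dbo.RecordSendQueue;
    CREATE SERVICE [//Demo/ProcessService] ON QUEUE dbo.RecordProcessQueue ([//Demo/RecordContract]);
    GO

    -- Activation procedure: consumes one message and processes the referenced record.
    CREATE PROCEDURE dbo.ProcessRecordMessage
    AS
    BEGIN
        DECLARE @handle UNIQUEIDENTIFIER, @body XML;

        RECEIVE TOP (1) @handle = conversation_handle,
                        @body   = CAST(message_body AS XML)
        FROM dbo.RecordProcessQueue;

        IF @body IS NOT NULL
        BEGIN
            DECLARE @recordId INT = @body.value('(/RecordId)[1]', 'INT');
            -- ... do the long-running work for @recordId, send the notification email, etc. ...
            END CONVERSATION @handle;
        END
    END;
    GO

    -- Hook the procedure up as the queue's activation procedure.
    ALTER QUEUE dbo.RecordProcessQueue
    WITH ACTIVATION (STATUS = ON,
                     PROCEDURE_NAME = dbo.ProcessRecordMessage,
                     MAX_QUEUE_READERS = 1,
                     EXECUTE AS OWNER);
    GO

    -- From the stored procedure called by the page: queue the work and return.
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Demo/SendService]
        TO SERVICE   '//Demo/ProcessService'
        ON CONTRACT  [//Demo/RecordContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Demo/RecordMsg] (N'<RecordId>42</RecordId>');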
If I have sessions backed by SQL Server and run a command sequence like
HttpContext.Current.Session["user"]
HttpContext.Current.Session["user"]
Will this make two requests to the session DB table to fetch the value, or does ASP.NET do anything special with the Session object to prevent multiple DB hits?
Definitely YES.
I have SQL Server session state set up, and I ran Profiler on it and could clearly see optimized DB calls.
In fact, there are optimizations for getting multiple session items in one shot.
For example, the code below will also result in a SINGLE optimized set of calls (note: it's not a plain single DB call to get a session item):
HttpContext.Current.Session["user"]
HttpContext.Current.Session["userTwo"]
NOTE: Tested in .NET 4
You can implement your own session state provider if you need to:
http://msdn.microsoft.com/en-us/library/ms178587.aspx
How, or maybe where, is the session timeout handled when you set SQL Server as state handler in an ASP.NET application?
Is it the .NET Framework that, after loading session objects from the DB, judges whether or not the objects are expired, or is it a job on the SQL Server itself that takes care of this? The reason I suspect (or even considered) the latter is that the script that created the ASPState database mentioned something about an ASPState_Job_DeleteExpiredSessions element.
If it is indeed a SQL Server job that cleans up, how often does this job run, and how does it align with the timeout parameter in web.config?
From the article linked to by Quantum Elf:
SqlSessionStateStore doesn't actively monitor the Expires field. Instead, it relies on an external agent to scavenge the database and delete expired sessions—sessions whose Expires field holds a date and time less than the current date and time. The ASPState database includes a SQL Server Agent job that periodically (by default, every 60 seconds) calls the stored procedure DeleteExpiredSessions to remove expired sessions.
This means that it's SQL Server that handles expiration and session object purging, the SQL Server Agent job in particular.
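Conceptually, the job's work boils down to something like this (a simplified sketch, not the actual DeleteExpiredSessions source):

    -- Remove all sessions whose Expires stamp has passed.
    DELETE FROM ASPState.dbo.ASPStateTempSessions
    WHERE Expires < GETUTCDATE();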
The ASP.NET session state timeout is still configured in web.config/machine.config regardless of whether you do state in-process or store it in SQL Server.