When I am trying to execute the insert procedure, I am getting the following error in the data access layer: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
Please give me a solution.
While it's possible that you have an extremely long INSERT statement that takes multiple seconds to execute, what seems more likely (especially as you tried increasing the timeout property) is that some other transaction has locked the table you are trying to insert into.
There are dozens of ways you could get into that situation. If this doesn't usually happen, investigate whether someone else could be locking that table. It could be as simple as another developer leaving open a debugging session stopped at a breakpoint while he or she went to lunch.
If this is happening over and over again, every time you run, and there's no lock on the table that you can detect, it's possible that you're conflicting with yourself at runtime: opening a transaction, changing something, and while the transaction is still open, executing a new command against the same database affecting one of the tables affected by the transaction.
If that's the case, you can do a couple of things: re-design how you're doing this two-step process so you only need one connection open at a time, or try doing both activities in the same transaction.
Also, be sure you are doing proper housekeeping with SqlConnection, SqlTransaction, and SqlDataReader objects: close or dispose of them as soon as you can. You might look up the C# using statement if you are not familiar with it.
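For example, here is a minimal sketch of that housekeeping, assuming plain ADO.NET (the connection string, table, and parameter names are hypothetical):
// A minimal sketch. The using blocks guarantee Dispose even if an
// exception is thrown, so the connection (and any locks it holds)
// is released promptly instead of lingering.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    using (var command = new SqlCommand(
        "INSERT INTO Orders (CustomerId) VALUES (@customerId)", connection, transaction))
    {
        command.Parameters.AddWithValue("@customerId", customerId);
        command.ExecuteNonQuery();
        transaction.Commit(); // rolled back automatically on Dispose if we never get here
    }
} // connection closed and returned to the pool here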
From the SQLite FAQ I've learned that:
Multiple processes can have the same database open at the same time. Multiple processes can be doing a SELECT at the same time. But only one process can be making changes to the database at any moment in time, however.
So, as far as I understand I can:
1) Read db from multiple threads (SELECT)
2) Read db from multiple threads (SELECT) and write from single thread (CREATE, INSERT, DELETE)
But, I read about Write-Ahead Logging that provides more concurrency as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently.
Finally, I got completely muddled when I found this passage, which says:
Here are other reasons for getting an SQLITE_LOCKED error:
Trying to CREATE or DROP a table or index while a SELECT statement is still pending.
Trying to write to a table while a SELECT is active on that same table.
Trying to do two SELECTs on the same table at the same time in a multithreaded application, if sqlite is not set up to do so.
An fcntl(3, F_SETLK) call on the DB file fails. This could be caused by an NFS locking issue, for example. One solution for this issue is to mv the DB away, and copy it back so that it has a new inode value.
So, I would like to clarify for myself: when should I avoid locks? Can I read and write at the same time from two different threads? Thanks.
For those who are working with Android API:
Locking in SQLite is done at the file level, which guards changes made from different threads and connections. Thus multiple threads can read the database, but only one can write to it.
More on locking in SQLite can be read in the SQLite documentation, but we are most interested in the API provided by Android.
Writing from two concurrent threads can be done from a single database connection or from multiple connections. Since only one thread can write to the database at a time, there are two variants:
If you write from two threads over one connection, then one thread will wait for the other to finish writing.
If you write from two threads over different connections, then an error will occur: not all of your data will be written to the database, and the application will be interrupted with a SQLiteDatabaseLockedException. It becomes evident that the application should always have only one copy of SQLiteOpenHelper (i.e., one open connection), otherwise a SQLiteDatabaseLockedException can occur at any moment.
Different Connections in a Single SQLiteOpenHelper
Everyone is aware that SQLiteOpenHelper has two methods providing access to the database, getReadableDatabase() and getWritableDatabase(), to read and write data respectively. However, in most cases there is only one real connection. Moreover, it is one and the same object:
SQLiteOpenHelper.getReadableDatabase()==SQLiteOpenHelper.getWritableDatabase()
This means there is no difference between the two methods the data is read through. However, there is another, more important undocumented issue: inside the class SQLiteDatabase there is its own lock, the field mLock. Writes lock at the level of the SQLiteDatabase object, and since there is only one SQLiteDatabase instance for both reading and writing, reads are blocked as well. This is most prominently visible when writing a large volume of data in a transaction.
Let's consider an example of such an application that should download a large volume of data (approx. 7000 rows containing BLOBs) in the background on first launch and save it to the database. If the data is saved inside one transaction, then saving takes approx. 45 seconds, but the user cannot use the application since all reading queries are blocked. If the data is saved in small portions, then the update process drags on for a rather lengthy period of time (10-15 minutes), but the user can use the application without any restrictions or inconvenience. A double-edged sword: either fast or convenient.
Google has already fixed some of the issues related to SQLiteDatabase functionality by adding the following methods:
beginTransactionNonExclusive() – creates a transaction in IMMEDIATE mode.
yieldIfContendedSafely() – temporarily yields the transaction in order to allow other threads to complete their work.
isDatabaseIntegrityOk() – checks database integrity.
Please see the documentation for more details. However, this functionality is needed for older versions of Android as well.
The Solution
First, internal locking should be turned off, to allow reading the data in any situation.
SQLiteDatabase.setLockingEnabled(false);
This cancels the internal query locking at the logic level of the Java class (it is not related to locking in terms of SQLite).
SQLiteDatabase.execSQL("PRAGMA read_uncommitted = true;");
This allows reading data from the cache; in fact, it changes the isolation level. The parameter must be set for each connection anew: if there are several connections, it influences only the connection that issues the command.
SQLiteDatabase.execSQL("PRAGMA synchronous=OFF");
This changes the method of writing to the database: without synchronization. When this option is activated, the database can be damaged if the system unexpectedly fails or power is lost. However, according to the SQLite documentation, some operations are executed 50 or more times faster with synchronization off.
Unfortunately, not every PRAGMA is supported on Android: for example, "PRAGMA locking_mode = NORMAL" and "PRAGMA journal_mode = OFF", among others, are not supported, and attempting to call such a PRAGMA crashes the application.
The documentation for the method setLockingEnabled says it is recommended only if you are sure that all work with the database is done from a single thread. We should guarantee that only one transaction is held at a time. Also, instead of the default transactions (exclusive transactions), immediate transactions should be used. In older versions of Android (below API 11) there is no way to create an immediate transaction through the Java wrapper, but SQLite itself supports this functionality. To begin a transaction in immediate mode, the following SQLite query should be executed directly against the database, for example through the method execSQL:
SQLiteDatabase.execSQL("begin immediate transaction");
Since the transaction is started with a direct query, it should be finished the same way:
SQLiteDatabase.execSQL("commit transaction");
Then the only thing left to implement is a TransactionManager that begins and ends transactions of the required type. The purpose of the TransactionManager is to guarantee that all modifying queries (INSERT, UPDATE, DELETE, and DDL queries) originate from the same thread.
Hope this helps future visitors!
Not specific to SQLite:
1) Write your code to gracefully handle locking conflicts at the application level, even if you wrote your code so that this is 'impossible'. Use transactional retries (i.e., treat SQLITE_LOCKED as one of several codes that mean "try again" or "wait and try again"), and coordinate this with application-level code; a sketch follows this list. If you think about it, getting a SQLITE_LOCKED is better than simply having the attempt hang because it's locked, because you can go do something else.
2) Acquire locks. But you have to be careful if you need to acquire more than one. For each transaction at the application level, acquire all of the resources (locks) you will need in a consistent (e.g., alphabetical) order to prevent deadlocks when locks get acquired in the database. Sometimes you can ignore this if the database will reliably and quickly detect deadlocks and throw exceptions; other systems may just hang without detecting the deadlock, making it absolutely necessary to take the effort to acquire the locks correctly.
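As a sketch of point 1, assuming the Microsoft.Data.Sqlite provider (the retry count and back-off are arbitrary placeholders); error codes 5 (SQLITE_BUSY) and 6 (SQLITE_LOCKED) are the ones treated as "try again":
using System;
using System.Threading;
using Microsoft.Data.Sqlite;

static void ExecuteWithRetry(Action action, int maxAttempts = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            action();
            return;
        }
        catch (SqliteException ex) when (
            (ex.SqliteErrorCode == 5 || ex.SqliteErrorCode == 6) && attempt < maxAttempts)
        {
            // SQLITE_BUSY (5) / SQLITE_LOCKED (6): back off and try again.
            Thread.Sleep(50 * attempt);
        }
    }
}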
Besides these facts of life with locking, you should try to design the data and in-memory structures with concurrent merging and rolling back planned in from the beginning. If you can design the data such that the outcome of a data race gives a good result for all orders, then you don't have to deal with locks in that case. A good example is incrementing a counter without knowing its current value, rather than reading the value and submitting a new value in an update. It's similar for appending to a set (i.e., adding a row such that it doesn't matter in which order the inserts happened).
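For instance, a race-safe counter increment pushes the arithmetic into the database rather than reading first (the table and column names here are made up):
// Assuming an open ADO.NET connection; schema names are hypothetical.
using (var command = connection.CreateCommand())
{
    // Race-safe: the database applies the increment atomically, so the
    // result is correct for any interleaving of concurrent callers.
    command.CommandText = "UPDATE Counters SET Value = Value + 1 WHERE Name = @name";
    var p = command.CreateParameter();
    p.ParameterName = "@name";
    p.Value = "page_views";
    command.Parameters.Add(p);
    command.ExecuteNonQuery();
}
// By contrast, SELECT Value ... followed by UPDATE ... SET Value = @newValue
// is a read-modify-write that loses updates under concurrency.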
A good system is supposed to transactionally move from one valid state to the next, and you can think of exceptions (even in in-memory code) as aborting an attempt to move to the next state; with the option to ignore or retry.
You're fine with multithreading. The page you link lists what you cannot do while you're looping on the results of your SELECT (i.e. your select is active/pending) in the same thread.
I have an application, which runs all the time and receives some messages (rate of them varies from several per second to none per hour). Every message should be put into a SQLite database. What's the best way to do this?
Opening and closing the database on each message doesn't sound good: if there are tens of them per second, it will be extremely slow.
On the other hand, opening the database once and just writing to it can lead to loss of data if the process unexpectedly terminates.
It sounds like whatever you do, you'll have to make a trade-off.
If safety is your top-most concern, then update the database on each message and take the speed hit.
If you want a compromise, then write to the database every so many messages. For instance, maintain a buffer, and on every 100th message, issue an update wrapped in a transaction.
The transaction wrapping is important for two reasons. First, it maximizes speed. Second, it can help you recover from errors if you employ logging.
If you do the batch update above, you can add an additional level of safety by logging each message to a file as it comes in. You reset this log every time a database update is successfully issued. That way, if an update fails, you know it failed on the entire block (since you are using transactions), and your log will have the information that did not update. This allows you to re-issue the update, or even to see whether there was a problem with the data that caused the failure. This of course assumes that keeping a log is cheaper than updating the database, which can be the case depending on how you are connecting.
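A minimal sketch of that buffering scheme, assuming the Microsoft.Data.Sqlite provider (the batch size of 100 and the Messages table are placeholders):
using System.Collections.Generic;
using Microsoft.Data.Sqlite;

// Buffer incoming messages and flush every 100 inside one transaction.
var buffer = new List<string>();

void OnMessage(SqliteConnection connection, string message)
{
    buffer.Add(message);
    if (buffer.Count < 100) return;

    using (var transaction = connection.BeginTransaction())
    {
        var command = connection.CreateCommand();
        command.Transaction = transaction;
        command.CommandText = "INSERT INTO Messages (Body) VALUES (@body)";
        var body = command.Parameters.Add("@body", SqliteType.Text);
        foreach (var msg in buffer)
        {
            body.Value = msg;
            command.ExecuteNonQuery();
        }
        transaction.Commit(); // all 100 rows land atomically
    }
    buffer.Clear();
}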
If your top rate is "several per second" then I don't see a real problem with opening and closing the db. This is especially true if it's critical that the data be recorded right away in case of server failure.
We use SQLite in a reporting product, and the best performance we have been able to eke out is recording rows in blocks of several thousand at a time. Our default setting is around 50k: our app waits until 50k rows of data are collected, then commits them as one transaction.
There is an easy algorithm to adjust your application's behaviour to the message rate (a sketch follows below):
When you have just written a message, check if there is any new message.
If yes, write that message too, and repeat.
Only when you have run out of immediately available messages, commit the transaction and close the database.
In that manner, every message will be saved immediately, unless the message rate becomes too high for that.
Note: closing the database will not increase data durability (that's what transaction commit is for), it will just free up a little bit of memory.
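A rough sketch of that loop, assuming messages arrive on an in-memory queue (the queue and the Messages table are hypothetical):
using System.Collections.Concurrent;
using Microsoft.Data.Sqlite;

// Drain everything that is immediately available, then commit once.
void SaveAvailableMessages(SqliteConnection connection, ConcurrentQueue<string> incoming)
{
    if (incoming.IsEmpty) return;

    using (var transaction = connection.BeginTransaction())
    {
        var command = connection.CreateCommand();
        command.Transaction = transaction;
        command.CommandText = "INSERT INTO Messages (Body) VALUES (@body)";
        var body = command.Parameters.Add("@body", SqliteType.Text);

        // Keep writing while more messages are immediately available.
        while (incoming.TryDequeue(out var message))
        {
            body.Value = message;
            command.ExecuteNonQuery();
        }
        transaction.Commit(); // one commit per burst, so the whole burst is durable at once
    }
}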
Website using .NET Framework v3.5, SQL Server 2008, written in C#
I have a stored procedure which I have added to my DBML by dragging it across from the server explorer.
In its properties it returns an auto-generated type.
The procedure takes < 1 second to run from within SQL Mgmt Studio for all inputs.
However, from the code, for one particular input (which takes < 1 second in Mgmt Studio) it hangs and then throws:
System.Data.SqlClient.SqlException: Timeout expired.
This didn't always happen for this one input! It used to also work fine when called from the code. The last time it didn't work I deleted and re-added the same stored procedure to the DBML. This "fixed" it, and that input ran fine and in the same time as all the others. However this is not an adequate fix! It has happened again and I can't keep deleting and re-adding as required.
I made no changes to the data being returned around the time it was "fixed", so I can't think what the problem could be. Any help on this would be much appreciated!
Exception says it times out but it is not timing out
If it says it's timing out, it's timing out. The only question is "why"?
Run a SQL Server Profiler trace against your database and see what query is actually going to the server. It's possible that another query is being issued too. It's possible there is another transaction interfering in your production scenario.
It turns out that this is parameter sniffing - this is explained in another post: Executing stored proc from DotNet takes very long but in SSMS it is immediate
Also, be sure that the stored procedure is not being held up inside of a transaction, waiting for another process to complete. I just ran across this with a Linq to Sql stored procedure being called multiple times within a transaction. It gave me a timeout expired error and I just realized it was waiting for a previous call to complete, and thus timing out.
I have a .NET page that performs a calculation by calling a stored proc in SQL Server when the user clicks a button. Normally, the calculation takes approximately 2 minutes to complete, and upon completion it updates a field in the table. Now I have an issue: when the user accidentally closes the browser, the stored proc keeps running and updates the table. I thought that the page should stop running and stop updating the field in the table, instead of continuing to run even though the browser was closed? Any ideas how to stop the process?
When a page request results in a call to the database, the page will wait for it to finish, but the database has no knowledge of the page. So if the page stops waiting for whatever reason, the database will happily continue working until finished. In fact, the page request also has no knowledge of whether the browser is still open or not, so if the user closes the browser, the page request itself will still execute until finished. It's only that nobody will listen for the result.
No.
The server knows nothing about whether the browser is still open. The browser just fires the process, and by the time the page is downloaded, its interaction with the server is complete.
You can use a timer ajax call to allow the server to determine if the browser is still open and active.
In order to stop your stored procedure when this happens, you will need to make some major modifications to it. The only real way is to store a flag in the database indicating that the stored procedure should continue running, and then check that flag throughout your stored proc. If the browser is closed, you update this flag, and you can change your stored procedure to roll back or exit based on its value.
However, all of this is fairly complex for very little gain... Is it a problem that the stored procedure continues to run?
If you need to support some form of cancellation for this, you'll need to make a number of changes to your asp.net page and your calculation. Your calculation would have to look for a flag (maybe in the same table where it stored the result).
Then, in your code, you need to fire off the procedure execution asynchronously. Now, in order to do things cleanly, you really need the whole page to process asynchronously, and to wake periodically, check Response.IsClientConnected, and if the client is no longer connected, set the flag to cancel the calculation.
It's a fair chunk of work, and easy to get wrong. Also, your strategy for implementing this would vary wildly depending on whether your application needs to support 10 users or thousands (do you sleep in the .Net thread pool, and thus limit scalability of your application, or have a dedicated thread to poll the IsClientConnected property, and work out which calculation to abort?)
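A rough sketch of the polling side, assuming classic ASP.NET (System.Web); the CalculationFlags table and RunCalculationProc helper are hypothetical stand-ins for whatever flag your stored procedure checks between steps:
// A rough sketch, not production code; Task.Run needs .NET 4.5+.
var work = System.Threading.Tasks.Task.Run(() => RunCalculationProc(calculationId));

while (!work.IsCompleted)
{
    System.Threading.Thread.Sleep(1000); // ties up a request thread; see the scalability caveat above
    if (!Response.IsClientConnected)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "UPDATE CalculationFlags SET Cancelled = 1 WHERE CalculationId = @id", connection))
        {
            command.Parameters.AddWithValue("@id", calculationId);
            connection.Open();
            command.ExecuteNonQuery(); // the stored proc polls this flag and rolls back/exits
        }
        break;
    }
}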
Just curious if anyone else has got this particular error and know how to solve it?
The scenario is as follow...
We have an ASP.NET web application using Enterprise Library running on Windows Server 2008 IIS farm connecting to a SQL Server 2008 cluster back end.
MSDTC is turned on. DB connections are pooled.
My suspicion is that somewhere along the line there is a failed MSDTC transaction, the connection got returned to the pool, and the next query on a different page picked up the misbehaving connection and got this particular error. The funny thing is we got this error on a query that has no need whatsoever for a distributed transaction (no committing to two databases, etc.). We were only doing a SELECT query (no transaction) when we got the error.
We did SQL profiling and the query was run on the SQL Server, but it never came back (since the MSDTC transaction had already been aborted on the connection).
Some other related errors that accompany this are:
New request is not allowed to start because it should come with valid transaction descriptor.
Internal .Net Framework Data Provider error 60.
MSDTC has a default 90-second timeout; if one query's execution exceeds this time limit, you will encounter this error when the transaction tries to commit.
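If the work legitimately needs longer, one option (a sketch, with an illustrative value) is to raise the timeout on the TransactionScope itself; note that the machine-wide maxTimeout still caps whatever you ask for:
// Sketch: raise the per-scope transaction timeout (illustrative 5 minutes).
var options = new System.Transactions.TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromMinutes(5)
};
using (var scope = new System.Transactions.TransactionScope(
    System.Transactions.TransactionScopeOption.Required, options))
{
    // ... run the long query here ...
    scope.Complete(); // commit; without this the transaction aborts on Dispose
}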
A bounty may help get the answer you seek, but you're probably going to get better answers if you give some code samples and give a better description of when the error occurs.
Does the error only intermittently occur? It sounds like it from your description.
Are you enclosing the code that you want done as a transaction in a using TransactionScope block, as Microsoft recommends? This should help avoid weird transaction behavior. Recall that a using block makes sure the object is always disposed regardless of exceptions thrown. See here: http://msdn.microsoft.com/en-us/library/ms172152.aspx
If you're using TransactionScope, there is an argument, System.Transactions.TransactionScopeOption.RequiresNew, that tells the framework to always create a new transaction for this block of code:
Using ts As New Transactions.TransactionScope(Transactions.TransactionScopeOption.RequiresNew)
' Do Stuff
End Using
Also, if you're suspicious that a connection is getting faulted and then put back into the connection pool, the likely solution is to enclose the code that may fault the connection in a Try-Catch block and Dispose the connection in the catch block.
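A minimal C# sketch of that idea; SqlConnection.ClearPool additionally evicts the connection's pool if you suspect a poisoned connection is being reused:
var connection = new SqlConnection(connectionString);
try
{
    connection.Open();
    // ... transactional work that may fault the connection ...
}
catch (SqlException)
{
    // Don't let a connection carrying an aborted transaction go back
    // into the pool for someone else to pick up.
    SqlConnection.ClearPool(connection);
    throw;
}
finally
{
    connection.Dispose();
}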
Old question ... but I ran into this issue in the past few days.
I could not find a good answer until now, so I just wanted to share what I found out.
My scenario involves multiple sessions being opened by multiple session factories. I had to correctly roll back, wait, and make sure the other transactions were no longer active. It seems that just rolling back one of them will roll back everything.
After adding a Thread.Sleep() between the rollbacks, they no longer interfere with each other and the rollback completes fine. Subsequent hits that trigger the method don't result in the "New request is not allowed to start because it should come with valid transaction descriptor." error.
https://gist.github.com/josephvano/5766488
I have seen this before and the cause was exactly what you thought. As Rice suggested, make sure that you are correctly disposing of the db related objects to avoid this problem.