I'm trying to back up my SQLite database from a cron job that runs at 5-minute intervals. The database is "live", so queries are running at the time I want to perform the backup.
I want to be sure that the database is in good shape when I back it up, so that I can rely on the backup.
My current strategy (a shell function run from cron):
backup() {
    # Try to acquire the lock for up to 2 seconds, then check the database integrity
    if [ "$(sqlite3 mydb.sqlite '.timeout 2000' 'PRAGMA integrity_check;')" = "ok" ]; then
        # Perform the backup to backup.sqlite
        if sqlite3 mydb.sqlite '.timeout 2000' '.backup backup.sqlite'; then
            # Check the consistency of the backup database
            if [ "$(sqlite3 backup.sqlite 'PRAGMA integrity_check;')" = "ok" ]; then
                return 0
            fi
        fi
    fi
    return 1
}
Now, there are some problems with my strategy:
If the live database is locked, I run into problems because I cannot perform the backup. Maybe a transaction could help there?
If something goes wrong between the PRAGMA integrity_check; and the backup, I'm f*cked.
Any ideas? And by the way, what is the difference between sqlite3's .backup and a good old cp mydb.sqlite mybackup.sqlite?
[edit] I'm running Node.js on an embedded system, so if someone suggests the SQLite online backup API with some Ruby wrapper - no chance ;(
If you want to back up while queries are running, you need to use the backup API. The documentation has a worked-out example of an online backup of a running database (example 2). I don't understand the Ruby reference: you can integrate the API into your own program, or run it as a small C program alongside the real application -- I've done both.
An explicit integrity_check on the backup is overkill. The backup API guarantees that the destination database is consistent and up-to-date. (The flip side of that coin is that if you update the DB too often while a backup is running, the backup might never finish.)
It is possible to use cp to make a backup, but not of a running database. You need to hold an exclusive lock for the entire duration of the copy, so it's not really 'live'. You also need to be careful to copy all of SQLite's temp files (such as the journal) as well as the main database file.
I'd expect the sqlite3 ".backup" command to use the backup API.
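For what it's worth, here is a minimal sketch of driving the backup API from Python's sqlite3 module, which exposes it as Connection.backup; the file names are placeholders:
import sqlite3

# Online backup of a live database via the backup API.
src = sqlite3.connect('mydb.sqlite')
dst = sqlite3.connect('backup.sqlite')
# pages=100 copies the database in chunks so writers are only blocked
# briefly; sleep=0.25 pauses between chunks so writers can make progress.
src.backup(dst, pages=100, sleep=0.25)
dst.close()
src.close()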
If you cannot use the backup API, you must use another mechanism to prevent the database file from being modified while you're copying it.
Start a transaction with BEGIN IMMEDIATE:
After a BEGIN IMMEDIATE, no other database connection will be able to write to the database or do a BEGIN IMMEDIATE or BEGIN EXCLUSIVE. Other processes can continue to read from the database, however.
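A rough sketch of that approach in Python, assuming the database uses the default rollback journal (not WAL) and with placeholder file names:
import shutil
import sqlite3

# Open in autocommit mode so we control the transaction ourselves.
conn = sqlite3.connect('mydb.sqlite', isolation_level=None)
# Take a reserved lock: readers continue, but writers are blocked
# until the transaction ends.
conn.execute('BEGIN IMMEDIATE')
try:
    shutil.copy('mydb.sqlite', 'backup.sqlite')
finally:
    conn.rollback()  # nothing was changed; this just releases the lock
    conn.close()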
Related
I am modifying a project that opens a sqlite database through sqlalchemy.
I am running multiple processes of this application, which seems to cause some (locking?) issues: the processes stall waiting for I/O (state D in top).
The database is only queried/read, never written to. Unfortunately, the 5 GB database file lives on an NFS directory, so reading is not instantaneous.
Can I set the database to read-only? Will sqlite/sqlalchemy then avoid the locking/transaction mechanism? Setting isolation_level="READ UNCOMMITTED" does not seem to have been enough, and it's not clear to me from the documentation how and whether I should set SERIALIZABLE.
Alternatively, I guess I could copy the DB to memory first, but it is not clear how to hand the in-memory database over to sqlalchemy. Should I copy it on connect?
I was able to solve it by copying the database into memory, replacing
ENGINE = create_engine('sqlite:///' + DATABASE_FILE)
with:
# An empty path gives an in-memory SQLite database
ENGINE = create_engine('sqlite:///')
import sqlite3
# Open the file read-only, so no lock is taken on the NFS share
filedb = sqlite3.connect('file:' + DATABASE_FILE + '?mode=ro', uri=True)
print("loading database to memory ...")
# Copy the file database into the engine's in-memory connection
filedb.backup(ENGINE.raw_connection().connection)
print("loading database to memory ... done")
filedb.close()
followed by
from sqlalchemy.orm import declarative_base, sessionmaker

BASE = declarative_base()
SESSION = sessionmaker(bind=ENGINE)
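Sessions created this way then run against the in-memory copy; a quick usage sketch (MyModel stands in for whichever mapped class the project actually defines):
session = SESSION()
# rows = session.query(MyModel).all()  # MyModel is a placeholder mapped class
session.close()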
The goal is to complete an online backup while other processes write to the database.
I connect to the sqlite database via the command line, and run
.backup mydatabase.db
During the backup, another process writes to the database and I immediately receive the message
Error: database is locked
and the backup disappears (reverts to a size of 0).
During the backup process there is a journal file, although it never gets very large. I checked that the journal_size_limit pragma is set to -1, which I believe means it is unlimited. My understanding is that writes to the database should go to the journal during the backup process, but maybe I'm wrong. I'm new to sqlite and to databases in general.
Am I going about this the wrong way?
If the sqlite3 .backup command fails with "Error: database is locked", give it a busy timeout:
sqlite3 source.db ".timeout 10000" ".backup backup.db"
See also Increase the lock timeout with sqlite, and what is the default values? for more about default timeouts (spoiler: the default is zero). And with the backup problem solved, you can switch SQLite to WAL mode, which lets readers proceed concurrently with a writer.
//writing this as an answer so it would be easier to google this, thanks guys!
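If it helps, here is a minimal sketch of both settings from Python, with a placeholder file name; note that journal_mode=WAL is a persistent, one-time change, while busy_timeout applies per connection:
import sqlite3

conn = sqlite3.connect('source.db')
# Switch to write-ahead logging: a persistent change that lets readers
# (and the backup) proceed alongside a single writer.
conn.execute('PRAGMA journal_mode=WAL;')
# Wait up to 10 seconds for a lock instead of failing immediately.
# Unlike journal_mode, this applies only to this connection.
conn.execute('PRAGMA busy_timeout=10000;')
conn.close()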
Why would the TrackedMessages_Copy_BizTalkMsgBoxDb SQL Agent job start failing with "Query processor could not produce a query plan"?
Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN. [SQLSTATE 42000] (Error 8622).
Our SQL guys are talking about amending the stored proc, but we've told them to treat the BizTalk databases as a black box.
It should go without saying, but before anything else, make sure to back up your databases. In fact, if your regular backup jobs are running, you may be able to restore a backup and compare things to when it was still working on this server. That said:
Check the SQL Agent Job to make sure no additional steps have been added/no plan has been forced/no hints are being used; it should just have one step called 'Purge' that calls the procedure below with the DB server and DTA database name as parameters.
Check the procedure (BizTalkMsgBoxDb.dbo.bts_CopyTrackedMessagesToDTA) to make sure it hasn't been altered.
If this is a production or otherwise sensitive box, back up the DBs and restore them to a local dev environment before proceeding!
If this is not production, see if you can run the procedure directly in SSMS (perhaps in a transaction that you roll back). See if you get any better errors, and add print statements to find out exactly where it's hitting conflicting hints.
If the procedure won't run, consider freeing the procedure cache (DBCC FREEPROCCACHE) and seeing if the procedure will run.
If it runs in your dev environment from a backup, you may have to start looking at server/database settings, though I can't think of any off the top of my head that would cause this error.
For what it's worth, well-intentioned DBAs break BizTalk frequently. They decide that an index is missing or not properly covering, or that security could be improved, or that the databases should be treated like the other databases they administer. I've seen DBAs do really silly things to the BizTalk databases that get very hard to diagnose.
Did you try updating the statistics on the database tables referenced by the stored procedure (which is run by the SQL Server Agent job)? The query planner uses them to decide how best to execute your SQL.
I have created a C# windows application with vs2010 and I'm using a SQL Server CE database. I'm looking for a way to backup my database programmatically.
Is there a way to export/import my entire database (as a .sdf file), or to just copy it to another location and later import it and replace the current one? Could anyone provide the code to do this?
I'm relatively new to this, but I'm guessing it is not as difficult as it sounds. I couldn't find a clear answer anywhere, so any help would be appreciated!
For this task we use the SQL Server Management Objects (SMO), which expose backup/restore functions to .NET code.
Based on the article How to: Back Up Databases and Transaction Logs, here is the part of the sample that performs the backup:
// Assumes using directives for Microsoft.SqlServer.Management.Smo and
// Microsoft.SqlServer.Management.Common, and that backupPath, databaseName,
// and server (a connected Server object) are defined elsewhere.

// Create the Backup SMO object to manage the execution
Backup backup = new Backup();
// Add the file to back up to
backup.Devices.Add(new BackupDeviceItem(backupPath, DeviceType.File));
// Set the name of the database to back up
backup.Database = databaseName;
// Tell SMO that we are backing up a full database, not an incremental
backup.Action = BackupActionType.Database;
backup.Incremental = false;
// Specify that the log must be truncated after the backup is complete
backup.LogTruncation = BackupTruncateLogType.Truncate;
// Begin execution of the backup
backup.SqlBackup(server);
As per this how-to, I've successfully configured IIS on my XP SP3 dev box to have SQL Server 2008 Express save ASP.NET session state information. I'm only using SQL Server because I was otherwise losing the session state on every recompile, which was obnoxious (having to re-login). But I'm facing an annoying issue: every time I restart SQL Server I get this error, and sometimes one or two other very similar friends:
The SELECT permission was denied on the object 'ASPStateTempSessions',
database 'tempdb', schema 'dbo'.
To fix the error, I open Management Studio, edit the User Mapping for the login/dbo I'm using on the ASPState DB, and re-add tempdb to that user with all but the deny permissions. Apparently, once the right permissions are there, ASP.NET is able to create the tables it uses automatically; it just can't run the CreateTempTables sproc until the right security is in place.
THE QUESTION...
Is there a way to not have to re-do this on every restart of the SQL Server?
I don't really care right now about keeping the temp data across restarts, but I would like to avoid this manual step just to get my web app, which uses session state throughout, working on localhost. I suppose one could use some kind of stored procedure within SQL Server that runs when the service starts, to avoid doing it manually; I'd accept such an answer as a quick fix. But I'm also assuming there's a better recommended configuration. I don't see an answer to this in the how-to guide or elsewhere here on StackOverflow.
Both answers seem valid, but as with most things Microsoft, it's all in the setup...
First uninstall the ASPState database by using the command:
aspnet_regsql -ssremove -E -S .
Note:
-E indicates that you want to use an integrated security connection.
-S specifies which SQL Server and instance to use; the "." (dot) means the default local instance.
Then re-install using the command:
aspnet_regsql -ssadd -sstype p -E -S .
Note:
The -sstype switch has three options: t | p | c. The first, "t", tells the installer to host all stored procedures in the ASPState database and all data in tempdb. The second, "p", tells the installer to persist the data to the ASPState database as well. The last, "c", allows you to specify a different 'custom' database to persist the session state data.
If you reinstall using -sstype p, you then need only grant datareader/datawriter on the ASPState database to the user making the connection (in most cases, the application pool's identity in IIS).
The added benefit of persisting the data is that session state survives a restart of the service. The only drawback is that you need to ensure the agent cleanup job prunes old sessions regularly (it does this by default, every minute).
Important:
If you are running a cluster, you must persist session data; your only option is sstype 'p' or 'c'.
Hope this sheds light on the issue!
For the record, I did find a way to do this.
The issue is that tempdb is recreated from the model database each time the service restarts. The gist of the solution is to create a stored procedure that grants the needed permissions, and then have that procedure run at startup.
Source code (credit to the link above) is as follows:
use master
go
-- remove an old version
drop proc AddAppTempDBOwner
go
-- the sp; note that 'app' is the login that needs access to tempdb
create proc AddAppTempDBOwner as
declare @sql varchar(200)
select @sql = 'use tempdb' + char(13) + 'exec sp_addrolemember ''db_owner'', ''app'''
exec (@sql)
go
-- add it to the startup
exec sp_procoption 'AddAppTempDBOwner', 'startup', 'true'
go
Well done for finding the strangest way possible to do this.
The correct answer is as follows:
use master
go
EXEC sp_configure 'Cross DB Ownership Chaining', '1'
go
RECONFIGURE
go
EXEC sp_dboption 'ASPState', 'db chaining', 'true'
go