I need to know whether I'll have to take other measures to make sure the backup is complete (or at least not malformed), or whether I can safely rely on .dump returning an up-to-date dump that I can later use to restore the database. For instance, if I run .dump at the same moment someone else performs an insert/update, what happens?
The sqlite3 tool uses a transaction around the entire execution of the .dump command, so it's atomic.
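For example, a minimal sketch of dumping and restoring from the shell (the file names here are placeholders):
$ sqlite3 app.db ".dump" > app.sql
$ sqlite3 restored.db < app.sql
Because the dump runs inside a single transaction, it reflects one consistent snapshot even if other connections write while it runs.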
As far as I can tell both commands create a clone of the db in SQLite format in the end, so why are there two commands to do this?
.backup uses the SQLite Backup API to create a clone atomically, even if the database is in use.
.clone copies the database by simply running SQL commands. As far as I can tell, no transaction is taken on the source database while cloning, so it risks capturing partially updated data midway through the clone.
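For reference, both commands are run from the sqlite3 shell; a minimal sketch (the target file names are placeholders):
sqlite> .backup backup.db
sqlite> .clone clone.db
.backup produces an atomic snapshot via the Backup API, while .clone replays the schema and rows as SQL against the new file.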
A process crashed while trying to commit an insert of 2-3 million rows. Now, I'm finding the database file is locked for all activity, including reading, dumping, and running an integrity check.
Can I unlock this database in some way or get sqlite3 to recover the file?
Is there any way of recovering the contents of this database file? I do have a backup from 24 hours ago, but a variety of other data will be lost if I have to do a complete rollback to an older version of the database.
Code below:
$ sqlite3 dbFile.sqlite
SQLite version 3.19.3 2017-06-08 14:26:16
Enter ".help" for usage hints.
sqlite> PRAGMA integrity_check;
Error: database is locked
sqlite> .tables
Error: database is locked
sqlite> SELECT * FROM dbTable LIMIT 1;
Error: database is locked
Running fuser on the database file indicates that no processes are currently accessing the file. There is a hot journal in the same directory (i.e. dbFile.sqlite-journal exists).
Using stat, I see that the folder in question is of type panfs, which seems likely to be the cause of the issue. What can I do here?
Notably, this is not a duplicate of this question, as the database is (1) not locked by any running process, as mentioned above, (2) locked for reading as well, and (3) due to the full read/write lock, the .dump solution in this answer is inapplicable (as .dump fails).
Closest related question is this question, although that one is on a different NFS filesystem.
Apparently, the PanFS server(s) did not notice the crash, and so still report the write lock.
You could try to unmount and re-mount the file system on this machine, if possible.
Alternatively, copy both files (database and journal) elsewhere. When you open that database, SQLite will roll back the partial transaction.
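A minimal sketch of that recovery, assuming a scratch directory such as /tmp/recover (the path is a placeholder):
$ cp dbFile.sqlite dbFile.sqlite-journal /tmp/recover/
$ cd /tmp/recover
$ sqlite3 dbFile.sqlite "PRAGMA integrity_check;"
Opening the copied database lets SQLite replay the hot journal and roll back the partial transaction, free of the stale PanFS lock.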
I've recovered data from a formatted hard drive for use in a lawsuit. The data is Skype logs, which are stored in SQLite3 databases. Unfortunately, the disk was formatted and a new copy of OS X was installed on the drive. I scanned the drive and found the files I am looking for, but it seems that the database I'm after is corrupt.
I tried the following command I found by searching on SO:
$ sqlite3 mydata.db ".dump" | sqlite3 new.db
Unfortunately, dumping this way excludes the table of records I'm looking for (Messages). Since I can get the format of the DB from Skype by just logging in with another account and generating a new main.db for it, do I have any additional options for extracting the contents of the corrupt DB? Failing that, is there a way to export the raw contents of the database in a text file or something? I only care about grabbing certain messages, which I can search for.
When the database is corrupt, the ".dump" command extracts all of the usable information, but then ends with a ROLLBACK because it encountered corruption.
Instead, store the output in a file:
$ sqlite3 mydata.db ".dump" > mydata.dump
Then, you can view the data directly in that file, or you can change the last line from "ROLLBACK" to "COMMIT" using a text editor. After that, you can load the valid portion of the data into a database using:
$ sqlite3 new.db < mydata.dump
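If you'd rather script the edit, a minimal sketch using GNU sed (with macOS/BSD sed, use -i '' instead of -i):
$ sed -i '$ s/^ROLLBACK;.*/COMMIT;/' mydata.dump
$ sqlite3 new.db < mydata.dump
This rewrites only the final line, which sqlite3 emits as "ROLLBACK; -- due to errors" when it hits corruption.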
First, run PRAGMA integrity_check from the command console (in a GUI tool, click the play/run button) and note down the errors so you can repair them separately. Alternatively, try exporting to an SQL file, importing it into a new database, and restarting the database; this generally clears stale files held in the cache. If the above does not work, you can try an SQLite database recovery tool: https://www.recoveryandmanagement.com/repair-sqlite-database-manually/
I'd like to know how the .dump command affects other applications connected to the same database. I'd like to know this for the following journal modes:
DELETE (the default mode)
WAL (write-ahead-logging)
From reading other posts on this forum, I gather that .backup uses SQLite's online backup API. It would be great to have this confirmed as well.
Thanks in advance!
The .dump command reads the contents of the database normally, just as if you ran a series of SELECT queries inside a single transaction.
This means that in the default DELETE mode, other connections cannot write for as long as the dump is running. In WAL mode, writers can proceed concurrently, and the dump still sees a consistent snapshot.
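As an illustration, a minimal sketch of checking (and optionally switching) the journal mode before dumping; the database name is a placeholder:
$ sqlite3 app.db "PRAGMA journal_mode;"
$ sqlite3 app.db "PRAGMA journal_mode=WAL;"
$ sqlite3 app.db ".dump" > app.sql
In DELETE mode, the dump's read transaction blocks writers until it finishes; in WAL mode, writers proceed concurrently and the dump still reads one consistent snapshot.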
How do I make a backup of a PL/SQL db?
The question is do you really want to do it from PL/SQL?
Assuming you are using an Oracle DB, there are commands that will dump your DB into a file. It is dumped in such a way that the DB (tables and all) can be re-created from scratch (so you can even re-create a secondary backup from it [not that I would recommend this way]).
Here is the FAQ
http://www.orafaq.com/wiki/Import_Export_FAQ
Your question requires some clarification.
Do you want to:
a) create a copy of the database for use somewhere else
b) create a backup of the database for backup and recoverability purposes
If you are looking to simply create a copy of the database somewhere else, then the use of the import and export utilities (imp and exp, or impdp and expdp in 10g) should be sufficient.
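For instance, a minimal Data Pump sketch (the credentials, directory object, and file name are all placeholders; a full export also requires the appropriate privileges):
$ expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=full.dmp FULL=y
$ impdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=full.dmp FULL=y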
If you are looking to backup the database for recoverability purposes, then you should really be looking into the use of RMAN, which is Oracle's enterprise backup solution. Docs can be found here: RMAN Quick Start Guide
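As a minimal sketch of what an RMAN backup looks like (run as a user with SYSDBA/SYSBACKUP access; see the docs above for configuration):
$ rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;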
When I do physical backups I use RMAN, not PL/SQL, since it's THE tool for this kind of job. However, here's a link that may help you: http://psst0101.wordpress.com/2008/01/23/move-a-tablespace/
The EXP command will do this job.
Syntax:
EXP schema_user_name/schema_pwd file=file_name.dmp
You can even export (take a backup of) individual DB objects.
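For example, a minimal sketch exporting just a couple of tables (the credentials, table names, and file name are placeholders):
EXP scott/tiger TABLES=(emp,dept) FILE=tables.dmp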