As far as I can tell, both commands end up creating a clone of the database in SQLite format, so why are there two commands for this?
.backup uses the SQLite Backup API to create a clone atomically, even if the database is in use.
.clone copies the database by just running SQL commands. As far as I can tell, no transaction is taken on the source database while the clone runs, so it can pick up partially updated data midway through the clone.
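As a quick illustration, here is how the two commands look when invoked from the shell (the file names are just examples):

sqlite3 app.db '.backup backup_copy.db'   # atomic snapshot via the Online Backup API
sqlite3 app.db '.clone clone_copy.db'     # row-by-row copy driven by plain SQL statements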
I have a website solution that uses one SQLite database for each tenant account. Without going into much depth about why, we chose this design for SQLite's support of distributed/offline systems.
All databases are manipulated using the same PHP file structure. I want to upgrade every account's database iteratively so that they all end up at the same schema version number.
I have a script that loops over each database and can use either PHP (Yii) or the shell to execute queries.
I would like to wrap the manipulation of each database in a transaction. It appears as though DDL commands may already be auto-commit in nature.
Question: how can I perform a SQLite DB upgrade that, if it fails, reports the failure? Using that report, I could have the system re-attempt the upgrade or report an error to an admin. Ideally, I would like to wrap the whole upgrade in a transaction to prevent inconsistencies, but I'm fairly certain that this is not possible.
One thought I had was to back up the database temporarily during the upgrade and delete the backup on success.
As CL stated in the comments, DDL is fully transactional in SQLite. So, for each database in our system, we wrap the upgrade in a transaction, and it executes atomically. We rely on the application layer to detect any database whose upgrade fails and to re-attempt the upgrade or report the error.
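For reference, a minimal sketch of one such upgrade from the shell; the DDL and the tenant file name are made up, and PRAGMA user_version is just one common way to record the schema version:

# hypothetical upgrade for one tenant database; the statements are illustrative
sqlite3 -bail tenant.db <<'SQL'
BEGIN;
ALTER TABLE accounts ADD COLUMN last_login TEXT;
CREATE INDEX idx_accounts_last_login ON accounts(last_login);
PRAGMA user_version = 2;
COMMIT;
SQL
# -bail makes sqlite3 stop at the first error and exit non-zero, so the
# looping script can catch the failure and re-attempt or report it
if [ $? -ne 0 ]; then
  echo "upgrade failed for tenant.db" >&2
fi

Because the DDL sits inside one transaction, a failure before COMMIT leaves the database at its previous version.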
I'd like to know how the .dump command affects other applications connected to the same database. I'd like to know this for the following journal modes:
DELETE (the default mode)
WAL (write-ahead-logging)
From reading other posts on this forum, .backup uses the online backup API of SQLite. It would be great to have this confirmed as well.
Thanks in advance!
The .dump command reads the contents of the database normally, just as if you would do a bunch of SELECT queries inside a transaction.
This means that when not using WAL, other connections cannot write as long as the dump is running.
I need to know whether I must take other measures to make sure the backup is complete (or at least not malformed), or whether I can safely rely on .dump returning an up-to-date dump that I can later use to restore the database. For instance, what happens if I run .dump at the same moment someone else performs an insert/update?
The sqlite3 tool uses a transaction around the entire execution of the .dump command, so it's atomic.
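In other words, something like the following should produce a consistent snapshot even while other connections are writing (file names here are placeholders):

sqlite3 app.db .dump > backup.sql     # the entire dump runs inside a single read transaction
sqlite3 restored.db < backup.sql      # replaying the SQL rebuilds the database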
I have been developing locally for some time and am now pushing everything to production. Of course, I was also adding data to the development server without thinking that I hadn't reconfigured it to use Postgres.
Now I have a SQLite DB whose data needs to end up in a Postgres DB on a remote VPS.
I have tried dumping to a .sql file, but Postgres raises a lot of syntax complaints when loading it. What's the best way to do this?
For pretty much any conversion between two databases, the options are:
1. Do a schema-only dump from the source database. Hand-convert it and load it into the target database. Then do a data-only dump from the source DB in the most compatible form of SQL dump it offers. Try loading that into the target DB. When you hit problems, script transformations to the dump using sed/awk/perl/whatever and try again (a sketch of this follows the list). Repeat until it loads and the results match.
2. Like (1), hand-convert the schema. Then write a script in your preferred language that connects to both databases, SELECTs from one, and INSERTs into the other, possibly with some transformations of data types and representations.
3. Use an ETL tool like Talend or Pentaho to connect to both databases and convert between them. ETL tools are a "somebody else already wrote it" version of (2), but they can take some learning.
4. Hope that a pre-written conversion tool already exists. Heroku has one called sequel that works for SQLite -> PostgreSQL; whether it is available outside Heroku and can function without the rest of the Heroku infrastructure and code is worth checking.
After any of those, some post-transfer steps, like using setval() to initialize sequences, are typically required.
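As a rough sketch of option (1) for SQLite -> PostgreSQL: the sed substitutions below are typical ones, but the exact set depends on your schema, and every name here is a placeholder:

# dump from SQLite, strip SQLite-isms, and feed the result to PostgreSQL
sqlite3 app.db .dump \
  | sed -e '/^PRAGMA/d' \
        -e '/sqlite_sequence/d' \
        -e 's/AUTOINCREMENT//g' \
  | psql -U pgusername -d pgdatabasename

# then re-sync each sequence with its table's current maximum (names illustrative)
psql -d pgdatabasename \
  -c "SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable));"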
Heroku's database conversion tool is called sequel. Here are the ruby gems you need:
gem install sequel
gem install sqlite3
gem install pg
Then this worked for me for a sqlite database file named 'tweets.db' in the current working directory:
sequel -C sqlite://tweets.db postgres://pgusername:pgpassword@localhost/pgdatabasename
PostgreSQL supports "foreign data wrappers", which allow you to access external data sources, including SQLite, directly from within the DB, even up to automatically importing the schema. You can then use CREATE TABLE localtbl AS (SELECT * FROM remotetbl) to get your data into actual PG storage.
https://wiki.postgresql.org/wiki/Foreign_data_wrappers
https://github.com/pgspider/sqlite_fdw
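A sketch of the sqlite_fdw route; the server name, file path, and table names are invented, and the exact options supported depend on your sqlite_fdw version:

psql -d pgdatabasename <<'SQL'
CREATE EXTENSION sqlite_fdw;
CREATE SERVER sqlite_server
  FOREIGN DATA WRAPPER sqlite_fdw
  OPTIONS (database '/path/to/app.db');
-- pull the SQLite tables in as foreign tables
-- (SQLite has no schemas, so the remote schema name is nominal)
IMPORT FOREIGN SCHEMA public FROM SERVER sqlite_server INTO public;
-- copy one foreign table into native PostgreSQL storage
CREATE TABLE localtbl AS SELECT * FROM remotetbl;
SQL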
I am working on a firm application in which I need to create a local database on my device.
I create my local database through a CREATE statement (this works well).
Then I take that file and perform the insert operations through the Firefox SQLite plugin. I need to insert approximately 2000 rows at a time, so I can't do it in code; I just run the inserts manually through the SQLite plugin in Firefox.
After that I use that file in place of my local database.
When I run a SELECT query through my code, it shows this exception: java.lang.Exception: In create or prepare statement in DB net.rim.device.api.database.DatabaseException: SELECT distinct productline FROM T_Electrical ORDER BY productline: file is encrypted or is not a database
I found the solution to this problem: I was making a silly mistake by creating the file manually via right-click in my RES folder, which is not correct. The database, including the file itself, needs to be created entirely from the SQLite plugin; then it works fine. In short: create the database (and the file) from SQLite and perform the insert operations from SQLite, and it will work fine.
This is a very rare problem, but I think it might be helpful for someone like me. :)
You should check to see if there is a version problem between the SQLite used by your Firefox installation and that on the BlackBerry. I think I had the same error when I tried to build a database file with SQLite version 2.
You also shouldn't need to create the database file on the device. To create large tables, I use an Ubuntu machine and the sqlite3 command line: create the file, create the tables, insert the data, and build the indexes. Then I just copy the file onto the device into the proper directory.
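For example, something along these lines on the desktop; the table name is borrowed from the question, and the column list is made up:

# build the database file on a desktop machine, then copy it onto the device
sqlite3 products.db <<'SQL'
CREATE TABLE T_Electrical (
  id INTEGER PRIMARY KEY,
  productline TEXT
);
INSERT INTO T_Electrical (productline) VALUES ('switchgear'), ('lighting');
CREATE INDEX idx_productline ON T_Electrical(productline);
SQL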
For me it was a simple thing: a password was set on that DB. I just supplied it and the problem was solved.