I am using the latest SQLite version, 3.8.3.1, on an Ubuntu host.
I am trying to create an in-memory database from a C program, using the option explained in the following link: http://www.sqlite.org/inmemorydb.html
The database is created with the following function call:
sqlite3_open("file::memory:?cache=shared", &db);
However, a database file literally named file::memory:?cache=shared is created locally on the hard drive.
Why is sqlite3 creating a database file on the hard drive for the in-memory option?
Am I doing something wrong?
URI filenames are only interpreted by sqlite3_open_v2 when the SQLITE_OPEN_URI flag is passed (sqlite3_open takes no flags, so it treats the string as a literal filename unless URI handling has been enabled globally). You can make use of this to create the database in RAM.
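For example, here is a minimal sketch in C of opening the same URI with sqlite3_open_v2 and the SQLITE_OPEN_URI flag (error handling kept to a minimum):

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db = NULL;

    /* SQLITE_OPEN_URI makes SQLite interpret the filename as a URI, so
       "file::memory:?cache=shared" opens a shared in-memory database
       instead of creating a file with that literal name on disk. */
    int rc = sqlite3_open_v2("file::memory:?cache=shared", &db,
                             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                             SQLITE_OPEN_URI, NULL);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "Cannot open database: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* ... use the in-memory database here ... */

    sqlite3_close(db);
    return 0;
}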
This is the in-memory usage supported by Python's sqlite3 module:
import sqlite3
con = sqlite3.connect(':memory:')
How do I import SQLite data into DuckDB? Or is it possible to query the SQLite data files directly from DuckDB? A presentation from the author of DuckDB mentioned such a feature.
Yes, it is possible to scan SQLite database files directly by using the sqlite extension.
You will first need to install and load it:
INSTALL sqlite_scanner;
LOAD sqlite_scanner;
CALL sqlite_attach('your_sqlite_db.db');
Then you should be able to query the sqlite tables.
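Once attached, the SQLite tables can be queried like ordinary DuckDB tables; for example (the table name here is just a placeholder for whatever your SQLite file contains):

SELECT * FROM your_sqlite_table LIMIT 10;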
I have a huge problem. I am developing a desktop app with SQLite, but during a copy/paste process I lost power and the process was terminated, so the database was lost. However, I found a way to recover it, but the recovered database appears to be encrypted. When I try to open a connection using conn.Open(); I get the "file is encrypted or is not a database" error. If I try to open it with DB Browser for SQLite, it asks me for a SQLCipher encryption password, so it seems to me that the data is lost.
Is there any default password?
Why did this happen, and how can I prevent it from happening again?
What can I do?
Thanks in advance.
Also check that the SQLite version you're "connecting" with aligns with the DB file version.
For example, here's a DB file written by SQLite version 3+:
$ file foobar.db
foobar.db: SQLite 3.x database, last written using SQLite version 3027002
And here I also have 2 versions of sqlite:
$ sqlite -version
2.8.17
$ sqlite3 -version
3.27.2 2019-02-25 16:06:06 bd49a8271d650fa89e446b42e513b595a717b9212c91dd384aab871fc1d0alt1
Obviously in hindsight, opening foobar.db with sqlite version 2 will fail, yielding the same error message:
$ sqlite foobar.db
Unable to open database "foobar.db": file is encrypted or is not a database
But all is good with the correct version:
$ sqlite3 foobar.db
SQLite version 3.27.2 2019-02-25 16:06:06
Enter ".help" for usage hints.
sqlite>
sqlite> .databases
main: /tmp/foobar.db
sqlite>
The error message is a catch-all, simply meaning that the file format was not recognized.
OK, I finally found a solution that works, so I am posting the answer in case anybody runs into the same trouble I did.
First of all, use good recovery software. For repairing the database, I found three solutions that work without a backup:
Open the corrupted database using DB Browser and export the database to SQL. Name it however you want. Then create a new database and import the database from SQL.
There is software that repairs corrupted databases. Download one and use it to repair the database.
Download "sqlite3" from sqlite.org and in command line navigate to folder where "sqlite3" is unzipped. Then try to dump the entire database with .dump, and use those commands to create a new database:
sqlite3 corrupt_table_name.sqlite ".dump" | sqlite3 new.sqlite
I had the same error when I was trying to access a db dump on a system other than the one where it was obtained. When I tried to open it on a dev machine, it threw the error reported in this thread:
$ sqlite3 db_dump.sqlite .tables
Error: file is encrypted or is not a database
This turned out to be due to the difference in sqlite versions between those systems. The dev system's version was 3.6.20.
The version on the production system was 3.8.9. Once I had sqlite3 upgraded to the same version, I was able to access all the data. Below, the tables are displayed as expected:
# sqlite3 -version
3.8.9
# sqlite3 db_dump.sqlite .tables
capture diskio transport
consumer filters processes
This error is rather misleading to begin with, though.
If you've interacted with the database at some point while specifying journal_mode = WAL, and then later try to use the database from a client that does not support WAL (< v3.7.0), this error can also come up.
As noted in the SQLite documentation under Backwards Compatibility, to resolve that without having to recreate the database, explicitly set the journal mode to DELETE:
PRAGMA journal_mode=DELETE;
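For example, running the pragma once from any WAL-capable client (SQLite 3.7.0 or newer) switches the journal mode back so that older clients can open the file again; the database name below is just a placeholder:

$ sqlite3 your_database.db "PRAGMA journal_mode=DELETE;"
delete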
Your database did not become encrypted (this is only one of the two options in the error message).
Your data recovery tool did not recover the correct data; what you have in the file is something else.
You have to restore the database file from the backup.
In my case, the issue was with a SQLCipher version upgrade. Whenever I updated my pod, it automatically upgraded SQLCipher and the error occurred.
For a quick fix, just add the SDK manually instead of installing it via the pod. For a proper solution, use this link: GitHub Solution
I have an application in C# that uses System.Data.SQLite. I use a fairly recent version of the SQLite database, but now I can see that a new version has been released, and the sqlite.org webpage says it is recommended to upgrade.
My question is how to upgrade without losing the information in my current database.
How can I check the version of the database?
Thanks.
EDIT: What I mean is that when I create a new database with the sqlite3 library, I guess the database file, my database.db, has a version. When I update the sqlite3 library, I am updating the sqlite3 command line, but the database file still has the version it had when I created it.
So if the new versions add new features to the database, for example triggers, foreign keys and so on, then if I am not wrong, these features must be in the database file, not in the sqlite3 library, because when I access the database, for example with Entity Framework, I don't use the sqlite3 library; I use the System.Data.SQLite library.
Am I wrong? Is the data file never updated, and only the library can be updated?
Thanks.
Upgrading the SQLite library will not have any effect on your database file.
Changes like foreign keys do not affect the database file.
The last change that affected the file format was a long time ago.
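If you want to check the versions yourself: the library version your application or shell is linked against can be queried with SQL, while the version that last wrote a given database file shows up in the file header (as the output of the file command shown earlier illustrates). For example:

SELECT sqlite_version();   -- library version, e.g. 3.27.2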
Very basic question, having a hard time finding an explanation online.
I have a file code.sql that can be run on two different databases, a.db3 and b.db3. I used sqlite3 a.db3 to open the database in sqlite3. How do I run code.sql on it?
Use the .read code.sql command, or call sqlite3 with the file as input: sqlite3 a.db3 < code.sql.
I am guessing that you are trying to use the sqlite3 command-line tool that you can download from the sqlite.org website.
I recommend that you use SQLiteStudio instead: http://sqlitestudio.one.pl
It has a feature to execute SQL from a file against a database.
Use DB Browser for SQLite: a high-quality, visual, open-source tool to create, design, and edit database files compatible with SQLite.
You can download DB Browser for SQLite at https://sqlitebrowser.org/
I have been developing locally for some time and am now pushing everything to production. Of course, I was also adding data to the development server without considering that I hadn't reconfigured it to use Postgres.
Now I have a SQLite DB whose data needs to end up in a Postgres DB on a remote VPS.
I have tried dumping to a .sql file but am getting a lot of syntax complaints from Postgres. What's the best way to do this?
For pretty much any conversion between two databases, the options are:
1. Do a schema-only dump from the source database. Hand-convert it and load it into the target database. Then do a data-only dump from the source DB in the most compatible form of SQL dump it offers. Try loading that into the target DB. When you hit problems, script transformations to the dump using sed/awk/perl/whatever and try again. Repeat until it loads and the results match.
2. Like (1), hand-convert the schema. Then write a script in your preferred language that connects to both databases, SELECTs from one, and INSERTs into the other, possibly with some transformations of data types and representations (see the sketch below).
3. Use an ETL tool like Talend or Pentaho to connect to both databases and convert between them. ETL tools are like a "somebody else already wrote it" version of (2), but they can take some learning.
4. Hope that you can find a pre-written conversion tool. Heroku has one called sequel that will work for SQLite -> PostgreSQL; is it available without Heroku and able to function without all the other Heroku infrastructure and code?
After any of those, some post-transfer steps, like using setval() to initialize sequences, are typically required.
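For option (2), here is a minimal sketch in Python, assuming a single table, the psycopg2 driver, and no type conversion; the connection details, table, and column names are all placeholders:

import sqlite3
import psycopg2  # assumes the psycopg2 Postgres driver is installed

# Connect to both databases (connection details are placeholders).
src = sqlite3.connect("dev.db")
dst = psycopg2.connect("dbname=proddb user=pguser password=pgpass host=localhost")

# The target table is assumed to already exist with a compatible schema.
rows = src.execute("SELECT id, name, created_at FROM items").fetchall()

# Copy the rows across; psycopg2 commits when the connection block exits.
with dst, dst.cursor() as cur:
    cur.executemany(
        "INSERT INTO items (id, name, created_at) VALUES (%s, %s, %s)",
        rows,
    )

src.close()
dst.close()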
Heroku's database conversion tool is called sequel. Here are the ruby gems you need:
gem install sequel
gem install sqlite3
gem install pg
Then this worked for me for a sqlite database file named 'tweets.db' in the current working directory:
sequel -C sqlite://tweets.db postgres://pgusername:pgpassword@localhost/pgdatabasename
PostgreSQL supports "foreign data wrappers", which allow you to directly access any data source through the DB, including SQLite, even up to automatically importing the schema. You can then use create table localtbl as (select * from remotetbl) to get your data into actual PG storage.
https://wiki.postgresql.org/wiki/Foreign_data_wrappers
https://github.com/pgspider/sqlite_fdw
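As a rough sketch of how that can look with the sqlite_fdw extension linked above (the server name, file path, and table names are placeholders, and whether IMPORT FOREIGN SCHEMA is available depends on the wrapper version, so check its README):

CREATE EXTENSION sqlite_fdw;
CREATE SERVER sqlite_server FOREIGN DATA WRAPPER sqlite_fdw
    OPTIONS (database '/path/to/your_sqlite.db');
-- Import all table definitions from the SQLite file (if supported) ...
IMPORT FOREIGN SCHEMA public FROM SERVER sqlite_server INTO public;
-- ... then copy a table into native PostgreSQL storage:
CREATE TABLE localtbl AS SELECT * FROM remotetbl;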