sqlite3 disk I/O error on cli - sqlite

SQLite version 3.6.20, running through VNC.
I am starting an sqlite3 CLI session. When I try to run commands such as ".tables", ".databases", or "create table", I get "Error: disk I/O error". I don't know how to get a more detailed description of the error. I want to write in my home directory, where I have permissions.
I tried some fixes suggested for .sqliterc involving journal mode and temp storage, but they do not help. Some commands, such as "PRAGMA synchronous = OFF;", also cause a disk I/O error.
.output /dev/null
PRAGMA journal_mode = MEMORY;
PRAGMA locking_mode = EXCLUSIVE;
PRAGMA temp_store_directory = '/home/username/tmp';
How can I find out more about the error and solve this?
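For the "find out more" part, one option is to trace the CLI's file-related system calls with strace (a standard Linux tool; the database name below is a placeholder):
strace -e trace=file,read,write,fcntl sqlite3 /home/username/test.db ".tables"
The failing operation usually shows up near the end of the trace output.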

It turned out to be a VMware-related error. Solution: move the files to /tmp; SQLite works there.
It is described here: https://dba.stackexchange.com/questions/93575/sqlite-disk-i-o-error-3850
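In practice the workaround looks like this (test.db is a placeholder name):
cp /home/username/test.db /tmp/
cd /tmp
sqlite3 test.db ".tables"
Copy the file back out of /tmp when you are done working with it.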

Related

How can I quit sqlite from a command batch file?

I am trying to create a self-contained command for my build pipeline which inserts data and quits.
So far I have created my data files
things-to-import-001.sql, 002, etc., which contain all the INSERT statements I'd like to run, with one file per table.
I have created a command file to run them
-- import-all.sql
.read ./things-to-import-001.sql
.read ./things-to-import-002.sql
.quit
However, when I run my command
sqlite3 -init ./import-all.sql ./database.sqlite
...the data is inserted, but the program remains running and shows the sqlite> prompt, despite the .quit command. I have also tried using .exit 0.
From sqlite3 --help:
-init FILENAME read/process named file
Docs: https://www.sqlite.org/cli.html#reading_sql_from_a_file
How can I tell sqlite to exit once my inserts have finished?
I have managed to find a dirty workaround for this issue.
I updated my import file to include a bad command, and executed it using -bail to quit on the first error.
-- import-all.sql
.read ./things-to-import-001.sql
.read ./things-to-import-002.sql
.fakeErrorToQuitWithBail
Then you can execute with
sqlite3 -init import-all.sql -bail ./database.sqlite
and it should quit with
Error: unknown command or invalid arguments: "fakeErrorToQuitWithBail". Enter ".help" for help
Try using ".exit" at the place of ".quit". For some reason SQLite dont doccumented this commands.
https://www.tutorialspoint.com/sqlite/sqlite_commands.htm
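Another way to avoid the prompt entirely (a sketch using the same file names as above, not taken from the answers here): with -init the shell still goes interactive after processing the file, whereas input fed on standard input, or passed as a trailing argument, makes sqlite3 exit once it has been consumed:
sqlite3 ./database.sqlite < ./import-all.sql
sqlite3 ./database.sqlite ".read ./import-all.sql"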

SQLite3 database or disk is full on csv imports

This issue has been discussed on a number of threads, but none of the proposals seem to apply to my case.
I have a very large SQLite database (4 TB). I am trying to import csv files from the terminal:
sqlite3 -csv -separator " " /data/mydb.db ".import '|cat *.csv' mytable"
I intermittently receive "SQLite3 database or disk is full" errors. Re-running the command after an error usually succeeds.
Some notes:
/data has 3.2 TB free.
/tmp has 1.8 TB free.
*.csv takes up approximately 802 GB.
Both /tmp and /data use ext4, which has a maximum file size of 16 TB.
The only process accessing the database is the one mentioned above.
PRAGMA integrity_check returns ok.
Tested on two sqlite3 versions:
3.38.1 2022-03-12 13:37:29 38c210fdd258658321c85ec9c01a072fda3ada94540e3239d29b34dc547a8cbc
3.31.1 2020-01-27 19:55:54 3bfa9cc97da10598521b342961df8f5f68c7388fa117345eeb516eaa837balt1
OS - Ubuntu 20.04
Any thoughts on what could be happening?
(Unless there is an informed reason why I am exceeding the limits of SQLite, I would prefer to avoid suggestions that I move to a client/server RDBMS.)
I didn't figure it out, but someone else did. I'm pretty sure this will "fix it" until you reach roughly 8 TB (2147483647 pages at the default 4096-byte page size):
sqlite3 ... "PRAGMA main.max_page_count=2147483647; .import '|cat *.csv' mytable"
However the invocation
sqlite3 ... "PRAGMA main.journal_mode=DELETE; PRAGMA main.max_page_count; PRAGMA main.max_page_count=2147483647; PRAGMA main.page_size=65536;VACUUM; import '|cat *.csv' mytable;"
should allow the db to grow to roughly 140 TB (2147483647 pages at 65536 bytes per page), but that VACUUM command, which is needed to apply the new page_size, requires a lot of free space to run and will probably take a long time =/
The good news is that you only need to run it once; it is a permanent change to your db, and your next invocation only needs sqlite3 ... ".import '|cat *.csv' mytable"
Notably, this will probably break again around that ~140 TB mark.
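To check what a given database is currently set to, both values can be queried read-only (path as in the question):
sqlite3 /data/mydb.db "PRAGMA main.page_size; PRAGMA main.max_page_count;"
Multiplying the two numbers gives the current size ceiling in bytes.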

csync/sqlite error when running ownCloud command

I am running owncloudcmd to sync files from a local* path to an ownCloud/Nextcloud server, all running Debian 8. However, it fails with the error:
[5] csync_statedb_query sqlite3_compile error: disk I/O error - on query PRAGMA quick_check;
[6] csync_statedb_load ERR: sqlite3 integrity check failed - bail out: disk I/O error.
#### ERROR during csync_update : "CSync failed to load the journal file. The journal file is corrupted."
I am not very familiar with csync or sqlite, so I am a bit in the dark here, and although I can find talk of this issue through googling, I can't find a fix. The data in this case can be dumped and recreated, so I'm happy to flush any database or anything else. I've tried removing the created csync and journal files, assuming one of them was corrupted, but it doesn't seem to change anything.
I have also read talk about changing PRAGMA settings to ignore the error (or the check), but I can't see how that is implemented either.
Is anyone able to show me how to clear out the corruption?
*The local path is a mount of an AWS S3 bucket, but I think this is irrelevant because it works fine on other systems.
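For reference, the check that csync is failing on can be reproduced directly against the journal file with the sqlite3 CLI; a minimal sketch, assuming the journal in the sync root is named .csync_journal.db (the exact name varies between client versions):
sqlite3 .csync_journal.db "PRAGMA quick_check;"
If that alone returns a disk I/O error, the problem is with reading the file (or the filesystem it sits on) rather than with the data inside it.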

How to vacuum sqlite database?

I want to know how to vacuum an SQLite database.
I tried running a manual VACUUM command for the whole database from the command prompt:
$sqlite3 database_name "VACUUM;";
But it gives this error:
near "database_name": syntax error
I also tried auto-vacuum:
PRAGMA auto_vacuum = INCREMENTAL;
And I tried it for a particular table:
VACUUM table_name;
But no result.
You don't need to specify a table name; the whole command is just VACUUM.
Also, it cleans the main database only, not any attached database files.
For more info, refer to the SQLite documentation.
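As an aside (not part of this answer), since SQLite 3.15 you can also vacuum a specific attached database by naming its schema; a small sketch with a hypothetical attached file:
$sqlite3 database_name "ATTACH DATABASE '/path/to/other.db' AS aux; VACUUM aux;"
Here 'aux' is a hypothetical schema name chosen for the example.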
Give the command like this:
$sqlite3 database_name 'VACUUM;'
As a matter of fact, this is also the way to run other queries from the command line:
$sqlite3 database_name 'select * from tablename;'
You can use the full path to the db:
$sqlite3 /path/to/db/foo.db 'VACUUM;'
Run the command:
VACUUM;
if you use DB Browser for SQLite, or
open SQLite from the command prompt:
cd C:\your_folder
C:\Users\your_user\AppData\Local\Android\Sdk\platform-tools\sqlite3.exe -line your_db_name.db
and run
VACUUM;

SVN - SQLite - disk I/O error

When trying to commit to my SVN repository, I got the following error:
Working copy 'Z:\prace-pj\projects\other\CopyRT' locked.
So I ran the cleanup command and then the commit succeeded, but at the end of the response message there was the following error:
Error bumping revisions post-commit (details follow):
disk I/O error, executing statement 'RELEASE s11'
Now when I try to, e.g., update the working copy, it says that it is still locked. When I clean up and try to update again, I get an error like this:
disk I/O error, executing statement 'RELEASE s2'
sqlite: disk I/O error
What should I do to fix this?
For others' reference: I just had this same error and found that one of my log files was taking up all my disk space (so SVN could not write to the HDD because there was no free space).
Run this to make sure you have enough disk space:
df -h
Then I just needed to run:
svn cleanup
This resolved the error for me.
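If df -h does show a full filesystem, a generic way to find the files eating the space (the /var/log path is just an example):
du -ah /var/log | sort -rh | head -n 10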
Have you tried:
svn unlock --force path/to/workingcopy
? It seems it can also be pointed at a URL if the problem is in the repository itself. I've only used the unlock operation via the TortoiseSVN GUI before, but I assume it just wraps the svn command anyway.
Hope that helps.
