I have this error in my application log:
sqlite3.OperationalError: database or disk is full
As plenty of disk space is available and my SQLite database does not appear to be corrupted (integrity_check did not report any error), why is this happening and how can I debug it?
I am using the Lustre filesystem (with flock set), and until now, it worked perfectly.
Versions are:
Python 2.6.6
SQLite 3.3.6
It's probably too late for the original poster, but I just had this problem and couldn't find an answer so I'll document my findings in the hope that it will help others:
As it turns out, an SQLite database actually can get full even if there's plenty of disk space, because it has a limit for the number of pages in a database file:
http://www.sqlite.org/pragma.html#pragma_max_page_count
In my case the value was 1073741823, which meant that in combination with a page size of 1024 bytes the database maxed out at 1 TB and returned the "database or disk is full" error.
The good news is that you can raise the limit; for example, double it by issuing PRAGMA max_page_count = 2147483646;.
The limit doesn't seem to be saved in the database file, though, so you have to run it in your application every time you open the database.
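For example, here is a minimal Python sketch (the database path is just a placeholder) that checks the current limits and raises the page-count limit on every open:

import sqlite3

conn = sqlite3.connect("app.db")  # placeholder path

# Inspect the current limits; maximum size in bytes = page_size * max_page_count.
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
max_pages = conn.execute("PRAGMA max_page_count").fetchone()[0]
print("maximum database size: %d bytes" % (page_size * max_pages))

# Raise the page-count limit. This is a per-connection setting, so it has
# to be issued every time the application opens the database.
conn.execute("PRAGMA max_page_count = 2147483646")
conn.close()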
By default, SQLite puts its temporary files in /tmp (not in memory). If /tmp is too small, you will get "disk full". In that case, change the temporary directory, for example: export TMPDIR=<big file system>.
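If you are doing this from Python rather than the shell, a sketch like the following should work on a Unix-like system (the paths are placeholders); the variable has to be set before SQLite first needs a temporary file:

import os
import sqlite3

# Point SQLite's temporary files at a filesystem with plenty of space.
os.environ["TMPDIR"] = "/data/big-tmp"  # placeholder path

conn = sqlite3.connect("app.db")        # placeholder path
conn.execute("VACUUM")                  # VACUUM is one operation that uses temp space
conn.close()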
I had the same problem too.
Your host or PC's storage is full, so delete some files from your system and the problem will be gone.
I have an embedded system running an RTOS, programmed in C.
I am using SQLite to maintain a file (let's call it sqlLiteFile.db) on a file system residing on NAND. The SQLite version is 3.8.5.
Earlier, I was creating a new database for this file every time the system came up, so it was a volatile file. I had no issues at that time.
However, now I made sqlLiteFile.db persistent. So, every time the system reboots, it opens the same file and starts writing. This works fine for some time and survives a few reboots, but after a while the SQLite query starts reporting a SQLITE_CORRUPT error. However, the write operation to SQLite still works fine; it is the query that starts reporting the error. I can see the write operation succeed using a debugger. Also, the size of the file in the file system keeps increasing.
When I download the file and use SQLite Browser, I cannot open the file anymore. When I use some other tool to convert sqlLiteFile.db to sqlLiteFile.txt, I can see an error at the bottom: /**** ERROR: (11) database disk image is malformed *****/
Any suggestions on how to prevent this corruption would be helpful.
Edit:
Further, I did try doing clean shutdowns, closing the database with sqlite3_close() prior to rebooting. This time the database survived a little longer through reboots, but it got corrupted again eventually. So it seems it is about more than just closing the database before exiting the application. Probably the size?
Update:
The system reboots (and re-opening/closing the SQLite database) don't cause corruption by themselves; the corruption happens after the database size reaches a certain amount (~55 KB).
It did seem that fsync() was doing something to the SQLite database. With the fsync() functionality taken out, the SQLite corruption did not occur. Also, I was opening and reading the database for download at the same time the SQLite database was being written. Both these factors, or fsync() alone, were causing the file system corruption. I still need to figure out a better way to perform fsync(), but now I know exactly what was causing the corruption.
I have an MS access DB with 35 linked tables, a few queries and another 35 reports.
The database has no physical tables in it as all data for its tables are coming from the linked back-end MS Access database. The problem now is, the front-end database size is above 1 GB. How and why? And how can I fix it?
First, please try HansUp's suggestion.
But, if that doesn't shrink it as much as you expect, try this:
Make a file called decompile.bat. In it, put the command:
"C:\Program Files (x86)\Microsoft Office\Office14\MSACCESS.EXE" "C:\Your\Path\To\YourFrontEnd.accdb" /nostartup/decompile
Edit the paths to suit. Put this batch file in the same directory as your front-end.
To use:
Run decompile.bat by double-clicking on it in Windows Explorer
In Access:
Hit Alt-F11 to go to the Visual Basic Editor
Click Debug, then Compile
Save, then exit the Visual Basic Editor
In the main Access window, click Database Tools, Compact and Repair Database
When finished, exit Access
You should notice that your front-end is dramatically smaller.
That's a very late addition, but I wonder why no one has pointed at the cause of the problem! Microsoft says the cause is:
If you do not release a recordset's memory each time that you loop through the recordset code, DAO may recompile, using more memory and increasing the size of the database.
And the solution:
Use the Close method of the Recordset object to explicitly close the recordset's memory when you no longer need the recordset. If the database has increased in size because you did not use the Close method of the Recordset object, you can reduce the size of the database by running the Compact and Repair utility (on the Tools menu).
https://learn.microsoft.com/en-us/office/troubleshoot/access/prevent-database-bloat-dao
I've used the "Compact and repair tool" from the "Database tools" ribbon once and it did reduced the size of -an almost empty- database from 2.0 GB (Max. Size) to just 3.41 MB!
I just ran across this. We use access databases to store project data. For the most part they stay less than a megabyte. Then I was getting issues from a client that storage cost was going up on a server because some databases were growing to the tens and hundreds of megabytes. So I search my stuff and found a database at a gig!
I'm using Office 7, and I'm pretty sure it is still in 365.
Solution:
From the Office button in Access (upper left), go to Manage -> Backup Database. For me, it copied that 1 GB database to a 720 KB database. And, of course, it opened fine and all the data was there. I found that Compact and Repair did not come anywhere close to the results of the backup.
I don't know why Microsoft lets this occur, I could not find a reasonable explanation for it.
Open read-only
I have a sqlite3 file on a filesystem that belongs to a different user than is running the reading process. I want the reading process to be able to read the file in read-only mode, so I'm passing SQLITE_OPEN_READONLY. I would expect that to work. Surely the idea is that read-only mode works on files that we don't want to write to?
When I prepare my first statement I get
unable to open database file
Similarly if I run the sqlite3 command line tool I get the same result unless I sudo. Which seems to confirm to me that the issue is writeability rather than anything else.
Journal files
The answer to this question seems to suggest that if there are journal files around then read-only access isn't possible.
Why are there journal files? Because another process is writing to the file while my user process is trying to open it read-only. To allow this I am using Write-Ahead Logging, which produces two journal files, -shm and -wal. True enough, if I stop the writing process and remove the journal files, my user process can open the database in read-only mode.
Incompatibility?
So I have two situations:
If the file belongs to the writing process and also the read-only process, write-ahead logging enables process A to write and process B to read-only
If the file belongs to the writing process but does not belong to the read-only process, the read-only process is blocked from opening read-only.
How do I achieve both of these? To spell it out, I want:
Writing process owns database
Read-only process does not own database
Read-only process cannot write to database
Write-ahead logging is enabled on database
Seems like a simple set of requirements, but I can't see an obvious solution.
EDIT: Going by this documentation, it looks like this isn't possible. Can you suggest any alternative ways to achieve the above?
Yes, WAL-journaled databases cannot be opened read-only, explicitly or otherwise (i.e. in the case where the database file is read-only to the process).
If you require that the read-only process absolutely not be allowed to modify the database file, then the only thing that comes to mind is that the write process maintains an additional, non-WAL-journaled copy of the database.
Bottom line: to the best of my knowledge, WAL and read-only can't be done.
I think what the documentation is saying is that the WAL database itself may not be present on read-only media, which does not necessarily mean you cannot use SQLITE_OPEN_READONLY. In fact, I have successfully opened two connections, a read-write one as well as one with SQLITE_OPEN_READONLY, both on a WAL SQLite database. They work just fine. I tested an INSERT query using the read-only connection and the statement correctly returned an error that the database is read-only.
Just make sure that the database is stored on media with write access, as a -shm file needs to be created and maintained, and so even a 'read-only' connection may actually physically write something to disk, which doesn't necessarily mean that it can modify data using SQL.
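For what it's worth, here is a minimal Python sketch of that setup (the file name is a placeholder, and it assumes Python 3.4+ where sqlite3.connect accepts uri=True); the read-only connection is opened through a URI with mode=ro:

import sqlite3

# Writer: enables WAL on the database file.
writer = sqlite3.connect("shared.db")
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE IF NOT EXISTS t (x)")
writer.commit()

# Reader: read-only connection via a URI. SELECTs work, but any INSERT or
# UPDATE fails with "attempt to write a readonly database".
reader = sqlite3.connect("file:shared.db?mode=ro", uri=True)
print(reader.execute("SELECT count(*) FROM t").fetchone()[0])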
Background
I have a database that's been corrupted, and I want to save as much of the data as possible.
I have tried to SQL-dump the data with numerous tools, without success.
Always same error message:
Error: database disk image is malformed
I'm pretty sure this did happen due to a power failure.
Approach?
Now, the database is in fact a file, and I'm wondering if it's possible to treat it as such and try to save as much data as possible.
I'm guessing that when the db is opened by a tool or program, it first checks the headers.
In my case I get the error message right away. I'm assuming that the headers are corrupt, or mismatched, and because of that no tool will try to read the payload.
The document http://www.sqlite.org/fileformat2.html explains the header offsets.
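For reference, a few of the header fields described there can be read with something like this minimal Python sketch (the file name is a placeholder; offsets follow that document):

import struct

with open("corrupt.db", "rb") as f:   # placeholder file name
    header = f.read(100)              # the SQLite header is the first 100 bytes

print("magic string:", header[:16])                            # should be b'SQLite format 3\x00'
print("page size:   ", struct.unpack(">H", header[16:18])[0])  # big-endian, offset 16
print("change count:", struct.unpack(">I", header[24:28])[0])  # big-endian, offset 24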
Questions: Is this a reasonable approach? Is it possible to repair, modify or exchange the headers on the corrupted db? And how do I do it?
Despite several replies in multiple threads on SO to the contrary, SQLite databases can be recovered from corruption!
I have requested an update from the SQLite team in their FAQ (http://www.sqlite.org/faq.html#q20), but in the meantime, here are a couple of options.
The FAQs state:
"...If SQLITE_SECURE_DELETE is not used and VACUUM has not been run, then some of the deleted content might still be in the database file, in areas marked for reuse. But, again, there exist no procedures or tools that we know of to help you recover that data."
and:
"...Depending how badly your database is corrupted, you may be able to recover some of the data by using the CLI to dump the schema and contents to a file and then recreate. Unfortunately, once humpty-dumpty falls off the wall, it is generally not possible to put him back together again."
There are in fact at least two excellent tools that do data recovery for whole SQLite databases and individual records, and they can help in cases of hardware failure, software errors or human error. The result will not be 100% pristine, but the situation is not hopeless.
PhotoRec is open source and multi-platform. While historically it was used for images and PDFs, it now supports SQLite recovery (http://www.cgsecurity.org/wiki/File_Formats_Recovered_By_PhotoRec), along with 220+ binary file types. If a database (or entire directory) is deleted, PhotoRec can often restore the db file to a sane-enough state to be opened and exported. There are pre-compiled versions of the app freely available for Windows, Mac and Linux.
In addition, the commercial product Epilog by CCL Forensics can do very advanced record recovery, including retrieving data from the write-ahead log (WAL) transaction files. It is a few hundred dollars, but it can do fairly amazing forensic reconstruction on SQLite data (both native binary db files as well as raw disk images).
Both the above have saved my hide several times, so passing this along for others who may have lost hope in deleted/corrupted SQLite databases (as well as genuine forensics for popular use cases, like mobile phones, browsers, address books, etc.).
Once you've regenerated/exported data, it's always a good idea to verify your backup schemes and definitely run a pragma integrity_check periodically, along with vacuuming.
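A minimal Python sketch of such a periodic check (the database path is a placeholder):

import sqlite3

conn = sqlite3.connect("app.db")  # placeholder path

# integrity_check returns a single row containing "ok" when the database
# passes; otherwise it returns a list of problem descriptions.
result = conn.execute("PRAGMA integrity_check").fetchall()
if result == [("ok",)]:
    conn.execute("VACUUM")  # reclaim free pages while we're at it
else:
    print("corruption reported:", result)
conn.close()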
I have requested that the official FAQ be updated to at least mention that one can google "sqlite recovery" or something if it's verboten to mention other projects/products by name.
Cheers.
How can I change an SQLite database from read-only to read-write?
When I executed the update statement, I always got:
SQL error: attempt to write a readonly database
The SQLite file is a writeable file on the filesystem.
There can be several reasons for this error message:
Several processes have the database open at the same time (see the FAQ).
There is a plugin to compress and encrypt the database. It doesn't allow modifying the DB.
Another FAQ says: "Make sure that the directory containing the database file is also writable to the user executing the CGI script." I think this is because the engine needs to create more files in the directory (see the sketch after this list).
The whole filesystem might be read only, for example after a crash.
On Unix systems, another process can replace the whole file.
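As a quick diagnostic for the permission-related items above, here is a minimal Python sketch (the path is a placeholder) that checks both the file and its directory:

import os

db = "/path/to/app.db"  # placeholder path

# The journal / WAL files are created next to the database, so the
# directory itself must be writable, not just the database file.
print("file writable:     ", os.access(db, os.W_OK))
print("directory writable:", os.access(os.path.dirname(db), os.W_OK))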
I solved this by changing the owner from "root" to my own user on all files in the database's folder.
Just do ls -l on said folder, and if any of the files is owned by root, just change it to your user, like:
# For each file do:
sudo chown "$USER":"$USER" /path/to/my-folder/file.txt
# Or "R"ecursive.
sudo chown -R "$USER":"$USER" /path/to/my-folder
(this error message is typically misleading, and is usually a general permissions error)
On Windows
If you're issuing SQL directly against the database, make sure whatever application you're using to run the SQL is running as administrator
If an application is attempting the update, the account that it uses to access the database may need permissions on the folder containing your database file. For example, if IIS is accessing the database, the IUSR and IIS_IUSRS may both need appropriate permissions (you can try this by temporarily giving these accounts full control over the folder, checking if this works, then tying down the permissions as appropriate)
This error usually happens when your database is accessed by one application already, and you're trying to access it with another application.
To share a personal experience: I encountered this error and eventually fixed it with both of the steps below. This might not necessarily be related to your issue, but it appears this error is so generic that it can be attributed to a gazillion things.
Database instance open in another application. My DB appeared to have been in a "locked" state, so it transitioned to read-only mode. I was able to track it down by stopping the 2nd instance of the application sharing the DB.
Directory tree permission: please be sure the user account has permission not just at the file level but at every upper directory level, all the way up to /.
Thanks
If using Android.
Make sure you have added the permission to write to your EXTERNAL_STORAGE to your AndroidManifest.xml.
Add this line to your AndroidManifest.xml file above and outside your <application> tag.
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
This will allow your application to write to the sdcard. This will help if your EXTERNAL_STORAGE is where you have stored your database on the device.
On Windows 10, after a system crash, I tried to open the db with DB Browser, but it was read-only.
Simply delete the journal file.
In a Linux command shell, I did:
chmod 777 <db_folder>
where <db_folder> is the folder that contains the database file.
It works. Now I can access my database and make insert queries.
On Windows:
tl;dr: Try opening the file again.
Our system was suffering this problem, and it definitely wasn't a permissions issue, since the program itself would be able to open the database as writable from many threads most of the time, but occasionally (only on Windows, not on OSX), a thread would get these errors even though all the other threads in the program were having no difficulties.
We eventually discovered that the threads that were failing were only those that were trying to open the database immediately after another thread had closed it (within 3 ms). We speculated that the problem was due to the fact that Windows (or the SQLite implementation under Windows) doesn't always immediately clean up file resources upon closing of a file. We got around this by running a test write query against the db upon opening (e.g., creating then dropping a table with a silly name). If the create/drop failed, we waited for 50 ms and tried again, repeating until we succeeded or 5 seconds elapsed.
It worked; apparently there just needed to be enough time for the resources to flush out to disk.
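A minimal Python sketch of that retry pattern (the original system was not necessarily Python; the probe table name, delay and timeout are placeholders):

import sqlite3
import time

def open_writable(path, timeout_s=5.0, retry_delay_s=0.05):
    # Keep trying until a throw-away write succeeds or the timeout expires.
    deadline = time.time() + timeout_s
    while True:
        conn = None
        try:
            conn = sqlite3.connect(path)
            conn.execute("CREATE TABLE IF NOT EXISTS _open_probe (x)")
            conn.execute("DROP TABLE _open_probe")
            conn.commit()
            return conn
        except sqlite3.OperationalError:
            if conn is not None:
                conn.close()
            if time.time() >= deadline:
                raise
            time.sleep(retry_delay_s)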
On Ubuntu, change the group to the Apache group (www-data) and grant the right permissions (no, it's not 777):
sudo chgrp www-data <path to db.sqlite3>
sudo chmod 664 <path to db.sqlite3>
Update
You can set the owner and group as well.
sudo chown www-data:www-data <path to db.sqlite3>
If a <db_name>.sqlite-journal file exists in the same folder as the DB file, that means your DB is currently open and in the middle of some changes (or it was at the moment the DB folder was copied). If you try to open the DB at this moment, the error attempt to write a readonly database (or similar) could appear.
As a solution, wait till <db_name>.sqlite-journal disappears, or remove it (not recommended on a working system).
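A quick way to check for these sidecar files from Python (the path is a placeholder):

import os

db = "/path/to/db.sqlite"  # placeholder path

# SQLite creates these files next to the database while it is being written.
for suffix in ("-journal", "-wal", "-shm"):
    sidecar = db + suffix
    print(sidecar, "exists" if os.path.exists(sidecar) else "absent")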
I had this problem today, too.
It was caused by ActiveSync on Windows Mobile - the folder I was working in was synced so the AS process grabbed the DB file from time to time causing this error.
On Linux, give read/write permissions to the entire folder containing the database file.
Also, SELinux might be blocking the write. You need to set the correct permissions.
In my SELinux Management GUI (on Fedora 19), I checked the box on the line labelled httpd_unified (Unify HTTPD handling of all content files), and I was good to go.
I'm using SQLite on an ESP32, and all the answers here seem "very strange" to me...
When I look at the data on the flash of the ESP, I notice there is only one file for the whole db (there is also a temp file).
In this db file we have, of course, the user tables, but also the system tables, such as "sqlite_master", which contains the definitions of the tables.
So it seems hard to believe this can be a "chmod" problem, because if the file were read-only, even creating a table would be impossible, as SQLite would be unable to write the "sqlite_master" data...
So I think our friend user143482 is trying to access a "read-only" table. In the SQLite source code we can see a function named tabIsReadOnly with this comment:
/* Return true if table pTab is read-only.
**
** A table is read-only if any of the following are true:
**
** 1) It is a virtual table and no implementation of the xUpdate method
** has been provided
**
** 2) It is a system table (i.e. sqlite_master), this call is not
** part of a nested parse and writable_schema pragma has not
** been specified
**
** 3) The table is a shadow table, the database connection is in
** defensive mode, and the current sqlite3_prepare()
** is for a top-level SQL statement.
*/
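To illustrate case (2) above, here is a minimal Python sketch using an in-memory database, so file permissions clearly play no role:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")

try:
    # Writing to a system table is rejected regardless of file permissions.
    conn.execute("UPDATE sqlite_master SET name = 'renamed' WHERE name = 't'")
except sqlite3.OperationalError as e:
    print(e)  # e.g. "table sqlite_master may not be modified"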