I'm working with IronPython 2.6 for .NET 4 and the sqlite3 module from IronPython.SQLite.
I have written a GUI program that runs in four frames of an MDI window. Each of the four programs receives data from a serial port and stores this data in a SQLite database, one database per program.
In between these inserts, the program queries the database every 100 ms for the latest data items.
I'm already wrapping every cursor.execute() call in a mutex to prevent problems with simultaneous commands (insert or select).
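The guarded-execute pattern described above looks roughly like this (a sketch, not the actual program; the lock, file name, and helper function are illustrative assumptions):

import sqlite3
import threading

# One lock per frame's database; every statement goes through it.
db_lock = threading.Lock()
# check_same_thread=False: the single connection is shared across threads,
# guarded by the lock.
conn = sqlite3.connect("frame1.db", check_same_thread=False)

def guarded_execute(sql, params=()):
    # Serialize inserts and selects so they never interleave.
    with db_lock:
        cur = conn.execute(sql, params)
        conn.commit()
        return cur.fetchall()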
At runtime, the program sporadically runs into an exception.
When trying to query data:
System.Exception: database disk image is malformed
or when trying to insert data:
System.Exception: database or disk is full
Is it possible that a query shortly after an insert (or the other way around) could cause such exceptions and corrupt the database?
Any advice on how to solve this issue would be much appreciated.
Related
I have a SQLite database with one process inserting a single row every second or so. Is it possible to execute a long-running query on this database from another process? When I try, I often get "database is locked" errors. Do both the reader and the writer require exclusive access to the database? Will a very long read-only query cause the writes to fail?
Try using the WAL journal mode.
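In Python's sqlite3 module, for example, WAL can be enabled with a single PRAGMA right after connecting (a minimal sketch; the file name is illustrative, and the mode persists in the database file once set):

import sqlite3

# WAL lets one writer and many readers work concurrently.
conn = sqlite3.connect("mydata.db")
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # prints 'wal' on success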
The code currently in my project:
local lsqlite3 = require "lsqlite3complete"
self.db_conn = lsqlite3.open("cost.db")

-- Callback invoked once per result row; returning 0 continues the iteration.
function showrow(udata, cols, values, names)
    assert(udata == 'test_udata')
    for i = 1, cols do
        print('', names[i], values[i])
    end
    return 0
end

self.db_conn:exec('select * from cost', showrow, 'test_udata')
Selecting the cost records with the code above works fine, but if I change it as below and try to open the database in memory:
self.db_conn = lsqlite3.open_memory("cost.db")
the code runs without error, but there are no records or tables inside when I query. How can I change my code so that I can open and use my database in memory? I would like to access my data quickly in memory instead of repeatedly going to the database file.
A memory database is one that exists only in memory. That is, it doesn't get its data from a file. Because of that, open_memory doesn't take any parameters.
If you want to use a database that lives in a file, then that means accessing that file.
You should not need to "keep connecting to a database". You connect to it once at the beginning of the application and keep it open until your application terminates.
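If the goal is a fast in-memory copy of a file database, SQLite's online backup API can copy the file database into a memory database once at startup. A sketch of the idea using Python's standard sqlite3 module, which exposes this API as Connection.backup (Python rather than Lua here; the file and table names come from the question):

import sqlite3

# Open the on-disk database and an empty in-memory database.
disk_db = sqlite3.connect("cost.db")
mem_db = sqlite3.connect(":memory:")

# Copy every page of the file database into the memory database.
disk_db.backup(mem_db)
disk_db.close()

# All further queries run against the in-memory copy.
for row in mem_db.execute("SELECT * FROM cost"):
    print(row)

Note that any changes made to the in-memory copy are lost on exit unless they are explicitly backed up to the file again.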
I am using a SQLite database for database management on my AM1808 ARM9-based microprocessor.
I am using Embedded Linux (v10.10 Lucid) and the GCC compiler for ARM.
My scenario is as follows:
I have a GSM module interfaced on UART. I continuously synchronize my data with the server in the background.
I am also accessing the SQLite database for other operations like read, write, view, etc.
I have a single database connection.
I have simultaneous access to SQLite. To multiplex SQLite over the single connection I have used mutexes for the database lock, and I also check for SQLITE_BUSY.
Still, I am missing inserted records in the database, even though the database does not give any error when inserting them.
So I cannot find the problem. I am stuck here and cannot proceed further.
Please guide me. If you need it, I will provide my code snippet.
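A common cause of rows that disappear without an error is a transaction that is never committed, or a SQLITE_BUSY result that gets dropped silently. A minimal sketch of a write path that surfaces both problems, in Python rather than C for brevity (file and table names are illustrative):

import sqlite3

# timeout makes SQLite retry for up to 5 s instead of failing with SQLITE_BUSY.
conn = sqlite3.connect("data.db", timeout=5.0, check_same_thread=False)

def insert_record(value):
    try:
        with conn:  # opens a transaction and commits it; rolls back on error
            conn.execute("INSERT INTO records(value) VALUES (?)", (value,))
    except sqlite3.OperationalError as exc:
        # Surface busy/locked errors instead of losing the row silently.
        print("insert failed:", exc)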
I have a SQLite database that is used by two processes. I am wondering, with the most recent version of SQLite: while one process (connection) has started a transaction to write to the database, will the other process be able to read from the database simultaneously?
I collected information from various sources, mostly from sqlite.org, and put them together:
First, by default, multiple processes can have the same SQLite database open at the same time, and several read accesses can be satisfied in parallel.
When writing, a single write to the database locks the database for a short time, during which nothing, not even reading, can access the database file at all.
Beginning with version 3.7.0, a new “Write Ahead Logging” (WAL) option is available, in which reading and writing can proceed concurrently.
By default, WAL is not enabled. To turn WAL on, refer to the SQLite documentation.
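To see the difference concretely, here is a small sketch in Python (file name illustrative): with journal_mode=WAL, a reader can still query while a writer holds an open write transaction on the same database file.

import sqlite3

# isolation_level=None: autocommit; we manage the transaction explicitly.
writer = sqlite3.connect("shared.db", isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE IF NOT EXISTS t(x)")
writer.execute("BEGIN IMMEDIATE")            # take the write lock
writer.execute("INSERT INTO t VALUES (1)")

# A second connection can read while the write transaction is still open;
# it sees the last committed snapshot, not the pending insert.
reader = sqlite3.connect("shared.db")
print(reader.execute("SELECT COUNT(*) FROM t").fetchone())

writer.execute("COMMIT")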
SQLite3 explicitly allows multiple connections:
(5) Can multiple applications or multiple instances of the same application access a single database file at the same time?
Multiple processes can have the same database open at the same time. Multiple processes can be doing a SELECT at the same time. But only one process can be making changes to the database at any moment in time, however.
For sharing connections, use SQLite3 shared cache:
Starting with version 3.3.0, SQLite includes a special "shared-cache" mode (disabled by default).
In version 3.5.0, shared-cache mode was modified so that the same cache can be shared across an entire process rather than just within a single thread.
5.0 Enabling Shared-Cache Mode
Shared-cache mode is enabled on a per-process basis. Using the C interface, the following API can be used to globally enable or disable shared-cache mode:
int sqlite3_enable_shared_cache(int);
Each call to sqlite3_enable_shared_cache() affects subsequent database connections created using sqlite3_open(), sqlite3_open16(), or sqlite3_open_v2(). Database connections that already exist are unaffected. Each call to sqlite3_enable_shared_cache() overrides all previous calls within the same process.
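In Python, one place shared cache shows up directly is shared in-memory databases: the cache=shared URI flag lets two connections in the same process see one in-memory database (a sketch; the name memdb1 is arbitrary):

import sqlite3

uri = "file:memdb1?mode=memory&cache=shared"
a = sqlite3.connect(uri, uri=True)
b = sqlite3.connect(uri, uri=True)

a.execute("CREATE TABLE t(x)")
a.execute("INSERT INTO t VALUES (42)")
a.commit()

# Both connections share the same cache, hence the same data.
print(b.execute("SELECT x FROM t").fetchone())  # (42,)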
I had a similar code architecture to yours: a single SQLite database that process A read from while process B wrote to it concurrently, based on events (in Python 3.10.2, using the most up-to-date sqlite3 version). Process B was continually updating the database while process A was reading from it to check the data. My issue was that it worked in debug mode but not in "release" mode.
To solve my particular problem I used Write-Ahead Logging, which is referenced in the previous answers. After creating my database in process B (the writer) I added the line:
cur.execute('PRAGMA journal_mode=wal')
where cur is the cursor object created from the connection.
This sets the journal to WAL mode, which allows concurrent access for multiple readers (but still only one writer). In process A, where I read the data, before connecting to the same database I included:
time.sleep(0.5)
Setting a sleep timer before the connection to the same database fixed my issue with it not working in "release" mode.
In my case I did not have to manually set any checkpoints, locks, or transactions. Your use case might be different from mine, however, so further research may be required. Nevertheless, I hope this post helps and saves everyone some time!
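As a variation on the fixed sleep, SQLite's busy timeout makes a connection retry automatically while the other process briefly holds the lock. A hedged sketch of both sides (file name illustrative; this replaces the sleep with a timeout rather than reproducing the answer above):

import sqlite3

# Writer (process B): enable WAL once after creating the database.
con_b = sqlite3.connect("shared.db")
con_b.execute("PRAGMA journal_mode=wal")

# Reader (process A): retry for up to 5 s when the database is briefly
# locked, instead of sleeping a fixed amount before connecting.
con_a = sqlite3.connect("shared.db", timeout=5.0)
rows = con_a.execute("SELECT name FROM sqlite_master").fetchall()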
I am completely new to SQLite, and I intend to use it in an M2M / client-server environment where a database is generated on the server, sent to the client as a file, and used on the client for data lookup.
The question is: can I replace the whole database file while the client is using it at the same time?
The question may sound silly, but the client is a Linux thin client, and to replace the database file a temporary file would be renamed to the final file name. On Linux, a program that still has the older version of the file open will keep accessing the old data, since the old file is preserved by the OS until all file handles to it have been closed. Only new open() calls will access the new version of the file.
So, in short:
client randomly accesses the SQLite database
a new version of the database is received from the server and written to a temporary file
the temporary file is renamed to the SQLite database file
I know this is a very specific question, but maybe someone can tell me whether this would be a problem for SQLite, or whether there are similar methods to replace a database while the client is running. I do not want to send a bunch of SQL statements from the server to the client to update the database.
No, you cannot just replace an open SQLite3 DB file. SQLite will keep using the same file descriptor (or handle in Windows-speak), unless you close and re-open your database. More specifically:
Deleting and replacing an open file is either useless (Linux) or impossible (Windows). SQLite will never get to see the contents of the new file at all.
Overwriting an SQLite3 DB file is a recipe for data corruption. From the SQLite3 documentation:
Likewise, if a rogue process opens a database file or journal and writes malformed data into the middle of it, then the database will become corrupt.
Arbitrarily overwriting the contents of the DB file can cause a whole pile of issues:
If you are very lucky it will just cause DB errors, forcing you to reopen the database anyway.
Depending on how you use the data, your application might just crash and burn.
Your application may try to apply an existing journal on the new file. Sounds painful? It is!
If you are really unlucky, the user will just get back invalid results from any queries.
The best way to deal with this would be a proper client-server implementation where the client DB file is updated from data coming from the server. In the long run that would allow for far more flexibility, while also reducing the bandwidth requirements by sending updates, rather than the whole file.
If that is not possible, you should update the client DB file in three discrete steps:
Send a message to the client application to close the DB. This allows the application to commit any changes, remove any journal files, and clean up its internal state.
Replace/overwrite the file.
Send a message to the client application to re-open the DB. You would have to set up all prepared statements again, though.
If you do not want to close the DB file for some reason, then you should have your application - or even a separate process - update the original DB file using the new file as input. The SQLite3 backup API might be of interest to you in that case.
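Python's sqlite3 module exposes that online backup API directly as Connection.backup; a minimal sketch of updating the live database from the newly received file without closing it (file names are illustrative):

import sqlite3

# The database the client currently has open.
live = sqlite3.connect("client.db")

# The freshly downloaded replacement, written to a temporary file.
incoming = sqlite3.connect("client.db.new")

# Copy all pages from the new file into the live database; existing
# connections to client.db then see the new content.
incoming.backup(live)
incoming.close()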