RSQLite leaves files locked after computer shutdown

I have just started to dabble in the world of SQLite, primarily through R's interface package 'RSQLite'.
However, it seems that putting the computer to sleep while working on the SQLite database left it locked, and trying to close the connection afterwards does not do anything. Furthermore, as the file is considered in use, I'm not able to delete/overwrite the database (or its -journal file; using Unlocker is not an option, as I don't have those rights on the network drive the SQLite database is on).
Any suggestions on how to manually force the database connection closed, or how to successfully delete the -journal file?

I was able to reset the network drive, which deleted the temporary -journal file and released the database.

OpenEdge Progress 10.1B export

I have looked at others who have been trying to get data from an OpenEdge Progress database.
I have the same problem, but there is a backup routine on the Windows file server that dumps the data every night. I have the *.pbk and a 1K *.st file. How can I get the data out of the dump file in a form I can use?
Or is it not possible?
Thanks.
A *.pbk file is probably a backup (ProBacKup). You can restore it on another system with compatible characteristics (same byte order, same release of Progress OpenEdge). Sometimes that is helpful if the other system has better connectivity or licensing.
To extract the data from a database, either the original or a restored backup, you have some possibilities:
1) A pre-written extract program. Possibly provided by whoever created the application. Such a program might create simple text files.
2) A development license that permits you to write your own extract program. The output of the "showcfg" command will reveal whether or not you have a development license.
3) Regardless of license type you can use "proutil dbName -C dump tableName" to export the data but this will result in binary output that you probably will not be able to read or convert. (It is usually used in conjunction with "proutil load").
4) Depending again on the license that you have you might be able to dump data with the data administration tool. If you have a runtime only license you may need to specify the -rx startup parameter.
5) If your database has been configured to allow SQL access via ODBC or JDBC you could connect with a SQL tool and extract data that way; a minimal sketch of this approach follows the list.
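
To make option 5 concrete, here is a rough sketch of an ODBC extract in C++. Treat it as an illustration rather than OpenEdge-specific guidance: the DSN name "ProgressDB", the credentials, and the pub.customer table with its custnum/name columns are all placeholder assumptions; substitute whatever your database actually exposes.

```cpp
// Hypothetical ODBC extract: connect to a DSN, run a SELECT, and dump
// rows as tab-separated text. DSN, credentials, table, and columns
// are placeholders for your real configuration.
#include <sql.h>
#include <sqlext.h>
#include <cstdio>

int main() {
    SQLHENV env;
    SQLHDBC dbc;
    SQLHSTMT stmt;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // Assumed DSN configured for the OpenEdge SQL engine.
    SQLCHAR connStr[] = "DSN=ProgressDB;UID=sysprogress;PWD=secret;";
    if (!SQL_SUCCEEDED(SQLDriverConnect(dbc, NULL, connStr, SQL_NTS,
                                        NULL, 0, NULL,
                                        SQL_DRIVER_NOPROMPT))) {
        fprintf(stderr, "connection failed\n");
        return 1;
    }

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT custnum, name FROM pub.customer",
                  SQL_NTS);

    SQLINTEGER custnum;
    SQLCHAR name[64];
    SQLLEN ind1, ind2;
    while (SQL_SUCCEEDED(SQLFetch(stmt))) {
        SQLGetData(stmt, 1, SQL_C_SLONG, &custnum, 0, &ind1);
        SQLGetData(stmt, 2, SQL_C_CHAR, name, sizeof(name), &ind2);
        printf("%d\t%s\n", (int)custnum, (char *)name);  // simple text output
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}
```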

IBM DB2 SQL1730N when trying to move database from one machine to another via backup/restore

I've been trying to implement the DB2 mechanism for restoring backups from one machine onto another, as part of our disaster recovery mechanism and for some kinds of problem diagnosis. I am making a copy of the keystore files and copying them back into DB2 before restoring, so everything should be OK.
And in most scenarios I've tested, it is OK.
However, if the machine I'm restoring onto has been used before (e.g. for other testing), then trying to restore one of the databases fails oddly. For example, I'm trying to restore a backup from 11/26 onto another machine that was last used on 11/28, and DB2 says:
SQL1730N The command or operation failed because the master key label does not exist in the keystore file. Label being used: "DB2_SYSGEN_db2inst1_MYDATABASENAME_2017-11-28-20.58.07". File type number: "DB CFG". File name: "SQLDBCONF".
Note that the datestamp in that label is later than the backup I'm restoring, so the complaint makes some sense ... except that I don't understand where that master key label actually is, why it's persisting, and why copying in the keystore files before doing the restore wasn't sufficient to prevent it.
IBM does have some documentation on this error, at https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.messages.sql.doc/doc/msql01730n.html and https://developer.ibm.com/answers/questions/319149/how-to-change-the-master-key-password-after-restor.html ... but I'm not really a DB2 user and I'm having trouble figuring out what they're suggesting I change.
Can someone sketch a sequence of operations which will make a backdated backup from one machine reliably load onto another? I'm sure there is a Best Practice for DB2 backup and restore out there somewhere which would address this, but I've been hunting entirely too long and not finding it... and I can't believe I'm the only one who finds DB2's backup/restore system confusing.
I'm using DB2 10.5, if that helps at all.

SQLite query reports SQLITE_CORRUPT

I have an embedded system running an RTOS, programmed in C.
I am using SQLite to maintain a file (let's call it sqlLiteFile.db) on a file system residing on a NAND. The SQLite version is 3.8.5.
Earlier, I was creating a new database for this file every time the system came up, so it was a volatile file. I had no issues at that time.
However, now I have made sqlLiteFile.db persistent, so every time the system reboots, it opens the same file and starts writing. This works fine for some time and survives a few reboots. But after a while, the SQLite query starts reporting the SQLITE_CORRUPT error. The write operation to SQLite still works fine; it is the query that starts reporting the error. I can see the write operation succeed using a debugger. Also, the size of the file in the file system keeps increasing.
When I download the file and use an SQLite browser, I cannot open the file anymore. When I use some other tool to convert sqlLiteFile.db to sqlLiteFile.txt, I can see an error at the bottom: /**** ERROR: (11) database disk image is malformed *****/
Any suggestions on how to prevent this corruption would be helpful.
Edit:
Further, I did try doing clean shutdowns that close the database using sqlite3_close() prior to rebooting. This time the database survived a little longer through reboots, but it got corrupted again eventually. So it seems it is about more than just closing the database before exiting the application. Probably the size?
Update:
System reboots (and re-opening/closing the SQLite database) don't cause the corruption; it happens after the database size reaches a certain amount (~55 KB).
It did seem that fsync() was doing something to the SQLite database: taking out the fsync() functionality didn't cause SQLite corruption. Also, I was opening and reading the database while downloading it, at the same time the SQLite database was being written. Both of these factors together, or fsync() alone, were causing the file system corruption. I still need to figure out a better way to perform fsync(), but now I know exactly what was causing the corruption.
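
A defensive pattern worth trying on this kind of setup is to force full syncs and verify the database image at every boot before trusting it. Below is a rough sketch against the SQLite C API; the file open flags and pragmas are standard SQLite, but the recovery policy (bail out and let the caller recreate the file) is just one possible choice, and whether PRAGMA synchronous=FULL actually helps depends on how the NAND driver honors fsync().

```cpp
#include <sqlite3.h>
#include <stdio.h>
#include <string.h>

/* Callback for PRAGMA integrity_check: the pragma returns rows of
 * problem descriptions, or a single row "ok" when the image is sound. */
static int check_cb(void *ok, int ncol, char **vals, char **names) {
    if (ncol > 0 && vals[0] && strcmp(vals[0], "ok") != 0)
        *(int *)ok = 0;
    return 0;
}

static sqlite3 *open_checked(const char *path) {
    sqlite3 *db = NULL;
    if (sqlite3_open_v2(path, &db,
                        SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                        NULL) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return NULL;
    }

    /* Sync at every critical moment; only effective if the flash
     * translation layer really flushes on fsync(). */
    sqlite3_exec(db, "PRAGMA synchronous=FULL;", NULL, NULL, NULL);

    /* Catch a torn image at boot instead of mid-run. */
    int ok = 1;
    sqlite3_exec(db, "PRAGMA integrity_check;", check_cb, &ok, NULL);
    if (!ok) {
        fprintf(stderr, "integrity_check failed for %s\n", path);
        sqlite3_close(db);
        return NULL;  /* caller could delete and recreate the file */
    }
    return db;
}
```

Calling sqlite3_close() on every shutdown path, as the edit describes, is still necessary; the boot-time check just turns silent corruption into a detectable condition.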

QSQLDatabase (using SQLite) takes a long time to open a database

I have developed an application in Qt which uses an SQLite database. A copy of the database is located at each site.
At one site, let's say 'BOB1', it works perfectly without any problem. But when we try to use it at another site, let's say 'BOB2', it takes a long time to open a database connection (approx. 2000 milliseconds).
I thought that perhaps there was a network problem, so they tried to use the server of site 'BOB1' as their server, which worked fine. But when I tried to use the server of site 'BOB2' from site 'BOB1', I had the same problem. So I thought it may not be a network issue.
Another thing that came to my mind was that perhaps there was a problem with DNS resolution. But when I tried to ping the server using the IP and the hostname, the response time was the same.
Any idea or pointer as to what the problem might be?
PS: The server + database file path is specified in the setDatabasePath() function using environment variables.
Consider copying the database to the local machine (e.g. a temp folder if transient, or another suitable location if permanent). You can safely use either a plain file copy, or consider using the SQLite backup API to ensure that the transfer happens successfully (plus you get the option of progress feedback):
https://sqlite.org/backup.html
You could even "back up" the file from the remote server to an in-memory database, if the file is small and, as you say, you're reading only.
You can see some sample code here on how to import an SQLite DB into a Qt QSqlDatabase. Note that when you do this, you want to make sure the version of the native SQLite API that you're using is the same as the one compiled into Qt, or you may get error messages from SQLite or Qt.
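
To make the copy-then-open suggestion concrete, here is a rough sketch using the SQLite online backup API from the link above together with Qt's QSQLITE driver. The two paths and the minimal error handling are placeholders; a real version would want proper error reporting and chunked stepping for progress feedback.

```cpp
// Sketch: copy a remote SQLite file to a local path with the online
// backup API, then point QSqlDatabase at the fast local copy.
// remotePath/localPath are placeholders for the real share and temp file.
#include <sqlite3.h>
#include <QSqlDatabase>
#include <QString>

bool copyAndOpen(const char *remotePath, const char *localPath) {
    sqlite3 *src = NULL, *dst = NULL;
    if (sqlite3_open_v2(remotePath, &src, SQLITE_OPEN_READONLY, NULL)
            != SQLITE_OK ||
        sqlite3_open(localPath, &dst) != SQLITE_OK) {
        sqlite3_close(src);
        sqlite3_close(dst);
        return false;
    }

    // The backup API copies page by page; -1 copies all remaining pages
    // in one call. Step in smaller chunks if you want progress updates.
    int rc = SQLITE_ERROR;
    sqlite3_backup *bak = sqlite3_backup_init(dst, "main", src, "main");
    if (bak) {
        sqlite3_backup_step(bak, -1);
        rc = sqlite3_backup_finish(bak);  // SQLITE_OK if the copy completed
    }
    sqlite3_close(src);
    sqlite3_close(dst);
    if (rc != SQLITE_OK)
        return false;

    // The slow network path is now out of the picture: open the copy.
    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
    db.setDatabaseName(QString::fromUtf8(localPath));
    return db.open();
}
```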

Write-Ahead Logging and Read-Only mode compatible in SQLite3?

Open read-only
I have a sqlite3 file on a filesystem that belongs to a different user than is running the reading process. I want the reading process to be able to read the file in read-only mode, so I'm passing SQLITE_OPEN_READONLY. I would expect that to work. Surely the idea is that read-only mode works on files that we don't want to write to?
When I prepare my first statement I get
unable to open database file
Similarly if I run the sqlite3 command line tool I get the same result unless I sudo. Which seems to confirm to me that the issue is writeability rather than anything else.
Journal files
The answer to this question seems to suggest that if there are journal files around then read-only access isn't possible.
Why are there journal files? Because another process is writing to the file while my user process is trying to open it read-only. The writer is using Write-Ahead Logging, which produces two journal files, -shm and -wal. True enough, if I stop the writing process and remove the journal files, my user process can open it in read-only mode.
Incompatibility?
So I have two situations:
If the file belongs to both the writing process and the read-only process, write-ahead logging lets process A write while process B reads.
If the file belongs to the writing process but not to the read-only process, the read-only process is blocked from opening it read-only.
How do I achieve both of these? To spell it out, I want:
Writing process owns database
Read-only process does not own database
Read-only process cannot write to database
Write-ahead logging is enabled on database
Seems like a simple set of requirements, but I can't see an obvious solution.
EDIT: Going by this documentation, it looks like this isn't possible. Can you suggest any alternative ways to achieve the above?
Yes, WAL-journaled databases cannot be opened read-only, explicitly or otherwise (i.e. in the case where the database file is read-only to the process).
If you require that the read-only process absolutely not be allowed to modify the database file, then the only thing that comes to mind is for the write process to maintain an additional, non-WAL-journaled copy of the database.
Bottom line: to the best of my knowledge, WAL and read-only can't be done.
I think what the documentation is saying is that the WAL database itself may not be present on read-only media, which does not necessarily mean you cannot use SQLITE_OPEN_READONLY. In fact, I have successfully opened two connections, one read-write and one with SQLITE_OPEN_READONLY, both on a WAL SQLite database. These work just fine. I tested an INSERT query using the read-only connection, and the statement correctly returned an error that the database is read-only.
Just make sure that the database is stored on media with write access, as a -shm file needs to be created and maintained; even a 'read-only' connection may actually physically write something to disk, which doesn't necessarily mean that it can modify data using SQL.
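
For reference, the working setup described above looks roughly like the following sketch; the file name app.db and the table t are placeholder assumptions. The open succeeds as long as SQLite can still create and map the -shm file in the database's directory, and writes are refused cleanly at the SQL level.

```cpp
// Sketch: a read-only connection to a database that another process
// has in WAL mode. "app.db" and table "t" are placeholders.
#include <sqlite3.h>
#include <stdio.h>

int main() {
    sqlite3 *db = NULL;
    if (sqlite3_open_v2("app.db", &db, SQLITE_OPEN_READONLY, NULL)
            != SQLITE_OK) {
        fprintf(stderr, "open: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    // Reads work; any write attempt fails with SQLITE_READONLY.
    char *err = NULL;
    if (sqlite3_exec(db, "INSERT INTO t VALUES (1);", NULL, NULL, &err)
            != SQLITE_OK) {
        printf("write rejected as expected: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}
```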
