Disk I/O errors even after removing the file - sqlite

I'm getting various errors when running tests against an SQLite database located at /tmp/a. Errors include SQLITE_IOERR_DELETE_NOENT (5898) and SQLITE_READONLY_DBMOVED (1032). These errors persist even after I remove the file /tmp/a. Strangely enough, if I specify the database to be located at /tmp/b instead, everything works as expected. Also noteworthy: the errors at /tmp/a occur after database initialization, i.e. the database has already been created with various tables, so writing does work up to a point. Finally, file permissions look identical for both files:
$ ls -ahlv /tmp/a
-rw-r--r-- 1 rik users 24K Jan 15 18:00 /tmp/a
$ ls -ahlv /tmp/b
-rw-r--r-- 1 rik users 24K Jan 15 18:00 /tmp/b
Maybe this is some issue with SSD caching? I'll keep an eye on it and update this post if anything new happens.

Your temporary file name may not be random enough to keep another rogue process from interacting with the same file. You may want to use a more random file name to avoid conflicts on the same path. Or, if the file really is temporary, you may want to use the :memory: file name, which keeps the full database in memory and never writes anything to disk.
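For instance, a minimal sketch using the sqlite3 command-line shell (the path prefix and table name are just placeholders): let mktemp pick a unique file name, or skip the filesystem entirely with :memory::
# Create a uniquely named database file instead of a fixed /tmp/a
DB=$(mktemp /tmp/testdb.XXXXXX)
sqlite3 "$DB" "CREATE TABLE t(x); INSERT INTO t VALUES(1); SELECT * FROM t;"
rm "$DB"
# Or keep the whole database in memory and never touch the disk
sqlite3 :memory: "CREATE TABLE t(x); INSERT INTO t VALUES(1); SELECT * FROM t;"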

Related

BerkeleyDb: safely removing files from an environment

I use BerkeleyDb in a project.
I have some environments composed of several files. Sometimes I need to remove some of these files.
When I remove a file through the filesystem, opening the environment raises a No such file or directory error.
Is there a way to safely remove a file from a BerkeleyDb environment?
To remove a database, you need to be absolutely certain that no references to the database exist in the environment. The most foolproof way to do this is as follows:
Use db->remove() to remove the database from inside your application.
Use dbenv->txn_checkpoint() to flush all changes, checkpoint the log, and then flush the log.
Use dbenv->txn_checkpoint() with the DB_FORCE flag to push one more checkpoint through, ensuring that when the environment is recovered it doesn't attempt to recover databases that predated the last checkpoint.
Step 3 sounds insane, I know. And maybe it isn't needed any more. But it certainly was needed in the not-too-distant past. Certainly steps 1 and 2 are needed. You'll need to experiment to see if step 3 is necessary for your application.
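If it helps, steps 2 and 3 can also be approximated from the shell with the db_checkpoint utility that ships with Berkeley DB (the environment path below is a placeholder; step 1 still has to happen via db->remove() inside your application):
# -1 checkpoints the log once, forced even if there has been no activity since the last checkpoint
db_checkpoint -1 -h /path/to/env
# Run it a second time if you want the extra "push one more checkpoint through" from step 3
db_checkpoint -1 -h /path/to/env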

Unix - server gets polluted - find out where new files get stored

My server has no space left on disk. Yesterday I deleted 200 GB of data; today it is full again. Some process must be writing files. How do I find out where new huge files are being stored?
Check df for partition usage.
Use du to find the sizes of folders.
I tend to do this:
du -sm /mount/point/* | sort -n
This gives you a list with the size of folders in MB in the /mount/point folder.
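If du alone doesn't point at the culprit, a rough sketch for catching files that appeared or grew recently is to filter by size and modification time (GNU find assumed; the mount point and thresholds are just examples):
# Files larger than 100 MB modified within the last 24 hours, staying on one filesystem
find /mount/point -xdev -type f -size +100M -mtime -1 -exec ls -lh {} \;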
Also, if you have X you can use baobab or similar utilities to explore disk usage.
PS: check the log files. For example, if you have Tomcat installed, it tends to generate a crazy amount of logs if not configured properly.

How to find out which script is running at what particular time in unix?

In my application server, some files are getting deleted from one folder exactly at 1 AM every day. I have checked the crontab.wms file and there is no script that runs at 1 AM.
How can I find out which script is deleting the files?
Exactly 1 AM makes cron a prime suspect, but processes can be launched from other places (e.g. init). Also, if the directory can be mounted elsewhere, then your server may not be the one deleting the files. And if malware is causing this, the origin of the process could be intentionally hidden. Some information about where the files are and what the files are could provide useful clues.
Repeatedly running ps -aef for several seconds may uncover the culprit. I would run it hundreds of times without sleeping in between, starting just before 1 AM. There can be a lot of processes to examine.
You may also repeatedly run this:
/usr/sbin/lsof +d <fullNameOfTheDirectory>
to list processes that have opened the specific directory (or files in the directory). This could give a more concise list, but you have to be lucky to be probing at exactly the time the process is using the directory. You may need to try over many nights and you will want both ps and lsof.
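As a rough sketch, both commands can be wrapped in a timestamped loop started shortly before 1 AM (the directory and log file names are placeholders; add a short sleep if the output becomes unmanageable):
# Snapshot the process table and every opener of the directory in a tight loop
while true; do
    date
    ps -aef
    /usr/sbin/lsof +d /path/to/directory
done >> /tmp/watch-1am.log 2>&1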
If the files do not belong to root, you can chown them to root before 1 AM. If the delete still succeeds, then you know the process is running as root.
I assume the deletion is messing you up. You can archive the files before 1 AM and restore them when they go missing, assuming the files are fairly static. Or, you can remove write permissions for a few minutes to see if that thwarts the process (you should still see it accessing the directory). These are kludges, but they could patch things up until you can really solve it.

Can I get a callback / do I know when SQLite has created write-ahead log files? I want to chmod them

I have an elevated process and I want to make sure the SQLite files that it creates are readable by other processes. For some reason umask doesn't seem to do what I want (set permissions of sqlite file created by process).
I'm using write-ahead logging, so -wal and -shm files are created in addition to the database file. I want all 3 to be chmodded correctly.
I wonder if it's possible to get in after the SQLite file is created and chmod it.
Possible approaches:
touch all 3 files before SQLite tries to create them, then chmod and hope the mask stays the same
Intercept when the files are created and chmod them.
Work out how to get umask to work for the process.
Mystery option four.
What's the best way to go?
Questions for approaches:
Will SQLite be OK with this?
Do we know when all 3 files are created? Is there some kind of callback I can give a function pointer to? Do we know if the same wal and shm files are around forever? Or are they deleted and re-created?
You can touch the database file before opening it. (When you use the sqlite3 command-line tool to open a new file, but do nothing but begin; and commit;, SQLite itself will create a zero-sized file.)
If you want to intercept file operations, you can register your own VFS.
The -wal and -shm files are created dynamically, but SQLite will give them the same permission bits as the main database file. The comments for robust_open() in os_unix.c say:
If the file creation mode "m" is 0 then set it to the default for
SQLite. The default is SQLITE_DEFAULT_FILE_PERMISSIONS (normally
0644) as modified by the system umask. If m is not 0, then
make the file creation mode be exactly m ignoring the umask.
The m parameter will be non-zero only when creating -wal, -journal,
and -shm files. We want those files to have exactly the same
permissions as their original database, unadulterated by the umask.
In that way, if a database file is -rw-rw-rw or -rw-rw-r-, and a
transaction crashes and leaves behind hot journals, then any
process that is able to write to the database will also be able to
recover the hot journals.
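So one approach, a sketch assuming you control when the database is first created (the path and mode below are just examples), is to create the main database file with the permissions you want before SQLite ever opens it; the -wal and -shm files created later will copy those bits:
# Pre-create the database file with the desired permission bits
touch /var/lib/myapp/app.db
chmod 0644 /var/lib/myapp/app.db
# Any -wal and -shm files SQLite creates while the database is in use inherit these bits
sqlite3 /var/lib/myapp/app.db "PRAGMA journal_mode=WAL; CREATE TABLE IF NOT EXISTS t(x);"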

How can I write to a SQLite database file in a SourceForge project's web space?

I have a small Perl-based CGI application, which I am running in the project web space provided for a SourceForge project. This application stores data in a SQLite (v. 3) database file.
When I run test scripts from a shell, I can read from and write to this SQLite file. However, when the CGI code is executed by Apache, it has read-only access. Write operations result in a log file error:
error.log.web-2:[Wed Oct 27 14:40:22 2010] [error] [client 127.0.0.1] DBD::SQLite::db do failed: unable to open database file
For testing purposes, I have cranked the permissions for that SQLite file all the way up to 777. No difference.
However, there are some funny caveats to SourceForge's project web space, and I wonder if I'm being tripped up by that. Generally, the main web server filesystem is read-only to Apache. If you have files that need to be writable at runtime, you're supposed to store them in a special "persistent" directory elsewhere... and create symlinks from your web space to the actual files under that directory.
I have done this, and the permissions are set to 777 for both the symlink and the actual SQLite file under the "persistence" location. I know this mechanism works in general, because I'm doing the same thing with cache and log files and it works there.
I'm wondering if there's anything funky about SQLite itself, along the lines of it not wanting to open a symlink (rather than a raw file) for writing.
I believe the answer to this question is that it can't be done. Further research into SQLite tells me that the driver must get a lock on the database file before it can do any write operations. This type of lock cannot be obtained when the actual file is on a different machine with its filesystem cross-mounted.
I believe this is the case with SourceForge project web space hosting. It looks like the (writable) "persistent" directory is actually on a totally separate machine from the read-only web server filesystem.
In short, if you stumble across this question because you're having the same issue... either look for different web space hosting, or else it may be time to re-work your app and step up to MySQL or some other DB (SourceForge gives you free MySQL hosting anyway).
Another issue is if you have permissions for the specific db file but you don't have permission to create the temporary files in the directory (mixed permissions, or permissions that are too restrictive).
https://www.sqlite.org/tempfiles.html
If you can't write the temporary files, then you can't do any writes on an SQLite database file. If you switch to a :memory: database you could get by, or maybe use the PRAGMA temp_store = MEMORY pragma mentioned by bob.faist, but really you should diagnose and fix the permissions problem if possible.
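A minimal sketch of that pragma in the sqlite3 shell (app.db stands in for your database file); note that temp_store applies per connection, so it has to be issued each time you connect:
# Keep SQLite's temporary tables and transient indices in memory for this connection
sqlite3 app.db "PRAGMA temp_store = MEMORY; SELECT count(*) FROM sqlite_master;"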
Use these commands to see if you have permission to write in those file locations.
ls -l app.db
getfacl app.db
ls -l -d . # check the directory to see if you can write the temp files there
getfacl .
Use chmod or setfacl -m to fix the files or folders to let you write to them.
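For example (the apache user name and directory path are placeholders, assuming the web server runs as that user), write access to the directory can be granted with either tool:
chmod u+w /path/to/db-directory                 # if the web server user owns the directory
setfacl -m u:apache:rwx /path/to/db-directory   # otherwise grant that user an ACL entry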
Also check your disk space.
df -k
If it shows that the partition where the database file is located, or where it is trying to write its temporary files, is full, you could also get these kinds of issues.
Hope that helps.
