I have seen some applications using proprietary databases crash or experience data corruption if the application was running while the OS (Windows in this case) performed a disk defragmentation. My question is this: Does SQLite (sqlite3) suffer from this issue? In other words, would it ever be dealing with the disk on a block level, or just on a file level?
SQLite uses only the OS's file access functions, so as long as the defrag tool works with concurrent file accesses, you should be fine.
It is not a good idea to use a SQLite database, for write access, on a CIFS share. Understood.
I have a need to do so on a very infrequent basis. The database is written very infrequently on the Windows server (actually Windows 10, and only about once every few weeks) and equally infrequently from the Linux (Ubuntu 16.04.02, if it matters) server. The chance of simultaneous writes is near zero (which is not zero, of course).
As I understand it (and I am not sure I do) using the "nobrl" option on the mount allows this to work (and indeed it does work for me), but does so by disabling locking entirely (right? Unless there are other types?).
Is there a technique, without deploying code on the Windows side, to ensure that this is in fact safe -- for example, SQLite options that might not be the default? Locking the entire database is perfectly acceptable during the update on the Ubuntu side; performance is not an issue, and simultaneous access is not required. The main restriction is that I cannot change the process on the Windows side.
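To illustrate what "locking the entire database during the update" could look like on the Ubuntu side, here is a minimal sketch (the file name and table are made up, and this is not a safety guarantee: with "nobrl" the lock is advisory on this client only, so it reduces the risk of a clash but cannot enforce anything across the CIFS share):

```python
import sqlite3

# Sketch only: wrap the infrequent update in an exclusive transaction so
# SQLite holds its write lock for the whole operation.  The path and table
# name here are hypothetical stand-ins for the real database on the share.
con = sqlite3.connect("app.db", isolation_level=None)  # autocommit; we manage the txn
con.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

con.execute("BEGIN EXCLUSIVE")  # acquire the exclusive lock up front
con.execute("INSERT OR REPLACE INTO kv VALUES ('last_sync', datetime('now'))")
con.execute("COMMIT")
con.close()
```

BEGIN EXCLUSIVE (rather than the default deferred BEGIN) makes SQLite take its lock before any write starts, so a local second writer would fail fast instead of colliding mid-transaction.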
I can't seem to find an application to monitor SQLite DB performance. Currently I have a test server that uses SQLite. I'm primarily concerned with obtaining a benchmark of storage requirements and performance for scaling this server to production.
I know that for MySQL there is the standard Nagios for monitoring (changing to MySQL is not an option at this point). Is there anything analogous for SQLite?
SQLite has functions like sqlite3_status() and sqlite3_db_status(), but those do not really give you the information you want, and might not even be available in all languages.
Anyway, SQLite is an embedded library, so you'd have to monitor your actual application. Tools like Nagios allow you to monitor a server's CPU load and disk usage, but you can also use any other tool your OS provides.
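Since the numbers have to come from inside the process that uses the database, a minimal sketch of in-application monitoring might look like this (the table and workload are made up for illustration):

```python
import sqlite3
import time

# Sketch: collect storage and timing figures from within the application,
# since SQLite has no external server process that a tool could poll.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# Storage footprint: page_count * page_size is the database size in bytes.
page_count = con.execute("PRAGMA page_count").fetchone()[0]
page_size = con.execute("PRAGMA page_size").fetchone()[0]
size_bytes = page_count * page_size
print("database size:", size_bytes, "bytes")

# Performance: time individual queries and log or aggregate the results.
start = time.perf_counter()
con.execute("SELECT count(*) FROM t").fetchone()
elapsed = time.perf_counter() - start
print("query took %.6f s" % elapsed)
```

Logging figures like these periodically gives you the benchmark data for capacity planning, which an external monitor cannot see for an embedded database.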
We have an application that can use both Postgres and SQLite as its backend, depending on what the customer requires: High load and concurrency, or easy setup.
But I fear that some customers will be too clever for their own good, and to get around the (slightly) complex Postgres installation, they use SQLite, but put the file on a network disk.
Then the inevitable happens and the database gets corrupted, because SQLite is not meant for that, and we have Postgres support for that very reason.
Is there an ideal way to prevent SQLite from working on a network drive? There are some questionable ideas, like looking for a \\ at the beginning, or the colon in "C:\" (it's purely a Windows app), or parsing for IPs, but none of these is foolproof, and all of them can be circumvented by making a link to a network disk.
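For reference, a sketch of the heuristic described above (the function name is hypothetical; on Windows, a more reliable check is to resolve the path first, e.g. with os.path.realpath, and then call GetDriveTypeW via ctypes, refusing DRIVE_REMOTE volumes, since that also catches mapped drives and links):

```python
def looks_like_network_path(path: str) -> bool:
    # Heuristic only: catches UNC paths such as \\server\share\db.sqlite.
    # As noted above, it can be circumvented with a link to a network disk;
    # on Windows, resolving the path and calling GetDriveTypeW, rejecting
    # DRIVE_REMOTE, is the more robust check.
    norm = path.replace("/", "\\")
    return norm.startswith("\\\\")

print(looks_like_network_path(r"\\fileserver\data\app.db"))  # True
print(looks_like_network_path(r"C:\Users\me\app.db"))        # False
```

Even the GetDriveTypeW variant is a deterrent rather than a guarantee, so pairing it with documentation and support policy is probably still necessary.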
We have an ASP.NET website that allows users to import data from a CSV file. Recently we moved from a dedicated server to an Azure virtual machine, and the import is taking much longer. The hardware specs of the two systems are similar.
It used to take less than a minute for data to import; now it can take 10-15 minutes. The original file upload speed is fine; it is looping through the data and organizing it in the SQL database that takes the time.
Why is the Azure VM with similar specs taking so much longer and what can I do to fix it?
Our database is using Microsoft SQL Server 2012 installed on the same VM as the website.
It is very hard to make a comparison between the two environments. Was the previous environment virtualized? It might have to do with the speed of the hard disks, the placement of the SQL Server files, or some other infrastructural setup (or simply the hardware). I would recommend having a look at the performance of the machine under load (resource monitor). This kind of operation is usually both processor- and I/O-intensive. This operation should be done in parallel as well.
Due to the differences in file structure etc. between platforms, I was wondering if the database creation (with connection strings) needs to be platform-specific? Or is there maybe a way to create a database from OnAppLoad() in a platform-agnostic way?
The SQLite file format is fully portable.
A database in SQLite is a single disk file. Furthermore, the file format is cross-platform. A database that is created on one machine can be copied and used on a different machine with a different architecture. SQLite databases are portable across 32-bit and 64-bit machines and between big-endian and little-endian architectures.
You do not need to worry about it at all. The few things to watch out for are not platform-related; one example is the WAL journal mode, which is not backward compatible (SQLite versions before 3.7.0 cannot open a database left in WAL mode).
You can also read:
http://www.sqlite.org/atomiccommit.html#sect_9_0
and:
http://www.sqlite.org/lockingv3.html#how_to_corrupt