Is this a reasonable target load for SQLite?

We are in the analysis phase for an embedded Linux system that will use an ARM9 processor running at around 400 MHz.
It will have multiple sensors, and we would need to write log records to an SQLite 3 database. We estimate the maximum load imaginable would be between 100 and 200 database inserts per second.
Is this reasonable or should we go have our heads examined?

I think it is unreasonable to write logs into a database. SQL databases are designed around the assumption of frequent reads rather than frequent writes.
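For what it is worth, 100-200 inserts per second is generally achievable with SQLite on modest hardware if the rows are grouped into transactions rather than committed one at a time, since each commit costs a sync to storage. A minimal sketch using the C API is shown below; the logs table, its columns and the error handling are hypothetical, not taken from the question.

/* Sketch: batching one second's worth of sensor readings into a single
 * transaction so there is only one commit (and one sync) per batch.
 * Table and column names ("logs", ts, sensor, value) are made up. */
#include <stdio.h>
#include <sqlite3.h>

int write_batch(sqlite3 *db, const double *values, int n)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    if (rc != SQLITE_OK) return rc;

    rc = sqlite3_prepare_v2(db,
            "INSERT INTO logs(ts, sensor, value) "
            "VALUES (strftime('%s','now'), ?, ?)",
            -1, &stmt, NULL);
    if (rc != SQLITE_OK) {
        sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
        return rc;
    }

    for (int i = 0; i < n; i++) {
        sqlite3_bind_int(stmt, 1, i);            /* sensor id (example) */
        sqlite3_bind_double(stmt, 2, values[i]); /* sensor reading      */
        if (sqlite3_step(stmt) != SQLITE_DONE)
            fprintf(stderr, "insert failed: %s\n", sqlite3_errmsg(db));
        sqlite3_reset(stmt);
    }
    sqlite3_finalize(stmt);

    /* One COMMIT for the whole batch instead of one per row. */
    return sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
}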

Related

Why is writing from R to Azure SQL Server very slow?

I am trying to build out some data repositories in an Azure Managed Instance/SQL Server DB. I have been shocked by how slow the write process is from R/RStudio. As an example, it took 65 minutes to write a table to Azure and less than one minute to write it to the SQL Server instance on my local machine.
It appears to write about 20 rows per second, regardless of the number of columns (if I refresh a query within SSMS, it adds about 20 rows each time I run it).
I have read in other threads that this could be due to the performance tier. I saw things about the B, A and P tiers. The only information I see on tiers in our account is "General Purpose" and "Business Critical". We have General Purpose (Gen5) with 8 cores and 512 GB of storage, of which we are utilizing less than 10%. While performing one of these write operations from R, the overall CPU utilization is less than 1%.
I am able to read tables from Azure back to R/RStudio quickly. Only writing is significantly hampered.
All of this makes it feel like it's going much slower than it should, as if there was a throttling effect or something. It is so slow that I cannot effectively get historical data there-- I allowed several things to run last night and they all timed out.

Is it possible to host SQLite as a separate process

If we have 500 processes accessing SQLite, is it possible to host it as a separate process, so that all 500 processes do not have to perform I/O themselves?
These processes could attach to one instance of SQLite and access the data. Is this possible with SQLite?
The short answer is no. SQLite isn't a client/server database, it's just code linked into your application/process. There are 3rd party client/server implementations of SQLite, but I've never used one and can't speak to their quality. It sounds like you may be better off looking at client/server dbs such as PostgreSQL or MySQL.
It might also be worth reading Appropriate Uses For SQLite to see if your particular use case is a good fit for SQLite or not.
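To make the "just code linked into your application" point concrete, here is a minimal sketch using the C API; the file name and busy-timeout value are arbitrary examples, not part of the original question. Each process opens the database file directly, and there is no server to connect to.

/* Sketch: every process links libsqlite3 and opens the file itself.
 * "app.db" and the 2000 ms timeout are arbitrary example values. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db = NULL;
    if (sqlite3_open("app.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* When many processes share one file, have each connection wait
     * briefly for locks instead of failing at once with SQLITE_BUSY. */
    sqlite3_busy_timeout(db, 2000);

    sqlite3_exec(db, "SELECT count(*) FROM sqlite_master", NULL, NULL, NULL);

    sqlite3_close(db);
    return 0;
}

With hundreds of concurrent writers, the lock contention this implies is exactly why a client/server database is usually the better fit.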

Accessing huge data from an application

Before starting the application, I would just like to know the feasibility here.
I have around 15 GB of data (text and some images) stored in an SQLite database on my SD card, and I need to access it from my application. The data will grow on a daily basis and may reach 64 GB.
Can anyone tell me the limitations of accessing such a huge database stored on an SD card from the application?
SQLite itself supports databases in that range (16-32 GB); it may start working more slowly, but it should still work.
However, you are likely to hit the FAT32 maximum file size, which is just 4 GB, and this will be tough to overcome. SQLite allows attached databases, which let you split the data into smaller chunks, but this is really cumbersome.
If you can format your SD card as ext4, or use internal storage as ext4, then you should not really have big problems.
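As a rough illustration of the attached-databases workaround, a sketch with the C API follows; the file names, the "part2" alias and the "images" table are hypothetical.

/* Sketch: keeping each database file below the FAT32 4 GB limit by
 * splitting the data across attached databases.  Names are made up. */
#include <sqlite3.h>

int attach_second_part(sqlite3 *db)
{
    /* A second file that stays below 4 GB on its own. */
    int rc = sqlite3_exec(db,
            "ATTACH DATABASE '/sdcard/data_part2.db' AS part2",
            NULL, NULL, NULL);
    if (rc != SQLITE_OK) return rc;

    /* Tables in the attached file are addressed as part2.<name>;
     * queries can span main and part2 transparently. */
    return sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS part2.images("
            "  id INTEGER PRIMARY KEY, img BLOB)",
            NULL, NULL, NULL);
}

The cumbersome part is that the application has to decide which file every row lives in and route inserts accordingly.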

The Case of the Missing '14 second SQLite database' performance

I have developed a program which uses an SQLite 3.7 ... database; it contains a rather extensive write/read module that imports, checks and updates data. This process takes 14 seconds on my PC and I'm pleased as punch with the performance.
I use transactions for everything, with parameters. My PC is an Intel i7 with 18 GB of RAM. I have not set anything in the database. I used SQLite Expert to create the database and the data structures, including tables and columns, and checked that all indexes are created. In other words, it's all OK.
I have since deployed the program/database to 2 other machines. That 14-second process takes over 5 minutes on the other machines. Same program, identical data, identical database. The machines are up to date; one is a 3rd-gen Intel i7 bought last week, the other is quite fast as well, so hardware should not be an issue.
I'm just not understanding what the problem could be. Is it the database itself? I have not set anything on it other than encryption. Remember that I run the same thing and it takes the 14 seconds. Could it be that the database is 'optimised' for my PC, so that when I give it to others it's not optimised?
I know I could turn off journaling to get better performance, but that would only speed up the process and would still leave the problem.
Any ideas would be welcome.
EDIT:
I have tested the program on my 7-year-old dual Athlon with 3 GB of RAM running XP on an HDD, and the procedure took 35 seconds. Well within tolerable limits, considering. I just don't get what could be making 2 modern machines take 5 minutes.
I have an idea that it's a write issue, as when only reading they are slower but quite acceptable.
SQLite speed is affected most by how well the disk does random reads and writes; any SSD is much better at this than any rotating disk.
Whenever changes overflow the internal cache, they must be written to disk. You should use PRAGMA cache_size to increase the cache to more than the default 2 MB.
Changed data must be written to disk at the end of every transaction. Make sure that there are as many changes as possible in one transaction.
If much of your processing involves temporary tables or indexes, the speed is affected by the speed of the main disk. If your machines have enough RAM, you can force temporary data to RAM with PRAGMA temp_store.
You should enable Write-Ahead Logging.
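A minimal sketch of how those settings might be applied to a connection follows; the specific values (a 64 MB cache, WAL) are example choices, not measured recommendations.

/* Sketch: per-connection tuning along the lines suggested above.
 * Values are illustrative only. */
#include <sqlite3.h>

void tune_connection(sqlite3 *db)
{
    /* Negative cache_size means "this many KiB", so -65536 is ~64 MB. */
    sqlite3_exec(db, "PRAGMA cache_size = -65536", NULL, NULL, NULL);

    /* Keep temporary tables and indexes in RAM instead of on disk. */
    sqlite3_exec(db, "PRAGMA temp_store = MEMORY", NULL, NULL, NULL);

    /* Write-ahead logging: fewer syncs, and readers don't block the writer. */
    sqlite3_exec(db, "PRAGMA journal_mode = WAL", NULL, NULL, NULL);
}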
Note: the default SQLite distribution does not have encryption.

SQLite Abnormal Memory Usage

We are trying to integrate SQLite into our application and populate it as a cache. We are planning to use it as an in-memory database; we are using it for the first time. Our application is C++ based.
Our application interacts with the master database to fetch data and performs numerous operations. These operations generally concern one table, which is quite huge.
We replicated this table in SQLite, and the following are the observations:
Number of Fields: 60
Number of Records: 100,000
As the data population starts, the application's memory shoots up drastically from 120 MB to ~1.4 GB. At this point our application is idle and not doing any major operations, but normally, once the operations start, memory utilization shoots up further. With SQLite as an in-memory DB and this high memory usage, we don't think we will be able to support this many records.
Q. Is there a way to find the size of the database when it is in memory?
When I create the DB on disk, its size comes to ~40 MB, yet the memory usage of the application remains very high.
Q. Is there a reason for this high usage? All buffers have been cleared and, as said before, the DB is not in memory.
Any help would be deeply appreciated.
Thanks and Regards
Sachin
A few questions come to mind...
What is the size of each record?
Do you have memory leak detection tools for your platform?
I used SQLite in a few resource-constrained environments in a way similar to how you're using it, and after fixing bugs it was small, stable and fast.
IIRC it was unclear when to clean up certain things used by the SQLite API and when we used tools to find the memory leaks it was fairly easy to see where the problem was.
See this:
PRAGMA shrink_memory
This pragma causes the database connection on which it is invoked to free up as much memory as it can, by calling sqlite3_db_release_memory().
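A small sketch of how this can be combined with sqlite3_memory_used() to see how much memory SQLite is actually holding follows; it is purely illustrative.

/* Sketch: report SQLite's overall memory use in this process, then ask
 * the connection to give back what it can.  Illustrative only. */
#include <stdio.h>
#include <sqlite3.h>

void report_and_shrink(sqlite3 *db)
{
    printf("sqlite memory in use: %lld bytes\n",
           (long long)sqlite3_memory_used());

    /* Per the documentation quoted above, this calls
     * sqlite3_db_release_memory() for this connection. */
    sqlite3_exec(db, "PRAGMA shrink_memory", NULL, NULL, NULL);

    printf("after shrink:         %lld bytes\n",
           (long long)sqlite3_memory_used());
}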
