SQLite Abnormal Memory Usage - sqlite

We are trying to integrate SQLite into our application and populate it as a cache. We plan to use it as an in-memory database; this is our first time using it. Our application is C++ based.
Our application interacts with the master database to fetch data and performs numerous operations. These operations generally concern one table which is quite huge in size.
We replicated this table in SQLite, and the following are our observations:
Number of fields: 60
Number of records: 100,000
As the data population starts, the memory of the application shoots up drastically from ~120 MB to ~1.4 GB. At this point our application is idle and not doing any major operations, but normally, once the operations start, memory utilization climbs further. With SQLite as an in-memory DB and this high memory usage, we don't think we will be able to support this many records.
Q. Is there a way to find the size of the database when it is in memory?
When I create the DB on disk, its size comes to only ~40 MB, yet the memory usage of the application still remains very high.
Q. Is there a reason for this high usage? All buffers have been cleared and, as said above, in this case the DB is not in memory.
Any help would be deeply appreciated.
Thanks and Regards
Sachin

A few questions come to mind...
What is the size of each record?
Do you have memory leak detection tools for your platform?
I used SQLite in a few resource-constrained environments in a way similar to how you're using it, and after fixing bugs it was small, stable and fast.
IIRC it was unclear when to clean up certain objects used by the SQLite API; when we used tools to find the memory leaks, it was fairly easy to see where the problem was.
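For illustration only (this is not from the original code): a common way to leak memory with the SQLite C API is to prepare statements and never finalize them, or to close the connection while statements are still open. A minimal C++ sketch of the cleanup pattern:
#include <sqlite3.h>
#include <cstdio>

// Sketch: every sqlite3_prepare_v2() needs a matching sqlite3_finalize(),
// and every sqlite3_open() a matching sqlite3_close(); otherwise the
// statement objects and their cached pages stay allocated.
int main() {
    sqlite3 *db = nullptr;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

    sqlite3_stmt *stmt = nullptr;
    if (sqlite3_prepare_v2(db, "SELECT 1;", -1, &stmt, nullptr) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW)
            std::printf("%d\n", sqlite3_column_int(stmt, 0));
        sqlite3_finalize(stmt);   // release the statement's memory
    }
    sqlite3_close(db);            // returns SQLITE_BUSY if statements remain open
    return 0;
}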

See this:
PRAGMA shrink_memory
This pragma causes the database connection on which it is invoked to free up as much memory as it can, by calling sqlite3_db_release_memory().
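To make this concrete, and to address the earlier question about the size of the database when it is in memory, here is a minimal C++ sketch (assuming the SQLite C API and a version recent enough for PRAGMA shrink_memory, i.e. 3.7.10+) that reports the logical database size, reports how much heap SQLite currently holds, and then asks the connection to release what it can:
#include <sqlite3.h>
#include <cstdio>
#include <cstdlib>

// Callback that stores a single integer result from a PRAGMA query.
static int grab_int(void *out, int argc, char **argv, char **) {
    if (argc == 1 && argv[0]) *static_cast<long long *>(out) = std::atoll(argv[0]);
    return 0;
}

// Sketch: the "size of the database when it is in memory" can be estimated as
// page_count * page_size; sqlite3_memory_used() shows SQLite's total heap use,
// which also covers caches, statements and other bookkeeping.
void report_and_shrink(sqlite3 *db) {
    long long pages = 0, page_size = 0;
    sqlite3_exec(db, "PRAGMA page_count;", grab_int, &pages, nullptr);
    sqlite3_exec(db, "PRAGMA page_size;", grab_int, &page_size, nullptr);
    std::printf("logical DB size: %lld bytes\n", pages * page_size);
    std::printf("SQLite heap in use: %lld bytes\n", (long long)sqlite3_memory_used());
    sqlite3_exec(db, "PRAGMA shrink_memory;", nullptr, nullptr, nullptr);
    // Equivalent C-level call: sqlite3_db_release_memory(db);
}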

Related

MariaDB recommended RAM, disk, core capacity?

I am not able to find MariaDB's recommended RAM, disk, and CPU core capacity. We are setting up an initial deployment with a very minimal data volume, so I just need MariaDB's recommended capacity.
Appreciate your help!!!
Seeing that microservice architecture has been spreading rapidly over the last few years, and each microservice usually needs its own database, I think this type of question is actually becoming more appropriate.
I was looking for this answer because we were exploring the possibility of creating small databases on many servers, and was wondering, for interest's sake, what the minimum requirements for a MariaDB/MySQL DB would be...
Anyway, I got this helpful answer from here that I thought I could also share in case someone else is looking into it...
When starting up, it (the database) allocates all the RAM it needs. By default, it will use around 400 MB of RAM, which isn't noticeable on a database server with 64 GB of RAM, but is quite significant for a small virtual machine. If you add in the default InnoDB buffer pool setting of 128 MB, you're well over your 512 MB RAM allotment, and that doesn't include anything from the operating system.
1 CPU core is more than enough for most MySQL/MariaDB installations.
512MB of RAM is tight, but probably adequate if only MariaDB is running. But you would need to aggressively shrink various settings in my.cnf. Even 1GB is tiny.
1GB of disk is more than enough for the code and minimal data (I think).
Please experiment and report back.
There are minor differences in requirements between Operating system, and between versions of MariaDB.
Turn off most of the Performance_schema. If all the flags are turned on, lots of RAM is consumed.
20 years ago I had MySQL running on my personal 256 MB (RAM) Windows box. I suspect today's MariaDB might be too big to work on such a tiny machine. Today, the OS is the biggest occupant of any basic machine's disk. If you have only a few MB of data, then disk is not an issue.
Look at it this way -- What is the smallest smartphone you can get? A few GB of RAM and a few GB of "storage". If you cut either of those numbers in half, the phone probably cannot work, even before you add apps.
MariaDB and MySQL both actually use very little memory. About 50 MB to 150 MB is the range I found on some of my servers. These servers are running a few databases, each with a handful of tables and limited user load. The MySQL documentation claims it needs 2 GB, which is very confusing to me. I understand why MariaDB does not specify any minimum requirements: if they say 50 MB, a lot of folks will want to disagree; if they say 1 GB, they are unnecessarily inflating the minimum. Come to think of it, more memory means better caching and performance, but a well-designed database can do disk reads every time without any performance issues. My Apache installs (on the same servers) consistently use up more memory (about double) than the database.

Can I tell if an application has a memory leak based only on its memory consumption?

I was told that in one of our environments an ASP.NET application consumes up to 64 GB of RAM. I don't know how long it takes to consume that much, and I have not tried to monitor this app with any kind of tool yet, but I suspect a memory leak. My colleague said that maybe it is not, and that it's possible the GC decides not to collect because there is still RAM left.
From what I understand, it's not possible to use that much RAM without some extensive caching built in, and I have not seen any in this application's source code. I know the GC can decide to grow Generation 0 when it sees that it needs more space, but in order to consume 64 GB this memory must be used by either Gen 2 or the LOH, right? This is a business intelligence app, and it does store some data in Session between postbacks so that it does not hit the data warehouse every time, but 64 GB of RAM consumed still seems suspicious to me.

Accessing huge data from application

Before starting on the application, I would just like to know whether this is feasible.
I have around 15 GB of data (text and some images) stored in a SQLite database on my SD card, and I need to access it from my application. The data will grow on a daily basis and may reach 64 GB.
Can anyone tell me the limitations of accessing such a huge database stored on an SD card from the application?
SQLite itself supports databases in that range, say 16-32 GB (it may start working more slowly, but it should still work).
However, you are likely to hit the FAT32 maximum file size, which is just 4 GB, and this will be tough to overcome. SQLite allows you to use attached databases, which let you split the data into smaller chunks (see the sketch below), but this is really cumbersome.
If you can format your SD card as ext4, or use internal storage, which is ext4, then you should not really have big problems.
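For illustration only (the file names and table names here are made up), splitting the data across several sub-4 GB files with ATTACH DATABASE might look roughly like this using the SQLite C API:
#include <sqlite3.h>

// Sketch: keep each file under FAT32's 4 GB limit by splitting the data
// across several database files attached to one connection.
void open_split_database(sqlite3 **db) {
    sqlite3_open("/sdcard/data_part1.db", db);   // hypothetical path
    sqlite3_exec(*db, "ATTACH DATABASE '/sdcard/data_part2.db' AS part2;",
                 nullptr, nullptr, nullptr);
    // Queries can now span both files, e.g.:
    //   SELECT * FROM documents UNION ALL SELECT * FROM part2.documents;
}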

The Case of the Missing '14 second SQLite database' performance

I have developed a program which uses a SQLite 3.7 ... database; in it there is a rather extensive write/read module that imports, checks and updates data. This process takes 14 seconds on my PC and I'm pleased as punch with the performance.
I use transactions for everything, with parameters. My PC is an Intel i7 with 18 GB of RAM. I have not set anything in the database. I used SQLite Expert to create the database and the data structures, including tables and columns, and checked that all indexes are created. In other words, it's all OK.
I have since deployed the program/database to 2 other machines. That 14-second process takes over 5 minutes on the other machines. Same program, identical data, identical database. The machines are up to date; one is a 3rd-gen Intel i7 bought last week, and the other is quite fast as well, so hardware should not be an issue.
I'm just not understanding what the problem could be. Is it the database itself? I have not set anything on it other than encryption. Remember that I run the same thing and it takes 14 seconds. Could it be that the database is 'optimised' for my PC, so that when I give it to others it is not optimised?
I know I could turn off journaling to get better performance, but that would only speed up the process and still leave the problem.
Any ideas would be welcome.
EDIT:
I have tested the program on my 7-year-old dual Athlon with 3 GB of RAM running XP on an HDD, and the procedure took 35 seconds, which is well within tolerable limits considering. I just don't get what could be making 2 modern machines take 5 minutes.
I have an idea that it's a write issue, as when only reading they are slower but quite acceptable.
SQLite's speed is affected most by how well the disk does random reads and writes; any SSD is much better at this than any rotating disk.
Whenever changes overflow the internal cache, they must be written to disk. You should use PRAGMA cache_size to increase the cache to more than the default 2 MB.
Changed data must be written to disk at the end of every transaction. Make sure that there are as many changes as possible in one transaction.
If much of your processing involves temporary tables or indexes, the speed is affected by the speed of the main disk. If your machines have enough RAM, you can force temporary data to RAM with PRAGMA temp_store.
You should enable Write-Ahead Logging.
Note: the default SQLite distribution does not have encryption.
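Putting the pragma suggestions above together, here is a rough C++ sketch of how they might be applied through the SQLite C API; the cache size is only an example value, not a tuned recommendation:
#include <sqlite3.h>

// Rough sketch of the tuning suggested above; the numbers are illustrative only.
void tune_connection(sqlite3 *db) {
    // A negative cache_size means "this many KiB": roughly 64 MB of page cache here.
    sqlite3_exec(db, "PRAGMA cache_size = -65536;", nullptr, nullptr, nullptr);
    // Keep temporary tables and indexes in RAM instead of on the main disk.
    sqlite3_exec(db, "PRAGMA temp_store = MEMORY;", nullptr, nullptr, nullptr);
    // Write-ahead logging generally writes less and syncs less often.
    sqlite3_exec(db, "PRAGMA journal_mode = WAL;", nullptr, nullptr, nullptr);
    // And batch as many changes as possible into one transaction:
    //   sqlite3_exec(db, "BEGIN;", ...);  ...  sqlite3_exec(db, "COMMIT;", ...);
}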

ASP.NET: Large memory leak... Where is it? DB? UpdatePanels? No disposables?

I have been developing quite a large application, and I uploaded it to my server some days ago. Now I have found out it has several memory leaks - uh oh.
My server is running Windows Server 2008 with 1 GB of RAM. When I have 0 people online, only 550-600 MB is used. When one person comes online the memory starts skyrocketing, and if 3-4 people are online all 1 GB of RAM is used.
The application is made in ASP.NET with AJAX. It has many UpdatePanels which refresh every second and quite a lot of JavaScript. It uses 5-7 sessions at all times. I use LINQ to SQL for database communication.
I tried perfmon.exe on my server, and I found:
Gen 0 collections go from 0% to 100% within minutes
Gen 1 collections go from 0% to 50% within 5 minutes
Gen 2 is very close to 0% at all times
Total heap bytes goes up to 100% very fast
I also ran an analysis of my program with Visual Studio. 8% of my total runtime is spent in .ToList() methods, which probably is caused by LINQ to SQL.
My theories....
(1) LINQ to SQL DataContext
This might be a crazy thing to do, but: In my data access layer I have a load of methods:
AddSomethingToDatabase();
AddSomethingElseToDatabase();
DeleteSomethingFromDatabase();
Each of these has the following initialization:
GameDataContext db = new GameDataContext();
Which means the above statement runs nearly every second or more.
(2) No objects implement IDisposable
I have to be honest: I have never worked with IDisposable. As far as I have read, this might be a problem.
Also, if this is the leak, which classes should implement it? I do not have any I/O work or the like, only the DataContext.
(3) Loads of UpdatePanels and jQuery
I fear that loads of UpdatePanels can cause performance problems, but I do not know how to check this.
So my question is: Any ideas on what the memory leak could be? Any ideas on how to find the memory leak? And any ideas on how to solve it?
I would love to hear from someone who has experience with the situation above!
Thanks,
Lars
I am not at all sure that there is a problem here. All the memory-leak troubleshooting suggestions seem to be really bad advice when you have not yet established that you have a leak, and the memory on your server is so low that this cannot be established.
So here are my 2 cents; some might not like it, but as long as it could point you in the right direction, I do not mind the downvotes.
It seems that you have a very stringent memory budget. 1 GB of RAM for Windows Server 2008 gives it just about enough to do its OS-related job; this is way below the recommended RAM for it, which, if I am not wrong, is a minimum of 2 GB. The overhead of just running w3wp.exe and IIS would be around 200-300 MB. The fact that generation 2 is always around 0% is the best evidence that all looks good and your server is probably just being starved of memory.
My suggestion is to give your server at least 2GB of RAM (4GB should be better) and then monitor the memory usage and see if it is going up. If so, post another question with your findings and we should be able to help.
You absolutely must ensure that IDisposable objects get Dispose called when you are done with them. The simplest way to do this is to use using:
using (GameDataContext db = new GameDataContext())
{
// code that uses 'db' goes in here
}
// Dispose called when 'using' scope ends
If you still have problems after doing this throughout, then profiling is needed, but fix this first since it's a no-brainer.
Your own objects usually only need to implement IDisposable if they encapsulate unmanaged resources for which you wish to guarantee deterministic release back to the OS, so that those resources - file handles, sockets, and so on - are not sitting around waiting for GC for an interval of time you cannot rely on.
I don't have an answer for your question 3), sorry.
I'd recommend you use a memory profiler. Red Gate's ANTS is very good; it can give you a breakdown of which objects are in memory at a given time.
I am no expert at this, but trying ANTS Memory Profiler might help you figure out where the problem is.
Scitech memory profiler found our leaks and gives good advice.
