Minimongo memory consumption - Meteor

I have a collection that is about 30 MB in size (on the server). It is fully mirrored to the client. Can someone tell me why the client uses hundreds of megabytes of RAM (400 MB+) instead of roughly 30 MB? I searched for information on Minimongo memory efficiency but found nothing, so I'm asking here.

Related

ASP.NET Core Performance on Azure vs Local

I have a website with a function that generates an Excel report that is pretty much just a data dump, approximately 16,000 rows, using EPPlus. This report keeps timing out for users on Azure. The timeout (524) is a Cloudflare limit that kicks in if the request takes longer than 100 seconds.
I have optimised the hell out of it using HashSets and Dictionaries, and it now runs in under 2 seconds on my laptop in Debug. I've also tried publishing with the target runtime set to win-x64, in case it's a memory allocation issue.
Initially I thought the bottleneck would be memory. After setting up Application Insights, I can see that the CPU is at 100% while memory usage is fairly low, about 300MB. I've bumped the Service Plan up to P3V2 (14GB RAM & 840 ACU) to test whether it's just a resource allocation issue. Even at that level, the report takes about 50-60 seconds to produce. I can't run the app at that level, so I need to get it down much lower.
I'm not sure how else to optimise this, or identify the bottleneck. Any ideas?

MariaDB recommended RAM, disk, core capacity?

I am not able to find MariaDB's recommended RAM, disk, and CPU core capacity. We are setting up an initial installation with a very small data volume, so I just need MariaDB's recommended capacity.
Appreciate your help!
Seeing that microservice architecture has been rapidly gaining ground over the last few years, and each microservice usually needs its own database, I think this type of question is actually becoming more appropriate.
I was looking for this answer because we were exploring the possibility of creating small databases on many servers, and was wondering, for interest's sake, what the minimum requirements for a MariaDB/MySQL database would be...
Anyway, I got this helpful answer from here, which I thought I could share in case someone else is looking into it...
When starting up, it (the database) allocates all the RAM it needs. By default, it will use around 400MB of RAM, which isn't noticeable on a database server with 64GB of RAM, but it is quite significant for a small virtual machine. If you add in the default InnoDB buffer pool setting of 128MB, you're well over your 512MB RAM allotment, and that doesn't include anything from the operating system.
1 CPU core is more than enough for most MySQL/MariaDB installations.
512MB of RAM is tight, but probably adequate if only MariaDB is running. But you would need to aggressively shrink various settings in my.cnf. Even 1GB is tiny.
1GB of disk is more than enough for the code and minimal data (I think).
Please experiment and report back.
There are minor differences in requirements between Operating system, and between versions of MariaDB.
Turn off most of the Performance_schema. If all the flags are turned on, lots of RAM is consumed.
20 years ago I had MySQL running on my personal 256MB (RAM) Windows box. I suspect today's MariaDB might be too big to work on such a tiny machine. Today, the OS is the biggest occupant of any basic machine's disk. If you have only a few MB of data, then disk is not an issue.
Look at it this way -- What is the smallest smartphone you can get? A few GB of RAM and a few GB of "storage". If you cut either of those numbers in half, the phone probably cannot work, even before you add apps.
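As a rough illustration of "aggressively shrinking" my.cnf as suggested above, a minimal low-memory configuration might look something like this. The values are illustrative starting points for a tiny VM, not recommendations, so test them against your own workload:
[mysqld]
performance_schema = OFF          # big easy win, as noted above
innodb_buffer_pool_size = 32M     # default is 128M
innodb_log_buffer_size = 4M
key_buffer_size = 8M              # only matters for MyISAM tables
tmp_table_size = 8M
max_heap_table_size = 8M
max_connections = 20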
MariaDB and MySQL both actually use very little memory. About 50 MB to 150 MB is the range I found on some of my servers. These servers run a few databases, each with a handful of tables and limited user load. The MySQL documentation claims it needs 2 GB, which is very confusing to me. I understand why MariaDB does not specify any minimum requirements: if they say 50 MB, a lot of folks will want to disagree; if they say 1 GB, they are unnecessarily inflating the minimum requirements. Come to think of it, more memory means better caching and performance. However, a well-designed database can do disk reads every time without any performance issues. My Apache installs (on the same server) consistently use more memory (about double) than the database.

Can I tell if an application has a memory leak based only on its memory consumption?

I was told that on one of our environments an ASP.NET application consumes up to 64GB of RAM. I don't know how long it takes to consume that much, and I have not tried to monitor this app with any kind of tool yet, but I suspect a memory leak. My colleague said that maybe it is not one, and that it's possible the GC decides not to garbage collect because it still has 64GB of RAM left.
From what I understand, it's not possible to use that much RAM without some extensive caching built in, and I have not seen any in this application's source code. I know the GC can decide to grow Generation 0 when it sees that it needs more space, but in order to consume 64GB, that memory must be held by either Gen2 or the LOH, right? This is a Business Intelligence app and it does store some data in Session between postbacks so that it does not hit the data warehouse every time, but 64GB of RAM consumed still seems suspicious to me.

SQLite Abnormal Memory Usage

We are trying to integrate SQLite into our application and populate it as a cache. We plan to use it as an in-memory database; this is our first time using it. Our application is C++ based.
Our application interacts with the master database to fetch data and performs numerous operations. These operations are generally concerned with one table, which is quite large.
We replicated this table in SQLite, and these are the observations:
Number of Fields: 60
Number of Records: 100,000
As the data population starts, the memory usage of the application shoots up drastically, from 120MB to ~1.4 GB. At this point our application is idle and not doing any major operations; normally, once the operations start, memory utilization shoots up further. With SQLite as an in-memory DB and this high memory usage, we don't think we will be able to support this many records.
Q. Is there a way to find the size of the database when it is in memory?
When I create the DB on disk, the DB size comes to ~40MB, but the memory usage of the application still remains very high.
Q. Is there a reason for this high usage? All buffers have been cleared and, as said before, the DB is not in memory.
Any help would be deeply appreciated.
Thanks and Regards
Sachin
A few questions come to mind...
What is the size of each record?
Do you have memory leak detection tools for your platform?
I used SQLite in a few resource-constrained environments in a way similar to how you're using it, and after fixing bugs it was small, stable and fast.
IIRC it was unclear when to clean up certain things used by the SQLite API, and when we used tools to find the memory leaks it was fairly easy to see where the problem was.
See this:
PRAGMA shrink_memory
This pragma causes the database connection on which it is invoked to free up as much memory as it can, by calling sqlite3_db_release_memory().
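To make that concrete, here is a minimal C++ sketch (an illustration assuming an in-memory connection, not your actual code) showing one way to estimate the size of the in-memory database via PRAGMA page_count and PRAGMA page_size, and to hand cached memory back with sqlite3_db_release_memory() / PRAGMA shrink_memory:
#include <sqlite3.h>
#include <cstdio>

// Helper: run a single-value PRAGMA and return its integer result.
static long long pragma_int(sqlite3 *db, const char *sql) {
    sqlite3_stmt *stmt = nullptr;
    long long value = -1;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) == SQLITE_OK &&
        sqlite3_step(stmt) == SQLITE_ROW) {
        value = sqlite3_column_int64(stmt, 0);
    }
    sqlite3_finalize(stmt);
    return value;
}

int main() {
    sqlite3 *db = nullptr;
    sqlite3_open(":memory:", &db);          // in-memory database

    // ... populate the cache table here ...

    // Rough size of the database content: pages in use * page size.
    long long pages     = pragma_int(db, "PRAGMA page_count;");
    long long page_size = pragma_int(db, "PRAGMA page_size;");
    std::printf("database size ~ %lld bytes\n", pages * page_size);

    // Memory currently allocated by the SQLite library as a whole.
    std::printf("sqlite3_memory_used = %lld bytes\n",
                (long long) sqlite3_memory_used());

    // Ask this connection to give back as much memory as it can.
    sqlite3_db_release_memory(db);                                        // C API
    sqlite3_exec(db, "PRAGMA shrink_memory;", nullptr, nullptr, nullptr); // SQL equivalent

    sqlite3_close(db);
    return 0;
}
The page_count * page_size figure should be close to the ~40MB you see on disk. If sqlite3_memory_used() is also far below the 1.4GB process footprint, the remaining memory is being held outside SQLite (your own buffers, allocator fragmentation, and so on), which is worth checking with a leak-detection tool as suggested above.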

Membase Blocking on Key Eviction?

We've been using Memcached for a while and recently started testing Membase in AWS. We're testing a single instance of Membase 1.6.0 on a large EC2 instance with 5GB RAM, 750GB disk (Linux FC8).
We've noticed that SQLite seems to block on eviction purges on an hourly basis when expiryPagerSleeptime wakes up. Although this was expected (because SQLite uses database level locking), we didn't expect that Membase would block as well.
In this case, it seems that while SQLite is deleting old keys, Membase "operations per second" fall to zero or near zero for several minutes. After the eviction process has finished, the Membase server quickly recovers. I would have anticipated that reads from Membase RAM would still proceed while SQLite was locked but this doesn't seem to be the case. Everything stops; the spy clients throw streams of exceptions as they time-out waiting for data that never arrives.
My impression from the docs was that Membase was asynchronous and would continue to serve reads from RAM. I would appreciate any help or suggestions to prevent Membase from blocking on key evictions. This is a serious issue for us because it seems to take about 4 minutes for this eviction process to finish and for the backlog in the disk queue to clear. That means every hour, Membase is effectively offline for 4 minutes.
I should also mention that this happens once the data is larger than RAM (and its size on disk is increasing). We didn't notice any issues with key eviction when the data fit entirely in RAM (presumably because key eviction in RAM happens too quickly to be noticeable).
To avoid duplicating information, this question is answered and explained here: http://www.couchbase.com/forums/thread/membase-blocking-key-eviction
Perry
