MariaDB recommended RAM, disk, core capacity? - mariadb

I am not able to find MariaDB's recommended RAM, disk, and number-of-cores capacity. We are setting up an initial deployment with a very small data volume, so I just need MariaDB's recommended capacity.
Appreciate your help!!!

Given that microservice architecture has been spreading rapidly over the last few years, and each microservice usually needs its own database, I think this type of question is actually becoming more relevant.
I was looking for this answer because we were exploring the possibility of creating small databases on many servers, and I was wondering, for interest's sake, what the minimum requirements for a MariaDB/MySQL database would be...
Anyway, I got this helpful answer from here that I thought I would share in case someone else is looking into it...
When starting up, it (the database) allocates all the RAM it needs. By default, it will use around 400MB of RAM, which isn't noticeable on a database server with 64GB of RAM, but is quite significant for a small virtual machine. If you add in the default InnoDB buffer pool setting of 128MB, you're well over your 512MB RAM allotment, and that doesn't include anything from the operating system.

1 CPU core is more than enough for most MySQL/MariaDB installations.
512MB of RAM is tight, but probably adequate if only MariaDB is running; you would need to aggressively shrink various settings in my.cnf. Even 1GB is tiny.
1GB of disk is more than enough for the code and minimal data (I think).
Please experiment and report back.
There are minor differences in requirements between operating systems, and between versions of MariaDB.
Turn off most of the performance_schema. If all of its flags are turned on, it consumes a lot of RAM.
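For illustration only, a minimal my.cnf for a small VM might shrink things along these lines (the exact values are assumptions to experiment with, not recommendations):

    [mysqld]
    # Shrink the InnoDB buffer pool well below its 128MB default
    innodb_buffer_pool_size = 32M
    innodb_log_buffer_size  = 4M
    # Keep the connection count and per-connection buffers small
    max_connections  = 20
    sort_buffer_size = 256K
    read_buffer_size = 128K
    # performance_schema eats a lot of RAM when fully enabled
    performance_schema = OFF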
20 years ago I had MySQL running on my personal 256MB (RAM) Windows box. I suspect today's MariaDB might be too big to work on such a tiny machine. Today, the OS is the biggest occupant of any basic machine's disk; if you have only a few MB of data, then disk is not an issue.
Look at it this way -- what is the smallest smartphone you can get? A few GB of RAM and a few GB of "storage". If you cut either of those numbers in half, the phone probably cannot work, even before you add apps.

MariaDB and MySQL both actually use very little memory. About 50 MB to 150 MB is the range I found on some of my servers. These servers are running a few databases, each with a handful of tables and limited user load. The MySQL documentation claims it needs 2 GB, which is very confusing to me. I understand why MariaDB does not specify any minimum requirements: if they say 50 MB, a lot of folks will want to disagree; if they say 1 GB, they are unnecessarily inflating the minimum requirements. Come to think of it, more memory means better caching and performance. However, a well-designed database can go to disk for every read without serious performance issues. My Apache installs (on the same servers) consistently use more memory (about double) than the database.

Related

The Case of the Missing '14 second SQLite database' performance

I have developed a program which uses an SQLite 3.7 ... database; in it there is a rather extensive write/read module that imports, checks, and updates data. This process takes 14 seconds on my PC, and I'm pleased as punch with the performance.
I use transactions for everything, with parameters. My PC is an Intel i7 with 18GB of RAM. I have not set anything in the database. I used SQLite Expert to create the database and the data structures, including tables and columns, and checked that all indexes are created. In other words, it's all OK.
I have since deployed the program/database to 2 other machines. That 14-second process takes over 5 minutes on the other machines. Same program, identical data, identical database. The machines are up to date; one is a 3rd-gen Intel i7 bought last week, and the other is quite fast as well, so hardware should not be an issue.
I'm just not understanding what the problem could be. Is it the database itself? I have not set anything other than encryption on it. Remember that I run the same thing and it takes 14 seconds. Could it be that the database is 'optimised' for my PC, so that when I give it to others it's not optimised?
I know I could turn off journalling to get better performance, but that would only speed up the process and still leave the underlying problem.
Any ideas would be welcome.
EDIT:
I have tested the program on my 7-year-old dual Athlon with 3GB of RAM running XP on an HDD, and the procedure took 35 seconds. Well within tolerable limits, considering. I just don't get what could be making 2 modern machines take 5 minutes.
I have an idea that it's a write issue, as reads on those machines are slower but quite acceptable.
SQLite speed is affected most by how well the disk does random reads and writes; any SSD is much better at this than any rotating disk.
Whenever changes overflow the internal cache, they must be written to disk. You should use PRAGMA cache_size to increase the cache to more than the default 2 MB.
Changed data must be written to disk at the end of every transaction. Make sure that there are as many changes as possible in one transaction.
If much of your processing involves temporary tables or indexes, the speed is affected by the speed of the main disk. If your machines have enough RAM, you can force temporary data to RAM with PRAGMA temp_store.
You should enable Write-Ahead Logging.
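Putting those tips together, a rough sketch of the connection setup might look like this (the cache size is an arbitrary example, not a recommendation):

    -- WAL moves writes out of the critical path of readers
    PRAGMA journal_mode = WAL;
    -- A negative cache_size is in KiB: roughly 64 MB here, versus the ~2 MB default
    PRAGMA cache_size = -65536;
    -- Keep temporary tables and indexes in RAM rather than on the main disk
    PRAGMA temp_store = MEMORY;

    -- Batch as many changes as possible into a single transaction
    BEGIN;
    -- ... the import/check/update statements go here ...
    COMMIT;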
Note: the default SQLite distribution does not have encryption.

PostgreSQL Database Slowdown

I have installed PostgreSQL 9.1 server on my production server.
I have changed all the configuration (postgresql.conf) according to the system.
Everything has been working fine for a week.
After this, the PostgreSQL server suddenly became very slow. Even a count(*) query on a table takes too much time.
After this I tried several things:
Monitored the load on the system: normal, within the range of 0.5 to 1.5.
Monitored the number of users logged in to the application: 200 to 400, normal.
Recreated indexes.
Killed the idle transactions.
Checked locks (no deadlock found).
Restarted the application server.
Restarted the database server.
After all of these activities, the performance of the database server did not improve; normal queries still take a very long time.
Then I dropped the database and recreated it, and performance increased.
Everything works after recreating the database, but after a few days the performance suddenly drops again.
This sounds like the active portion of the database is growing large enough that it doesn't fit in cache, causing actual disk access (which is orders of magnitude slower) for many of your reads. This is often caused by not vacuuming aggressively enough.
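As a quick check of that theory, you could vacuum and re-analyze an affected table by hand when the slowdown appears (the table name here is hypothetical):

    -- Reclaims dead row versions and refreshes planner statistics;
    -- VERBOSE reports how much was cleaned up
    VACUUM (VERBOSE, ANALYZE) my_big_table;

If that restores performance, making autovacuum more aggressive is the durable fix rather than dropping and recreating the database.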
Other factors could be related to your configuration of PostgreSQL and the operating system. It's hard to give much advice without knowing:
exactly what version of PostgreSQL you're using (9.1 tells us the major release, but the minor release sometimes matters),
how you have PostgreSQL configured,
what OS you're running on,
what hardware you are using (cores, RAM, drive arrays, controllers), etc.
Part of that can be supplied by posting the results of running the query on this page:
http://wiki.postgresql.org/wiki/Server_Configuration
It might also help to select relname, relpages, and reltuples from pg_class for the tables involved, and to compare the numbers from when things are running well to when they are slow.
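For example (the table and index names are placeholders):

    -- Bloat shows up as relpages growing much faster than reltuples
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('my_table', 'my_table_pkey');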
With the additional information, people should be able to make some pretty specific recommendations.

ZODB in-memory backend?

I'm currently working on a fairly large project (active members number in the hundreds of thousands) and was strongly leaning toward Plone solutions.
I've asked some questions related to it like here and here.
I got some replies from very experienced Plonistas (and active Stack Overflowers as well), which I really appreciate. People keep saying Plone does not scale well to that size, and most of the blame falls on ZODB.
Then I thought of an in-memory backend for ZODB. RAM is really cheap now! You can get 128GB for just ~$3k, ten times the price of a normal $300 128GB SSD, and achieve ~30GB/s of I/O bandwidth compared to the SSD's ~300MB/s.
An in-memory backend + blobs for binaries + disk journalling every 10 seconds for backup + dropping all undo except the last 10 seconds would be an instant kill! It should smoke the RDBMSes and offer full ACID + transactions + object mapping, compared to the likes of Couch*/Redis etc.
Is it technically possible? Is there any implementation? Is it worth implementing (in your opinion)?
There is a memcache option for RelStorage which helps when you need to use a slow database, but really you should probably just leave that sort of caching to your operating system and make sure your database server has plenty of RAM. (If your RAM is large enough then your filesystem cache should already store most of the data.)
An SSD will significantly reduce the worst-case read latencies for random access to data not already in the filesystem cache. It seems silly not to use them now, especially as the Intel 330 SSD is so cheap and has a capacitor equivalent to a battery-backed RAID controller (making writes superfast too).
An all in RAM solution can never be considered ACID, as it won't be Durable.
As mentioned in my comment on your other post, it is not the ZODB that is the problem here but Plone's synchronous use of a single contended portal_catalog.
Instead of keeping the entire ZODB in memory, you could mount the portal_catalog at a separate mount point and keep that in memory. I've seen this kind of configuration before, and it works smoothly for about 8k users on standard hardware (2 servers + 1 ZEO server). It may be sufficient for your needs, perhaps with more powerful hardware.
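As a hedged sketch of that setup, the ZEO client's zope.conf might mount the catalog from its own storage with a generous in-RAM object cache (the mount path, port, storage name, and cache size are all assumptions to adapt):

    <zodb_db catalog>
        # Separate mount point for the catalog, with a large object
        # cache so it effectively stays resident in RAM
        mount-point /Plone/portal_catalog
        cache-size 100000
        <zeoclient>
            server localhost:8100
            storage catalog
        </zeoclient>
    </zodb_db>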

w3wp.exe has high cpu usage on every request

I'm running a Windows 2008 server (a VPS with 1GB of RAM), with SQL Server Express and IIS 7 installed. On it I'm hosting a NopCommerce 1.7 website, with a database of around 26 000 products.
Right now I'm the only user of the website (it's in development) and I'm getting rather bad performance from it. To be more specific every time I make a request, the worker process goes to 90-100% CPU usage for a few seconds. Is it me or this is a lot for a 1 user NopCommerce website? Any ideas why this happens and what I can do to rectify it or further investigate?
PS: the worker process uses between 100MB and 400MB of memory (private working set), and SQL Server, with this database, around 160MB. Do you have any suggestions other than the obvious one to get more RAM? I intend to get one more GB, but I fear this will not solve the CPU usage problem.
You've already stated you're going to get more RAM, but don't be surprised how much a lack of RAM can impact the CPU. If your RAM is not able to hold large objects efficiently because of lack of space (and I'd say using 40% of available RAM qualifies), then the CPU has to work harder to page things in and out of virtual memory. 90% is a little overkill for this, but with the server specs you give it's not impossible.
The most likely problem is that there is a hole in your code somewhere. My guess is that you have either an infinite loop or a direct memory leak (resources opened during requests that aren't closed, perhaps?). Your best bet would be to get the IIS Debug Diagnostics tool, install it, and set up reports to find out what is going on directly on the server.

Host ASP.Net WebSite - What would be an Ideal Hardware Configuration?

I am studying various ASP.NET deployment approaches, and along the way I ran into a basic question: is there any rule of thumb about defining the environment? What could be called a 'good' setup if I have to support 1000 concurrent users (requests)?
I understand that there are many factors, like how the application is designed. But assuming that everything else is great, what configuration should I look for: which processor, how much RAM, etc.?
Also, how many concurrent users should the configuration below be able to support?
CPU: Dual 3.40 GHz Intel Xeon (Hyper-Threaded)
Memory : 3GB
OS: Windows Server 2003 SP2
Thanks for the help.
Having been on both sides of the equation (web developer and hardware engineer), my current opinion is that the answer involves both of those sides as well.
Your hardware needs to be not only sufficient for general usage, but it also has to cope with reasonable unexpected peaks and failures - which means that it needs to be redundant, and in excess of your capacity planning.
Your software needs to be designed so it's easily made redundant; there's no point in speccing a tiered hardware architecture (now or for future planning) if the software is going to require a significant amount of change to handle it.
Your software also needs to be designed so that sudden, unexpected peaks in resource usage don't happen as a regular occurrence without an external reason (e.g. a marketing campaign).
I know that you say you understand the non-hardware factors, but the real answer to your question is that there is no real way to answer it without knowing the other factors - each situation and circumstance is unique, and requires a unique solution.
However, in an effort to add generalised recommendations, try these:
CPU - choose something with a lot of cache, and individual cache per core as well. This will do wonders to speed up the system. I typically go for dual-core, dual-processor at a minimum (for a total of 4 cores on two separate physical CPUs). Processor speed ratings don't really matter as much as you might think these days.
Memory - fast memory, and a minimum of 8GB of it. Use the smallest DIMMs possible for the server.
Hard disk - SAS 15K RPM at a minimum; RAID 6 for the data partition on one controller, RAID 1 or 6 for the system partition on another controller. Choose a good-quality controller backed by a good support or warranty package - your controller is no good if it dies in 3 years' time and you can't get a replacement.
But above all, don't just install the OS and the app and let it be: profile the setup as much as possible, and don't be afraid of making changes to optimise for the individual setup (within reason). Move your ASP.NET temporary files to a fast disk (or a RAM disk; if they are going to be rebuilt anyway, there is no point worrying about losing them), as in the snippet below. Move the database to a second server with a crossover 1Gbit link between the two. Turn off disk maintenance in the OS, and turn off services you do not need.
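For instance, relocating the ASP.NET temporary compilation files is a small web.config change (the path here is a placeholder for your fast disk or RAM disk):

    <configuration>
      <system.web>
        <!-- tempDirectory points dynamic compilation output at a faster volume -->
        <compilation tempDirectory="E:\FastDisk\aspnet_temp\" />
      </system.web>
    </configuration>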
Good luck!
