MariaDB on RAID 10 NVMe SSDs - slow read speeds - mariadb

It would seem to me that we have a bottleneck we just can't seem to get over.
We have a setup with 4 NVMe drives in RAID 10.
We are using MariaDB 10.4.
We have indexes.
The workload we have will be IO bound 99% of the time; there is no way around that fact.
What I have seen while watching the performance dashboard in MySQL Workbench is that both the SATA SSD and the NVMe SSD read at about 100 MB/s for the same data set.
Now, if I am searching through 200M rows (or pulling 200M), I would think that the InnoDB disk read would go faster than 100 MB/s.
I mean, these drives should be capable of reading 3 GB/s, so I would at least expect to see something like 500 MB/s.
The reality here is that I am seeing exactly the same speed on the NVMe that I see on the SATA SSD.
So the question I have is: how do I get these disks to be fully utilized?
Here are the only config settings outside of replication:
sql_mode = 'NO_ENGINE_SUBSTITUTION'
max_allowed_packet = 256M
innodb_file_per_table
innodb_buffer_pool_size = 100G
innodb_log_file_size = 128M
innodb_write_io_threads = 16 # Not sure these 2 lines actually do anything
innodb_read_io_threads = 16
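
One way to see how much of that read traffic is actually hitting disk rather than being served from the buffer pool is to compare InnoDB's status counters. These are standard status variables; the ratio is only a rough indicator:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';  -- logical read requests
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';          -- reads that had to go to disk
SHOW GLOBAL STATUS LIKE 'Innodb_data_read';                  -- total bytes read from disk so far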

"IO bound, there is no way around that fact"
Unless you are very confident in the suitability of your indexes, this seems a little presumptuous.
Assuming you're right, wouldn't this imply a 100% write workload, or a data size orders of magnitude larger than the available RAM and a uniform distribution of small accesses?
innodb_io_capacity is providing a default limitation and your hardware is capable of more.
Also, if you are reading so frequently, your innodb_buffer_pool_size isn't sufficient.
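
As a rough sketch of the innodb_io_capacity point, something like the following could be tried. The numbers are only illustrative assumptions for NVMe, not tuned values, and this setting mainly governs background flushing and purge rather than the speed of a single large scan:

innodb_io_capacity     = 2000   # default is 200, far below what NVMe can sustain
innodb_io_capacity_max = 4000   # ceiling for bursts of background IO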

Related

Innodb_buffer_pool_size High but RAM usage is Low

My app is Python with MariaDB on 8 vCPU cores and 16 GB RAM.
I've set innodb_buffer_pool_size to 11G, but RAM usage is low; it never crosses 5G.
My application then runs quite slowly. What causes this? Any clues and guidance?
DB size is less than 5 GB.
I attached htop statistics; it seemed there were plenty of resources available. What clues can we get? See the htop screenshot.
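
The buffer pool only fills as pages are actually read, so with less than 5 GB of data the resident memory will plausibly never approach 11 GB. One way to check how full the pool really is, using standard status counters:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';  -- pages allocated to the pool
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data';   -- pages that actually hold data
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';   -- pages still empty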

MariaDB recommended RAM, disk, and core capacity?

I am not able to find MariaDB's recommended RAM, disk, and number of cores. We are setting up at an initial level with a very minimal data volume, so I just need MariaDB's recommended capacity.
Appreciate your help!!!
Seeing that over the last few years microservice architecture has been rapidly gaining ground, and each microservice usually needs its own database, I think this type of question is actually becoming more relevant.
I was looking for this answer because we were exploring the possibility of creating small databases on many servers, and was wondering, for interest's sake, what the minimum requirements for a MariaDB/MySQL database would be...
Anyway, I got this helpful answer from here that I thought I could also share in case someone else was looking into it...
When starting up, it (the database) allocates all the RAM it needs. By default, it will use around 400 MB of RAM, which isn't noticeable on a database server with 64 GB of RAM, but it is quite significant for a small virtual machine. If you add in the default InnoDB buffer pool setting of 128 MB, you're well over your 512 MB RAM allotment, and that doesn't include anything from the operating system.
1 CPU core is more than enough for most MySQL/MariaDB installations.
512MB of RAM is tight, but probably adequate if only MariaDB is running. But you would need to aggressively shrink various settings in my.cnf. Even 1GB is tiny.
1GB of disk is more than enough for the code and minimal data (I think).
Please experiment and report back.
There are minor differences in requirements between Operating system, and between versions of MariaDB.
Turn off most of the Performance_schema. If all the flags are turned on, lots of RAM is consumed.
20 years ago I had MySQL running on my personal 256MB (RAM) Windows box. I suspect today's MariaDB might be too big to work on such a tiny machine. Today, the OS is the biggest occupant of any basic machine's disk. If you have only a few MB of data, then disk is not an issue.
Look at it this way -- What is the smallest smartphone you can get? A few GB of RAM and a few GB of "storage". If you cut either of those numbers in half, the phone probably cannot work, even before you add apps.
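Pulling the advice above together, a purely illustrative my.cnf sketch for a very small VM might look like this. Every value is an assumption meant to shrink the footprint, not a recommendation for any real workload:

[mysqld]
performance_schema      = OFF    # avoids the RAM that the Performance Schema flags would consume
innodb_buffer_pool_size = 32M    # far below the 128M default
innodb_log_buffer_size  = 2M
key_buffer_size         = 4M     # MyISAM key cache, barely used if tables are InnoDB
max_connections         = 20
table_open_cache        = 64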
Both MariaDB and MySQL actually use very little memory. About 50 MB to 150 MB is the range I found on some of my servers. These servers are running a few databases, each with a handful of tables and limited user load. The MySQL documentation claims it needs 2 GB. That is very confusing to me. I understand why MariaDB does not specify any minimum requirements. If they say 50 MB, there are going to be a lot of folks who will want to disagree. If they say 1 GB, then they are unnecessarily inflating the minimum requirements. Come to think of it, more memory means a better cache and better performance. However, a well-designed database can do disk reads every time without any performance issues. My Apache installs (on the same server) consistently use up more memory (about double) than the database.

MySQL 5.0, InnoDB table, slow inserts under heavy traffic

I have an InnoDB table that stores user navigation details once a user logs in.
I have a simple INSERT statement for this purpose.
But sometimes this INSERT takes 15-24 seconds when there is heavy traffic; otherwise, for a single user, it completes in microseconds.
The server has 2 GB of RAM.
Below are the MySQL configuration details:
max_connections=500
# You can set .._buffer_pool_size up to 50 - 80 % of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 800M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 200M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
table_cache = 90
query_cache_size = 256M
query_cache_limit = 256M
thread_cache_size = 16
sort_buffer_size = 64M
innodb_thread_concurrency=8
innodb_flush_method=O_DIRECT
innodb_buffer_pool_instances=8
Thanks.
As a first measure, have you considered updating? 5.0 is old; its end of product lifecycle has been reached, and there have not been any changes to it for two years. Serious improvements were made to different aspects of the whole DBMS in versions 5.1 and 5.5. You should seriously consider upgrading.
You might want to try the tuning primer as another direction on what options you can change.
You can also check with SHOW FULL PROCESSLIST to see in what state individual MySQL threads are hanging. Maybe you'll spot something relevant.
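
For the PROCESSLIST suggestion, something along these lines could be run while an INSERT is stuck. Note that the INFORMATION_SCHEMA.PROCESSLIST table only exists from MySQL 5.1 onward, so the second query assumes the upgrade suggested above:

SHOW FULL PROCESSLIST;

-- Only threads that have been busy for more than 5 seconds:
SELECT id, user, state, time, info
FROM information_schema.PROCESSLIST
WHERE command <> 'Sleep' AND time > 5
ORDER BY time DESC;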

The Case of the Missing '14 second SQLite database' performance

I have developed a program which uses an SQLite 3.7 ... database; in it there is a rather extensive write/read module that imports, checks, and updates data. This process takes 14 seconds on my PC, and I'm pleased as punch with the performance.
I use transactions for everything, with parameters. My PC is an Intel i7 with 18 GB of RAM. I have not set anything in the database. I used SQLite Expert to create the database and the data structures, including tables and columns, and checked that all indexes are created. In other words, it's all OK.
I have since deployed the program/database to 2 other machines. That 14-second process takes over 5 minutes on the other machines. Same program, identical data, identical database. The machines are up to date; one is a 3rd-gen Intel i7 bought last week, the other is quite fast as well, so hardware should not be an issue.
I'm just not understanding what the problem could be. Is it the database itself? I have not set anything on it other than encryption. Remember that I run the same thing and it takes the 14 seconds. Could it be that the database is 'optimised' for my PC, so when I give it to others it's not optimised?
I know I could turn off journaling to get better performance, but that would only speed up the process and would still leave the problem.
Any ideas would be welcome.
EDIT:
I have tested the program on my 7-year-old dual Athlon with 3 GB of RAM running XP on an HDD, and the procedure took 35 seconds. Well within tolerable limits, considering. I just don't get what could be making 2 modern machines take 5 minutes.
I have an idea that it's a write issue, as when only reading with a reader the other machines are slower but quite acceptable.
SQLite speed is affected most by how well the disk does random reads and writes; any SSD is much better at this than any rotating disk.
Whenever changes overflow the internal cache, they must be written to disk. You should use PRAGMA cache_size to increase the cache beyond the default 2 MB.
Changed data must be written to disk at the end of every transaction. Make sure that there are as many changes as possible in one transaction.
If much of your processing involves temporary tables or indexes, the speed is affected by the speed of the main disk. If your machines have enough RAM, you can force temporary data to RAM with PRAGMA temp_store.
You should enable Write-Ahead Logging.
Note: the default SQLite distribution does not have encryption.
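
A sketch of the PRAGMAs mentioned above, run once per connection; the cache size chosen here is an arbitrary example (negative values are in KiB):

PRAGMA cache_size = -65536;   -- roughly 64 MB of page cache instead of the ~2 MB default
PRAGMA temp_store = MEMORY;   -- keep temporary tables and indexes in RAM
PRAGMA journal_mode = WAL;    -- write-ahead logging; this setting persists in the database file

BEGIN;
-- ... batch as many INSERTs/UPDATEs as possible into one transaction ...
COMMIT;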

File size limit for SQLite on a 32-bit system

I'm using SQLite as temporary storage to calculate statistics about a moderately large data set.
I'm wondering what will happen if my database exceeds 2 GB on a 32-bit system. (I can't currently change the system to 64-bit.)
Does it use memory-mapped files and break if the size of the file exceeds addressable memory? (like MongoDB)
According to the SQLite documentation, the maximum size of a database file is ~140 terabytes, and it is practically limited by the OS/file system.
You can read more here (note the Pages section): http://www.sqlite.org/fileformat2.html
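Since the limits come from the page structure, the current and maximum sizes can be worked out directly from the page PRAGMAs (shown here only as an illustration):

PRAGMA page_size;        -- bytes per page
PRAGMA page_count;       -- pages currently in the file; size = page_size * page_count
PRAGMA max_page_count;   -- configured ceiling on the number of pages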
Though this is an old question, let me share my findings for people who reach it.
Although the SQLite documentation states that the maximum size of a database file is ~140 terabytes, your OS imposes its own restrictions on the maximum file size for any type of file.
For example, if you are using a FAT32 disk on Windows, the maximum file size I could achieve for an SQLite database was 2 GB.
(According to Microsoft's site, the limit on a FAT32 system is 4 GB, but my SQLite DB size was still restricted to 2 GB.)
On Linux, I was able to reach 3 GB (where I stopped; it could have grown further).
Find out the file system type of your partition. Remember that the file size limit does not depend on whether the OS is 32-bit or 64-bit, but on the file system type of your hard disk's partition.
See Wikipedia
