About
For a single query, the MariaDB server takes approximately 1 ms per request. As concurrency increases, the query time for each request also increases until requests time out. So far it seems only about 2k connections per second are possible per MySQL server instance, and no amount of config tweaking seems to have any effect. Is there any way to reduce the query time for each client to less than 0.1 ms?
This is the query
select ID from table where id=1;
If it helps, here is the mysql configuration file
[client]
port = 3306
socket = /home/user/mysql.sock
[mysqld]
port = 3306
bind-address=127.0.0.1
datadir=/home/user/database
log-error=/home/user/error.log
pid-file=/home/user/mysqld.pid
innodb_file_per_table=1
back_log = 2000
max_connections = 1000000
max_connect_errors = 10
table_open_cache = 2048
max_allowed_packet = 16M
binlog_cache_size = 1M
max_heap_table_size = 64M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M
thread_cache_size = 8
thread_concurrency = 200
query_cache_size = 64M
query_cache_type = 1 #My settings
innodb_io_capacity = 100000
query_cache_limit = 2M
ft_min_word_len = 4
default-storage-engine = innodb
thread_stack = 240K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 64M
log-bin=mysql-bin
binlog_format=mixed
slow_query_log
long_query_time = 2
server-id = 1
key_buffer_size = 32M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
innodb_buffer_pool_size = 2G
innodb_data_file_path = ibdata1:10M:autoextend
innodb_doublewrite = 0
sync_binlog=0
skip_name_resolve
innodb_write_io_threads = 500
innodb_read_io_threads = 500
innodb_thread_concurrency = 1000
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 8M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[mysqlhotcopy]
interactive-timeout
[mysqld_safe]
open-files-limit = 81920
HW
2x Intel Xeon 2670, 32 GB RAM, 500 GB Samsung EVO 850 SSD
Detour
While it's true that MySQL can do more than 1 million queries per second, the test here used only 250 connected clients.
Your machine has 4 cores, correct? So, if you run more than 4 CPU-bound processes simultaneously, the CPU will be saturated. This implies that each thread will be interrupted to let other threads run. That is, latency increases.
Is your goal to shrink the time taken for the average query, that is, latency? Then more connections will not help.
If your goal is queries/second, then, again, you will be limited once the CPUs are saturated. That will probably happen before you get to 8 connections. After the CPU is saturated, throughput (queries/second) will level off even as you increase the number of connections. But, as I already said, latency for individual queries will increase.
If you want to push the machine, do multiple queries in each connection. Otherwise, you are only timing the connection handling. This is not a useful metric.
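For example, one crude way to time many queries over a single connection, so that connection setup is amortized instead of dominating the measurement; the schema name test and table name t are placeholders for your own:
# build a file with 100,000 copies of the query, then run them all in one session
yes "SELECT ID FROM t WHERE id=1;" | head -n 100000 > queries.sql
time mysql --user=root test < queries.sql > /dev/null   # add credentials/socket options as needed
Dividing the elapsed time by 100,000 gives a per-query latency that is not polluted by connect/disconnect overhead; tools such as mysqlslap or sysbench can run the same thing with several concurrent clients.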
If you add more servers (via Replication, Clustering, etc), you can run more queries/second. Ditto for more cores. But nothing will decrease the time taken for an individual query by much.
In the settings, max_connections = 1000000 is ludicrous, and may consume a lot of RAM. As I have said, 8 might be all that your benchmarking can handle.
Another setting... Having the Query Cache turned on is deceptive. It speeds up running the identical SELECT if the table in question has not changed. That is, the first run of a query might take 1.0ms; then all subsequent runs of the same query might take 0.1ms. That is not a very exciting finding. Execute a query twice -- this will give you all you can learn, without firing up any benchmark platform etc.
But most Production machines find the QC to be useless. This is because the data is changing, so the QC is out of date. In fact, the cost of "purging" the QC may make queries run slower!
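If you want to see what the Query Cache is actually doing on a given server, its status counters are a quick, read-only check (variable names as in MySQL/MariaDB versions that still ship the QC):
SHOW GLOBAL STATUS LIKE 'Qcache%';
-- Qcache_hits vs. Qcache_inserts shows how often a cached result is actually reused;
-- a steadily growing Qcache_lowmem_prunes means the cache is churning rather than helping.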
If you want lots of connections for reading, Replication Slaves can provide an unlimited number of connections. I used to work with a system with 23 Slaves; that gave 23x the connections. Booking.com has systems with well over 100 Slaves. That's how you can look up hotel availability so fast.
Please back up and think what your real goal is. Then we can discuss things further.
Related
My.cnf
[mysql]
port = 3306
socket=/var/lib/mysql/mysql.sock
[mysqld]
# === Required Settings ===
basedir = /usr
bind_address = 127.0.0.1
datadir = /var/lib/mysql
max_allowed_packet = 1024M
max_connect_errors = 1000000
pid-file=/var/lib/mysql/mysql.pid
port = 3306
skip_name_resolve
socket=/var/lib/mysql/mysql.sock
tmpdir = /tmp
sql_mode = ""
thread_handling = pool-of-threads
# === InnoDB Settings ===
default_storage_engine = InnoDB
innodb_buffer_pool_size = 36G # Use up to 70-80% of RAM
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_log_buffer_size = 16M
innodb_log_file_size = 1G
innodb_sort_buffer_size = 4M # UPD - Defines how much data is read into memory for sorting operations before writing to disk (default is 1M / max is 64M)
innodb_stats_on_metadata = 0
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_io_capacity = 2000 # Depends on the storage tech - use 2000 for SSD, more for NVMe
innodb_io_capacity_max = 4000 # Usually double the value of innodb_io_capacity
# === Connection Settings ===
max_connections = 100 # UPD - Important: high no. of connections = high RAM consumption
back_log = 512
thread_cache_size = 100
thread_stack = 192K
interactive_timeout = 180
wait_timeout = 180
# === Buffer Settings ===
join_buffer_size = 4M # UPD
read_buffer_size = 3M # UPD
read_rnd_buffer_size = 4M # UPD
sort_buffer_size = 4M # UPD
# === Table Settings ===
table_definition_cache = 40000 # UPD
table_open_cache = 40000 # UPD
open_files_limit = 60000 # UPD
max_heap_table_size = 128M # Increase to 256M or 512M if you have lots of temporary tables because of missing indices in JOINs
tmp_table_size = 128M # Use same value as max_heap_table_size
# === Search Settings ===
ft_min_word_len = 3 # Minimum length of words to be indexed for search results
# === Binary Logging ===
disable_log_bin = 1
# === Error & Slow Query Logging ===
log_error = /var/lib/mysql/mysql_error.log
log_queries_not_using_indexes = 0 # Disabled on production
long_query_time = 5
slow_query_log = 0 # Disabled on production
slow_query_log_file = /var/lib/mysql/mysql_slow.log
[mysqldump]
quick
quote_names
max_allowed_packet = 1024M
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
My ram & cpu
[root@localhost ~]# cat /proc/cpuinfo | grep processor | wc -l
24
[root@localhost ~]# free -m
total used free shared buff/cache available
Mem: 48125 6136 12446 50 29541 41490
Swap: 0 0 0
mysqltuner
[root@localhost ~]# ./mysqltuner.pl
>> MySQLTuner 1.9.8
* Jean-Marie Renouard <jmrenouard@gmail.com>
* Major Hayden <major@mhtx.net>
>> Bug reports, feature requests, and downloads at http://mysqltuner.pl/
>> Run with '--help' for additional options and output filtering
[--] Skipped version check for MySQLTuner script
[!!] Your MySQL version 10.8.3-MariaDB is EOL software! Upgrade soon!
[OK] Operating on 64-bit architecture
-------- Log file Recommendations ------------------------------------------------------------------
[OK] Log file /var/lib/mysql/mysql_error.log exists
[--] Log file: /var/lib/mysql/mysql_error.log(70K)
[OK] Log file /var/lib/mysql/mysql_error.log is not empty
[OK] Log file /var/lib/mysql/mysql_error.log is smaller than 32 Mb
[OK] Log file /var/lib/mysql/mysql_error.log is readable.
[!!] /var/lib/mysql/mysql_error.log contains 339 warning(s).
[!!] /var/lib/mysql/mysql_error.log contains 51 error(s).
[--] 2 start(s) detected in /var/lib/mysql/mysql_error.log
[--] 1) 2022-06-19 16:42:05 0 [Note] /usr/sbin/mariadbd: ready for connections.
[--] 2) 2022-06-17 21:30:12 0 [Note] /usr/sbin/mariadbd: ready for connections.
[--] 1 shutdown(s) detected in /var/lib/mysql/mysql_error.log
[--] 1) 2022-06-19 16:42:04 0 [Note] /usr/sbin/mariadbd: Shutdown complete
-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +Aria +CSV +InnoDB +MEMORY +MRG_MyISAM +MyISAM +PERFORMANCE_SCHEMA +SEQUENCE
[--] Data in Aria tables: 32.0K (Tables: 1)
[--] Data in InnoDB tables: 1.7G (Tables: 845)
[OK] Total fragmented tables: 0
-------- Analysis Performance Metrics --------------------------------------------------------------
[--] innodb_stats_on_metadata: OFF
[OK] No stat updates during querying INFORMATION_SCHEMA.
-------- Views Metrics -----------------------------------------------------------------------------
-------- Triggers Metrics --------------------------------------------------------------------------
-------- Routines Metrics --------------------------------------------------------------------------
-------- Security Recommendations ------------------------------------------------------------------
[OK] There are no anonymous accounts for any database users
[OK] All database users have passwords assigned
[!!] There is no basic password file list!
-------- CVE Security Recommendations --------------------------------------------------------------
[--] Skipped due to --cvefile option undefined
-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 4d 6h 58m 28s (166M q [448.474 qps], 937K conn, TX: 203G, RX: 12G)
[--] Reads / Writes: 98% / 2%
[--] Binary logging is disabled
[--] Physical Memory : 47.0G
[--] Max MySQL memory : 137.9G
[--] Other process memory: 0B
[--] Total buffers: 36.4G global + 1.0G per thread (100 max threads)
[--] P_S Max memory usage: 0B
[--] Galera GCache Max memory usage: 0B
[!!] Maximum reached memory usage: 93.2G (198.36% of installed RAM)
[!!] Maximum possible memory usage: 137.9G (293.37% of installed RAM)
[!!] Overall possible memory usage with other process exceeded memory
[OK] Slow queries: 0% (53/166M)
[OK] Highest usage of available connections: 56% (56/100)
[OK] Aborted connections: 0.01% (107/937871)
[OK] Query cache is disabled by default due to mutex contention on multiprocessor machines.
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 874K sorts)
[!!] Joins performed without indexes: 46708
[OK] Temporary tables created on disk: 2% (6K on disk / 305K total)
[--] Thread cache not used with thread pool enabled
[OK] Table cache hit rate: 99% (77M hits / 77M requests)
[OK] table_definition_cache(40000) is upper than number of tables(1137)
[OK] Open file limit used: 0% (29/32K)
[OK] Table locks acquired immediately: 100% (2K immediate / 2K locks)
-------- Performance schema ------------------------------------------------------------------------
[!!] Performance_schema should be activated.
[--] Sys schema is installed.
-------- ThreadPool Metrics ------------------------------------------------------------------------
[--] ThreadPool stat is enabled.
[--] Thread Pool Size: 24 thread(s).
[--] Using default value is good enough for your version (10.8.3-MariaDB)
-------- MyISAM Metrics ----------------------------------------------------------------------------
[--] No MyISAM table(s) detected ....
-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 36.0G/1.7G
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (2.77777777777778 %): 1.0G * 1/36.0G should be equal to 25%
[--] Number of InnoDB Buffer Pool Chunk : 64 for 1 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 100.00% (36551426226 hits/ 36551507601 total)
[OK] InnoDB Write log efficiency: 90.84% (7736631 hits/ 8516998 total)
[OK] InnoDB log waits: 0.00% (0 waits / 780367 writes)
-------- Aria Metrics ------------------------------------------------------------------------------
[--] Aria Storage Engine is enabled.
[OK] Aria pagecache size / total Aria indexes: 128.0M/344.0K
[!!] Aria pagecache hit rate: 82.5% (38K cached / 6K reads)
-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is disabled.
-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.
-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.
-------- Replication Metrics -----------------------------------------------------------------------
[--] Galera Synchronous replication: NO
[--] No replication slave(s) for this server.
[--] Binlog format: MIXED
[--] XA support enabled: ON
[--] Semi synchronous replication Master: OFF
[--] Semi synchronous replication Slave: OFF
[--] This is a standalone server
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
You are using an unsupported version for production environments
Upgrade as soon as possible to a supported version !
Check warning line(s) in /var/lib/mysql/mysql_error.log file
Check error line(s) in /var/lib/mysql/mysql_error.log file
Reduce your overall MySQL memory footprint for system stability
Dedicate this server to your database for highest performance.
We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
See https://dev.mysql.com/doc/internals/en/join-buffer-size.html
(specially the conclusions at the bottom of the page).
Performance schema should be activated for better diagnostics
Before changing innodb_log_file_size and/or innodb_log_files_in_group read this:
Variables to adjust:
*** MySQL's maximum memory usage is dangerously high ***
*** Add RAM before increasing MySQL buffer variables ***
join_buffer_size (> 4.0M, or always use indexes with JOINs)
performance_schema=ON
innodb_log_file_size should be (=9G) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
[root@localhost ~]#
Tuning Primer
[root@localhost ~]# ./tuning-primer.sh
-- MYSQL PERFORMANCE TUNING PRIMER --
- By: Matthew Montgomery -
MySQL Version 10.8.3-MariaDB x86_64
Uptime = 4 days 9 hrs 29 min 40 sec
Avg. qps = 454
Total Questions = 172718132
Threads Connected = 33
Server has been running for over 48hrs.
It should be safe to follow these recommendations
To find out more information on how each of these
runtime variables effects performance visit:
http://dev.mysql.com/doc/refman/10.8/en/server-system-variables.html
Visit http://www.mysql.com/products/enterprise/advisors.html
for info about MySQL's Enterprise Monitoring and Advisory Service
SLOW QUERIES
The slow query log is NOT enabled.
Current long_query_time = 5.000000 sec.
You have 53 out of 172718244 that take longer than 5.000000 sec. to complete
Your long_query_time seems to be fine
BINARY UPDATE LOG
The binary update log is NOT enabled.
You will not be able to do point in time recovery
See http://dev.mysql.com/doc/refman/10.8/en/point-in-time-recovery.html
WORKER THREADS
Current thread_cache_size = 100
Current threads_cached = 0
Current threads_per_sec = 0
Historic threads_per_sec = 0
Your thread_cache_size is fine
MAX CONNECTIONS
Current max_connections = 100
Current threads_connected = 33
Historic max_used_connections = 56
The number of used connections is 56% of the configured maximum.
Your max_connections variable seems to be fine.
No InnoDB Support Enabled!
MEMORY USAGE
Max Memory Ever Allocated : 36.97 G
Configured Max Per-thread Buffers : 1.48 G
Configured Max Global Buffers : 36.14 G
Configured Max Memory Limit : 37.62 G
Physical Memory : 46.99 G
Max memory limit seem to be within acceptable norms
KEY BUFFER
No key reads?!
Seriously look into using some indexes
Current MyISAM index space = 0 bytes
Current key_buffer_size = 128 M
Key cache miss rate is 1 : 0
Key buffer free ratio = 81 %
Your key_buffer_size seems to be fine
QUERY CACHE
Query cache is enabled
Current query_cache_size = 1 M
Current query_cache_used = 16 K
Current query_cache_limit = 1 M
Current Query cache Memory fill ratio = 1.65 %
Current query_cache_min_res_unit = 4 K
Your query_cache_size seems to be too high.
Perhaps you can use these resources elsewhere
MySQL won't cache query results that are larger than query_cache_limit in size
SORT OPERATIONS
Current sort_buffer_size = 4 M
Current read_rnd_buffer_size = 4 M
Sort buffer seems to be fine
JOINS
./tuning-primer.sh: line 402: export: `2097152': not a valid identifier
Current join_buffer_size = 4.00 M
You have had 48458 queries where a join could not use an index properly
join_buffer_size >= 4 M
This is not advised
You should enable "log-queries-not-using-indexes"
Then look for non indexed joins in the slow query log.
OPEN FILES LIMIT
Current open_files_limit = 32768 files
The open_files_limit should typically be set to at least 2x-3x
that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine
TABLE CACHE
Current table_open_cache = 16319 tables
Current table_definition_cache = 40000 tables
You have a total of 957 tables
You have 1124 open tables.
The table_cache value seems to be fine
TEMP TABLES
Current max_heap_table_size = 128 M
Current tmp_table_size = 128 M
Of 315112 temp tables, 2% were created on disk
Created disk tmp tables ratio seems fine
TABLE SCANS
Current read_buffer_size = 3 M
Current table scan ratio = 155 : 1
read_buffer_size seems to be fine
TABLE LOCKING
Current Lock Wait ratio = 0 : 172720142
Your table locking seems to be fine
[root@localhost ~]#
I can allocate 40 GB of RAM to the MariaDB server (I need 8 GB of RAM for everything else). We can use all 24 processors.
Please help me with these settings.
The site will be used for a web-based game.
The tables were all using MyISAM.
I converted all of them to innodb and it works fine.
If you need additional information, let me know in the comments.
These seem excessive:
table_definition_cache = 40000 # UPD
table_open_cache = 40000 # UPD
Will you have thousands of tables? Why? Even if you do have that many, these are caches, hence they don't have to be big enough to handle everything.
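To sanity-check values like those, you can count how many tables the server really has; a quick read-only look via information_schema:
SELECT table_schema, COUNT(*) AS table_count
FROM information_schema.tables
GROUP BY table_schema
ORDER BY table_count DESC;
With roughly a thousand tables, as mysqltuner reported above, caches of a few thousand entries are already generous.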
For better performance, turn on the slow log with a low threshold such as long_query_time = 1. After a few hours, use pt-query-digest on it (a sketch follows). More on the SlowLog
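A minimal sketch of that workflow, assuming the slow-log path from the config above (/var/lib/mysql/mysql_slow.log) and that Percona Toolkit is installed:
-- in a MySQL client session:
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;
# a few hours later, on the shell:
pt-query-digest /var/lib/mysql/mysql_slow.log > slow_digest.txt
The digest ranks queries by total time, which usually points straight at the joins-without-indexes that mysqltuner flagged.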
For a deeper analysis of the settings: http://mysql.rjweb.org/doc.php/mysql_analysis#tuning
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section to improve performance
read_rnd_buffer_size=32K # from 4M to reduce handler_read_rnd_next RPS of 20,873
read_buffer_size=1M # from 3M to reduce handler_read_next RPS of 20,918
innodb_buffer_pool_size=4G # from 36G because you only have 1.7G of data and indexes
net_buffer_length=96K # from 16K to accommodate sending 700MB data per hr
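If it helps to verify the buffer-pool suggestion, the current data-plus-index footprint can be estimated from information_schema (statistics-based, so approximate):
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS innodb_gb
FROM information_schema.tables
WHERE engine = 'InnoDB';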
Let us know how your system is performing in a few days, please. View profile for contact information and we do have FREE Utility Scripts to assist with performance tuning.
This is only the beginning of improving performance for your installation. As you get deeper into the environment you will find many opportunities to improve response time and reduce system overhead. We are available to assist.
I am testing server performance using JMeter to mimic a large number of users hitting our DigitalOcean server in a short period of time. When I set JMeter to 200 users and test it against my Laravel-based webpage, everything works fine. When I increase the number of users to 500, I start to get 524 errors.
The server CPU never goes over 10% and memory stays around 30%, so the server has enough headroom. The first couple of hundred requests are processed correctly, but then the 524 errors begin to appear. The failed requests show higher latency and connect time than the successful ones, as shown in this screenshot. Any clue where I should start looking for the problem?
My conf file in sites-available
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php-fpm/xxxxxxxxxxxx.com.sock;
fastcgi_index index.php;
}
my nginx.conf settings
fastcgi_buffers 8 128k;
fastcgi_buffer_size 256k;
client_header_timeout 3000;
client_body_timeout 3000;
fastcgi_read_timeout 3000;
client_max_body_size 32m;
My /etc/sysctl.conf file settings, taken from this post (https://www.digitalocean.com/community/questions/getting-nginx-fpm-sock-error) after previously getting 504 errors
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
### IMPROVE SYSTEM MEMORY MANAGEMENT ###
# Increase size of file handles and inode cache
fs.file-max = 2097152
# Do less swapping
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
### GENERAL NETWORK SECURITY OPTIONS ###
# Number of times SYNACKs for passive TCP connection.
net.ipv4.tcp_synack_retries = 2
# Allowed local port range
net.ipv4.ip_local_port_range = 2000 65535
# Protect Against TCP Time-Wait
net.ipv4.tcp_rfc1337 = 1
# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15
# Decrease the time default value for connections to keep alive
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
### TUNING NETWORK PERFORMANCE ###
# Default Socket Receive Buffer
net.core.rmem_default = 31457280
# Maximum Socket Receive Buffer
net.core.rmem_max = 12582912
# Default Socket Send Buffer
net.core.wmem_default = 31457280
# Maximum Socket Send Buffer
net.core.wmem_max = 12582912
# Increase number of incoming connections
net.core.somaxconn = 65535
# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 65535
# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824
# Increase the maximum total buffer-space allocatable
# This is measured in units of pages (4096 bytes)
net.ipv4.tcp_mem = 65535 131072 262144
net.ipv4.udp_mem = 65535 131072 262144
# Increase the read-buffer space allocatable
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
# Increase the write-buffer-space allocatable
net.ipv4.tcp_wmem = 8192 65535 16777216
net.ipv4.udp_wmem_min = 16384
# Increase the tcp-time-wait buckets pool size to prevent simple DOS attacks
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
Added this to /etc/security/limits.conf
nginx soft nofile 2097152
nginx hard nofile 2097152
www-data soft nofile 2097152
www-data hard nofile 2097152
My php-fpm.d conf file settings
pm = static
pm.max_children = 40
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 8
; number of requests each child handles before respawning. Lower the number if you have memory leaks, but each respawn takes time
pm.max_requests=50
; pm.process_idle_timeout=10
chdir = /
php_admin_value[disable_functions] = exec,passthru,shell_exec,system
php_admin_flag[allow_url_fopen] = on
php_admin_flag[log_errors] = on
php_admin_value[post_max_size] = 8M
php_admin_value[upload_max_filesize] = 8M
HTTP 524 errors are Cloudflare-specific; they're not being generated by your own nginx installation. Cloudflare gave up waiting for a response from your backend service, probably because of the low number of FPM children available to serve the requests.
524 A Timeout Occurred
Cloudflare was able to complete a TCP connection to the origin server, but did not receive a timely HTTP response.
If you're performance testing your own server setup, don't go through the endpoint that points to Cloudflare; hit the origin directly (a quick sketch follows).
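One hypothetical way to spot-check the origin while bypassing the Cloudflare edge (the hostname and IP below are placeholders):
curl --resolve www.example.com:443:203.0.113.10 -s -o /dev/null \
     -w '%{http_code} %{time_total}\n' https://www.example.com/
In JMeter the equivalent is to point the HTTP sampler at the origin IP and set the Host header to the site name.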
The general "backend timed out" response for http servers are 504 Gateway Timeout.
I have upgraded from MariaDB 10.1.36 to 10.4.8 and I can see mysterious increasing RAM usage on the new version. I also edited innodb_buffer_pool_size and it seems to make no difference whether it is set to 15M or 4G; RAM just keeps slowly increasing. After a while it eats all the RAM, the OOM killer kills MariaDB, and this repeats.
My server has 8 GB of RAM and usage increases by roughly 60-150 MB per day. That is not terrible by itself, but I have around 150 database servers, so it is a huge problem.
I can temporarily fix the problem by restarting MariaDB, but then it starts again.
Info about database server:
databases: 200+
tables: 28200(141 per database)
average active connections: 100-200
size of stored data: 100-350GB
cpu: 4
ram: 8GB
Here is my config:
server-id=101
datadir=/opt/mysql/
socket=/var/lib/mysql/mysql.sock
tmpdir=/tmp/
gtid-ignore-duplicates=True
log_bin=mysql-bin
expire_logs_days=4
wait_timeout=360
thread_cache_size=16
sql_mode="ALLOW_INVALID_DATES"
long_query_time=0.8
slow_query_log=1
slow_query_log_file=/opt/log/slow.log
log_output=TABLE
userstat = 1
user=mysql
symbolic-links=0
binlog_format=STATEMENT
default_storage_engine=InnoDB
slave_skip_errors=1062,1396,1690
innodb_autoinc_lock_mode=2
innodb_buffer_pool_size=4G
innodb_buffer_pool_instances=5
innodb_log_file_size=1G
innodb_log_buffer_size=196M
innodb_flush_log_at_trx_commit=1
innodb_thread_concurrency=24
innodb_file_per_table
innodb_write_io_threads=24
innodb_read_io_threads=24
innodb_adaptive_flushing=1
innodb_purge_threads=5
innodb_adaptive_hash_index=64
innodb_flush_neighbors=0
innodb_flush_method=O_DIRECT
innodb_io_capacity=10000
innodb_io_capacity_max=16000
innodb_lru_scan_depth=1024
innodb_sort_buffer_size=32M
innodb_ft_cache_size=70M
innodb_ft_total_cache_size=1G
innodb_lock_wait_timeout=300
slave_parallel_threads=5
slave_parallel_mode=optimistic
slave_parallel_max_queued=10000000
log_slave_updates=on
performance_schema=on
skip-name-resolve
max_allowed_packet = 512M
query_cache_type=0
query_cache_size = 0
query_cache_limit = 1M
query_cache_min_res_unit=1K
max_connections = 1500
table_open_cache=64K
innodb_open_files=64K
table_definition_cache=64K
open_files_limit=1020000
collation-server = utf8_general_ci
character-set-server = utf8
log-error=/opt/log/error.log
pid-file=/var/run/mysqld/mysqld.pid
malloc-lib=/usr/lib64/libjemalloc.so.1
I solved it! The problem is the memory allocation library.
If you do this SQL query:
SHOW VARIABLES LIKE 'version_malloc_library';
You should get the value "jemalloc". If you get only "system", you may have problems.
To change that, you need to edit (or create) a .conf file in this directory:
/etc/systemd/system/mariadb.service.d/
There, add this line:
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1"
(this library file may be in another folder)
Then you must restart mysqld:
service mysqld stop && systemctl daemon-reload && service mysqld start
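For completeness, a systemd drop-in needs a [Service] section header; a sketch of what the whole override file could look like (the filename is arbitrary and the library path varies by distro):
# /etc/systemd/system/mariadb.service.d/jemalloc.conf  (example filename)
[Service]
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1"
After restarting, SHOW VARIABLES LIKE 'version_malloc_library'; should report jemalloc.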
You got carried away in increasing values in my.cnf.
Many of the caches grow until hitting their limit, hence the memory growth you experienced.
What is the value from SHOW GLOBAL STATUS LIKE 'Max_used_connections';? Having a large max_connections accentuates several of the other values; lower it.
But perhaps the really bad one(s) involve table caches -- which have units of tables, not bytes. Crank these down a lot:
table_open_cache=64K
innodb_open_files=64K
table_definition_cache=64K
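To pick saner values, compare those caches with what the server actually opens; a read-only check:
SHOW GLOBAL STATUS LIKE 'Open_tables';
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL STATUS LIKE 'Open_table_definitions';
-- If Opened_tables grows only slowly once the server is warm, the cache is not being
-- thrashed and can safely be made much smaller than the 64K configured above.
Even with the ~28,200 tables mentioned in the question, only the tables that are actually hot need to fit in the cache.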
I have exactly the same problem. Is it due to a bad configuration, or is it a bug in the new version?
MariaDB 10.1 was updated to 10.3 when I upgraded Debian 9 to Debian 10. I tried to solve the problem with MariaDB 10.4, but nothing changed.
I want to downgrade, but I think it is necessary to dump all the databases and restore them, and that means hours without service.
I don't think Debian 10 has anything to do with the issue.
Please read my previous comments about alternative memory allocators...
When jemalloc is used: (memory-usage graph not reproduced here)
When the default memory allocator is used: (memory-usage graph not reproduced here)
Try with tcmalloc
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4.5.3.1"
My system crashes several times per day with an "out of memory" error. I have tried many changes to the configuration file without success.
MariaDB 10.1.43 runs on Windows Server 2019 with 16 GB of RAM, without any other program running.
This is my config file:
skip-external-locking
skip-name-resolve
performance_schema = ON
max_connections = 512
key_buffer_size = 256M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
max_allowed_packet = 4M
table_open_cache = 256
sort_buffer_size = 1M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 16M
thread_concurrency = 8
default-storage-engine = MyISAM
default_tmp_storage_engine = MyISAM
Any ideas?
Solved!!
In the configuration file I added the following:
tmp_table_size = 1M
If it is not specified, 16M is used as the maximum size of an in-memory temporary table. Although the temporary tables in my queries are very small, the server behaved as if the full 16M were used on each occasion; I cannot find another explanation for this behavior. Thank you all.
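To see how many temporary tables a workload actually creates, and how many spill to disk, the status counters are a quick read-only check:
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
-- A lot of Created_tmp_tables with a small tmp_table_size is usually fine;
-- a high disk ratio would be the signal to raise tmp_table_size again.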
I have made a lot of changes, but the home page still takes 5 seconds to load and the back end is even slower.
My PC is a Core 2 Duo 2 GHz CPU with 2 GB RAM, Windows 7 32-bit, WampServer 2.5.
I have disabled Cgi_module
I have changed localhost to 127.0.0.1 in System32/etc/hosts
I have updated this define('DB_HOST', '127.0.0.1'); in wp-config
I have increased the memory limit in php.ini to 512M
and I restarted the server, but nothing changed.
Note that I am using the WooCommerce plugin,
and I have two other apps without a CMS (native PHP, MySQL); they work fast.
UPDATE:
I have made these MySQL tuning updates:
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 1024M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates
[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE