ERROR InnoDB: InnoDB: Unable to allocate memory of size 18446744073709544120 - mariadb

I have three nodes on which to set up a MariaDB cluster.
When I use /usr/sbin/mysqld --wsrep-new-cluster --user=root & to start the cluster on node1, I get the error below:
2017-08-05 17:41:36 140123776886528 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2017-08-05 17:41:36 140123777161408 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.19-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
2017-08-05 17:41:41 140123698513664 [ERROR] InnoDB: InnoDB: Unable to allocate memory of size 18446744073709544120.
2017-08-05 17:41:41 7f7117464700 InnoDB: Assertion failure in thread 140123698513664 in file ha_innodb.cc line 22407
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
170805 17:41:41 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.1.19-MariaDB
key_buffer_size=0
read_buffer_size=131072
max_used_connections=3
max_threads=10002
thread_count=6
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 21969763 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0x7f6fe0b7e008
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f7117463cc0 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7f711ca3dcde]
/usr/sbin/mysqld(handle_fatal_signal+0x2d5)[0x7f711c561005]
/lib64/libpthread.so.0(+0xf100)[0x7f711bb7b100]
/lib64/libc.so.6(gsignal+0x37)[0x7f7119ed65f7]
/lib64/libc.so.6(abort+0x148)[0x7f7119ed7ce8]
/usr/sbin/mysqld(+0x73880f)[0x7f711c6e480f]
/usr/sbin/mysqld(+0x78fbb7)[0x7f711c73bbb7]
/usr/sbin/mysqld(+0x78fcfc)[0x7f711c73bcfc]
/usr/sbin/mysqld(+0x82da1c)[0x7f711c7d9a1c]
/usr/sbin/mysqld(+0x82dc9e)[0x7f711c7d9c9e]
/usr/sbin/mysqld(+0x80ab5f)[0x7f711c7b6b5f]
/usr/sbin/mysqld(+0x8002cc)[0x7f711c7ac2cc]
/usr/sbin/mysqld(+0x7360f2)[0x7f711c6e20f2]
/usr/sbin/mysqld(_ZN7handler18index_read_idx_mapEPhjPKhm16ha_rkey_function+0x85)[0x7f711c561965]
/usr/sbin/mysqld(_ZN7handler21ha_index_read_idx_mapEPhjPKhm16ha_rkey_function+0xa6)[0x7f711c565ca6]
/usr/sbin/mysqld(+0x479524)[0x7f711c425524]
/usr/sbin/mysqld(+0x4796a0)[0x7f711c4256a0]
/usr/sbin/mysqld(+0x47edd6)[0x7f711c42add6]
/usr/sbin/mysqld(_ZN4JOIN14optimize_innerEv+0x72f)[0x7f711c432c1f]
/usr/sbin/mysqld(_ZN4JOIN8optimizeEv+0x2f)[0x7f711c43551f]
/usr/sbin/mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x8f)[0x7f711c43565f]
/usr/sbin/mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x245)[0x7f711c4361c5]
/usr/sbin/mysqld(+0x4291a1)[0x7f711c3d51a1]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x5f8f)[0x7f711c3e14cf]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x352)[0x7f711c3e4e62]
/usr/sbin/mysqld(+0x439689)[0x7f711c3e5689]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1fb0)[0x7f711c3e7d10]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x169)[0x7f711c3e8bb9]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x18a)[0x7f711c4af71a]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x7f711c4af8f0]
/usr/sbin/mysqld(+0x96f37d)[0x7f711c91b37d]
/lib64/libpthread.so.0(+0x7dc5)[0x7f711bb73dc5]
/lib64/libc.so.6(clone+0x6d)[0x7f7119f9721d]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x7f6fd9420020): is an invalid pointer
Connection ID (thread ID): 16
Status: NOT_KILLED
Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=off
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
We think the query pointer is invalid, but we will try to print it anyway.
Query: SELECT agents.id AS agents_id, agents.agent_type AS agents_agent_type, agents.`binary` AS agents_binary, agents.topic AS agents_topic, agents.host AS agents_host, agents.availability_zone AS agents_availability_zone, agents.admin_state_up AS agents_admin_state_up, agents.created_at AS agents_created_at, agents.started_at AS agents_started_at, agents.heartbeat_timestamp AS agents_heartbeat_timestamp, agents.description AS agents_description, agents.configurations AS agents_configurations, agents.resource_versions AS agents_resource_versions, agents.`load` AS agents_load
FROM agents
WHERE agents.agent_type = 'Linux bridge agent' AND agents.host = 'ha-node2'
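A detail worth noting about the allocation size in the error itself: 18446744073709544120 is just below 2^64, which suggests a small negative size was computed and then reinterpreted as an unsigned 64-bit integer:
  2^64                 = 18446744073709551616
  requested allocation = 18446744073709544120
  difference           =                 7496
In other words the request amounts to (uint64_t)(-7496), which points at an underflow or a corrupted length rather than genuine memory exhaustion.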

Related

Mariadb Galera 10.5.13-16 Node Crash

I have a cluster with 2 Galera nodes and 1 arbitrator.
My node 1 crashed and I don't understand why.
Here is the log of node 1.
It seems to be a problem with the pthread library.
Also, every request is proxied by 2 HAProxy instances.
2023-01-03 12:08:55 0 [Warning] WSREP: Handshake failed: peer did not return a certificate
2023-01-03 12:08:55 0 [Warning] WSREP: Handshake failed: peer did not return a certificate
2023-01-03 12:08:56 0 [Warning] WSREP: Handshake failed: http request
terminate called after throwing an instance of 'boost::wrapexcept<std::system_error>'
what(): remote_endpoint: Transport endpoint is not connected
230103 12:08:56 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.5.13-MariaDB-1:10.5.13+maria~focal
key_buffer_size=134217728
read_buffer_size=2097152
max_used_connections=101
max_threads=102
thread_count=106
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 760333 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x49000
mariadbd(my_print_stacktrace+0x32)[0x55b1b67f7e42]
Printing to addr2line failed
mariadbd(handle_fatal_signal+0x485)[0x55b1b62479a5]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x153c0)[0x7ff88ea983c0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7ff88e59e18b]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7ff88e57d859]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0x9e911)[0x7ff88e939911]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xaa38c)[0x7ff88e94538c]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xaa3f7)[0x7ff88e9453f7]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xaa6a9)[0x7ff88e9456a9]
/usr/lib/galera/libgalera_smm.so(+0x448ad)[0x7ff884b5e8ad]
/usr/lib/galera/libgalera_smm.so(+0x1fc315)[0x7ff884d16315]
/usr/lib/galera/libgalera_smm.so(+0x1ff7eb)[0x7ff884d197eb]
/usr/lib/galera/libgalera_smm.so(+0x1ffc28)[0x7ff884d19c28]
/usr/lib/galera/libgalera_smm.so(+0x2065b6)[0x7ff884d205b6]
/usr/lib/galera/libgalera_smm.so(+0x1f81f3)[0x7ff884d121f3]
/usr/lib/galera/libgalera_smm.so(+0x1e6f04)[0x7ff884d00f04]
/usr/lib/galera/libgalera_smm.so(+0x103438)[0x7ff884c1d438]
/usr/lib/galera/libgalera_smm.so(+0xe8eea)[0x7ff884c02eea]
/usr/lib/galera/libgalera_smm.so(+0xe9a8d)[0x7ff884c03a8d]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x9609)[0x7ff88ea8c609]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x43)[0x7ff88e67a293]
The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /var/lib/mysql
Resource Limits:
Fatal signal 11 while backtracing
PS: if you want more data, ask me :)
OK, it seems that 2 simultaneous OpenVAS scans crash the node.
I tried with versions 10.5.13 and 10.5.16 -> both crash.
Solution: upgrade to at least 10.5.17.
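For reference, a minimal sketch of that upgrade on Ubuntu focal (which the Server version string above suggests), assuming the MariaDB 10.5 apt repository is already configured; package names may differ on your setup:
systemctl stop mariadb
apt-get update
apt-get install --only-upgrade mariadb-server galera-4
systemctl start mariadb
# confirm the version and that the node rejoined the cluster
mysql -e "SELECT VERSION(); SHOW STATUS LIKE 'wsrep_cluster_size';"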

Data unpack would read past end of buffer in file util/show_help.c at line 501

I submitted a job via Slurm. The job ran for 12 hours and was working as expected. Then I got Data unpack would read past end of buffer in file util/show_help.c at line 501. It is usual for me to get errors like ORTE has lost communication with a remote daemon, but I usually get those at the beginning of the job. They are annoying, but still do not cause as much time loss as getting an error after 12 hours. Is there a quick fix for this? The Open MPI version is 4.0.1.
--------------------------------------------------------------------------
By default, for Open MPI 4.0 and later, infiniband ports on a device
are not used by default. The intent is to use UCX for these devices.
You can override this policy by setting the btl_openib_allow_ib MCA parameter
to true.
Local host: barbun40
Local adapter: mlx5_0
Local port: 1
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.
Local host: barbun40
Local device: mlx5_0
--------------------------------------------------------------------------
[barbun21.yonetim:48390] [[15284,0],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in
file util/show_help.c at line 501
[barbun21.yonetim:48390] 127 more processes have sent help message help-mpi-btl-openib.txt / ib port
not selected
[barbun21.yonetim:48390] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error
messages
[barbun21.yonetim:48390] 126 more processes have sent help message help-mpi-btl-openib.txt / error in
device init
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
An MPI communication peer process has unexpectedly disconnected. This
usually indicates a failure in the peer process (e.g., a crash or
otherwise exiting without calling MPI_FINALIZE first).
Although this local MPI process will likely now behave unpredictably
(it may even hang or crash), the root cause of this problem is the
failure of the peer -- that is what you need to investigate. For
example, there may be a core file that you can examine. More
generally: such peer hangups are frequently caused by application bugs
or other external events.
Local host: barbun64
Local PID: 252415
Peer host: barbun39
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[15284,1],35]
Exit code: 9
--------------------------------------------------------------------------
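Apart from the crash itself, the two warnings at the top can be addressed through the MCA parameters the messages mention. A sketch, assuming a plain mpirun launch (./your_app is a placeholder):
# allow the openib BTL on InfiniBand ports, as the first message suggests:
mpirun --mca btl_openib_allow_ib true ./your_app
# or, preferably on Open MPI 4.x, use UCX and disable the openib BTL entirely:
mpirun --mca pml ucx --mca btl ^openib ./your_app
# to see every aggregated help/error message instead of the "N more processes" summaries:
mpirun --mca orte_base_help_aggregate 0 ./your_app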

Cannot get galera cluster to start - -bash: galera_new_cluster: command not found

I am following instructions to install a MariaDB Galera cluster on CentOS 7.6.
But I just cannot get the cluster to start.
I can get the MariaDB service started on both nodes.
Here is my server.cnf:
[galera]
# Mandatory settings
wsrep_cluster_name="galera_cluster"
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://172.18.35.XXX,172.18.35.XXX
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
I am stumped; there is nothing in the MariaDB logs. What else should I be looking at?
Never mind, I was able to get past that step, but the cluster still will not start.
I do not get any errors when I run
root@db-mmr101:/var/lib/mysql$ /usr/bin/mysqld_safe --wsrep-new-cluster
190709 15:01:24 mysqld_safe Logging to '/var/lib/mysql/db-mmr101.err'.
190709 15:01:25 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Or when I start the MariaDB service. Nothing in the error logs either:
190709 15:01:30 mysqld_safe mysqld from pid file /var/lib/mysql/db-mmr101.pid ended
190709 15:01:38 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
190709 15:01:38 [Note] /usr/libexec/mysqld (mysqld 5.5.60-MariaDB) starting as process 19920 ...
190709 15:01:38 InnoDB: The InnoDB memory heap is disabled
190709 15:01:38 InnoDB: Mutexes and rw_locks use GCC atomic builtins
190709 15:01:38 InnoDB: Compressed tables use zlib 1.2.7
190709 15:01:38 InnoDB: Using Linux native AIO
190709 15:01:38 InnoDB: Initializing buffer pool, size = 128.0M
190709 15:01:38 InnoDB: Completed initialization of buffer pool
190709 15:01:38 InnoDB: highest supported file format is Barracuda.
190709 15:01:38 InnoDB: Waiting for the background threads to start
190709 15:01:39 Percona XtraDB (http://www.percona.com) 5.5.59-MariaDB-38.11 started; log sequence number 1597945
190709 15:01:39 [Note] Plugin 'FEEDBACK' is disabled.
190709 15:01:39 [Note] Server socket created on IP: '0.0.0.0'.
190709 15:01:39 [Note] Event Scheduler: Loaded 0 events
190709 15:01:39 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.60-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
On newer machines using systemd as the init system, additional steps might be necessary to start the first cluster node again.
First make sure that the node which will be the new primary node is allowed to bootstrap the cluster (this part is independent of systemd):
# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 6a1f102a-13a3-11e7-b710-b2876418a643
seqno: -1
safe_to_bootstrap: 0
Change the value of safe_to_bootstrap from 0 to 1:
# sed -i "/safe_to_bootstrap/s/0/1/" /var/lib/mysql/grastate.dat
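It's worth confirming the edit took effect before bootstrapping:
# grep safe_to_bootstrap /var/lib/mysql/grastate.dat
safe_to_bootstrap: 1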
Then run the command
# galera_new_cluster
You have to tell the first node that it is the first participant in the cluster; with MariaDB the command is:
galera_new_cluster
https://galeracluster.com/library/training/tutorials/starting-cluster.html
You may need to use the full path to the script.
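Once the first node is up, a quick way to check that the cluster actually bootstrapped (wsrep_cluster_size should be 1 on the first node and grow as the others join):
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"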
I realized that yum install was not installing MariaDB 10+ on CentOS 7.6, since there is no build for it in the default repo. I had to use rpm to download and install MariaDB 10.4.
yum will install the default MariaDB 5.5 which comes with CentOS 7.6. 5.5 is a really old version, which does not have the galera_new_cluster command.
Here is a good guide to getting MariaDB installed on RHEL 7+ using rpm:
https://medium.com/@thomashysselinckx/installing-mariadb-with-rpm-on-centos7-bce648cce758
I spent a lot of time trying to get it working with yum and eventually gave up and went the rpm route.
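If you would rather stay with yum than fetch rpms by hand, one common alternative is to add the official MariaDB repository and install from it; a sketch for CentOS 7 and MariaDB 10.4, assuming the standard repository layout (check yum.mariadb.org for the exact path for your version):
# /etc/yum.repos.d/MariaDB.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.4/centos7-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1

yum install MariaDB-server MariaDB-client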

Why does Galera crash on a simple ALTER statement

I have a MariaDB 10.2.14 5-node Galera cluster. Simple, straightforward database of almost 20G. No triggers. A lot of INDEXes and FOREIGN KEYs.
When I try to ALTER an empty or small table (add a field) via the command-line MySQL client on one of the multi-masters, the whole cluster crashes. Why? I have never had this problem on other Galera systems. RedHat 6.10 is the OS.
Can someone help?
This is the error log on one of the servers:
While updating a simple table with a simple ALTER statement, the 5-node multi-master Galera cluster stops working and the table gets corrupted. This has happened several times now with different tables and simple ALTER statements (without triggers).
The MySQL error log shows this:
2019-01-15 10:47:19 140487941920512 [Note] WSREP: Member 1.0 (server.company.local) synced with group.
2019-01-15 11:07:45 140487941920512 [Note] WSREP: Member 1.0 (server.company.local) desyncs itself from group
2019-01-15 11:07:46 140487941920512 [Note] WSREP: Member 1.0 (server.company.local) resyncs itself to group
2019-01-15 11:07:46 140487941920512 [Note] WSREP: Member 1.0 (server.company.local) synced with group.
2019-01-15 11:27:40 140487941920512 [Note] WSREP: Member 1.0 (server.company.local) desyncs itself from group
2019-01-15 11:27:41 140487941920512 [Note] WSREP: Member 1.0 (server.company.local) resyncs itself to group
2019-01-15 11:27:41 140487941920512 [Note] WSREP: Member 1.0 (server.company.local) synced with group.
2019-01-15 11:47:23 140487941920512 [Note] WSREP: Member 1.0 (server.company.local) desyncs itself from group
2019-01-15 11:47:24 140487941920512 [Note] WSREP: Member 1.0 (server.company.local) resyncs itself to group
2019-01-15 11:47:24 140487941920512 [Note] WSREP: Member 1.0 (server.company.local) synced with group.
2019-01-15 12:24:39 140452405958400 [Note] WSREP: MDL BF-BF conflict
schema: databasename
request: (8227134 seqno 46874664 wsrep (2, 1, 0) cmd 3 3 ALTER TABLE `aagenda` ADD `id_subject_cat` int(11) NULL DEFAULT '0' AFTER `id_subject`, ADD INDEX `id_s$
granted: (15 seqno 46874665 wsrep (1, 0, 0) cmd 0 147 (null))
2019-01-15 12:24:40 140452405958400 [Note] WSREP: MDL BF-BF conflict
schema: databasename
request: (8227134 seqno 46874664 wsrep (2, 1, 0) cmd 3 3 ALTER TABLE `aagenda` ADD `id_subject_cat` int(11) NULL DEFAULT '0' AFTER `id_subject`, ADD INDEX `id_s$
granted: (15 seqno 46874665 wsrep (1, 0, 0) cmd 0 147 (null))
2019-01-15 12:24:40 140452405958400 [Note] WSREP: MDL BF-BF conflict
schema: databasename
request: (8227134 seqno 46874664 wsrep (2, 1, 0) cmd 3 3 ALTER TABLE `aagenda` ADD `id_subject_cat` int(11) NULL DEFAULT '0' AFTER `id_subject`, ADD INDEX `id_s$
granted: (11 seqno 46874666 wsrep (1, 0, 0) cmd 0 147 (null))
2019-01-15 12:24:40 0x7fbd9fc3d700 InnoDB: Assertion failure in file /home/buildbot/buildbot/padding_for_CPACK_RPM_BUILD_SOURCE_DIRS_PREFIX/mariadb-10.2.14/storage/innobase/row/row0merge.cc l$
InnoDB: Failing assertion: table->get_ref_count() == 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: https://mariadb.com/kb/en/library/xtradbinnodb-recovery-modes/
InnoDB: about forcing recovery.
190115 12:24:40 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.2.14-MariaDB-log
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=837
max_threads=1502
thread_count=280
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 3431472 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x7fbe2d906c18
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 0x7fbd9fc3cd80 thread_stack 0x49000
/usr/sbin/mysqld(my_print_stacktrace+0x2b)[0x55f4e00d8fab]
/usr/sbin/mysqld(handle_fatal_signal+0x535)[0x55f4dfbad005]
/lib64/libpthread.so.0(+0xf7e0)[0x7fc5f97f67e0]
/lib64/libc.so.6(gsignal+0x35)[0x7fc5f7e50495]
/lib64/libc.so.6(abort+0x175)[0x7fc5f7e51c75]
/usr/sbin/mysqld(+0x47c4eb)[0x55f4df97a4eb]
/usr/sbin/mysqld(+0x90edcc)[0x55f4dfe0cdcc]
/usr/sbin/mysqld(+0x873236)[0x55f4dfd71236]
/usr/sbin/mysqld(_Z17mysql_alter_tableP3THDPcS1_P14HA_CREATE_INFOP10TABLE_LISTP10Alter_infojP8st_orderb+0x29ed)[0x55f4dfab181d]
/usr/sbin/mysqld(_ZN19Sql_cmd_alter_table7executeEP3THD+0x3ae)[0x55f4dfaf62fe]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0xf81)[0x55f4dfa2b251]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_statebb+0x29a)[0x55f4dfa327ca]
/usr/sbin/mysqld(+0x5348c0)[0x55f4dfa328c0]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcjbb+0x18cd)[0x55f4dfa346fd]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x16e)[0x55f4dfa350ee]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP7CONNECT+0x16f)[0x55f4dfaf335f]
/usr/sbin/mysqld(handle_one_connection+0x44)[0x55f4dfaf3484]
/lib64/libpthread.so.0(+0x7aa1)[0x7fc5f97eeaa1]
/lib64/libc.so.6(clone+0x6d)[0x7fc5f7f06bdd]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x7fbe2d9141f0): is an invalid pointer
Connection ID (thread ID): 8227134
Status: NOT_KILLED
Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_push$
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
We think the query pointer is invalid, but we will try to print it anyway.
Query: ALTER TABLE `aagenda` ADD `id_subject_cat` int(11) NULL DEFAULT '0' AFTER `id_subject`, ADD INDEX `id_subject_cat` (`id_subject_cat`)
Advice that I got from MariaDB:
If you want to ALTER a table in a Galera production environment without downtime, do this per node (see the sketch after this list):
SET GLOBAL wsrep_desync = TRUE; (or SET GLOBAL wsrep_desync = ON;)
SET SESSION wsrep_on = FALSE; (or SET SESSION wsrep_on = OFF;)
--- ALTER STATEMENT ---
SET SESSION wsrep_on = TRUE; (or SET SESSION wsrep_on = ON;)
SET GLOBAL wsrep_desync = FALSE; (or SET GLOBAL wsrep_desync = OFF;)
But the table structure has to remain backwards compatible, i.e. still usable by the application; otherwise you have to stop the cluster, alter your tables on one node, and restart the cluster.
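A sketch of what that per-node session looks like in practice, using the ALTER statement from the question as the example (everything runs in one client connection, so the SESSION settings apply to the ALTER):
mysql -u root -p <<'SQL'
SET GLOBAL wsrep_desync = ON;   -- let this node lag without triggering flow control
SET SESSION wsrep_on = OFF;     -- do not replicate the next statement
ALTER TABLE `aagenda`
  ADD `id_subject_cat` int(11) NULL DEFAULT '0' AFTER `id_subject`,
  ADD INDEX `id_subject_cat` (`id_subject_cat`);
SET SESSION wsrep_on = ON;
SET GLOBAL wsrep_desync = OFF;
SQL
Repeat on each node in turn, ideally with client traffic pointed away from the node being altered.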

Error establishing a database connection EC2 Amazon

I hope you can help me. I cannot stand having to keep restarting my EC2 instance on Amazon.
I have two WordPress sites hosted there. My sites had always worked well until, two months ago, one of them started having this problem. I tried everything I could to fix it, and the only solution was to reconfigure.
Just when all was right with both of them, the second site started having the same problem. I think Amazon is clowning me.
I am using a free micro instance. If anyone knows what the problem is, please help me!
Your issue will be the limited memory that is allocated to the T1 Micro instances in EC2. I'm assuming you are using Amazon Linux in this case; if an alternate version of Linux is used then you may have different locations for your log and config files.
Make sure you are the root user.
Have a look at your MySQL logs in the following location:
/var/log/mysqld.log
If you see repeated instances of the following it's pretty certain that the 0.6GB of memory allocated to the micro instance is not cutting it.
150714 22:13:33 InnoDB: Initializing buffer pool, size = 12.0M
InnoDB: mmap(12877824 bytes) failed; errno 12
150714 22:13:33 InnoDB: Completed initialization of buffer pool
150714 22:13:33 InnoDB: Fatal error: cannot allocate memory for the buffer pool
150714 22:13:33 [ERROR] Plugin 'InnoDB' init function returned error.
150714 22:13:33 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
150714 22:13:33 [ERROR] Unknown/unsupported storage engine: InnoDB
150714 22:13:33 [ERROR] Aborting
You will notice in the log excerpt above that my buffer pool size is set to 12MB. This can be configured by adding the line innodb_buffer_pool_size = 12M to your MySQL config file /etc/my.cnf.
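For clarity, that line lives under the [mysqld] section; a minimal sketch of the relevant part of /etc/my.cnf:
[mysqld]
innodb_buffer_pool_size = 12M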
A pretty good way to deal with InnoDB chewing up your memory is to create a swap file.
Start by checking the status of your memory:
free -m
You will most probably see that your swap is not doing much:
                    total   used   free   shared   buffers   cached
Mem:                  592    574     17        0        15      235
-/+ buffers/cache:           323    268
Swap:                   0      0      0
To start, ensure you are logged in as the root user, then run the following command:
dd if=/dev/zero of=/swapfile bs=1M count=1024
Wait a bit, as the command is not verbose, but you should see the following response once the process is complete (after half a minute or so):
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 31.505 s, 34.1 MB/s
Next, set up the swap space with:
mkswap /swapfile
Now enable the swap:
swapon /swapfile
If you get a permissions warning, you can either ignore it or address it by changing the swap file's permissions to 600 with chmod:
chmod 600 /swapfile
Now add the following line to /etc/fstab to create the swap spaces on server start:
/swapfile swap swap defaults 0 0
Restart your MySQL instance:
service mysqld restart
Finally check to see if your swap file is working correctly with the free -m command.
You should see something like:
                    total   used   free   shared   buffers   cached
Mem:                  592    575     16        0        16      235
-/+ buffers/cache:           323    269
Swap:                1023      0   1023
Hope this helps.
