I have installed pgpool 3.2.1 with 2 backends in streaming replication mode with load balancing and connection pooling. I did some high-load tests trying to exhaust the pgpool connections.
Supposing that this rule is correct: max_pool*num_init_children <= (max_connections - superuser_reserved_connections)
Test 1:
num_init_children = 90
max_pool = 1
(only in the master)
max_connections = 100
superuser_reserved_connections = 3
The result of psql -U postgres -c 'SELECT count(*) FROM pg_stat_activity' was 90.
Test 2:
num_init_children = 90
max_pool = 2
(only in the master)
max_connections = 100
superuser_reserved_connections = 3
The result of psql -U postgres -c 'SELECT count(*) FROM pg_stat_activity' was 91. What happened to the other 6 connections that would bring it up to 97 connections, which is the maximum number of connections I can get to PostgreSQL?
In both cases pgpoolAdmin shows all connections in use, connections to the database freeze, and no new connections are allowed.
Thank you!
pgpool uses the following rules to control the number of connections:
max_pool*num_init_children <= (max_connections - superuser_reserved_connections) (no query canceling needed)
max_pool*num_init_children*2 <= (max_connections - superuser_reserved_connections) (query canceling needed)
So the problem is that when query cancelling is possible, you must configure PostgreSQL with double the number of connections configured in pgpool.
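As a concrete sketch with the numbers from Test 2 and query cancelling in play (illustrative values, not a recommendation): 90 children * 2 pools * 2 = 360 possible backend connections, so the PostgreSQL side would need roughly:
# pgpool.conf
num_init_children = 90
max_pool = 2
# postgresql.conf
max_connections = 365                  # >= 90 * 2 * 2 + reserved slots
superuser_reserved_connections = 3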
As the title suggests, nothing is recorded in the slow query log file even though the related settings have been configured.
slow_query_log_file = /var/log/mysql/mariadb-slow.log
slow_query_log = 1
long_query_time = 1
log_slow_rate_limit = 1000
log_slow_verbosity = query_plan
log-queries-not-using-indexes
This is the MariaDB configuration content.
When I open the log file, only the basic header lines exist:
Tcp port: 3306 Unix socket: /run/mysqld/mysqld.sock
Time Id Command Argument
logrotate seems to work fine.
After connecting to MySQL, I ran SELECT SLEEP(3); (long enough to exceed long_query_time), but it did not work properly.
The result of the command is 0, which seems normal, but nothing is written to the log.
Why wouldn't it work?
The new settings only take effect after the MariaDB server instance is restarted. So, as mentioned in the comment, the solution is to restart the MariaDB server in order to apply the new settings.
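To apply and verify, something along these lines works (assuming a systemd-based install; the service may be named mariadb or mysql depending on the distribution):
sudo systemctl restart mariadb
mysql -e "SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'long_query_time';"
mysql -e "SELECT SLEEP(3);"    # should now show up in the slow query log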
On one host ldapsearch was taking 20 seconds to launch.
Even if I just asked it what its version number is, it still took 20 seconds:
time ldapsearch -VV
ldapsearch: @(#) $OpenLDAP: ldapsearch 2.4.44 (Sep 30 2020 17:16:36) $
mockbuild@x86-02.bsys.centos.org:/builddir/build/BUILD/openldap-2.4.44/openldap-2.4.44/clients/tools
(LDAP library: OpenLDAP 20444)
real 0m20.034s
user 0m0.006s
sys 0m0.008s
This isn't a question about search time - if I asked it to search, it would spend 20 seconds before it even started searching.
Once it starts, the search succeeds and takes about the same time as it does when invoked from other hosts.
I tried adding various command line parameters.
The only thing that returned a different result was ldapsearch --help, which returned basically instantly, suggesting that the problem wasn't in loading libraries or anything like that.
Running strace showed that the delay was in network traffic, specifically port 53 (DNS):
socket(AF_INET6, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 3 <0.000038>
connect(3, {sa_family=AF_INET6, sin6_port=htons(53), inet_pton(AF_INET6, "...
poll([{fd=3, events=POLLOUT}], 1, 0) = 1 ([{fd=3, revents=POLLOUT}]) <0.000011>
sendto(3, "..."..., 34, MSG_NOSIGNAL, NULL, 0) = 34 <0.000033>
poll([{fd=3, events=POLLIN}], 1, 5000) = 0 (Timeout) <5.005182>
The destination for the connect call turned out to be an IP address that was being set in /etc/resolv.conf.
The IP address was unreachable.
Removing the unreachable IP address from /etc/resolv.conf made the delay go away.
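For anyone chasing a similar delay, the trace above can be reproduced with standard strace options, and the configured resolvers are easy to inspect (commands are generic, not specific to this host):
strace -f -T -e trace=network ldapsearch -VV 2>&1 | grep -E 'connect|sendto|poll'
grep nameserver /etc/resolv.conf    # each unreachable entry adds a resolver timeout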
I have had an airflow webserver -D daemon process (v1.10.7) running on a machine (CentOS 7) for a long time. Suddenly I saw that the webserver could no longer be accessed, and checking the airflow-webserver.log I saw...
[airflow@airflowetl airflow]$ cat airflow-webserver.log
2020-10-23 00:57:15,648 ERROR - No response from gunicorn master within 120 seconds
2020-10-23 00:57:15,649 ERROR - Shutting down webserver
(nothing of note in airflow-webserver.err)
[airflow@airflowetl airflow]$ cat airflow-webserver.err
/home/airflow/.local/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
The airflow.cfg values for the webserver section look like...
[webserver]
# The base url of your website as airflow cannot guess what domain or
# cname you are using. This is used in automated emails that
# airflow sends to point links to the right web server
#base_url = http://localhost:8080
base_url = http://airflowetl.co.local:8080
# The ip specified when starting the web server
web_server_host = 0.0.0.0
# The port on which to run the web server
web_server_port = 8080
# Paths to the SSL certificate and key for the web server. When both are
# provided SSL will be enabled. This does not change the web server port.
web_server_ssl_cert =
web_server_ssl_key =
# Number of seconds the webserver waits before killing gunicorn master that doesn't respond
web_server_master_timeout = 120
# Number of seconds the gunicorn webserver waits before timing out on a worker
#web_server_worker_timeout = 120
web_server_worker_timeout = 300
# Number of workers to refresh at a time. When set to 0, worker refresh is
# disabled. When nonzero, airflow periodically refreshes webserver workers by
# bringing up new ones and killing old ones.
worker_refresh_batch_size = 1
# Number of seconds to wait before refreshing a batch of workers.
worker_refresh_interval = 30
# Secret key used to run your flask app
secret_key = my_key
# Number of workers to run the Gunicorn web server
workers = 4
# The worker class gunicorn should use. Choices include
# sync (default), eventlet, gevent
worker_class = sync
Ultimately, I just restarted the process as a daemon again with airflow webserver -D (should I have deleted the old airflow-webserver.log and .err files first?), but I am not sure what would make this happen, since it had had no problems running for months before this.
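Roughly, the restart amounted to something like this (pid file names based on the default -D behaviour and an AIRFLOW_HOME of ~/airflow; exact names may differ):
# stop any half-dead daemonized webserver and clear stale pid files
kill $(cat ~/airflow/airflow-webserver-monitor.pid) 2>/dev/null
kill $(cat ~/airflow/airflow-webserver.pid) 2>/dev/null
rm -f ~/airflow/airflow-webserver*.pid
# start again as a daemon and watch the log
airflow webserver -D
tail -f ~/airflow/airflow-webserver.log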
Could anyone with more experience explain what could have happened after all this time and how I could prevent it in the future? Are there any issues with running DAGs, or anything else I should check for, that this temporary unexpected shutdown of the webserver may have caused?
I am experiencing the same issue, and it only started (very infrequently) when I changed the following two config parameters in the webserver section.
worker_refresh_interval = 120
workers = 2
However, my parameters are also set quite differently from yours, so I will share them here.
rbac = True
web_server_host = 0.0.0.0
web_server_port = 8080
web_server_master_timeout = 600
web_server_worker_timeout = 600
default_ui_timezone = Europe/Amsterdam
reload_on_plugin_change = True
After comparing the two, since the two settings I changed were at their defaults in your config (as they were in mine before I changed them), it seems that a combination of several parameters is involved.
I'm working on a PHP project using Asterisk. I need to store Asterisk CDRs in a database, and I want to know how I can connect Asterisk to phpMyAdmin. I installed Asterisk on CentOS 6 (which is installed in VirtualBox), and phpMyAdmin is installed on another system.
Asterisk supports direct MySQL CDR logging, so there is no need to do anything like that.
http://www.voip-info.org/wiki/view/Asterisk+cdr+mysql
You'll need the cdr_mysql module. It's in the addons category.
Configuration is at /etc/asterisk/cdr_mysql.conf:
[global]
dbname = asteriskcdrdb
user = asterisk
password = supersecret
charset = utf8
table = cdr
;timezone = UTC
;compat = no
hostname = 127.0.0.1
port = 3306
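If the database and user referenced above do not exist yet, something along these lines creates them to match the sample config (names taken from the config above; adjust the host part if MySQL sees the connection as 127.0.0.1 rather than localhost, and the cdr table itself must also exist with the standard CDR columns):
CREATE DATABASE asteriskcdrdb;
CREATE USER 'asterisk'@'localhost' IDENTIFIED BY 'supersecret';
GRANT ALL PRIVILEGES ON asteriskcdrdb.* TO 'asterisk'@'localhost';
FLUSH PRIVILEGES;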
To check if the module is loaded:
asterisk*CLI> cdr show status
Call Detail Record (CDR) settings
----------------------------------
Logging: Enabled
Mode: Simple
Log unanswered calls: No
Log congestion: No
* Registered Backends
-------------------
mysql
To check if connection succeeded:
asterisk*CLI> cdr mysql status
Connected to asteriskcdrdb on 127.0.0.1 using table cdr for 8 days, 12 hours, 8 minutes, 38 seconds.
Wrote 0 records since last restart.
I have a Multi-Master Ring Replication setup in MariaDB. 3 Servers.
One of my servers ran out of disk space and I eventually needed to restart it. Now, after doing that, the two slave servers are reporting this error in their slave status.
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Error: connecting slave requested to start from GTID 1-1-426253, which is not in the master's binlog'
I'm really confused about how to recover my slave from this error. Could someone please tell me how I tell this slave server where to start from the correct GTID on its master?
Thanks
I got it all working again. I simply found the master's log file and position by going to the master server and typing SHOW MASTER STATUS;
I then used that information on the slave and did this.
STOP SLAVE 'MDB1';
CHANGE MASTER "MDB1" TO master_host="xxx.xxx.xxx.xxx", master_port=3306, master_user="****", master_password="****", master_log_file="mariadb-bin.000394", master_log_pos=385;
START SLAVE 'MDB1';
Then I checked it was all working OK, and then I changed back to using GTID:
STOP SLAVE 'MDB1';
CHANGE MASTER "MDB1" TO master_use_gtid=slave_pos;
START SLAVE 'MDB1';
After that it was all back and running again.
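To confirm a named connection is replicating again, checking that slave's status is enough (connection name as used above; Gtid_IO_Pos should advance once it catches up):
SHOW SLAVE 'MDB1' STATUS\G
-- expect Slave_IO_Running: Yes and Slave_SQL_Running: Yes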
Moshe L, here is my master's binlog setup:
server-id = 1
gtid_domain_id = 1
gtid_strict_mode = 1
report_host = MDB1
auto_increment_increment = 3
auto_increment_offset = 1
slave_parallel_threads = 12
replicate_ignore_db = mysql
replicate_ignore_table = MA4_Data.EOD_FileCache
log_bin = /var/log/mysql/mariadb-bin
log_bin_index = /var/log/mysql/mariadb-bin.index
binlog_format = mixed
#binlog_commit_wait_count = 12
#binlog_commit_wait_usec = 10000
#slave_compressed_protocol = 1
# not fab for performance, but safer
sync_binlog = 1
expire_logs_days = 10
max_binlog_size = 100M
# slaves
relay_log = /var/log/mysql/relay-bin
relay_log_index = /var/log/mysql/relay-bin.index
relay_log_info_file = /var/log/mysql/relay-bin.info
This is another solution one could try.
stop slave;
reset slave;
start slave;