As the title suggests, nothing is written to the slow query log file even though the related settings are in place.
slow_query_log_file = /var/log/mysql/mariadb-slow.log
slow_query_log = 1
long_query_time = 1
log_slow_rate_limit = 1000
log_slow_verbosity = query_plan
log-queries-not-using-indexes
This is the relevant content of MariaDB's configuration file.
When I open the log file, only the header lines exist:
Tcp port: 3306 Unix socket: /run/mysqld/mysqld.sock
Time Id Command Argument
logrotate seems to work fine.
After connecting to MySQL, I ran select sleep(); to trigger a slow query, but it did not work properly.
The command returned 0, which seems normal, but nothing was written to the log.
Why wouldn't it work?
The new settings only take effect once the MariaDB server instance is restarted. The solution, as mentioned in the comment, is therefore to restart the MariaDB server instance so that the new settings apply.
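To verify the settings took effect after the restart, you can check the server variables and deliberately trigger a slow query. A minimal sketch using the pymysql driver (the driver choice and the credentials are my assumptions, not from the question):

import pymysql

# Credentials are placeholders; use your own.
conn = pymysql.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()

# Confirm the slow query log is enabled and where it writes.
cur.execute("SHOW GLOBAL VARIABLES LIKE 'slow_query%'")
print(cur.fetchall())

# SLEEP(2) exceeds long_query_time = 1, so it should show up in the log.
cur.execute("SELECT SLEEP(2)")
conn.close()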
I have had an airflow webserver -D daemon process (v1.10.7) running on a machine (CentOS 7) for a long time. Suddenly the webserver could no longer be accessed, and checking airflow-webserver.log I saw...
[airflow@airflowetl airflow]$ cat airflow-webserver.log
2020-10-23 00:57:15,648 ERROR - No response from gunicorn master within 120 seconds
2020-10-23 00:57:15,649 ERROR - Shutting down webserver
(nothing of note in airflow-webserver.err)
[airflow@airflowetl airflow]$ cat airflow-webserver.err
/home/airflow/.local/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
The airflow.cfg values for the webserver section look like...
[webserver]
# The base url of your website as airflow cannot guess what domain or
# cname you are using. This is used in automated emails that
# airflow sends to point links to the right web server
#base_url = http://localhost:8080
base_url = http://airflowetl.co.local:8080
# The ip specified when starting the web server
web_server_host = 0.0.0.0
# The port on which to run the web server
web_server_port = 8080
# Paths to the SSL certificate and key for the web server. When both are
# provided SSL will be enabled. This does not change the web server port.
web_server_ssl_cert =
web_server_ssl_key =
# Number of seconds the webserver waits before killing gunicorn master that doesn't respond
web_server_master_timeout = 120
# Number of seconds the gunicorn webserver waits before timing out on a worker
#web_server_worker_timeout = 120
web_server_worker_timeout = 300
# Number of workers to refresh at a time. When set to 0, worker refresh is
# disabled. When nonzero, airflow periodically refreshes webserver workers by
# bringing up new ones and killing old ones.
worker_refresh_batch_size = 1
# Number of seconds to wait before refreshing a batch of workers.
worker_refresh_interval = 30
# Secret key used to run your flask app
secret_key = my_key
# Number of workers to run the Gunicorn web server
workers = 4
# The worker class gunicorn should use. Choices include
# sync (default), eventlet, gevent
worker_class = sync
Ultimately, I just restarted the process as a daemon again with airflow webserver -D (should I have deleted the old airflow-webserver.log and .err files first?), but I am not sure what would make this happen, since it had run without problems for months before this.
Could anyone with more experience explain what could have happened after all this time and how I could prevent it in the future? Are there any issues with running DAGs, or anything else, that I should check for that this temporary unexpected shutdown of the webserver may have caused?
I am experiencing the same issue, and it only started (very infrequently) when I changed the following two config parameters in the webserver section.
worker_refresh_interval = 120
workers = 2
However, my parameters are set quite differently from yours; I will share them here.
rbac = True
web_server_host = 0.0.0.0
web_server_port = 8080
web_server_master_timeout = 600
web_server_worker_timeout = 600
default_ui_timezone = Europe/Amsterdam
reload_on_plugin_change = True
After comparing the two, since the two parameters I changed were at their defaults in your setup (the same as mine before I changed them), it seems that a combination of several parameters is involved.
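As a mitigation rather than a root-cause fix, you could run a small watchdog that polls the webserver's /health endpoint and restarts the daemon when it stops responding. A minimal sketch, assuming the host/port from the config above and that /health is available (it is in Airflow 1.10.x):

import subprocess
import requests

HEALTH_URL = "http://localhost:8080/health"  # host/port taken from airflow.cfg above

def webserver_is_up(timeout=10):
    try:
        return requests.get(HEALTH_URL, timeout=timeout).status_code == 200
    except requests.exceptions.RequestException:
        return False

if not webserver_is_up():
    # Restart the daemonized webserver; adapt to your deployment (systemd, supervisor, cron, ...).
    subprocess.run(["airflow", "webserver", "-D"])

Run from cron every few minutes, this at least bounds the downtime when the gunicorn master dies again.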
Emails are not being delivered to particular email IDs.
We are using Sentora panel and Postfix mail server.
Error message:
Command died with signal 6: "/usr/libexec/dovecot/deliver"
Mail log:
Feb 14 09:50:27 host postfix/pipe[24913]: CBD7D2010A5: to=,
relay=dovecot, delay=13047, delays=13045/0/0/1.3, dsn=4.3.0,
status=SOFTBOUNCE (Command died with signal 6:
"/usr/libexec/dovecot/deliver")
Please help.
Signal 6 is SIGABRT, which is typically sent when there is an internal problem with the code of Dovecot's deliver binary. There are a number of reasons this could happen.
You can turn on LDA logging within your Dovecot config to get more insight on what's actually happening:
protocol lda {
...
# remember to give proper permissions for these files as well
log_path = /var/log/dovecot-lda-errors.log
info_log_path = /var/log/dovecot-lda.log
}
This can also happen when mail_temp_dir (default: /tmp) does not have enough space to extract attachments. It was fixed in https://github.com/dovecot/core/commit/43d7f354c44b358f45ddd10deb3742ec1cc94889, but the fix is not yet available in some Linux distributions (such as Debian Bullseye).
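If you suspect the disk-space case, a quick check of the free space in the temp directory can confirm it. A minimal sketch, assuming the default mail_temp_dir of /tmp:

import shutil

# Default mail_temp_dir; adjust if you have overridden it in your Dovecot config.
total, used, free = shutil.disk_usage("/tmp")
print("free space in /tmp: %.1f MiB" % (free / 1024**2))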
I have installed pgpool 3.2.1 with 2 backends in streaming replication mode, with load balancing and connection pooling. I did some high-load tests trying to saturate the pgpool connections.
Supposing that this rule is correct: max_pool*num_init_children <= (max_connections - superuser_reserved_connections)
Test 1:
num_init_children = 90
max_pool = 1
(only in the master)
max_connections = 100
superuser_reserved_connections = 3
Result for psql -U postgres -c 'SELECT count(*) FROM pg_stat_activity' was 90.
Test 2:
num_init_children = 90
max_pool = 2
(only in the master)
max_connections = 100
superuser_reserved_connections = 3
Result for psql -U postgres -c 'SELECT count(*) FROM pg_stat_activity' was 91. What happened to the other 6 connections that would take it up to 97, which is the maximum number of connections I can get to PostgreSQL?
In both cases pgpoolAdmin showed all connections in use, connections to the database froze, and no new connections were allowed.
Thank you!
pgpool uses the following rules to control the connections:
max_pool*num_init_children <= (max_connections - superuser_reserved_connections) (no query canceling needed)
max_pool*num_init_children*2 <= (max_connections - superuser_reserved_connections) (query canceling needed)
So the problem is that when query canceling is needed, you must configure PostgreSQL with double the number of connections configured in pgpool.
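To make the rule concrete, here is the arithmetic for Test 2 above as a small Python sketch (the variable names mirror the pgpool/PostgreSQL settings):

num_init_children = 90
max_pool = 2
max_connections = 100
superuser_reserved_connections = 3

available = max_connections - superuser_reserved_connections   # 97

# Rule without query canceling: 180 <= 97 -> False, so the pool can exhaust PostgreSQL.
print(num_init_children * max_pool <= available)
# Rule with query canceling: 360 <= 97 -> False by an even wider margin.
print(num_init_children * max_pool * 2 <= available)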
Does anyone know how to use Python to ping a local host to see whether it is active or not? We (my team and I) have already tried using
os.system("ping 192.168.1.*")
But the response for "destination unreachable" is the same as the response when the host is up.
Thanks for your help.
Use this ...
import os
hostname = "localhost" #example
response = os.system("ping -n 1 " + hostname)
#and then check the response...
if response == 0:
    print(hostname, 'is up!')
else:
    print(hostname, 'is down!')
If you use this script on Unix/Linux, replace the -n switch with -c!
That's all :)
I've found that using os.system(...) leads to false positives (as the OP said, 'destination host unreachable' == 0).
As stated before, using subprocess.Popen works. For simplicity, I recommend doing that and then parsing the output. You can easily do this like:
if 'unreachable' in output:
    print("Offline")
Just check for whichever strings you care about in the ping output with a 'this' in 'that' test.
Example:
import subprocess

hostname = "10.20.16.30"
# communicate() returns bytes in Python 3, so decode before matching strings.
output = subprocess.Popen(["ping.exe", hostname], stdout=subprocess.PIPE).communicate()[0].decode()
print(output)
if 'unreachable' in output:
    print("Offline")
The best way I could find to do this on Windows, if you don't want to be parsing the output is to use Popen like this:
from subprocess import Popen, PIPE

num = 1
host = "192.168.0.2"
wait = 1000

# Pipe stdout/stderr if you don't want ping to print to the console.
ping = Popen("ping -n {} -w {} {}".format(num, wait, host),
             stdout=PIPE, stderr=PIPE)
exit_code = ping.wait()

if exit_code != 0:
    print("Host offline.")
else:
    print("Host online.")
This works as expected. The exit code gives no false positives. I've tested it in Python 2.7 and 3.4 on Windows 7 and Windows 10.
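Combining the two ideas in this thread (exit code plus output parsing) gives a cross-platform check that also guards against the "destination host unreachable" false positive. A sketch, assuming Python 3.5+ for subprocess.run:

import platform
import subprocess

def ping(host):
    # Windows ping uses -n for the count; Unix ping uses -c.
    count_flag = "-n" if platform.system().lower() == "windows" else "-c"
    result = subprocess.run(["ping", count_flag, "1", host],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output = result.stdout.decode(errors="ignore").lower()
    # On some systems "destination host unreachable" can still exit with code 0,
    # so check the output as well as the return code.
    return result.returncode == 0 and "unreachable" not in output

print(ping("192.168.0.2"))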
I coded a little program a while back. It might not be exactly what you are looking for, but you can always run a program on the host OS that opens a socket on startup. Here is the ping program itself:
# Run this on the PC that wants to check whether the other PC is online.
from socket import *

def pingit():                            # defining function for later use
    s = socket(AF_INET, SOCK_STREAM)     # creates the socket
    host = 'localhost'                   # enter the IP of the workstation here
    port = 80                            # select the port which should be pinged
    try:
        s.connect((host, port))          # tries to connect to the host
    except ConnectionRefusedError:       # if it fails to connect
        print("Server offline")          # report that the server is offline
        s.close()                        # close the socket so it can be re-used
        pingit()                         # restart the whole process
        return                           # don't fall through after the retry returns
    print("Connected!")                  # connected to the host
    s.close()                            # close the socket just in case

pingit()  # starts off the whole process
And here is the program that can receive the ping request:
# This runs on the remote PC that is going to be checked.
from socket import *

HOST = 'localhost'
PORT = 80
BUFSIZ = 1024
ADDR = (HOST, PORT)

serversock = socket(AF_INET, SOCK_STREAM)
serversock.bind(ADDR)
serversock.listen(2)

while True:
    clientsock, addr = serversock.accept()  # a "ping" connection arrived
    clientsock.close()                      # close the client connection
    serversock.close()                      # stop listening
    break                                   # and exit the program
To run a program without actually showing it, just save the file as .pyw instead of .py.
That makes it invisible until the user checks the running processes.
Hope it helps!
For simplicity, I use a self-made function based on socket.
import socket

def checkHostPort(HOSTNAME, PORT):
    """
    check if host:port is reachable
    """
    result = False
    try:
        destIp = socket.gethostbyname(HOSTNAME)
    except socket.gaierror:        # name resolution failed
        return result
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(15)
    try:
        # connect() returns None, so close the socket itself afterwards
        s.connect((destIp, PORT))
        result = True
    except OSError:                # refused, timed out, unreachable, ...
        pass
    finally:
        s.close()
    return result
If IP:port is reachable, it returns True.
If you want to simulate a real ping, you may refer to ping.py.
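For what it's worth, socket.create_connection does the resolve/connect/timeout steps in one call, so the function above can be condensed. A sketch with the same semantics assumed:

import socket

def check_host_port(hostname, port, timeout=15):
    """Return True if hostname:port accepts a TCP connection."""
    try:
        with socket.create_connection((hostname, port), timeout=timeout):
            return True
    except OSError:
        return False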
Try this (note that these flags are from BSD/macOS ping; Linux ping uses different switches):
import os

ret = os.system("ping -o -c 3 -W 3000 192.168.1.10")
if ret != 0:
    print("Host is not up")
-o exits successfully as soon as one reply packet is received
-W 3000 gives it only 3000 ms to reply to the packet
-c 3 lets it try a few times so that your ping doesn't run forever
Use this and parse the string output:
import subprocess

# communicate() returns bytes in Python 3, so decode before parsing.
output = subprocess.Popen(["ping.exe", "192.168.1.1"], stdout=subprocess.PIPE).communicate()[0].decode()
How about the requests module?
import requests

def ping_server(address):
    try:
        requests.get(address, timeout=1)
    # Catch ConnectionError (e.g. connection refused, DNS failure) as well as
    # ConnectTimeout, so a dead host returns False instead of raising.
    except requests.exceptions.RequestException:
        return False
    return True
No need to split URLs to remove ports or test ports, and no localhost false positive.
Timeout amount doesn't really matter since it only hits the timeout when there is no server, which in my case meant performance no longer mattered. Otherwise, this returns at the speed of a request, which is plenty fast for me.
Timeout waits for the first bit, not total time, in case that matters.
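Note that requests needs a full URL including the scheme, not a bare host, so a call looks like this (the port here is just illustrative):

print(ping_server("http://192.168.1.10:8080"))  # True if something answers HTTP there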