Environment Variables in Lua - nginx

I have a Lua script which logs in to Redis and processes some queries to enable IP-based blocking.
Below is the Redis config I am using in my Lua script, and to run this script on every hit to the web server I use the access_by_lua directive in my nginx configuration.
--- Redis Configuration
local redis_host = "100.2.4.4"
local redis_port = 6379
local redis_timeout = 30
local cache_ttl = 3600
I would like to use an environment variable for redis_host and the port rather than static values.
Any help is appreciated.
Note:
I have tried it as below, but no luck:
--- Redis Configuration
local redis_host = os.getenv("redis_auth_host")
local redis_port = os.getenv("redis_auth_port")
local redis_timeout = 30
local cache_ttl = 3600
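For context, the script is wired into nginx along these lines (a sketch of the access_by_lua setup mentioned above, using the access_by_lua_file variant; the location and script path are placeholders):
location / {
    # run the IP-blocking script on every request
    access_by_lua_file /etc/nginx/lua/ip_block.lua;
    # ... proxy_pass / other handlers as usual
}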

Redis runs Lua scripts in a sandbox, which disables global variables with a few exceptions. In your case, os is one of the disabled globals, so you cannot use it.
To avoid hard-coding the host and port, you can store them in Redis's key space, i.e. set host and port as key-value pairs in Redis, and read them with the redis.call() method:
local redis_host = redis.call("get", "host")
local redis_port = redis.call("get", "port")
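For example, the keys could be seeded ahead of time with redis-cli (a sketch; key names match the snippet above):
redis-cli SET host 100.2.4.4
redis-cli SET port 6379
Keep in mind that GET returns strings, so the port value may need to be passed through tonumber() before use.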

Related

Using vector.dev to generate syslog from other lxc containers

I am wondering how to configure vector.dev to receive syslog from other LXC containers. I have docker-compose running and Vector installed on one container. The other containers host PBX, and I'm wondering how I would go about configuring this to have one central syslog server using Vector.
I believe I have to create a socket but my current configuration is just this in the vector.toml file:
[sources.syslog]
type = "syslog"
address = "0.0.0.0:514"
max_length = 102_400
mode = "udp"
path = "/vector.socket"
[sources.in]
type = "stdin"
[sinks.out]
inputs = ["in"]
type = "console"
encoding.codec = "text"
This is on the host currently. I believe I'm supposed to install Vector on the instances I want to get logs from, too?
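For what it's worth, a minimal central-receiver config might look like the sketch below; the path option belongs to the unix-socket mode and is not needed for udp, and 10.0.3.1 is just a placeholder for the address at which the other containers can reach the Vector container:
[sources.syslog]
type = "syslog"
address = "0.0.0.0:514"
mode = "udp"
[sinks.out]
inputs = ["syslog"]
type = "console"
encoding.codec = "text"
Instead of installing Vector on every instance, one option is to have the PBX containers forward their syslog to that address, for example with an rsyslog rule such as *.* @10.0.3.1:514 (a single @ means UDP).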

mariadb slow query not logged

As the title suggests, nothing is recorded in the log file even though the relevant settings are in place.
slow_query_log_file = /var/log/mysql/mariadb-slow.log
slow_query_log = 1
long_query_time = 1
log_slow_rate_limit = 1000
log_slow_verbosity = query_plan
log-queries-not-using-indexes
This is the content of MariaDB's conf file.
When I open the log file, only the default header exists:
Tcp port: 3306 Unix socket: /run/mysqld/mysqld.sock
Time Id Command Argument
logrotate seems to work fine.
After connecting to MySQL, I ran select sleep(); but it did not work properly.
The result of the command is 0, which seems normal, but nothing is written to the log.
Why wouldn't it work?
The new settings apply only when the MariaDB server instance is restarted. The solution, as mentioned in the comment, is therefore to restart the MariaDB server so that the new settings take effect.
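For example (a sketch; the service name can differ between distributions):
sudo systemctl restart mariadb
Then, from the client, confirm the settings took effect and trigger a test entry:
SHOW GLOBAL VARIABLES LIKE 'slow_query%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';
SELECT SLEEP(2);
With long_query_time = 1, the SLEEP(2) call should show up in /var/log/mysql/mariadb-slow.log.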

How do I deploy Apache-Airflow via uWSGI and nginx?

I'm trying to deploy airflow in a production environment on a server running nginx and uWSGI.
I've searched the web and found instructions on installing Airflow behind a reverse proxy, but those instructions only have nginx config examples. However, due to permissions, I can't change nginx.conf itself and have to solve it via uWSGI.
My folder structure is:
project_folder
|_airflow
|  |_airflow.cfg
|  |_webserver_config.py
|  |_wsgi.py
|_env
|_start
|_stop
|_uwsgi.ini
My path/to/myproject/uwsgi.ini file is configured as follows:
[uwsgi]
master = True
http-socket = 127.0.0.1:9999
virtualenv = /path/to/myproject/env/
daemonize = /path/to/myproject/uwsgi.log
pidfile = /path/to/myproject/tmp/myapp.pid
workers = 2
threads = 2
# adjust the following to point to your project
wsgi-file = /path/to/myproject/airflow/wsgi.py
touch-reload = /path/to/myproject/airflow/wsgi.py
and currently the /path/to/myproject/airflow/wsgi.py looks as follows:
def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b'Hello World!']
I'm assuming I have to somehow call the airflow flask app from the wsgi.py file (perhaps by also changing some reverse proxy fix configs, since I'm behind SSL), but I'm stuck; what do I have to configure?
Will this procedure then be identical for the workers and scheduler?
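One way this might be wired up (a sketch, assuming Airflow's Flask app can be imported via airflow.www.app.cached_app, which recent 1.10.x/2.x releases provide; verify against your version) is to have wsgi.py expose that app instead of the Hello World stub:
# /path/to/myproject/airflow/wsgi.py
from airflow.www.app import cached_app

# uWSGI looks for a callable named "application" in the file given by wsgi-file
application = cached_app()
The scheduler and workers are not WSGI applications, so they would still be started as separate processes (e.g. airflow scheduler) rather than through uWSGI.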

airflow webserver suddenly stopped after long time of no issues, "No response from gunicorn"

I have had an airflow webserver -D daemon process (v1.10.7) running on a machine (CentOS 7) for a long time. Suddenly I saw that the webserver could no longer be accessed, and checking the airflow-webserver.log I saw...
[airflow@airflowetl airflow]$ cat airflow-webserver.log
2020-10-23 00:57:15,648 ERROR - No response from gunicorn master within 120 seconds
2020-10-23 00:57:15,649 ERROR - Shutting down webserver
(nothing of note in airflow-webserver.err)
[airflow@airflowetl airflow]$ cat airflow-webserver.err
/home/airflow/.local/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
The airflow.cfg values for the webserver section look like...
[webserver]
# The base url of your website as airflow cannot guess what domain or
# cname you are using. This is used in automated emails that
# airflow sends to point links to the right web server
#base_url = http://localhost:8080
base_url = http://airflowetl.co.local:8080
# The ip specified when starting the web server
web_server_host = 0.0.0.0
# The port on which to run the web server
web_server_port = 8080
# Paths to the SSL certificate and key for the web server. When both are
# provided SSL will be enabled. This does not change the web server port.
web_server_ssl_cert =
web_server_ssl_key =
# Number of seconds the webserver waits before killing gunicorn master that doesn't respond
web_server_master_timeout = 120
# Number of seconds the gunicorn webserver waits before timing out on a worker
#web_server_worker_timeout = 120
web_server_worker_timeout = 300
# Number of workers to refresh at a time. When set to 0, worker refresh is
# disabled. When nonzero, airflow periodically refreshes webserver workers by
# bringing up new ones and killing old ones.
worker_refresh_batch_size = 1
# Number of seconds to wait before refreshing a batch of workers.
worker_refresh_interval = 30
# Secret key used to run your flask app
secret_key = my_key
# Number of workers to run the Gunicorn web server
workers = 4
# The worker class gunicorn should use. Choices include
# sync (default), eventlet, gevent
worker_class = sync
Ultimately, I just restarted the process as a daemon again with airflow webserver -D (should I have deleted the old airflow-webserver.log and .err files first?), but I am not sure what would make this happen, since it had run with no problems for months before this.
Could anyone with more experience explain what could have happened after all this time, and how I could prevent it in the future? Are there any issues with running DAGs, or anything else I should check for, that this temporary unexpected shutdown of the webserver may have caused?
I am experiencing the same issue, and it only started (very infrequently) when I changed the following two config parameters in the webserver section.
worker_refresh_interval = 120
workers = 2
However, my parameters are also set quite differently from yours, so I will share them here:
rbac = True
web_server_host = 0.0.0.0
web_server_port = 8080
web_server_master_timeout = 600
web_server_worker_timeout = 600
default_ui_timezone = Europe/Amsterdam
reload_on_plugin_change = True
After comparing the two, since your values for the two parameters I changed are still at the defaults (the same as mine before the change), it seems that the issue comes from a combination of more parameters.
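Put into airflow.cfg form, the combination described in this answer would be (values copied from above, not a recommendation):
[webserver]
rbac = True
web_server_host = 0.0.0.0
web_server_port = 8080
web_server_master_timeout = 600
web_server_worker_timeout = 600
worker_refresh_interval = 120
workers = 2
default_ui_timezone = Europe/Amsterdam
reload_on_plugin_change = True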

Define db value while creating connection with Redis in nginx.conf file while using OpenResty

I am using Redis with a Django project which is running on nginx, and I am creating the connection with this code:
red = redis.Redis("localhost", port=6397, db=5, socket_timeout=2)
Now, using OpenResty, I am fetching cached data from Redis with Lua in the nginx.conf file, and I am able to create the connection:
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000) -- 1 second
local ok, err = red:connect("10.0.0.161", 6379)
Here in the nginx.conf file I am not able to understand how to define the db value.
I tried local ok, err = red:connect("10.0.0.161", 6379, {db=5}) but it is not working.
Please help me.
Just use select once connected:
red:select(5)
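Combined with the connection code from the question, that looks roughly like this (a sketch with minimal error handling):
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000) -- 1 second

local ok, err = red:connect("10.0.0.161", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return
end

-- switch to the same logical database the Django client uses (db=5)
local res, err = red:select(5)
if not res then
    ngx.log(ngx.ERR, "failed to select db 5: ", err)
    return
end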
