Fluent Bit Parser Causing Logging Driver to Fail

I have configured a Docker Compose setup with Postgres and Fluent Bit, and I configured the Postgres container to use the fluentd driver for logging. Everything works well without configuring a parser in Fluent Bit, but the moment I configure a parser I get this error when I run docker compose up -d:
Error response from daemon: failed to initialize logging driver: dial tcp [::1]:24224: connect: connection refused
Am I missing something?
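For context, a minimal sketch of the compose wiring (the service names, image tags, and the fluentd-address value here are illustrative, not copied from my actual file):
services:
  fluent-bit:
    image: fluent/fluent-bit        # illustrative tag
    ports:
      - "24224:24224"               # the forward input configured below
    volumes:
      - ./Fluentbit:/Fluentbit      # fluent-bit.conf and parsers.conf live here
  postgres:
    image: postgres                 # illustrative tag
    logging:
      driver: fluentd               # the Docker daemon on the host ships the container logs
      options:
        fluentd-address: localhost:24224
        tag: postgres               # matched by the [FILTER] Match rule below
    depends_on:
      - fluent-bit                  # start order only; does not wait for the listener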
Here is the Fluent Bit config file:
[SERVICE]
    log_level    debug
    Parsers_File /Fluentbit/parsers.conf

[INPUT]
    Name   forward
    Listen 0.0.0.0
    Port   24224

[FILTER]
    Name     parser
    Match    postgres
    Key_Name log
    Parser   postgres

[OUTPUT]
    Name               opensearch
    Match              *
    Host               opensearch-node1
    Port               9200
    HTTP_User          password
    HTTP_Passwd        password
    Suppress_Type_Name On
    Logstash_Format    On
and here is the parser config file:
[PARSER]
    Name        postgres
    Format      regex
    Regex       (?<time>[\d\-\:\. ]+) [^ ]* \[(?<process>[\d]*)\] (?<db>[^ ]*) (?<ip>[\d\.]*)(\()(?<port>[\d]*)(\)) (\[)?(?<application>[\w ]*)(\])? (postgres)(?<severity>[^ ]*) (?<description>[\s\S]*)$
    Time_Key    time
    Time_Format %d/%b/%Y:%H:%M:%S %z
I configured the parser to parse the Postgres logs before ingesting them into OpenSearch, and I expected to see the parsed logs show up as documents inside OpenSearch.
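One way I can rule the parser file in or out (assuming the stock fluent/fluent-bit image, whose entrypoint is the fluent-bit binary, a build that supports the --dry-run flag, and the illustrative config path from the sketch above) is to validate the configuration on its own, before the logging driver comes into play:
docker run --rm -v "$(pwd)/Fluentbit:/Fluentbit" fluent/fluent-bit \
    -c /Fluentbit/fluent-bit.conf --dry-run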

Related

Airflow SambaHook authentication issue with SpnegoError and Kerberos?

I am trying to connect to a Samba server in Airflow using the SambaHook class. The Samba server requires Kerberos authentication.
I have already defined a Samba connection in Airflow using the following parameters:
Host, Schema and Extra ({"auth": "kerberos"}):
airflow connections add "samba_repo" --conn-type "samba" --conn-host "myhost.mywork.com" --conn-schema "fld" --conn-extra '{"auth": "kerberos"}'
I'm trying to use the SambaHook class in Airflow to connect to a Samba server. When I run my code, I get the following error:
Failed to authenticate with server: SpnegoError (1): SpnegoError (16): Operation not supported or available, Context: Retrieving NTLM store without NTLM_USER_FILE set to a filepath, Context: Unable to negotiate common mechanism
However, when I use smbclient to connect to the same server using Kerberos authentication from the Docker terminal, it works fine with the command:
smbclient //'myhost'/'fld' -c 'ls "\workpath\*" ' -k
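For completeness, the ticket cache that smbclient relies on can be inspected from the same container with the standard MIT Kerberos tools (the principal below is just a placeholder):
klist                   # list cached tickets and the default principal
kinit user@MYWORK.COM   # obtain a ticket if the cache is empty (placeholder principal/realm)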
What I tried: I set up a connection to the Samba server in Airflow using the SambaHook class and tried to use the listdir method to retrieve a list of files in a specific directory.
What I expected to happen: I expected the listdir method to successfully retrieve a list of files in the specified directory from the Samba server.
What actually resulted: Instead, I encountered the following error message:
Failed to authenticate with server: SpnegoError (1): SpnegoError (16): Operation not supported or available, Context: Retrieving NTLM store without NTLM_USER_FILE set to a filepath, Context: Unable to negotiate common mechanism

Airflow seems to be ignoring "fernet_key" config

Summary: Airflow seems to ignore the fernet_key value both from airflow.cfg and from environment variables, even though the exposed config in the webserver GUI shows the correct value. All DAGs using encrypted variables therefore fail.
Now the details:
I have an Airflow 2.3.2 (webserver and scheduler) running on a VM (Ubuntu 20.04) in the cloud. To start and restart the services I am using systemctl. Here are the contents of the airflow-webserver.service:
[Unit]
Description=Airflow webserver daemon
After=network.target postgresql.service airflow-init.service
[Service]
EnvironmentFile=/etc/airflow/secrets.txt
Environment="AIRFLOW_HOME=/etc/airflow"
User=airflow
Group=airflow
Type=simple
ExecStart=/usr/local/bin/airflow webserver --pid /run/airflow/webserver.pid
Restart=on-failure
RestartSec=5s
PrivateTmp=true
RuntimeDirectory=airflow
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target
As you can tell I am using an environment file. It looks like this:
AIRFLOW__CORE__FERNET_KEY=some value
AIRFLOW__WEBSERVER__SECRET_KEY=some value
The setup itself seems to be working as confirmed by an exposed config in the webserver GUI:
link to a screenshot.
However, since upgrading to 2.3.2 (from 2.2.3) I am facing an issue that seems to be related to the fernet key configuration. The gist of it is that Airflow seems to ignore the fernet_key config and therefore fails to decrypt variables. This is how it manifests:
$ airflow variables get DBT_USER
Variable DBT_USER does not exist
$ airflow variables list -v
[2022-07-14 17:57:09,920] {variable.py:79} ERROR - Can't decrypt _val for key=DBT_USER, FERNET_KEY configuration missing
[2022-07-14 17:57:09,922] {variable.py:79} ERROR - Can't decrypt _val for key=DBT_PASSWORD, FERNET_KEY configuration missing
[2022-07-14 17:57:09,924] {variable.py:79} ERROR - Can't decrypt _val for key=DBT_USER, FERNET_KEY configuration missing
[2022-07-14 17:57:09,925] {variable.py:79} ERROR - Can't decrypt _val for key=DBT_PASSWORD, FERNET_KEY configuration missing
key
============
DBT_USER
DBT_PASSWORD
I receive exactly the same error when running the DAGs that use encrypted variables (templated or with Variable.get). The issue persists even if I hardcode the fernet_key in airflow.cfg. Since I do not have many variables, I tried deleting them and creating new ones to ensure that the fernet_key matches the key needed to decrypt the values stored in the database. I then confirmed that the fernet_key from my config can correctly decrypt the values stored in the db (I fetched the encrypted values by querying the variable table in PostgreSQL directly).
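One check that shows which value a given process actually resolves (assuming the airflow config subcommand available in 2.x, and that the scheduler unit is named analogously to the webserver one shown above):
# what the CLI environment resolves for the fernet key (prints the key, so run it somewhere safe)
AIRFLOW_HOME=/etc/airflow airflow config get-value core fernet_key
# confirm both units actually load the EnvironmentFile with the key
systemctl cat airflow-webserver.service
systemctl cat airflow-scheduler.service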
I am out of ideas so any hint is greatly appreciated.

Symfony DBAL Exception: could not find driver

On the command line I get a connection and the desired entities, with no driver error:
php bin/console dbal:run-sql 'select * from ourtest'
But on the web I get an error from this call:
$this->connection->fetchAll('SELECT * FROM ourtest');
Handling
"App\Application\Command\DocumentUpload\DocumentUploadCommand" failed:
An exception occurred in driver: could not find driver
I tried php -m; it lists, among others, PDO, pdo_mysql, mysqli, mysqlnd.
Connection URL from .env:
DATABASE_URL=mysql://xxx:xxx#mysql-db:3306/db_test_01?serverVersion=8.0
doctrine.yaml:
doctrine:
    dbal:
        dbname: db_test_01
        host: mysql-db
        port: 3306
        user: xxx
        password: xxx
        driver: pdo_mysql
        version: 8.0
My guess is that the command line and your webserver use different configurations of PHP, or different installations altogether.
Here's how to find out: Put a phpinfo() call in a file on your webserver (Do not do that on production as it will expose sensitive data to anyone with access to this file!). Open that file in your browser and check where your PHP installation is located.
On the command line, you can run which php to see where the PHP binary is located.
Compare the two and see whether you are running on the same binary. If it is indeed the same, then you should check which .ini file is used by the web server. In that .ini file you should find the name of the mysql extension (pdo_mysql.so) and it should not be commented out (no preceding ;).
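A sketch of the CLI side of that comparison (the web side has to come from phpinfo(), since the web SAPI may load a different php.ini):
which php              # path of the CLI binary
php --ini              # which php.ini (and additional .ini files) the CLI loads
php -m | grep -i pdo   # confirm pdo_mysql is loaded for the CLI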

Is there any CLI way to show information about Glassfish JDBC connection pool?

The only relevant command that I found is:
NAME
list-jdbc-connection-pools - lists all JDBC connection pools
EXAMPLES
This example lists the existing JDBC connection pools.
asadmin> list-jdbc-connection-pools
sample_derby_pool
__TimerPool
Command list-jdbc-connection-pools executed successfully.
What I want is to display information about a particular connection pool, such as:
asadmin desc-jdbc-connection-pool sample_derby_pool
name: sample_derby_pool
databaseName: oracle
portNumber: 1521
serverName: test
user: testUser
...
Try running:
asadmin get * | more
The above command will display all GlassFish attributes. Pipe it to grep to get just the pool properties you are interested in:
asadmin get * | grep TimerPool
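If your GlassFish version supports querying by dotted name (an assumption about your setup), you can also scope the query to a single pool instead of grepping everything, e.g. for the pool listed above:
asadmin get "resources.jdbc-connection-pool.sample_derby_pool.*"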
Hope this helps.

Binding external IP address to RabbitMQ server

I have box A, and it has a consumer on it that listens on a RabbitMQ server.
I have box B that will publish a message to the listener.
As long as all of this is on box A and I start the RabbitMQ server with defaults, it works fine.
The defaults are host=127.0.0.1 on port 5672, but
when I telnet box.a.ip.addy 5672 from box B I get:
Trying box.a.ip.addy...
telnet: connect to address box.a.ip.addy: No route to host
telnet: Unable to connect to remote host: No route to host
telnet on port 22 is fine; I can ssh into box A from box B.
So I assume I need to change the IP that the RabbitMQ server binds to.
I found this: http://www.rabbitmq.com/configure.html and I now have a config file in the location the documentation said to use, with the name rabbitmq.config and it contains:
[
{rabbit, [{tcp_listeners, {"box.a.ip.addy", 5672}}]}
].
So I stopped the server and started the RabbitMQ server again. It failed. Here are the errors from the error logs; it's a little over my head (in fact most of this is).
=ERROR REPORT==== 23-Aug-2011::14:49:36 ===
FAILED
Reason: {{case_clause,{{"box.a.ip.addy",5672}}},
[{rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1},
{rabbit_networking,boot_tcp,0},
{rabbit_networking,boot,0},
{rabbit,'-run_boot_step/1-lc$^1/1-1-',1},
{rabbit,run_boot_step,1},
{rabbit,'-start/2-lc$^0/1-0-',1},
{rabbit,start,2},
{application_master,start_it_old,4}]}
=INFO REPORT==== 23-Aug-2011::14:49:37 ===
application: rabbit
exited: {bad_return,{{rabbit,start,[normal,[]]},
{'EXIT',{rabbit,failure_during_boot}}}}
type: permanent
and here is some more from the start up log:
Erlang has closed
Error: {node_start_failed,normal}
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}})
Please help
Did you try adding
RABBITMQ_NODE_IP_ADDRESS=box.a.ip.addy
to the /etc/rabbitmq/rabbitmq.conf file?
Per http://www.rabbitmq.com/configure.html#customise-general-unix-environment
Also, per that documentation, the default is to bind to all interfaces. Perhaps there is a configuration setting or an environment variable already set on your system that restricts the server to localhost, overriding anything else you do.
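As an aside, if you do keep the rabbitmq.config route, I believe tcp_listeners expects a list of listeners rather than a bare tuple, i.e. roughly:
[
  {rabbit, [{tcp_listeners, [{"box.a.ip.addy", 5672}]}]}
].
(The address is the placeholder from your post.)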
UPDATE: After reading again, I realize that the telnet should have returned "Connection refused", not "No route to host". I would also check whether you are having a firewall-related issue.
You need to open up the TCP port on your firewall.
On Linux, find the iptables config file:
eric#dev ~$ find / -name "iptables" 2>/dev/null
/etc/sysconfig/iptables
Edit the file:
sudo vi /etc/sysconfig/iptables
Fix the file by adding a port:
# Generated by iptables-save v1.4.7 on Thu Jan 16 16:43:13 2014
*filter
-A INPUT -p tcp -m tcp --dport 15672 -j ACCEPT
COMMIT
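Note that 15672 is the management UI port; if it is the AMQP listener itself that box B needs to reach (5672 by default, as in the question), the rule would instead be:
-A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT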
