Send two different log files with tail INPUT to two different indexes in AWS OpenSearch using Fluent Bit

We have configured OpenSearch in AWS. We need to send two different application logs to two different indexes in OpenSearch using Fluent Bit. We are using tail as the INPUT and es as the OUTPUT. The Fluent Bit configuration is below.
INPUT -
[INPUT]
    Name             tail
    Path             /var/log/messages
    Refresh_Interval 1
    Tag              messages
    Path_Key         On
    Read_From_Head   true

[INPUT]
    Name             tail
    Path             /var/log/secure
    Refresh_Interval 1
    Tag              secure
    Path_Key         On
    Read_From_Head   true
OUTPUT -
[OUTPUT]
    Name            es
    Match           *
    Host            opensearch-url
    Port            443
    HTTP_User       admin
    HTTP_Passwd     ************
    tls             On
    tls.verify      Off
    Include_Tag_Key On
    Tag_Key         tag
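One way to route each file to its own index (not from the original post, just a minimal sketch reusing the tags defined above; the index names messages-index and secure-index are placeholders) is to replace the single Match * output with one [OUTPUT] block per tag and set Index on each. Depending on the OpenSearch version, Suppress_Type_Name On may also be needed, since the es plugin otherwise sends the deprecated _type field:

[OUTPUT]
    Name               es
    Match              messages
    Host               opensearch-url
    Port               443
    HTTP_User          admin
    HTTP_Passwd        ************
    Index              messages-index
    Suppress_Type_Name On
    tls                On
    tls.verify         Off

[OUTPUT]
    Name               es
    Match              secure
    Host               opensearch-url
    Port               443
    HTTP_User          admin
    HTTP_Passwd        ************
    Index              secure-index
    Suppress_Type_Name On
    tls                On
    tls.verify         Off

Because Match selects by tag, records from the two tail inputs (tagged messages and secure) are routed to the two outputs independently.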

Related

Configure grafana-loki output plugin

I'm trying to use the grafana-loki output plugin in fluent-bit, but it seems impossible to configure with TLS.
I had a working configuration running with the loki plugin, like this:
[OUTPUT]
    Name                   loki
    Match                  *
    Host                   my-collector-url-for-loki
    Port                   443
    Http_User              m-user
    Http_Passwd            some-token-value
    Labels                 job=fluentbit
    auto_kubernetes_labels on
    Tls                    On
    Tls.verify             On
The problem with this output plugin was that the logs were not showing correctly in Grafana. I think a filter or parser needs to be configured for it, or maybe the plugin is just meant for Loki rather than Grafana/Loki; I don't know, and I got tired of trying to figure out why. So I switched to the grafana-loki plugin, and the logs looked perfect in Grafana, but I only had it working without authentication.
This is my setup with the grafana-loki output plugin:
[Output]
    Name                 grafana-loki
    Match                *
    Url                  https://url-to-my-logs-collector
    TenantID             ""
    BatchWait            1
    BatchSize            1048576
    Labels               {job="test-fluent-bit"}
    RemoveKeys           kubernetes,stream
    AutoKubernetesLabels false
    LabelMapPath         /fluent-bit/etc/labelmap.json
    LineFormat           json
    LogLevel             warn
    # everything prior to this line is working successfully
    # trying to set authentication here - this part doesn't work
    Tls                  On
    Tls.verify           On
    Http_User            m-user
    Http_Passwd          some-token-value
The problem with this setup is that I always get a 403 Forbidden HTTP status. I'm having trouble figuring out how to set authentication on this plugin. Does anyone have a working configuration for this type of setup?
Authentication worked for me with this plugin when configured like below:
[Output]
    Name      grafana-loki
    Match     *
    Url       https://${user_loki}:${pass_loki}@url-to-my-logs-collector
    BatchWait 1s
    BatchSize 102400
    TenantID  ""
    (...)
As far as I could understand, the TLS, http_user and http_passwd options are not supported by this plugin; the credentials have to be embedded in the Url instead.
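One caveat worth adding (not from the original answer): because the credentials live in the userinfo part of the URL, any reserved characters in the user or token must be percent-encoded per RFC 3986. For example, with hypothetical placeholder credentials, a token containing @ would be written as %40:

Url https://m-user:token%40with-at-sign@url-to-my-logs-collector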

How to forward logs using rsyslog client

I need to forward messages from a log file to another IP, say 127.0.0.1:514. How do I achieve this?
I used this example from the docs of rsyslog:
module(load="imfile" PollingInterval="10") #needs to be done just once
# File 2
input(type="imfile"
File="/path/to/file2"
Tag="tag2")
As well as providing it with the following rule:
*.* @127.0.0.1:514
But this ended up sending all of the system's logs, including journald.
So how do I correctly use ruleset, input blocks and *.* @127.0.0.1:514 to send logs from the file /path/to/file2 to 127.0.0.1:514?
Thanks
When specifying the input, also say which ruleset should apply to it; input that is not bound to the ruleset will not be processed by it.
module(load="imfile")
input(type="imfile" File="/path/to/file2" Tag="tag2" ruleset="remote")
ruleset(name="remote"){
action(type="omfwd" target="127.0.0.1" port="514" protocol="udp")
# or use legacy syntax:
# *.* #127.0.0.1:514
}
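As a side note (not part of the original answer): if delivery reliability matters, the same action can forward over TCP instead of UDP; in legacy syntax, TCP forwarding is written with a doubled @:

action(type="omfwd" target="127.0.0.1" port="514" protocol="tcp")
# legacy equivalent:
# *.* @@127.0.0.1:514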

How to write airflow logs to Elasticsearch?

I am using Airflow 1.10.5 and can't seem to find complete documentation or a sample of how to set up remote logging using Elasticsearch. I saw the Airflow documentation about logging, but it wasn't helpful. I am trying to write the Airflow (not task) logs to ES.
As far as I understand the docs, the ES log handler can only read from ES. You would have to set up your logging to print into a file, then use something like Filebeat to post the file content to ES, and Airflow can then read it back...
https://airflow.readthedocs.io/en/stable/howto/write-logs.html#writing-logs-to-elasticsearch
Writing Logs to Elasticsearch
Airflow can be configured to read task logs from Elasticsearch and optionally write logs to stdout in standard or json format. These logs can later be collected and forwarded to the Elasticsearch cluster using tools like fluentd, logstash or others.
I was able to achieve this using the Filebeat shipper.
Input config section in filebeat.yml:
<snip>
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path/to/logs/*.log
</snip>
Output config section in filebeat.yml:
<snip>
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "changeme"
</snip>
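Before starting the service, Filebeat can validate the configuration and the connection to Elasticsearch itself; a quick check, assuming a standard package install with the config at /etc/filebeat/filebeat.yml:

# Check the config file syntax
filebeat test config -c /etc/filebeat/filebeat.yml
# Check connectivity to the configured output (Elasticsearch here)
filebeat test output -c /etc/filebeat/filebeat.yml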
The doc linked above is good to read, especially the part about Airflow --> ES.

Where is the Compose Scylladb SSL certificate?

I'm trying to connect to my ScyllaDB 1.7.4 instance using the connection string provided for me in the Compose overview section of the management UI:
$ cqlsh --ssl portal-xxxx.ibm-343.composedb.com 19228 -u scylla -p XXXX --cqlversion=3.3.1
However, the response is:
Validation is enabled; SSL transport factory requires a valid certfile to be specified. Please provide path to the certfile in [ssl] section as 'certfile' option in /Users/snowch/.cassandra/cqlshrc (or use [certfiles] section) or set SSL_CERTFILE environment variable
Where can I get access to the Compose SSL certificate so that I can connect with:
$ SSL_CERTFILE=/path/to/scylla_certfile cqlsh --ssl portal-xxxx-0.csnow-scylla-45.ibm-343.composedb.com 19228 -u scylla -p XXXX --cqlversion=3.3.1
I have seen the option SSL_VALIDATE=false in the documentation however, I don't want to disable SSL validation.
The information is further down in the documentation in the section https://help.compose.com/docs/scylla-and-certificates.
My confusion arose because the issue I had encountered drew me straight to the information on SSL (#2), so I jumped over the section on fully configuring cqlsh (#1):
Cqlsh Command Line
The Cqlsh Command Line panel contains three cqlsh commands, each of which connects to one of the three Compose portals. Full details on obtaining cqlsh and configuring it are available in Scylla and cqlsh. (#1)
The displayed commands include the required flags (--ssl and --cqlversion). If the command is preceded by setting the environment variable SSL_VALIDATE=false, then no further configuration is needed. (#2)
I think this section would be a bit clearer if it were re-ordered:
Cqlsh Command Line
The Cqlsh Command Line panel contains three cqlsh commands, each of which connects to one of the three Compose portals.
The displayed commands include the required flags (--ssl and --cqlversion). If the command is preceded by setting the environment variable SSL_VALIDATE=false, then no further configuration is needed.
Full details on obtaining cqlsh and configuring it are available in Scylla and cqlsh. This section includes information on configuring cqlsh to use SSL.
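For reference (not from the original answer): once the certificate has been downloaded per that docs page, cqlsh can be pointed at it either via SSL_CERTFILE, as in the question, or from the cqlshrc file the error message mentions. A minimal sketch, assuming the certificate is saved at the placeholder path /path/to/scylla_certfile:

;; ~/.cassandra/cqlshrc
[connection]
factory = cqlshlib.ssl.ssl_transport_factory

[ssl]
certfile = /path/to/scylla_certfile
;; keep certificate validation enabled
validate = true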

Why is netsh http add sslcert throwing an error from a PowerShell ps1 file?

I am trying to add an sslcert using netsh http from within a powershell ps1 file, but it keeps throwing errors:
$guid = [guid]::NewGuid()
netsh http add sslcert ipport=0.0.0.0:443 certhash=5758B8D8248AA8B4E91DAA46F069CC1C39ABA718 appid={$guid}
'JABnAHUAaQBkAA' is not a valid argument for this command.
The syntax supplied for this command is not valid. Check help for the correct syntax.
Usage: add sslcert [ipport=]<IP Address:port>
                   [certhash=]<string>
                   [appid=]<GUID>
                   [[certstorename=]<string>
                    [verifyclientcertrevocation=]enable|disable
                    [verifyrevocationwithcachedclientcertonly=]enable|disable
                    [usagecheck=]enable|disable
                    [revocationfreshnesstime=]<u-int>
                    [urlretrievaltimeout=]<u-int>
                    [sslctlidentifier=]<string>
                    [sslctlstorename=]<string>
                    [dsmapperusage=]enable|disable
                    [clientcertnegotiation=]enable|disable]

Parameters:
    Tag                        Value
    ipport                     - IP address and port for the binding.
    certhash                   - The SHA hash of the certificate. This hash is
                                 20 bytes long and specified as a hex string.
    appid                      - GUID to identify the owning application.
    certstorename              - Store name for the certificate. Defaults to
                                 MY. Certificate must be stored in the local
                                 machine context.
    verifyclientcertrevocation - Turns on/off verification of revocation of
                                 client certificates.
    verifyrevocationwithcachedclientcertonly - Turns on/off usage of only
                                 cached client certificate for revocation
                                 checking.
    usagecheck                 - Turns on/off usage check. Default is enabled.
    revocationfreshnesstime    - Time interval to check for an updated
                                 certificate revocation list (CRL). If this
                                 value is 0, then the new CRL is updated only
                                 if the previous one expires. (in seconds)
    urlretrievaltimeout        - Timeout on attempt to retrieve certificate
                                 revocation list for the remote URL.
                                 (in milliseconds)
    sslctlidentifier           - List the certificate issuers that can be
                                 trusted. This list can be a subset of the
                                 certificate issuers that are trusted by the
                                 machine.
    sslctlstorename            - Store name under LOCAL_MACHINE where
                                 SslCtlIdentifier is stored.
    dsmapperusage              - Turns on/off DS mappers. Default is disabled.
    clientcertnegotiation      - Turns on/off negotiation of certificate.
                                 Default is disabled.

Remarks: adds a new SSL server certificate binding and corresponding client
         certificate policies for an IP address and port.

Examples:
    add sslcert ipport=1.1.1.1:443 certhash=0102030405060708090A0B0C0D0E0F1011121314 appid={00112233-4455-6677-8899-AABBCCDDEEFF}
I might be wrong, but I believe it has something to do with how I specify the appid GUID in my PowerShell script file. Could someone please help me solve this error?
It's a problem with the way PowerShell parses arguments for native (cmd-style) commands.
This will execute the command successfully (netsh reads the command from the pipeline, so PowerShell never parses the braces):
$guid = [guid]::NewGuid()
$Command = "http add sslcert ipport=0.0.0.0:443 certhash=5758B8D8248AA8B4E91DAA46F069CC1C39ABA718 appid={$guid}"
$Command | netsh
The reason for the error is that each curly brace has to be escaped with a backtick (`).
The following command will work from the PowerShell command line:
$AppId = [Guid]::NewGuid().Guid
$Hash = "209966E2BEDA57E3DB74FD4B1E7266F43EB7B56D"
netsh http add sslcert ipport=0.0.0.0:8000 certhash=$Hash appid=`{$AppId`}
If netsh raises error 87, try appending certstorename=MY.
There is no need to use variables; they are just a convenience. The code below also works: & invokes the program with its parameters, and quoting "appid={$guid}" makes PowerShell pass it as a single string value.
& netsh http add sslcert ipport=0.0.0.0:443 certhash=5758B8D8248AA8B4E91DAA46F069CC1C39ABA718 "appid={$guid}"
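Whichever variant is used, the binding can be verified afterwards (a quick check, assuming the same ipport as above):

# Show the SSL certificate binding that was just added
netsh http show sslcert ipport=0.0.0.0:443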
