Using vector.dev to generate syslog from other lxc containers - syslog

I am wondering how to configure vector.dev to receive syslog from other LXC containers. I have docker-compose running and Vector installed on one container. The other containers host a PBX, and I'm wondering how I would go about setting up one central syslog server using Vector.
I believe I have to create a socket, but my current configuration in vector.toml is just this:
[sources.syslog]
type = "syslog"
address = "0.0.0.0:514"
max_length = 102_400
mode = "udp"
path = "/vector.socket"
[sources.in]
type = "stdin"
[sinks.out]
inputs = ["in"]
type = "console"
encoding.codec = "text"
This is on the host currently. I believe I'm supposed to install Vector on the instances I want to get logs from too?
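One likely issue with the config above: as far as I understand Vector, the syslog source is never listed in any sink's inputs, so nothing consumes it, and the path option only applies to unix mode, not udp. A minimal sketch of a central receiver, reusing the same source and sink types (the table names are placeholders):

```toml
# vector.toml on the central container: listen for syslog over UDP
# and print each event to the console.
[sources.syslog]
type = "syslog"
address = "0.0.0.0:514"
mode = "udp"
max_length = 102_400

[sinks.out]
type = "console"
inputs = ["syslog"]        # wire the syslog source into a sink
encoding.codec = "text"
```

The other containers don't strictly need Vector installed: any syslog daemon can forward to the central container, e.g. an rsyslog rule like `*.* @central-host:514` (a single `@` means UDP), where `central-host` is a placeholder for the Vector container's address. If you do install Vector on each PBX container, pointing a vector sink on those instances at a vector source on the central one is the usual alternative.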

Related

Environment Variables in Lua

I have a Lua script which logs in to Redis and processes some queries to enable IP-based blocking.
Below is the Redis config I am using in my Lua script; to run this script on every hit to the webserver, I use the access_by_lua directive in my Nginx configuration:
--- Redis Configuration
local redis_host = "100.2.4.4"
local redis_port = 6379
local redis_timeout = 30
local cache_ttl = 3600
I would like to use an environment variable for redis_host and the port rather than a static value.
Any help is appreciated.
Note:
I have tried it as below, but no luck:
--- Redis Configuration
local redis_host = os.getenv("redis_auth_host")
local redis_port = os.getenv("redis_auth_port")
local redis_timeout = 30
local cache_ttl = 3600
Redis runs Lua scripts in a sandbox, which disables global variables, with a few exceptions. In your case, os is a disabled global variable, so you cannot use it.
To avoid hard-coding the host and port, you can store them in Redis' key space, i.e. set host and port as key-value pairs in Redis, and fetch them with the redis.call() method:
local redis_host = redis.call("get", "host")
local redis_port = redis.call("get", "port")

gitlab-runner Could not resolve host: gitlab.com

I have a raspberry pi with gitlab-runner installed (linux version) and a git repository on gitlab.com (not self hosted).
At the beginning of the pipeline, gitlab-runner on the Raspberry Pi tries to fetch the Git repo, but I get:
Could not resolve host: gitlab.com
I tried:
ping gitlab.com works on the Raspberry Pi
Adding extra_host = ['localhost:my.ip.ad.ress'] --> no change
Adding network_mode = "gitlab_default", which gives this error:
Error response from daemon: network gitlab_default not found (exec.go:57:1s)
I am in the simplest configuration, with the repo on gitlab.com and gitlab-runner on the Raspberry Pi. How can I deal with this?
Here is the config.toml :
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "gitlab runner on raspberryPi"
url = "https://gitlab.com/"
token = "XXXX"
executor = "docker"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.docker]
tls_verify = false
image = "node:latest"
privileged = false
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/cache"]
shm_size = 0
I had the same issue; my gitlab-runner was running on my local machine. I restarted Docker:
systemctl restart docker
and the error went away.
Not being able to resolve the host name can have multiple root causes:
IP forwarding disabled?
Routing might be disabled on your system. Check if IP forwarding is enabled (== 1).
cat /proc/sys/net/ipv4/ip_forward
1
If it's disabled, it will return 0. Enable it by editing a sysctl file, for example /etc/sysctl.d/99-sysctl.conf, and adding:
net.ipv4.conf.all.forwarding = 1
net.ipv4.ip_forward = 1
Apply the setting without rebooting: sudo sysctl --system
Important note: even if the system reports that IP forwarding is currently enabled, you should still set it explicitly and persistently in your sysctl configs. Docker runs sysctl -w net.ipv4.ip_forward=1 when the daemon starts up, but that is not a persistent setting and can cause seemingly random issues like this one.
DNS missing / invalid?
You can try setting a DNS server to 8.8.8.8 to see if that fixes the problem:
[runners.docker]
dns = ["8.8.8.8"]
Add extra_host?
You can also try adding an extra host, which is mainly relevant within a local network (so not with the gitlab.com domain):
[runners.docker]
extra_hosts = ["gitlab.yourdomain.com:192.168.xxx.xxx"]
Using host network
I really do not advise this, but you could configure the Docker container to run with network_mode set to "host". Again, only do this for debugging purposes:
[runners.docker]
network_mode = "host"
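For reference, the three Docker-related options above all live in the same section of config.toml; a consolidated sketch (values are placeholders, and it's best to enable one option at a time while debugging):

```toml
[runners.docker]
  image = "node:latest"
  dns = ["8.8.8.8"]                                          # explicit DNS server
  # extra_hosts = ["gitlab.yourdomain.com:192.168.xxx.xxx"]  # local-network setups only
  # network_mode = "host"                                    # last resort, debugging only
```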

How do I use WinDump with Cuckoo on Windows 10

When I analyze a file using Cuckoo, I get this error:
File "c:\python27\lib\site-packages\cuckoo\auxiliary\sniffer.py", line 157, in stop
(out, err, faq("permission-denied-for-tcpdump"))
CuckooOperationalError: Error running tcpdump to sniff the network traffic during the analysis; stdout = '' and stderr = 'tcpdump.exe: listening on VirtualBox Host-Only Ethernet Adapter\r\ntcpdump.exe: Error opening adapter: \xbd\xc3\xbd\xba\xc5\xdb\xc0\xcc \xc1\xf6\xc1\xa4\xb5\xc8 \xc0\xe5\xc4\xa1\xb8\xa6 \xc3\xa3\xc0\xbb \xbc\xf6 \xbe\xf8\xbd\xc0\xb4\xcf\xb4\xd9. (20)\r\n'. Did you enable the extra capabilities to allow running tcpdump as non-root user and disable AppArmor properly (the latter only applies to Ubuntu-based distributions with AppArmor, see also https://cuckoo.sh/docs/faq/index.html#permission-denied-for-tcpdump)?
My VirtualBox network (guest) name is VirtualBox Host-Only Ethernet Adapter, and WinDump (renamed to tcpdump.exe) is installed on my Windows 10 host at C:\tools\tcpdump.exe.
I also set this in the auxiliary.conf file:
# Specify the path to your local installation of tcpdump. Make sure this
# path is correct.
tcpdump = C:/tools/tcpdump.exe
My question is: why am I getting the listening on VirtualBox Host-Only Ethernet Adapter error even though the tcpdump.exe path is set correctly?
I found the answer.
I changed this line in sniffer.py.
From
err_whitelist_start = (
"tcpdump: listening on ",
"C:/tools/tcpdump.exe: listening on",
)
To
err_whitelist_start = (
"tcpdump: listening on ",
"C:\\tools\\tcpdump.exe: listening on",
)
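The reason the original entry never matched comes down to plain Python string handling in sniffer.py: in a regular string literal, a backslash must be doubled (or a raw string used), and a forward-slash path is simply a different string from the backslash path that WinDump reports. A minimal illustration (hypothetical values, not Cuckoo code):

```python
# In a normal Python string literal, "\\" encodes a single backslash.
escaped = "C:\\tools\\tcpdump.exe"
raw = r"C:\tools\tcpdump.exe"         # raw string: backslashes taken literally
print(escaped == raw)                 # the two literals are the same string
print("C:/tools/tcpdump.exe" == raw)  # forward slashes do NOT match
```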
Also, my VirtualBox interface was wrong, so I changed this in virtualbox.conf.
From
interface = virtualBox Host-Only Ethernet Adapter
To
interface = \Device\NPF_{ED29CFE9-25EB-4AD9-B2EA-C09A93D465BF}

RabbitMQ.Client in .NET: System.IO.IOException: connection.start was never received, likely due to a network timeout

I am writing an amqp 1.0 client (using RabbitMQ.Client in .NET) for a broker, who provided me the following information:
amqps://brokerRemoteHostName:5671
certificate_openssl.p12
password for certificate as a string "mypassword"
queue name
I developed the following code in Visual Studio, which is supposed to work (based on long searches on the web):
var cf = new ConnectionFactory();
cf.Uri = new Uri("amqps://brokerRemoteHostName:5671");
cf.Ssl.Enabled = true;
cf.Ssl.ServerName = "brokerRemoteHostName";
cf.Ssl.CertPath = @"C:\Users\mahmoud\Documents\certificate_openssl.p12";
cf.Ssl.CertPassphrase = "myPassword";
var connection = cf.CreateConnection();
However, the output shows an exception:
RabbitMQ.Client.Exceptions.BrokerUnreachableException:
None of the specified endpoints were reachable ---> System.IO.IOException:
connection.start was never received
likely due to a network timeout
Line 50 corresponds to the line where the connection is created.
I appreciate your kind assistance on the error above.
If you're connecting to a Docker container, you need to publish port 5672 in addition to port 15672 when creating the container. For those using SSL, the port would be 5671 instead of 5672.
Example: docker run -d --hostname my-rabbit --name rabbitmq --net customnet -p customport:15672 -p 5672:5672 rabbitmq:3-management
You would then connect from the client with: ConnectionFactory factory = new ConnectionFactory() { HostName = "localhost" };
Feel free to pass in a username and password if those were changed.
The official RabbitMQ Docker image (https://hub.docker.com/_/rabbitmq) starts the broker on port 5672, but in my case the .NET RabbitMQ library expected the broker on port 5673, which differs from what the container actually exposes. The solution was simply to remap 5672 to the expected port 5673:
docker run -d --hostname my-rabbit --name ds-rabbit -p 8080:15672 -p 5673:5672 rabbitmq:3-management

How to install TCP application on INET host (omnetpp)?

I have a TCP application module called "ClientSideApplication". I need to install this application on an AODVRouter host ("host"), but I cannot figure out how to do that.
In my omnetpp.ini file I have:
**.host[*].numTcpApps = 1
**.host[*].tcpApp[*].typename = "ClientSideApplication"
**.host[*].tcpApp[*].active = true
**.host[*].tcpApp[*].localAddress = ""
**.host[*].tcpApp[*].localPort = -1
**.host[*].tcpApp[*].connectAddress = "ideHost"
**.host[*].tcpApp[*].connectPort = 1000
**.host[*].tcpApp[*].tOpen = 0.5s
But I am not sure this is enough, because when I watch the simulation there is a "waiting" text displayed on top of the application module. Can you tell me how to install the TCP app and make it work with the AODVRouter host ("host")?
Note: I implemented the TCP application module as an extension of TCPAppBase, based on the TCPSessionApp example from the INET source folder.
