How to convert logs from syslog to json-file

The Docker containers on my machine have the syslog driver set as the default logging driver, and they send their logs to a remote rsyslog server in syslog format rather than json-file format. I want the log information in both formats, because the json-file format gives us a lot of useful metadata about the containers, such as containerName, containerID, message, etc., which the syslog driver does not provide.
Since I already have the syslog-format information available on the rsyslog server (which forwards it to a Logstash container, one of the containers of the ELK stack running on another machine):
i) First of all, can I convert the syslog information into json-file information, and will I get the same metadata I would have had if the container had been created with the logging driver set to json-file?
ii) The big question: if yes, how do I do it?
I have followed this guide https://www.simulmedia.com/blog/2016/02/19/centralized-docker-logging-with-rsyslog/ on my Ubuntu machine but could not get the output described there.
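For reference, a hedged sketch (not taken from the linked article): the syslog driver can at least carry the container name and ID along with each message, because its tag log-opt accepts Go-template fields such as {{.Name}} and {{.ID}}. The rsyslog address below is a placeholder:

# hedged example: embed container metadata in the syslog tag
# (the syslog-address value is a placeholder for the remote rsyslog server)
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=udp://rsyslog.example.com:514 \
  --log-opt tag="{{.Name}}/{{.ID}}" \
  nginx

The name and ID then arrive on the rsyslog/Logstash side in the syslog tag (program name) field, where a filter can split them back out.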

Related

How do I connect to an RDS MySQL instance from RStudio via a bastion host?

I would like to use RStudio for analysis of data on a MySQL instance. This is an AWS RDS MySQL instance that is only accessible via a jump box / bastion host. I have the credentials necessary to connect to the jump box, and from the jump box to the RDS instance. What do I need to do to be able to query this DB directly from within the RStudio console?
I can connect (using the Terminal tab in RStudio) to the jump box using:
ssh -p 22xx user@ip.add.re.ss
Then I can connect to RDS mysql using:
mysql -u username -p database -h hostname.us-east-1.rds.amazonaws.com
I can connect and run manual mysql commands from within the RStudio terminal, but I don't seem to be able to do anything with the DB from the RStudio console.
Sorry for opening a two-year-old thread, but for everyone dealing with this issue as I am: I found this thread and it looks like it works (connecting to MySQL via SSH from RStudio).
You should use something called port forwarding. Some details are here: https://help.ubuntu.com/community/SSH/OpenSSH/PortForwarding. For example, say you wanted to connect from your laptop to http://www.ubuntuforums.org using an SSH tunnel. You would use source port number 8080 (the alternate HTTP port), destination port 80 (the HTTP port), and destination server www.ubuntuforums.org:
ssh -L 8080:www.ubuntuforums.org:80 <host>
where <host> should be replaced by the name of your laptop.
This is done for the whole computer, so you don't need to do it from RStudio.
Of course, you need to forward your port to 3306. But you need special privileges on the server, because on most hosting providers you can only connect from localhost (for example from PHP).
Source: https://www.py4u.net/discuss/881859
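Applied to the setup in the question, a minimal sketch might look like this (the port 22xx, user, and hostnames are copied from the question; adjust to your environment):

# forward local port 3306 through the bastion to the RDS endpoint
ssh -p 22xx -N -L 3306:hostname.us-east-1.rds.amazonaws.com:3306 user@ip.add.re.ss

# while the tunnel is open, any local client (including the RStudio console)
# can treat the database as if it were running locally
mysql -u username -p database -h 127.0.0.1 -P 3306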

How to communicate with a Kafka server running inside a Docker container

I am using the Apache Kafka KafkaConsumer in my Scala app to talk to a Kafka server; the Kafka and ZooKeeper services are running in a Docker container on my VM (the Scala app is also running on this VM). I have set the KafkaConsumer's bootstrap.servers property to 127.0.0.1:9092.
The KafkaConsumer does log "Sending coordinator request for group queuemanager_testGroup to broker 127.0.0.1:9092". The problem appears to be that the Kafka client code sets the coordinator values based on the response it receives, which contains responseBody={error_code=0,coordinator={node_id=0,host=e7059f0f6580,port=9092}}; that is how it sets the host for future connections. Subsequently it complains that it is unable to resolve the address e7059f0f6580.
The address e7059f0f6580 is the container ID of that Docker container.
Using telnet, I have confirmed that my VM cannot resolve this as a hostname.
What setting do I need to change such that the Kafka on my docker returns localhost/127.0.0.1 as the host in its response ? Or is there something else that I am missing / doing incorrectly ?
Update
advertised.host.name is deprecated, and --override should be avoided.
Instead, add or edit advertised.listeners in the format
[PROTOCOL]://[EXTERNAL.HOST.NAME]:[PORT]
Also make sure that PORT is listed in the listeners property as well.
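A hedged sketch of the corresponding server.properties entries, assuming the broker should be advertised to clients on the VM as 127.0.0.1:9092:

# listen on all interfaces inside the container
listeners=PLAINTEXT://0.0.0.0:9092
# but tell clients to connect back on an address they can actually resolve
advertised.listeners=PLAINTEXT://127.0.0.1:9092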
After investigating this problem for hours on end, I found that there is a way to set the hostname while starting up the Kafka server:
kafka-server-start.sh --override advertised.host.name=xxx (in my case: localhost)

Service dies when Nmap is run

I am having a weird problem.
I have a service running on port 8888 on one of my many servers in a cluster.
When I run nmap on my gateway to get all the IPs inside my network, this service miraculously dies. Since nmap does a port scan too, it might have something to do with it, but I am not sure.
The nmap command I am using is this:
sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess
Can someone tell me what might be happening?
While Nmap developers try to limit the danger, Nmap scans can still crash services. The most likely culprit for crashing a service (as opposed to crashing an entire machine) is the service version detection scan phase (-sV, implied in your command by -A). This scan sends a series of data packets to the service in an attempt to elicit a response which can be matched against Nmap's database of known services. When a match is found, Nmap stops sending probes. That means that an unknown service can get lots of probes sent to it which contain binary data, command strings, and other data that your service is not expecting.
A well-written network service will not crash on any input; your service has a bug of some sort. Avoiding this sort of crash usually means avoiding scanning that service:
You can use the Exclude directive in your nmap-service-probes data file to instruct Nmap to never send these service probes to port 8888.
You can avoid scanning port 8888 at all by changing the ports you scan with -p. Later versions of Nmap also support the --exclude-ports option (see the example after this list).
You can make sure you are using the latest version of Nmap. If your service's fingerprint was added to the nmap-service-probes file, then Nmap will stop sending probes when it detects it, which may avoid sending the later probe that crashes it.
You can reduce the intensity of the service scan with the --version-intensity option. This prevents Nmap from sending so many service probes, which may eliminate the one that is crashing your service.
Finally, if this service is a standard one and not something custom to your own network, you can report it to The Network Scanning Watch List so that other users can avoid crashing it as well.
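For example, a hedged adaptation of the command from the question that either skips the fragile service or softens version detection against it:

# same scan, but never touch port 8888 (requires a version of Nmap with --exclude-ports)
sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess --exclude-ports 8888

# alternatively, send fewer version-detection probes (intensity ranges from 0 to 9; the default is 7)
sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess --version-intensity 2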

How does Logstash integrate with Syslog?

I'm trying to figure out how Logstash integrates with syslog. Which of the following is true:
Logstash itself is a bona fide syslog server (it implements the syslog protocol). In this case, you configure all of your syslog clients to log directly to the Logstash server via the syslog protocol. Or...
You configure all of your syslog clients to log to a centralized syslog server (such as a machine running rsyslog), and then configure some kind of bridge between the syslog server and the Logstash server? Or...
Something else entirely?
I'm looking to understand the relationships between syslog client, syslog server, and Logstash.
If you use the syslog input on Logstash (http://logstash.net/docs/1.4.0/inputs/syslog), you are setting up a TCP/UDP syslog server. That means you have to tell your clients (say, log4j) where your syslog server is, or configure a syslog instance that is already running to forward its messages on to your Logstash instance (via a *.* @host line in the /etc/syslog.conf file).
It really depends on what your requirements are: if you need to receive logs from a Unix domain socket, you'll have to use the forwarding method or set up a file watcher to watch the /var/log/* files directly.
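As a hedged illustration of both options (the port 5514 and the hostname are assumptions, not values from the answer):

# Option 1: Logstash acts as the syslog server itself (Logstash config)
input {
  syslog {
    port => 5514    # listens on both TCP and UDP
  }
}
output {
  stdout { codec => rubydebug }
}

# Option 2: an existing rsyslog/syslog instance forwards everything to that listener,
# e.g. with a line like this in /etc/syslog.conf or /etc/rsyslog.d/90-logstash.conf:
*.* @@logstash.example.com:5514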

Logstash shipper & server on the same box

I'm trying to set up a central Logstash configuration. However, I would like to send my logs through syslog-ng and not third-party shippers. This means that my Logstash server accepts, via syslog-ng, all the logs from the agents.
I then need to install a logstash process that will be reading from /var/log/syslog-clients/* and grabbing all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM.
In theory I need to also configure a second logstash process that will read from redis and start indexing the logs and send them to elasticsearch.
My question:
Do I have to use two different Logstash processes (shipper & server) even if I am on the same box (I want one log server instance)? Is there any way to have just one Logstash configuration and have the process read from syslog-ng ---> write to Redis, and also read from Redis ---> output to Elasticsearch?
Diagram of my setup:
[client]-------syslog-ng---> [log server] ---syslog-ng <----logstash-shipper ---> redis <----logstash-server ----> elastic-search <--- kibana
Do I have to use two different Logstash processes (shipper & server) even if I am on the same box (I want one log server instance)?
Yes.
Is there any way to have just one Logstash configuration and have the process read from syslog-ng ---> write to Redis, and also read from Redis ---> output to Elasticsearch?
Not that I have seen yet.
Why would you want this? I have a single machine and remote machine config and they work extremely reliably, with a small footprint. Maybe you could explain your reasoning a bit - I know I would be interested to hear about it.
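For reference, a hedged sketch of what the two configurations might look like (the file path, Redis key, and hosts are assumptions):

# shipper.conf: tail the files written by syslog-ng and push events to Redis
input  { file  { path => "/var/log/syslog-clients/*" } }
output { redis { host => "127.0.0.1" data_type => "list" key => "logstash" } }

# indexer.conf: pull events from Redis and index them into Elasticsearch
input  { redis { host => "127.0.0.1" data_type => "list" key => "logstash" } }
output { elasticsearch { hosts => ["127.0.0.1:9200"] } }

Each file is run by its own Logstash process on the same box, which matches the "Yes" above.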
