I'm trying to figure out how Logstash integrates with syslog. Which of the following is true:
Logstash itself is a bona fide syslog server (it implements the syslog protocol). In this case, you configure all of your syslog clients to log directly to the Logstash server via the syslog protocol. Or...
You configure all of your syslog clients to log to a centralized syslog server (such as a machine running rsyslog), and then configure some kind of bridge between the syslog server and the Logstash server. Or...
Something else entirely?
I'm looking to understand the relationships between syslog clients, syslog servers, and Logstash.
If you use the syslog input on Logstash (http://logstash.net/docs/1.4.0/inputs/syslog), you are setting up a TCP/UDP syslog server. That means you have to tell your clients (say, log4j) where your syslog server is, or configure an already-running syslog instance to forward the messages on to your Logstash instance (via a "*.* @host" line in the /etc/syslog.conf file).
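As a rough sketch of that setup (the port, hostname, and forwarding line are just examples; stock syslogd only accepts a bare @hostname, while rsyslog also understands @host:port):

# logstash.conf -- the syslog input makes Logstash listen as a TCP/UDP syslog server
input {
  syslog {
    port => 5514
    type => "syslog"
  }
}

# /etc/rsyslog.conf (or /etc/syslog.conf) on each client -- forward everything to Logstash
*.*    @logstash-host:5514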
It really depends on what your requirements are -- if you need to receive logs from a unix domain socket, you'll have to use the forwarding method or set up a file watcher to watch the /var/log/* files directly.
Related
The Docker containers on my machine have the syslog driver set as the default logging driver, so their logs are sent to a remote rsyslog server in syslog format rather than json-file format. I want the log information in both formats, because the json-file format gives a lot of useful metadata about the containers (container name, container ID, message, etc.) that the syslog driver does not provide.
Since I already have the syslog-format information available on the rsyslog server (which forwards it to a Logstash container, one of the containers of the ELK stack running on another machine):
i) First of all, can I convert the syslog information into json-file-style information, and would I get the same metadata I would have had if the container had been created with the logging driver set to json-file?
ii) The big question: if yes, how do I do it?
I have followed this link https://www.simulmedia.com/blog/2016/02/19/centralized-docker-logging-with-rsyslog/ on my Ubuntu machine but could not get the output described there.
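For reference, the syslog driver can at least carry the container name and ID in the syslog tag via Docker's tag log-opt; a minimal daemon.json sketch along those lines (the syslog address and tag template are placeholders, not the setup described above):

/etc/docker/daemon.json (placeholder values):
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://rsyslog-host:514",
    "syslog-format": "rfc5424",
    "tag": "{{.Name}}/{{.ID}}"
  }
}

The tag shows up in the syslog program/APP-NAME field on the rsyslog side, so Logstash can split it back into container name and ID; it does not reproduce everything json-file records, but it recovers those two fields.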
I have enabled security for my Hadoop cluster and it works fine. But when I visit the link http://namenode_host:8020, it shows:
It looks like you are making an HTTP request to a Hadoop IPC port. This is not the correct port for the web interface on this daemon.
But I don't want this behavior, because the message is unencrypted and our company policy is to encrypt the data on all ports. 8020 is an RPC port of Hadoop. Any idea how to disable HTTP requests to the Hadoop RPC port?
Take a look at the Data Confidentiality section of the Apache docs; I think you are looking for RPC encryption.
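That section boils down to a property in core-site.xml; a sketch (the property name comes from the Hadoop secure-mode documentation, so double-check the exact values against your version):

<!-- core-site.xml: force privacy (encryption) on Hadoop RPC, per the Data Confidentiality docs -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>

The same doc section covers dfs.encrypt.data.transfer in hdfs-site.xml for the block data itself.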
8020 is the default port of the Hadoop File System; it listens for IPC calls from HDFS clients to the Hadoop NameNode for HDFS metadata operations. You should not try to access it directly over HTTP. If you want to work with your data on HDFS through the web, you have to use the WebHDFS API, which lets you perform web requests against the data in the file system.
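As an illustration, a directory listing through WebHDFS on a secured cluster would look roughly like this (the path is a placeholder, and the NameNode web port is typically 50070 on Hadoop 2.x or 9870 on 3.x):

# SPNEGO-authenticated WebHDFS request to list a directory
curl -i --negotiate -u : "http://namenode_host:50070/webhdfs/v1/user/alice?op=LISTSTATUS"

If the cluster's HTTP policy is set to HTTPS_ONLY, the same request simply goes over https on the secure web port.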
I want to set up Monit on a server that is going to be a centralized server to monitor processes running on remote servers. I checked many docs about setting up Monit but could not find how to set it up for remote server processes. For example, the centralized Monit server should monitor nginx running on server A, mongod running on server B, and so on. Any suggestion on how to do this?
According to the documentation, Monit is able to test connections remotely using tcp or udp. What you can do is provide a small status file that gets refreshed for each technology you intend to monitor, and let Monit hit that status file over http. It can be used as follows:
check host nginxserver with address www.nginxserver.com
if failed port 80 protocol http
and request "/some_file"
then alert
Since you are testing a web server, that can easily be accomplished with the above. As a note, below is the part of the Monit documentation about connection testing:
CONNECTION TESTING

Monit is able to perform connection testing via
networked ports or via Unix sockets. A connection test may only be
used within a check process or within a check host service entry in
the Monit control file.
If a service listens on one or more sockets, Monit can connect to the
port (using either tcp or udp) and verify that the service will accept
a connection and that it is possible to write and read from the
socket. If a connection is not accepted or if there is a problem with
socket i/o, Monit will assume that something is wrong and execute a
specified action. If Monit is compiled with openssl, then ssl based
network services can also be tested.
The full syntax for the statement used for connection testing is as
follows (keywords are in capital and optional statements in
[brackets]),
IF FAILED [host] port [type] [protocol|{send/expect}+] [timeout]
[retry] [[<X>] CYCLES] THEN action
[ELSE IF SUCCEEDED [[<X>] CYCLES] THEN action]
or for Unix sockets,
IF FAILED [unixsocket] [type] [protocol|{send/expect}+] [timeout]
[retry] [[<X>] CYCLES] THEN action
[ELSE IF SUCCEEDED [[<X>] CYCLES] THEN action]
host:HOST hostname. Optionally specify the host to connect to. If the
host is not given then localhost is assumed if this test is used
inside a process entry. If this test was used inside a remote host
entry then the entry's remote host is assumed. Although host is
intended for testing name based virtual host in a HTTP server running
on local or remote host, it does allow the connection statement to be
used to test a server running on another machine. This may be useful;
For instance if you use Apache httpd as a front-end and an
application-server as the back-end running on another machine, this
statement may be used to test that the back-end server is running and
if not raise an alert.
port:PORT number. The port number to connect to
unixsocket:UNIXSOCKET PATH. Specifies the path to a Unix socket.
Servers based on Unix sockets always run on the local machine and do
not use a port.
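Putting that together for the original example (nginx on server A, mongod on server B), the central Monit control file could contain entries along these lines; the addresses are placeholders:

# monitrc on the central server -- remote port checks only, no local process control
check host webserver-a with address 10.0.0.11
    if failed port 80 protocol http then alert

check host mongod-b with address 10.0.0.12
    if failed port 27017 type tcp then alert

Note that a check host entry only tests reachability of the port; restarting the remote process still needs an agent on that box (or a start/stop program that reaches over ssh).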
Syslog is a client/server protocol: a logging application transmits a text message to the syslog receiver.
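On the wire that text message is just a datagram (usually UDP port 514) with a priority prefix; the classic RFC 3164 example looks like this:

<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick on /dev/pts/8

So any component that can send that one line over the network can act as a syslog client.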
I need to log to this syslog server. How would I do it using classic ASP?
I've written a Syslog COM component to write syslog messages from classic ASP.
Besides providing classic ASP with a generic, industry-standard logging solution, it also comes in handy when you're running a Classic ASP application in a Windows Docker container.
IIS under Docker doesn't write a default log to stdout/console. Using Syslog you can still log from IIS inside a Windows Docker container.
The ActiveX syslog component can be found at https://gitlab.com/erik4/syslog-com-client.
I'm trying to set up a central Logstash configuration. However, I would like to send my logs through syslog-ng rather than third-party shippers. This means that my Logstash server accepts, via syslog-ng, all the logs from the agents.
I then need to install a Logstash process that reads from /var/log/syslog-clients/* and picks up all the log files that are sent to the central log server. These logs will then be sent to Redis on the same VM.
In theory I also need to configure a second Logstash process that reads from Redis, indexes the logs, and sends them to Elasticsearch.
My question:
Do I have to use two different logstash processes (shipper & server) even if I am in the same box (I want one log server instance)? Is there any way to just have one logstash configuration and have the process read from syslog-ng ---> write to redis and also read from redis ---> output to Elasticsearch?
Diagram of my setup:
[client]-------syslog-ng---> [log server] ---syslog-ng <----logstash-shipper ---> redis <----logstash-server ----> elastic-search <--- kibana
Do I have to use two different logstash processes (shipper & server) even if I am in the same box (I want one log server instance)?
Yes.
Is there any way to just have one logstash configuration and have the process read from syslog-ng ---> write to redis and also read from redis ---> output to Elasticsearch?
Not that I have seen yet.
Why would you want this? I have a single-machine config and a remote-machine config, and they work extremely reliably with a small footprint. Maybe you could explain your reasoning a bit - I know I would be interested to hear about it.
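For what it's worth, the two processes usually come down to two small config files like the sketch below (Logstash 1.x style options; the paths, Redis key, and hosts are placeholders):

# shipper.conf -- tail the files written by syslog-ng and push events into Redis
input {
  file {
    path => "/var/log/syslog-clients/*"
    type => "syslog"
  }
}
output {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}

# indexer.conf -- pull events back out of Redis and index them into Elasticsearch
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}

Each file is run with its own logstash agent -f invocation. In a single combined config, every input feeds every output unless you add conditionals, so events pulled from Redis would be pushed straight back into Redis; running the two configs as separate processes sidesteps that.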