syslog-ng not starting up when specifying an IP address, but works in a catch-all write-to-file setup

I am trying to set up a syslog-ng server where I can collect all the logs. I've managed to create a configuration where the server collects the logs from all the servers and writes them to a single file, but I was wondering if it's possible to create a separate log file for each IP address. My config file is below, and every time I mention network() it fails to start. Can you please let me know where I'm going wrong?
log { source(s_src); filter(f_console); destination(d_console_all);
destination(d_xconsole); };
log { source(s_src); filter(f_crit); destination(d_console); };
log {
source(s_src);
};
destination Windest {
file("/var/log/test");
};
source forwarder {
network( ip(192.168.1.140));
};
destination forwarderonedest {
file("/var/log/forwarder1");
};
log {
source(forwarder);
destination(forwarderonedest);
};
The error I get when I try to restart is:
/etc/init.d/syslog-ng restart
[....] Restarting syslog-ng (via systemctl): syslog-ng.service
Job for syslog-ng.service failed because the control process exited with error code. See "systemctl status syslog-ng.service" and "journalctl -xe" for details.
failed!
What works for me is:
};
destination Windest {
file("/var/log/test");
};
source forwarder {
tcp();
udp();
};
destination forwarderonedest {
file("/var/log/forwarder1");
};
log {
source(forwarder);
destination(forwarderonedest);
};
and it works, but all the logs from all the machines get written to a single file.

You can try the configuration below to split logs into two or more files. (Your original config most likely failed because, in a source, ip() sets the local address syslog-ng binds to, so it has to be an address the server itself owns.)
As per the config below, the syslog-ng server will listen on two different ports of your choice, i.e. 514 and 515.
So, on the client, you can configure application logs to be forwarded to port 514 and system logs to port 515.
The syslog-ng server will then write the logs to two different files.
#### Local Logs ####
source s_local { system(); internal(); };
#### Source : Application Logs ####
source s_xyz_network {
network(transport(tcp) ip(192.168.1.140) port (514) flags(syslog-protocol));
};
#### Source: System Logs #####
source s_sys_network {
network(transport(tcp) ip(192.168.1.140) port (515) flags(syslog-protocol));
};
destination d_local {
file("/var/log/syslog-ng/local_sys_logs.log"); };
destination d_xyz_logs {
file(
"/var/log/syslog-ng/centralized_logs_xyz.log"
owner("root")
group("root")
perm(0777)
); };
destination d_sys_logs {
file(
"/var/log/syslog-ng/centralized_sys_logs.log"
owner("root")
group("root")
perm(0777)
); };
log { source(s_xyz_network); destination(d_xyz_logs); };
log { source(s_local); destination(d_local); };
log { source(s_sys_network); destination(d_sys_logs); };
##### Config Ends ########
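Alternatively, if what you want is literally one file per client IP rather than per log type, syslog-ng can expand macros in the destination filename. The following is an untested sketch (names are illustrative); with use-dns(no), the ${HOST} macro holds the sender's IP address:

```conf
source s_remote {
    network(transport(tcp) port(514));
};
# ${HOST} expands per message, so each client gets its own file
destination d_per_host {
    file("/var/log/remote/${HOST}.log" create-dirs(yes));
};
log { source(s_remote); destination(d_per_host); };
```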
Hope this will help you :)

Related

syslog-ng not filtering on tags on remote server

I have an nginx server using syslog-ng to send access and error logs to a remote syslog-ng server. I am having it tag the messages so that the remote server can filter on the tags to put them into files. But the filter statements seem not to be working. On the local client I did a test, sending the messages to a local file using the filters, and they work there. But they seem to break somehow when sent remotely.
The config on the client is:
#version: 3.13
#include "scl.conf"
## global options.
options { chain_hostnames(off);
flush_lines(0);
use_dns(no);
use_fqdn(no);
owner("root");
group("adm");
perm(0640);
stats_freq(0);
bad_hostname("^gconfd$");
};
source s_qa_nginx_access {
file("/var/log/nginx/access.log" follow-freq(1)
tags("qa_nginx_access")
flags(no-parse));
};
source s_qa_nginx_error {
file("/var/log/nginx/error.log" follow-freq(1)
tags("qa_nginx_error")
flags(no-parse));
};
destination d_syslog-ng_central { syslog("10.0.0.50" transport("tcp") port(514)); };
log { source(s_qa_nginx_access); destination(d_syslog-ng_central);};
log { source(s_qa_nginx_error); destination(d_syslog-ng_central);};
On the remote syslog-ng server I have
#version: 3.13
#include "scl.conf"
options {
flush_lines(0);
use_dns(no);
use_fqdn(no);
owner("root");
group("adm");
perm(0640);
stats_freq(0);
bad_hostname("^gconfd$");
time-reap(30);
mark-freq(10);
keep-hostname(yes);
};
source s_network { syslog(transport(tcp) port(514)); };
filter f_qa_nginx_access { tags("qa_nginx_access"); };
filter f_qa_nginx_error { tags("qa_nginx_error"); };
destination d_qa_nginx_access {
file(
"/var/log/remote/qa_nginx_access.log"
owner("root")
group("adm")
perm(0640)
);
};
destination d_qa_nginx_error {
file(
"/var/log/remote/qa_nginx_error.log"
owner("root")
group("adm")
perm(0640)
);
};
log { source(s_network); filter(f_qa_nginx_access); destination(d_qa_nginx_access); };
log { source(s_network); filter(f_qa_nginx_error); destination(d_qa_nginx_error); };
If I remove the filter from the log statement, all of the log messages go to both files. But with the filter in place, nothing makes it to either file on the remote server. Is it somehow not sending the tags to the remote side?
You might want to refer to the syslog-ng administration guide; below are some of the important notes from it. If you need to send the tags to the remote side, use .SDATA.meta.tags instead, or you can use a template to write them as part of the message.
The full admin guide can be found at the following link:
https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.22/administration-guide/58
Tags are available locally, that is, if you add tags to a message on the client, these tags will not be available on the server.
To include the tags in the message, use the ${TAGS} macro in a template. Alternatively, if you are using the IETF-syslog message format, you can include the ${TAGS} macro in the .SDATA.meta part of the message. Note that the ${TAGS} macro is available only in syslog-ng OSE 3.1.1 and later.
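As a rough, untested sketch of the SDATA approach (rule names are illustrative): on the client, copy the tags into the structured-data part of the message with a rewrite rule, and on the server, match on that SDATA field instead of using tags():

```conf
# client side: copy ${TAGS} into .SDATA.meta.tags before sending
rewrite r_export_tags {
    set("${TAGS}" value(".SDATA.meta.tags"));
};
log { source(s_qa_nginx_access); rewrite(r_export_tags); destination(d_syslog-ng_central); };

# server side: filter on the SDATA field rather than tags()
filter f_qa_nginx_access { match("qa_nginx_access" value(".SDATA.meta.tags")); };
```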

Serving files on a local network with Deno

I recently decided to play around with Deno a bit.
Right now I am trying to set up a basic file server on my local network, but it will only serve files to my computer and not to the rest of the network (I can't even send an HTTP request to the server from outside my computer). I cannot, for the life of me, figure out why it only works locally.
I have added the code I am using at the moment below just in case, but I'm pretty sure the problem is somewhere else, because I have the same problem with the std file_server example and when I create a file server with oak.
import { serve } from 'https://deno.land/std@v0.42.0/http/server.ts';
const server = serve({ port: 3000 });
const decoder = new TextDecoder('utf-8');
for await (const req of server) {
const filePath = 'public' + req.url;
try {
const data = await Deno.readFile(filePath);
req.respond({ body: decoder.decode(data) });
} catch (error) {
if (error.name === Deno.errors.NotFound.name) {
console.log('File "' + filePath + '" not found');
req.respond({ status: 404, body: 'File not found' });
} else {
req.respond({ status: 500, body: 'Rest in pieces' });
throw error;
}
}
}
The command I'm using to run the file is:
deno --allow-all server.ts
When I create a simple file server in Node.js everything works just fine. It can serve files to my computer and any other device on the network.
I think the fault is with my understanding of Deno and its security concepts, but I don't know. I would greatly appreciate any help and can supply more details if required.
You need to bind the hostname to 0.0.0.0, like so:
const server = serve({ hostname: '0.0.0.0', port: 3000 });
By default, your web server only responds on localhost / 127.0.0.1.
Binding to 0.0.0.0 tells Deno to bind to all IP addresses/interfaces on your machine, which makes it accessible to any machine on your network.
Your LAN IP address (in the 192.168.x.y format) is then also bound to the Deno web server, so other computers on your network can reach it via that address.

syslog-ng receiving JSON string

I am using syslog-ng to receive JSON-format logs and store them to a local file, but the logs get changed.
Original log:
{"input_name":"sensor_alert","machine":"10.200.249.27"}
Log as currently stored:
"sensor_alert","machine":"10.200.249.27"}
The key "input_name" was deleted.
syslog-ng config:
source test_src {
udp(
ip(0.0.0.0) port(5115)
);
};
destination test_dest {
file("/data/test_${YEAR}${MONTH}${DAY}.log"
template("$MSG\n")
template-escape(no));
};
log {
source(test_src);
destination(test_dest);
};
Can anyone tell me the reason? Thanks.
If you only send the above-mentioned string (without any other framing), you should probably turn off parsing in the source with:
udp(... flags(no-parse));
This puts everything received into the MSG macro.
If you have some kind of framing (like syslog), please provide a sample message, because otherwise I can only guess.
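Applied to the source in the question, the no-parse variant would look like this (sketch):

```conf
source test_src {
    udp(
        ip(0.0.0.0) port(5115)
        flags(no-parse)   # keep the whole datagram untouched in ${MSG}
    );
};
```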

Syslog-NG two relay server issue

I am trying to forward logs through two syslog-ng relay servers, but the first relay server's IP gets added as the source, and in my SIEM all logs appear to come from the first syslog relay server.
Setup is below.
Client --> Syslog-Relay1 ---> Syslog-Relay2 ---> SIEM
In the SIEM I am seeing the log source for everything as Syslog-Relay1. I have played with multiple options, but no luck yet. Any idea what I am missing here? I am not finding any proper documents/forums which explain this setup. (We need this to meet a specific log-flow requirement, in case you are wondering why I am trying to achieve this.) Thanks in advance.
Following is my configuration:
Syslog-Relay1
#version:3.5
#include "scl.conf"
# syslog-ng configuration file.
#
# This should behave pretty much like the original syslog on RedHat. But
# it could be configured a lot smarter.
#
# See syslog-ng(8) and syslog-ng.conf(5) for more information.
#
# Note: it also sources additional configuration files (*.conf)
# located in /etc/syslog-ng/conf.d/
options {
time-reap(30);
mark-freq(10);
# keep-hostname(yes);
keep-hostname(no);
log_msg_size(65536);
log_fifo_size(10000);
threaded(yes);
flush_lines(100);
use_dns(no);
stats_freq(60);
mark_freq(36400);
use_fqdn(no);
# chain-hostnames(yes);
chain-hostnames(no);
};
source s_syslog_over_network {
network(
ip(0.0.0.0)
log-fetch-limit(200)
log-iw-size(1000000)
keep-alive(yes)
max_connections(10000)
port(9999)
transport("tcp")
flags(no-parse)
);
};
destination d_syslog_tcp {
network(
"10.12.86.98"
transport("tcp")
port(12229)
);
};
log {
source(s_syslog_over_network);
destination(d_syslog_tcp);
};
Syslog-Relay2
#version:3.5
#include "scl.conf"
# syslog-ng configuration file.
#
# This should behave pretty much like the original syslog on RedHat. But
# it could be configured a lot smarter.
#
# See syslog-ng(8) and syslog-ng.conf(5) for more information.
#
# Note: it also sources additional configuration files (*.conf)
# located in /etc/syslog-ng/conf.d/
options {
time-reap(30);
mark-freq(10);
# keep-hostname(yes);
keep-hostname(no);
log_msg_size(65536);
log_fifo_size(10000);
threaded(yes);
flush_lines(100);
use_dns(no);
stats_freq(60);
mark_freq(36400);
use_fqdn(no);
# chain-hostnames(yes);
chain-hostnames(no);
};
source s_syslog_over_network {
network(
ip(0.0.0.0)
log-fetch-limit(200)
log-iw-size(1000000)
keep-alive(yes)
max_connections(10000)
port(12229)
transport("tcp")
flags(no-parse)
);
};
destination d_syslog_tcp {
network(
"10.12.86.76"
transport("tcp")
port(12221)
);
};
log {
source(s_syslog_over_network);
destination(d_syslog_tcp);
};
If you want the Client's IP address to appear in the SIEM, you have to:
Set keep-hostname(no) and use-dns(no) on Syslog-Relay1.
This discards the original HOST field of the Client's messages
and uses the Client's IP address instead.
Set keep-hostname(yes) on Syslog-Relay2.
On Syslog-Relay1, the HOST field of the message was overwritten; you
want to keep that value and forward it to the SIEM.
Remove flags(no-parse) from s_syslog_over_network on Syslog-Relay2.
The Client's IP is stored in the message, so the message has to be parsed before forwarding towards the SIEM.
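Putting those three changes together, the relevant fragments would look roughly like this (an untested sketch based on the configs above; all other options stay as they are):

```conf
# Syslog-Relay1: replace HOST with the Client's IP address
options { keep-hostname(no); use-dns(no); };

# Syslog-Relay2: trust the HOST that Relay1 set, and parse the message
options { keep-hostname(yes); };
source s_syslog_over_network {
    network(
        ip(0.0.0.0)
        port(12229)
        transport("tcp")
        # note: no flags(no-parse) here, so the HOST field
        # embedded in the message is extracted and forwarded
    );
};
```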

grunt-http-server stops running

I am trying to use grunt-http-server:
https://www.npmjs.com/package/grunt-http-server
I followed the example on that page:
'http-server': {
'dev': {
// the server root directory
root: apps,
// the server port
// can also be written as a function, e.g.
// port: function() { return 8282; }
port: 8282,
// the host ip address
// If specified to, for example, "127.0.0.1" the server will
// only be available on that ip.
// Specify "0.0.0.0" to be available everywhere
host: "127.0.0.1",
cache: 10,
showDir : true,
autoIndex: true,
// server default file extension
ext: "html",
// run in parallel with other tasks
runInBackground: true,
// specify a logger function. By default the requests are
// sent to stdout.
logFn: function(req, res, error) { }
}
},
and when I run the task
grunt http-server:dev
the task runs, but then it stops:
Running "http-server:dev" (http-server) task
Server running on 127.0.0.1:8282
Hit CTRL-C to stop the server
Done, without errors.
and when I visit 127.0.0.1:8282 I get "page is not available".
What do I have to do to keep the task running and serving my files?
You need to set runInBackground: false.
runInBackground tells grunt:
when true: to keep running the rest of the tasks.
when false: to stop and wait on the server indefinitely.
In your case, when it is set to true, there is no other task to run, so grunt terminates and takes down everything it launched, including your server. true would be useful if you followed the server start with a watch task, for example.
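So, assuming you have no later tasks that need to run, the fragment from your Gruntfile would become (only the changed option shown, everything else as before):

```javascript
'http-server': {
    'dev': {
        root: apps,
        port: 8282,
        host: "127.0.0.1",
        // block here and keep the server alive until CTRL-C
        runInBackground: false
    }
}
```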
