How to forward logs using rsyslog client - syslog

I need to forward messages from a log file to another IP and port - let's say 127.0.0.1:514. How do I achieve this?
I used this example from the docs of rsyslog:
module(load="imfile" PollingInterval="10") #needs to be done just once
# File 2
input(type="imfile"
File="/path/to/file2"
Tag="tag2")
As well as providing it with the following rule:
*.* @127.0.0.1:514
But this ended up sending all of the system's logs including journald.
So how do I correctly use ruleset, input blocks and *.* @127.0.0.1:514 to send logs from the file /path/to/file2 to 127.0.0.1:514?
Thanks

When specifying the input, also say which ruleset should process it. An input that is not bound to a ruleset is handled by the default ruleset, so a plain *.* forwarding rule there sends everything (including the journald messages) to the remote host.
module(load="imfile")
input(type="imfile" File="/path/to/file2" Tag="tag2" ruleset="remote")
ruleset(name="remote"){
action(type="omfwd" target="127.0.0.1" port="514" protocol="udp")
# or use legacy syntax:
# *.* @127.0.0.1:514
}
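A quick way to check that only the file's messages are forwarded (a sketch, assuming you can bind a throwaway listener on 127.0.0.1:514 while testing):
# terminal 1: listen for the forwarded UDP messages
# (OpenBSD netcat syntax; with traditional netcat use: nc -u -l -p 514)
nc -u -l 127.0.0.1 514
# terminal 2: append a test line to the watched file
echo "imfile forwarding test $(date)" >> /path/to/file2
With PollingInterval="10" from the docs example, the line may take up to ten seconds to show up on the listener; only lines from /path/to/file2 should appear, not the rest of the system's logs.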

Related

Error while trying to send logs with rsyslog without local storage

I'm trying to send logs into datadog using rsyslog. Ideally, I'm trying to do this without having the logs stored on the server hosting rsyslog. I've run into an error in my config that I haven't been able to find out much about. The error occurs on startup of rsyslog.
omfwd: could not get addrinfo for hostname '(null)':'(null)': Name or service not known [v8.2001.0 try https://www.rsyslog.com/e/2007 ]
Here's the portion I've added into the default rsyslog.config
module(load="imudp")
input(type="imudp" port="514" ruleset="datadog")
ruleset(name="datadog"){
action(
type="omfwd"
action.resumeRetryCount="-1"
queue.type="linkedList"
queue.saveOnShutdown="on"
queue.maxDiskSpace="1g"
queue.fileName="fwdRule1"
)
$template DatadogFormat,"00000000000000000 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n "
$DefaultNetstreamDriverCAFile /etc/ssl/certs/ca-certificates.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.logs.datadoghq.com
*.* @@intake.logs.datadoghq.com:10516;DatadogFormat
}
First things first.
The module imudp enables log reception over UDP.
The module omfwd enables log forwarding over TCP, UDP, etc.
So most probably - or at least as far as I can tell - you want rsyslog to receive messages locally over UDP and then send them on to Datadog.
I don't know much about the $ActionSendStreamDriver directives, so I can't help you there. But what jumps out is that your action doesn't define where the logs should be sent: there is no target, which is exactly why omfwd complains about hostname '(null)'.
ruleset(name="datadog"){
action(
type="omfwd"
target="10.100.1.1"
port="514"
protocol="udp"
...
)
...
}
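Putting that together, here is a sketch of what the whole ruleset could look like with the target moved into the action and your legacy TLS/template directives translated into their action/global equivalents (parameter names as documented for omfwd; I haven't run this against Datadog, so treat it as a starting point rather than a verified config):
# string template taken verbatim from your $template line
template(name="DatadogFormat" type="string"
         string="00000000000000000 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n ")
# CA bundle for the TLS stream driver
global(DefaultNetstreamDriverCAFile="/etc/ssl/certs/ca-certificates.crt")
ruleset(name="datadog"){
    action(
        type="omfwd"
        target="intake.logs.datadoghq.com"
        port="10516"
        protocol="tcp"
        template="DatadogFormat"
        StreamDriver="gtls"
        StreamDriverMode="1"
        StreamDriverAuthMode="x509/name"
        StreamDriverPermittedPeers="*.logs.datadoghq.com"
        action.resumeRetryCount="-1"
        queue.type="linkedList"
        queue.saveOnShutdown="on"
        queue.maxDiskSpace="1g"
        queue.fileName="fwdRule1"
    )
}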

tcpprep: Command line arguments not allowed

I'm not sure why executing the command below in an Ubuntu terminal throws an error. The tcpprep syntax and options are given exactly as in the help doc, yet it still throws an error.
root@test-vm:~# /usr/bin/tcpprep --cachefile='cachefile1' --pcap='/pcaps/http.pcap'
tcpprep: Command line arguments not allowed
tcpprep (tcpprep) - Create a tcpreplay cache cache file from a pcap file
root@test-vm:~# /usr/bin/tcpprep -V
tcpprep version: 3.4.4 (build 2450) (debug)
There are two problems with your command (and it doesn't help that tcpprep errors are vague or wrong).
Problem #1: Commands out of order
tcpprep requires that -i/--pcap come before -o/--cachefile. You can fix this as below, but then you get a different error:
bash$ /usr/bin/tcpprep --pcap='/pcaps/http.pcap' --cachefile='cachefile1'
Fatal Error in tcpprep_api.c:tcpprep_post_args() line 387:
Must specify a processing mode: -a, -c, -r, -p
Note that the error above is not even accurate! -e/--mac can also be used!
Problem #2: Processing mode must be specified
tcpprep is used to preprocess a capture file into client/server using a heuristic that you provide. Looking through the tcpprep manpage, there are 5 valid options (-acerp). Given this capture file as input.pcapng with server 192.168.122.201 and next hop mac 52:54:00:12:35:02,
-a/--auto
Let tcpprep determine based on one of 5 heuristics: bridge, router, client, server, first. Ex:
tcpprep --auto=first --pcap=input.pcapng --cachefile=input.cache
-c/--cidr
Specify the servers by CIDR range. We see servers at 192.168.122.201, 192.168.122.202, and 192.168.3.40, so summarize with 192.168.0.0/16:
tcpprep --cidr=192.168.0.0/16 --pcap=input.pcapng --cachefile=input.cache
-e/--mac
This is less useful for this capture, since ALL traffic in it has a destination MAC of either the next hop (52:54:00:12:35:02), ff:ff:ff:ff:ff:ff (broadcast), or 33:33:00:01:00:02 (multicast). Nonetheless, traffic from the next hop won't be client traffic, so this would look like:
tcpprep --mac=52:54:00:12:35:02 --pcap=input.pcapng --cachefile=input.cache
-r/--regex
This is for IP ranges, and is an alternative to summarizing subnets with --cidr. This would be more useful if you have several IPs like 10.0.20.1, 10.1.20.1, 10.2.20.1, ... where summarization won't work and regex will. This is one regex we could use to summarize the servers:
tcpprep --regex="192\.168\.(122|3).*" --pcap=input.pcapng --cachefile=input.cache
-p/--port
Looking at Wireshark > Statistics > Endpoints, we see that ports [135,139,445,1024]/tcp and [137,138]/udp are associated with the server IPs. 1024/tcp, used with dcerpc, is the only one that falls outside the range 0-1023, so we'd have to specify it manually. Per services syntax, we'd represent this as 'dcerpc 1024/tcp'. To specify a port this way, we also need to supply a --services file; we can provide one inline as a temporary file descriptor with process substitution. Altogether,
tcpprep --port --services=<(echo "dcerpc 1024/tcp") --pcap=input.pcapng --cachefile=input.cache
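Whichever mode you choose, the resulting cache file is what tcpreplay consumes to split traffic between two interfaces, roughly like this (interface names are placeholders for your own setup):
# replay server-side packets out intf1 and client-side packets out intf2,
# using the split decisions recorded in the cache file
tcpreplay --intf1=eth0 --intf2=eth1 --cachefile=input.cache input.pcapng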
Further Reading
For more examples and information, check out the online docs.

Nginx variable for physical server name

I'm trying to set up response headers on my separate webservers that output the physical name of the machine nginx is running on, so that I can tell which servers are serving the responses to our web clients.
Is there a variable that exists to do this already? Or do I just have to hardcode it per-server :(
You're after the $hostname common variable. Common variables are listed in the variable index.
The nginx access log documentation only shows variables that are specific to the access log:
The log format can contain common variables, and variables that exist
only at the time of a log write.
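For example, a minimal sketch of the setup the question asks for (the X-Served-By header name is just an illustration):
server {
    listen 8080;
    # $hostname expands to the machine's hostname; "always" also adds the
    # header to error responses
    add_header X-Served-By $hostname always;
    location / {
        return 200 "served\n";
    }
}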
I guess you're looking for the $hostname variable.
At first I thought the answer was to use an environment variable and pull the hostname out of that (https://docs.apitools.com/blog/2014/07/02/using-environment-variables-in-nginx-conf.html), but I couldn't get it to work for some reason.
However, this works like a charm:
perl_set $server_int 'sub { use Sys::Hostname; return hostname; }';
And example usage:
add_header 'Server-Int' "$server_int";
Just make sure your nginx is compiled with --with-http_perl_module (run nginx -V to check), and that you have Sys::Hostname installed.
Warning: at first I used backticks (`hostname`) to get the hostname in the Perl snippet, and while that did return the name, it aborted the rest of the output for some reason. I don't know whether it's a bug in perl_set, but you've been warned: using backticks in perl_set may be deadly.

How to nmap without rDNS and write the DNS in the output

I need to use nmap to check if port 443 is open for a list of websites, so I saved them into a file. I need the output to tell me whether the port is open or not. I used the command:
nmap -PN -p443 gnmap -oG logs/output.gnmap -iL myfolder/input.txt
The problem: the output file is giving me different domain names. Nmap performed rDNS and I found that the IP points to a different domain name. Please explain: does this mean both domains are hosted on the same server? However, I checked their certificates and found that each domain has a different certificate. I care about port 443 for the domains in my list because I want to check their certificates later, so I don't want to end up checking some other domain's certificate rather than the one I entered in the file.
To solve the issue, I used the -n option. But then the output file contains IPs only. How can I produce an output file that contains the results for my domains without rDNS?
The "Grepable" output format (-oG) is deprecated because it cannot show the full output of an Nmap scan. There is no way to get the output you want with the -oG option unless you modify Nmap and recompile it.
Luckily, the XML output format (-oX) contains the information you want and more:
<hostnames>
<hostname name="bonsaiviking.com" type="user"/>
<hostname name="li34-105.members.linode.com" type="PTR"/>
</hostnames>
In this example, from scanning my domain, the hostname provided on the command line has the attribute type="user", and the hostname that was a result of the reverse lookup has type="PTR".
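If you want a quick flat view of that XML, something like the following works (a sketch: xmllint ships with libxml2-utils, and the XPath expressions simply pull out the names you supplied and the state of port 443):
# scan the list and write XML; the names read from -iL are kept as type="user"
nmap -Pn -p 443 -iL myfolder/input.txt -oX logs/output.xml
# print the hostnames exactly as you provided them
xmllint --xpath '//hostname[@type="user"]/@name' logs/output.xml
# print the state of port 443 for each scanned host, in scan order
xmllint --xpath '//port[@portid="443"]/state/@state' logs/output.xml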

HTTP testing on the command line, is there something better than cURL?

Is there a command-line utility where you can simply set up an HTTP request and have the trace output straight back to the console?
Also, being able to specify the method explicitly would be a great feature, rather than the method being a side effect.
I can get all the information I need with cURL but I can't figure out a way to just display it without dumping everything to files.
I'd like the output to show the sent headers, the received headers, and the body of the message.
There must be something out there but I haven't been able to google for it. Figured I should ask before going off and writing it myself.
I dislike answering my own question, but c-smile's answer led me down the right track:
Short answer shell script over cURL:
curl --dump-header - "$@"
The - (dash) meaning stdout is a convention I was unaware of; it also works for wget and a number of other Unix utilities. It is apparently not part of the shell but implemented by each utility. The wget equivalent is:
wget --save-headers -qO - "$@"
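In script form, the curl one-liner might look like this (the file name httptrace is just an example; "$@" forwards whatever extra options you pass straight to curl):
#!/bin/sh
# httptrace: print the received headers (via --dump-header -) followed by the body
exec curl --dump-header - "$@"
Usage would then be something like ./httptrace -X POST -d 'a=1' http://example.com/ after a chmod +x.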
Did you try wget:
http://www.gnu.org/software/wget/manual/wget.html#Wgetrc-Commands ?
Like wget --save-headers ...
To include the HTTP headers in the output (as well as the server response), just use curl’s -i/--include option. For example:
curl -i "http://www.google.com/"
Here’s what man curl says about this setting:
-i/--include
(HTTP) Include the HTTP-header in the output. The HTTP-header
includes things like server-name, date of the document, HTTP-
version and more...
If this option is used twice, the second will again disable
header include.
Try http, e.g.
http -v example.org
Further info at https://httpie.org
It even includes a page to try online:
https://httpie.org/run
Telnet has long been a well-known (though now largely forgotten, I guess) tool for looking at a web page. The general idea is to telnet to the HTTP port, type an HTTP/1.1 GET request, and then see the served page on the screen.
A good detailed explanation is at http://support.microsoft.com/kb/279466
A Google search yields a whole bunch more.
Use telnet on port 80
For example:
telnet telehack.com 80
GET / HTTP/1.1
host: telehack.com
<CR>
<CR>
<CR> means Enter
