Capture multiline events with rsyslog and store them to a file (TCP)

We have a centralized rsyslog infrastructure capturing events sent over TCP by devices around the world, using the imtcp module.
The idea is to read from syslog (TCP) and store the events to disk, one line per event. The events are later processed by other consumers.
As far as we can see, some events are split into multiple events once they are stored on disk, breaking the rest of our pipeline.
Capturing a single packet with tcpdump, we confirmed that the source syslog is sending us the whole event containing multiple lines (a typical Java exception):
[root@xx xx.xx.xx.xx]# tcpdump -i bond0 tcp port 50520 -A -c 1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on bond0, link-type EN10MB (Ethernet), capture size 262144 bytes
12:12:26.062110 IP xx.xx.xx.xx.com.41444 > xx.xx.xx.com.50520: Flags [P.], seq 3270590174:3270590613, ack 2646946316, win 27, options [nop,nop,TS val 3937801207 ecr 2623497312], length 439
E....`#.<.ML..A....N...X..>...2......q.....
....._d`<13> xxx #2.0.#2021 02 10 12:19:50:898#+00#Info#com.xx.xx.xx.xx.xx#
##JavaEE/xx#xx#xx#JavaEE/xx#com.xx.xx.xx.xx.APIServiceHandler#xx#xx##xx#xx##0#Thread[HTTP Worker [#xx],5,Dedicated_Application_Thread]#Plain##
Is the user getting thru SSO? xx:true#
1 packet captured
44 packets received by filter
2 packets dropped by kernel
As this is a global system, we cannot ask the device owners to modify the format; all the actions have to take place on our side.
This is our rsyslog.conf file:
$MaxMessageSize 128k
# Global configuration/modules
module(load="imtcp" MaxListeners="100")
module(load="imfile" mode="inotify")
module(load="impstats" interval="10" resetCounters="on" format="cee" ruleset="monitoring")
module(load="mmjsonparse")
module(load="mmsequence")
module(load="omelasticsearch")
module(load="omudpspoof")
# Include all conf files
$IncludeConfig /etc/rsyslog.d/*.conf
And this is our template that reads from TCP and writes to file (/etc/rsyslog.d/template.conf):
template(name="outjsonfmt_device" type="list") {
    constant(value="{")
    property(outname="device_ip" name="fromhost-ip" format="jsonf")
    constant(value=",")
    property(outname="time_collect" name="timegenerated" dateFormat="rfc3339" format="jsonf")
    constant(value=",")
    constant(value="\"device_type\":\"device\"")
    constant(value=",")
    property(outname="collector_id" name="$myhostname" format="jsonf")
    constant(value=",")
    property(outname="msg" name="rawmsg-after-pri" format="jsonf")
    constant(value="}\n")
}
template(name="device-out-filename" type="string" string="/data1/input/device/%fromhost-ip%/device_%$now-utc%_%$hour-utc%.log")
ruleset(name="writeRemoteDataToFile_device") {
    action(type="omfile" dynaFileCacheSize="10000" dirCreateMode="0700" FileCreateMode="0644"
           dirOwner="user" dirGroup="logstash" fileOwner="user" fileGroup="user"
           dynafile="device-out-filename" template="outjsonfmt_device")
}
input(type="imtcp" port="50520" ruleset="writeRemoteDataToFile_device")
How can we configure rsyslog to escape line breaks in the middle of an event before writing the event to disk? We already tried $EscapeControlCharactersOnReceive and other similar parameters, with no success.

The imtcp module has a parameter, DisableLFDelimiter, which you could try setting to "on" to ignore line-feed delimiters, assuming your input carries an octet-count header. The documentation warns: "This mode is non-standard and will probably come with a lot of problems."
module(load="imtcp" MaxListeners="100" DisableLFDelimiter="on")
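With octet counting (RFC 6587), each frame starts with the message's byte length, then a space, then the message itself, so embedded newlines no longer terminate the event. A minimal sketch for testing the listener, assuming it runs locally on port 50520; the payload is a placeholder:
# Build one event that contains an embedded newline.
msg=$'<13>Feb 10 12:19:50 host app: first line\nsecond line of the same event'
# Octet-counted frame per RFC 6587: "<length> <message>".
printf '%d %s' "${#msg}" "$msg" | nc 127.0.0.1 50520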

Related

Tcl http POST - payload gets separated from headers when sent over wlan0 on rPi

I am using ::http::geturl -query to issue an HTTP POST request with a small JSON payload to an ESP8266 (a third-party commercial device) from a rPi. It works when sent over eth0 but fails when sent over wlan0. tcpdump shows that over eth0 the message is sent as a single packet, but over wlan0 the payload is split from the headers and sent in a second packet. The ESP8266, most likely because of an overly simple implementation of its packet receiver and/or HTTP server, doesn't appear to handle this splitting: it issues a 200 OK response after receiving the packet containing the headers and doesn't process the payload part of the request.
Experimentally I composed the same request message text being sent by ::http::geturl and sent it over wlan0 using nc; it was sent as a single packet and was successfully processed by the ESP8266.
Does anyone happen to know why sending the request using ::http over wlan0 is ending up with this split message, and what if anything can be done to prevent it?
Code fragment:
set s [::http::geturl http://$ip/con?com=cli -query $data -type application/json]
set r [::http::ncode $s]
::http::cleanup $s
Raspbian package versions:
tcl8.6 8.6.9+dfsg-2
tcllib 1.19-dfsg-2
tcl_platform(engine) = Tcl
tcl_platform(machine) = armv7l
tcl_platform(os) = Linux
tcl_platform(osVersion) = 5.4.79-v7+
$ ifconfig wlan0
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.101 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::ed38:71ab:13af:ae30 prefixlen 64 scopeid 0x20<link>
ether b8:27:eb:26:bf:94 txqueuelen 1000 (Ethernet)
From /proc/cpuinfo:
Hardware : BCM2835
Revision : a020d3
Model : Raspberry Pi 3 Model B Plus Rev 1.3
$ uname -a
Linux raspberrypi 5.4.79-v7+ #1373 SMP Mon Nov 23 13:22:33 GMT 2020 armv7l GNU/Linux
Tcl's http package flushes the headers to the socket (i.e., performs an actual write()/send()) between writing the headers and the body of a query. For any correct implementation of an HTTP server this is fine… but you're not working with one. For some reason, the wlan and eth drivers in the OS kernel have different policies for what to do in that case, with the eth driver deciding to wait a bit before sending; Tcl definitely doesn't configure this aspect of sockets at all, staying with the system defaults. (I don't know how to configure the OS defaults.)
You can always take a copy of the http code and comment out the flush. It's this one:
https://core.tcl-lang.org/tcl/artifact/d9f8dc4bd7211a37?ln=1463
line 1463: flush $sock
There's a Download button/link at the top of the page for that exact version of that file (it's changed only very slightly from the one in your version of Tcl and should be compatible provided you source the file explicitly before doing any package require calls).
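As a stopgap along the lines of the nc experiment described in the question, the request can also be composed by hand and pushed in a single write; a sketch, with a hypothetical device address and payload:
# Placeholder address and JSON body; adjust to match your setup.
ip=192.168.0.50
data='{"com":"cli"}'
# printf emits headers and body together, so nc sends them in one write.
printf 'POST /con?com=cli HTTP/1.1\r\nHost: %s\r\nContent-Type: application/json\r\nContent-Length: %d\r\n\r\n%s' \
    "$ip" "${#data}" "$data" | nc "$ip" 80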

tcpdump not showing HTTP requests

I'm trying to use tcpdump to identify which IP address a particular person is coming from, but I'm not seeing the HTTP commands the way various web sites show. I've used the following to set up tcpdump:
nohup tcpdump -i eth0 -P in -nn -n -tttt -w /home/tcpdump/port80.log -C 100 -W 50 "port 80" > /home/tcpdump/nohup.log 2>&1 &
And I'm periodically checking the file using:
tcpdump -r port80.log00 -n -nn -A
I'm connecting to the following URL from a web browser:
http://10.10.0.50?test
I was expecting to see a bunch of HTTP "GET" commands, but tcpdump doesn't seem to be showing me the incoming messages. Instead I just get something like:
16:21:35.708250 IP 10.10.0.222.55924 > 10.10.0.50.80: Flags [S], seq 1869638484, win 8192, options [mss 1340,nop,nop,sackOK], length 0
E..0#H#....\
..
.2.t.PopkT....p. ........<....
Going by other info on using tcpdump for HTTP logging, I should be seeing the "GET" command after the first few bytes of garbage. There's no actual web server running; I'm only interested in seeing the incoming request as a test, hence the "?test" on the end to help me search the logs for the right thing. I don't see that that's the issue, though.
Any help very gratefully received.
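One point worth noting, which the thread itself doesn't state: with nothing listening on port 80, the three-way handshake never completes, so the browser never transmits its GET; only SYNs (like the one captured above) will appear. A throwaway listener is enough to complete the handshake and make the request visible; a sketch, assuming a traditional netcat on Linux:
# Accept connections on port 80 so the handshake completes and the GET is
# actually sent (requires root for ports below 1024; the -l/-p syntax
# varies between netcat variants).
while true; do nc -l -p 80 </dev/null; done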

Embed video from USB webcam into web page using ffserver and ffmpeg

I need to stream images from a USB webcam to a webpage on my embedded system. The operating system used is Linux.
I successfully installed ffserver and ffmpeg, and also mplayer.
This is my /etc/ffserver.conf (it's not definitive, I am just testing it):
# Port on which the server is listening. You must select a different
# port from your standard HTTP web server if it is running on the same
# computer.
Port 8090
# Address on which the server is bound. Only useful if you have
# several network interfaces.
BindAddress 0.0.0.0
# Number of simultaneous HTTP connections that can be handled. It has
# to be defined *before* the MaxClients parameter, since it defines the
# MaxClients maximum limit.
MaxHTTPConnections 2
# Number of simultaneous requests that can be handled. Since FFServer
# is very fast, it is more likely that you will want to leave this high
# and use MaxBandwidth, below.
MaxClients 2
# This is the maximum amount of kbit/sec that you are prepared to
# consume when streaming to clients.
MaxBandwidth 1000
# Access log file (uses standard Apache log file format)
# '-' is the standard output.
CustomLog -
# Suppress that if you want to launch ffserver as a daemon.
NoDaemon
<Feed feed1.ffm>
File /tmp/feed1.ffm # when commented out, no file is being created and the stream keeps working!!
FileMaxSize 200K
# Only allow connections from localhost to the feed.
ACL allow 127.0.0.1
</Feed>
<Stream test.swf>
# the source feed
Feed feed1.ffm
# the output stream format - SWF = flash
Format swf
# this must match the ffmpeg -r argument
VideoFrameRate 5
# another quality tweak
VideoBitRate 320
# quality ranges - 1-31 (1 = best, 31 = worst)
VideoQMin 1
VideoQMax 3
VideoSize 640x480
# webcams don't have audio
NoAudio
</Stream>
# FLV output - good for streaming
<Stream test.flv>
# the source feed
Feed feed1.ffm
# the output stream format - FLV = FLash Video
Format flv
VideoCodec flv
# this must match the ffmpeg -r argument
VideoFrameRate 5
# another quality tweak
VideoBitRate 320
# quality ranges - 1-31 (1 = best, 31 = worst)
VideoQMin 1
VideoQMax 3
VideoSize 640x480
# wecams don't have audio
NoAudio
</Stream>
<Stream stat.html>
Format status
</Stream>
<Redirect index.html>
# credits!
URL http://ffmpeg.sourceforge.net/
</Redirect>
From the shell I can execute:
# ffserver -f /etc/ffserver.conf
and
# ffmpeg -f video4linux2 -s 320x240 -r 5 -i /dev/video0 http://127.0.0.1:8090/test.flv
No errors are reported during execution. That sounds good, but maybe it's not OK at all.
Then, in the webpage, I wrote this simple code:
<video controls>
<source src="http://127.0.0.1:8090/test.flv">
</video>
I read on another thread here on Stack Overflow (I lost the link) that this code should be enough, but it's not working for me.
I can see that the file /tmp/feed1.ffm has been created, so I think I can use this stream to show the camera images on my webpage. Am I right?
What is the simplest solution?
Thank you.
EDIT
I allowed the connections in ffserver's configuration file:
<Feed feed1.ffm>
File /tmp/feed1.ffm # when commented out, no file is being created and the stream keeps working!!
FileMaxSize 200K
ACL allow 127.0.0.1
ACL allow localhost
ACL allow 192.168.2.2 192.168.2.10
</Feed>
But it still does not work.
ffmpeg -f video4linux2 -s 320x240 -r 5 -i /dev/video0 http://127.0.0.1:8090/test.flv
As described in the documentation, you should stream to the feed1.ffm file, not to the test.flv file. The ffmpeg -> ffserver communication is the .ffm file, and the ffserver -> web browser communication is the .flv file.
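Under that scheme, the publishing command targets the feed URL; a sketch, keeping the parameters from the question:
# Publish to the feed; ffserver then serves the encoded stream at /test.flv.
ffmpeg -f video4linux2 -s 320x240 -r 5 -i /dev/video0 http://127.0.0.1:8090/feed1.ffm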
I think HTML doesn't like pseudo-files like pipes or .ffm :)
Maybe you could use the <embed> tag from HTML5:
<embed type="video/flv" src="http://127.0.0.1:8090/test.flv" width="320" height="240">
Or however you want to set the size.

Preventing TCP SYN retry in netcat (for port knocking)

I'm trying to write the Linux client script for a simple port-knocking setup. My server has iptables configured to require a certain sequence of TCP SYNs to certain ports to open up access. I'm able to knock successfully using telnet or by invoking netcat manually (Ctrl-C right after running the command), but I have failed to build an automated knock script.
My attempt at an automated port-knocking script consists simply of "nc -w 1 x.x.x.x 1234" commands, which connect to x.x.x.x port 1234 and time out after one second. The problem, however, seems to be the kernel(?) doing automated SYN retries. Most of the time more than one SYN is sent during the one second nc tries to connect. I've checked this with tcpdump.
So, does anyone know how to prevent the SYN retries and make netcat simply send only one SYN per connection/knock attempt? Other solutions which do the job are also welcome.
Yeah, I checked that you can use nc too:
$ nc -z example.net 1000 2000 3000; ssh example.net
The magic comes from -z (zero-I/O mode).
You may use nmap for port knocking (SYN). One invocation per port keeps the knock sequence in order (a single scan of several ports would probe them in nmap's own order). Just exec:
for p in 1000 2000 3000; do
    nmap -Pn --max-retries 0 -p $p example.net
done
Try this (as root):
echo 1 > /proc/sys/net/ipv4/tcp_syn_retries
Or this, per socket, in C (TCP_SYNCNT is declared in <netinet/tcp.h>):
int sc = 1;                 /* cap SYN retransmits for this socket */
setsockopt(sock, IPPROTO_TCP, TCP_SYNCNT, &sc, sizeof(sc));
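Note that the /proc setting above is system-wide, so a knock script might save and restore it around the knock; a sketch (as root, with placeholder host and port):
# Temporarily cap SYN retransmits, knock once, then restore the old value.
old=$(cat /proc/sys/net/ipv4/tcp_syn_retries)
echo 1 > /proc/sys/net/ipv4/tcp_syn_retries
nc -w 1 example.net 1234
echo "$old" > /proc/sys/net/ipv4/tcp_syn_retries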
You can't prevent the TCP/IP stack from doing what it is expressly designed to do.

How can I test an outbound connection to an IP address as well as a specific port?

OK, we all know how to use PING to test connectivity to an IP address. What I need is something similar: test whether my outbound request to a given IP address on a specific port (in the present case, 1775) succeeds. The test should preferably be performed from the command prompt.
Here is a small site I made that lets you test any outgoing port. The server listens on all available TCP ports.
http://portquiz.net
telnet portquiz.net XXXX
If there is a server running on the target IP/port, you could use Telnet. Any response other than "can't connect" would indicate that you were able to connect.
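For scripting a single-port check without telnet, bash's /dev/tcp pseudo-device can perform the same test; a sketch with a placeholder host and the port from the question:
# The exit status of the redirection tells us whether the TCP connect succeeded.
if timeout 3 bash -c 'exec 3<>/dev/tcp/example.com/1775' 2>/dev/null; then
    echo "outbound port 1775 reachable"
else
    echo "outbound port 1775 blocked"
fi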
To automate the awesome portquiz.net service, I wrote a bash script:
NB_CONNECTION=10
PORT_START=1
PORT_END=1000

for (( i=$PORT_START; i<=$PORT_END; i=i+NB_CONNECTION ))
do
    iEnd=$((i + NB_CONNECTION))
    for (( j=$i; j<$iEnd; j++ ))
    do
        #(curl --connect-timeout 1 "portquiz.net:$j" &> /dev/null && echo "> $j") &
        (nc -w 1 -z portquiz.net "$j" &> /dev/null && echo "> $j") &
    done
    wait
done
If you're testing TCP/IP, a cheap way to test a remote addr/port is to telnet to it and see if it connects. For protocols like HTTP (port 80), you can even type HTTP commands and get HTTP responses. For example:
telnet 192.168.1.1 80
The fastest / most efficient way I found to do this is with nmap and portquiz.net, described here: http://thomasmullaly.com/2013/04/13/outgoing-port-tester/ This scans the top 1000 most-used ports:
# nmap -Pn --top-ports 1000 portquiz.net
Starting Nmap 6.40 ( http://nmap.org ) at 2017-08-02 22:28 CDT
Nmap scan report for portquiz.net (178.33.250.62)
Host is up (0.072s latency).
rDNS record for 178.33.250.62: electron.positon.org
Not shown: 996 closed ports
PORT STATE SERVICE
53/tcp open domain
80/tcp open http
443/tcp open https
8080/tcp open http-proxy
Nmap done: 1 IP address (1 host up) scanned in 4.78 seconds
To scan them all (it took 6 seconds instead of 5):
# nmap -Pn -p1-65535 portquiz.net
The bash script example from @benjarobin for testing a sequence of ports did not work for me, so I created this minimal (not really one-line) command-line example, which writes the open ports from the sequence 1-65535 (all applicable communication ports) to a local file and suppresses all other output:
for p in $(seq 1 65535); do curl -s --connect-timeout 1 portquiz.net:$p >> ports.txt; done
Unfortunately, this takes 18.2 hours to run, because the minimum connection timeout my older version of curl allows is 1 second (integer values only). If you have curl >= 7.32.0 (check with "curl -V"), you might try smaller decimal values, depending on how fast you can connect to the service. Or try a smaller port range to minimise the duration.
Furthermore, it will append to the output file ports.txt so if run multiple times, you might want to remove the file first.
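Assuming a curl with decimal timeout support (>= 7.32.0, as noted above), the same one-liner with a sub-second timeout cuts the runtime substantially; a sketch:
# 0.2 s per closed port instead of 1 s; output behaviour is unchanged.
for p in $(seq 1 65535); do curl -s --connect-timeout 0.2 "portquiz.net:$p" >> ports.txt; done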
