Syslog receives logs from Cisco Switch but doesn't log them

So, I got the task of forwarding all logs generated by one particular Cisco switch to our dedicated Syslog server. Via Cisco IOS I did the following:
schu-ebd-sw-vt14-11#configure terminal
schu-ebd-sw-vt14-11(config)#logging 10.254.1.103
schu-ebd-sw-vt14-11(config)#logging on
schu-ebd-sw-vt14-11(config)#logging host 10.254.1.103 transport udp port 514
schu-ebd-sw-vt14-11(config)#logging trap debugging
schu-ebd-sw-vt14-11(config)#logging facility local5
10.254.1.103 is the IP of our Syslog server. It has the alias cldlog001. Entering show log now shows the following:
schu-ebd-sw-vt14-11#show log
Syslog logging: enabled (0 messages dropped, 1 messages rate-limited, 0 flushes, 0 overruns, xml disabled, filtering disabled)
No Active Message Discriminator.
No Inactive Message Discriminator.
Console logging: level debugging, 224 messages logged, xml disabled,
filtering disabled
Monitor logging: level debugging, 0 messages logged, xml disabled,
filtering disabled
Buffer logging: level debugging, 226 messages logged, xml disabled,
filtering disabled
Exception Logging: size (4096 bytes)
Count and timestamp logging messages: disabled
File logging: disabled
Persistent logging: disabled
No active filter modules.
Trap logging: level debugging, 112 message lines logged
Logging to 10.254.1.103 (udp port 514, audit disabled,
link up),
110 message lines logged,
0 message lines rate-limited,
0 message lines dropped-by-MD,
xml disabled, sequence number disabled
filtering disabled
Logging Source-Interface: VRF Name:
I can confirm via tcpdump that our Syslog server is receiving messages on port 514 from the Cisco device.
[root@cldlog001 remote]# tcpdump -vv -i any port 514 | grep schu-ebd-sw-vt14-11
dropped privs to tcpdump
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
schu-ebd-sw-vt14-11.switch.schu.64118 > cldlog001.cld.schu.syslog: [udp sum ok] SYSLOG, length: 99
However, no logs are written by cldlog001. Here are the important bits of the config file (/etc/rsyslog.conf).
#### TEMPLATES ####
$template CiscoLog, "/var/log/remote/%HOSTNAME%/cisco.log"
# Log all the mail messages in one place.
#mail.* -/var/log/maillog
local5.* -?CiscoLog
I tried restarting rsyslog but it didn't work.
Any ideas?

You need to enable log reception: the imudp module provides the ability to receive syslog messages via UDP.
module(load="imudp")
input(type="imudp" port="514")
Also, when creating a dynamic file, you probably want to use RainerScript, the most recent configuration language for rsyslog. This could look like the following:
# Custom template that generates the log file path dynamically based on the client's hostname.
template(name="DynaFile" type="string"
         string="/var/log/remote/%hostname%/cisco.log")
# Write matching messages to the dynamically generated file.
action(type="omfile" template="someMessageTemplate" dynaFile="DynaFile")
Note: You'll also have to make sure that rsyslog has the permissions needed to create the folders and files.
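Putting the two pieces together, a minimal sketch for the switch's local5 traffic could look like the following (the ruleset name is only an example, and the filter assumes you want to keep the facility restriction from the original config):
# Receive UDP syslog and hand it to a dedicated ruleset (name is an example).
module(load="imudp")
input(type="imudp" port="514" ruleset="remoteCisco")

# Per-host log file path.
template(name="DynaFile" type="string"
         string="/var/log/remote/%hostname%/cisco.log")

ruleset(name="remoteCisco") {
    # Only write messages that arrive with facility local5, as configured on the switch.
    if ($syslogfacility-text == "local5") then {
        action(type="omfile" dynaFile="DynaFile")
    }
}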

Related

Why can't I connect more than 8000 clients to MQTT brokers via HAProxy?

I am trying to establish 10k client connections (potentially 100k) with my 2 MQTT brokers using HAProxy as a load balancer.
I have a working simulator (using the Java Paho library) that can simulate 10k clients. On the same machine I run the 2 MQTT brokers in Docker. For the LB I'm using another machine with a virtual image of Ubuntu 16.04.
When I connect directly to an MQTT broker, those connections are established without a problem. However, when I use HAProxy I only get around 8.8k connections, while the rest throw: Error at client{insert number here}: Connection lost (32109) - java.net.SocketException: Connection reset. When I connect the simulator directly to a broker (same machine), about 20k TCP connections open; when I use the load balancer, only 17k do. This leaves me thinking that the LB is causing the problem.
It is important to add that whenever I run the simulator I'm unable to use the browser (cannot connect to the internet). I haven't tested whether this is browser-only, but could that mean that I actually run out of ports or something similar, and the real issue here is not in the LB?
Here is my HAProxy configuration:
global
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 500000
    ulimit-n 500000
    maxpipes 500000

defaults
    log global
    mode http
    timeout connect 3h
    timeout client 3h
    timeout server 3h

listen mqtt
    bind *:8080
    mode tcp
    option tcplog
    option clitcpka
    balance leastconn
    server broker_1 address:1883 check
    server broker_2 address:1884 check

listen stats
    bind 0.0.0.0:1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
This is what the MQTT broker shows for every successful/unsuccessful connection:
...
//Successful connection
1613382861: New connection from xxx:32850 on port 1883.
1613382861: New client connected from xxx:60974 as 356 (p2, c1, k1200, u'admin').
...
//Unsuccessful connection
1613382699: New connection from xxx:42861 on port 1883.
1613382699: Client <unknown> closed its connection.
...
And this is what ulimit -a shows on the LB machine:
core file size (blocks) (-c) 0
data seg size (kb) (-d) unlimited
scheduling priority (-e) 0
file size (blocks) (-f) unlimited
pending signals (-i) 102355
max locked memory (kb) (-l) 82000
max memory size (kb) (-m) unlimited
open files (-n) 500000
POSIX message queues (bytes) (-q) 819200
real-time priority (-r) 0
stack size (kb) (-s) 8192
cpu time (seconds) (-t) unlimited
max user processes (-u) 500000
virtual memory (kb) (-v) unlimited
file locks (-x) unlimited
Note: The LB process has the same limits.
I followed various tutorials and increased the open file limit as well as the port limit, TCP header size, etc. The number of connected users increased from 2.8k to about 8.5-9k (which is still way lower than the 300k the author of the tutorial had). The ss -s command shows about 17,000 TCP and inet connections.
Any pointers would greatly help!
Thanks!
You can't do a normal LB of MQTT traffic, as you can't "pin" the connection based on the MQTT Topic. If you send in a SUBSCRIBE to Broker1 for Topic "test/blatt/#", but the next client PUBLISHes to Broker2 "test/blatt/foo", then if the two brokers are not bridged, your first subscriber will never get that message.
If your clients are terminating the TCP connection sometime after the CONNECT, or HAProxy is round-robining the packets between the two brokers, you will get errors like this. You need to somehow persist the connections, and I don't know how you do that with HAProxy. Non-free LBs like A10 Thunder or F5 LTM can persist TCP connections... but you still need the MQTT brokers bridged for it all to work.
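If the brokers happen to be Mosquitto (the connection log lines further down look like Mosquitto's), bridging them is a few lines in mosquitto.conf on one of them; a sketch with placeholder names and addresses:
# Bridge all topics, in both directions, to the second broker.
connection bridge-to-broker2
address 127.0.0.1:1884
topic # both 0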
Turns out I was running out of resources on my computer.
I moved the simulator to another machine and managed to get 15k connections running. Due to resource limits I can't get more than that. The computer that's running the server side uses 20/32 GB of RAM, and the computer running the simulator used 32/32 GB for approx. 15k devices. Now I see why running both on the same computer is not an option.
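For anyone hitting a similar ceiling: before blaming the LB, it is also worth ruling out ephemeral-port exhaustion on the simulator and LB machines (the browser symptom above points in that direction). A rough check, assuming a stock Linux setup:
# Ports one source IP can use towards a single destination IP:port
sysctl net.ipv4.ip_local_port_range   # default is roughly 32768-60999 (~28k ports)
# Current socket usage summary
ss -s
# Widen the range if it turns out to be the bottleneck (example values)
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65000"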

Capturing raw syslog messages with tcpdump

I am currently collecting logs from a cloud platform which I would like to keep anonymous. While trying to create a custom parser for the syslogs that I am collecting, I am trying to capture the raw syslog messages by using tcpdump.exe for Windows. The syntax that I am using to capture the raw syslog messages is: tcpdump.exe -s 0 -A udp port 514
The issue that I am having is that each syslog message starts with:
..s....#._<133>
and ends with:
E..T..#.#..C#.?O
Does anyone know what that means and/or how I can capture the raw syslog messages with tcpdump without the beginning and ending garbage?
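Those unprintable bytes are the IP/UDP headers of each datagram, which -A prints as ASCII along with the payload; the <133> is already part of the syslog message itself (its PRI field). One way to get at just the payload, as a sketch, is to write the capture to a file and post-process it (tshark ships with Wireshark and would need to be installed separately; the field name follows Wireshark's syslog dissector):
tcpdump.exe -s 0 -w syslog.pcap udp port 514
REM Then open syslog.pcap in Wireshark, or extract only the message text:
tshark -r syslog.pcap -T fields -e syslog.msg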

Logstash TCP input retrieves all past logs once it comes up

Application Logback configuration -
<appender name="stash"
          class="net.logstash.logback.appender.LogstashAccessTcpSocketAppender">
    <destination>localhost:5001</destination>
    <!-- encoder is required -->
    <encoder>
        <pattern>%d{dd/MM/YY HH:mm:ss.SSS} - %-5level[%-5thread] - %logger{32} - %msg%n</pattern>
    </encoder>
</appender>
The Logstash input is the TCP plugin and the output is Elasticsearch.
Initially the Logstash server is down and the application is generating logs continuously. When viewed in Kibana, no new logs are getting added. After some time Logstash is started. Now, when the logs are viewed in Kibana, it seems all the logs which were generated while Logstash was down have been flushed to ES and can be viewed.
I checked ss | grep 5001 while the Logstash server was down; port 5001 was in CLOSE-WAIT state and the queues were empty.
What can be the reason for this?
The appender net.logstash.logback.appender.LogstashAccessTcpSocketAppender extends [net.logstash.logback.appender.AbstractLogstashTcpSocketAppender](https://github.com/logstash/logstash-logback-encoder/blob/master/src/main/java/net/logstash/logback/appender/AbstractLogstashTcpSocketAppender.java), which has an internal ring buffer that buffers log events. Buffering is required to achieve non-blocking behavior; otherwise the appender would block your code when writing events to the TCP socket.
The ring buffer holds 8192 events by default. If the buffer fills up before the events can be sent to the socket, the appender starts dropping events. The buffer size and many other properties can be configured on the appender.
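The buffer size is one of those knobs; for example (ringBufferSize is the property name documented by logstash-logback-encoder; the value is only illustrative and must be a power of two):
<appender name="stash"
          class="net.logstash.logback.appender.LogstashAccessTcpSocketAppender">
    <destination>localhost:5001</destination>
    <!-- Number of events the in-memory ring buffer can hold -->
    <ringBufferSize>65536</ringBufferSize>
    <encoder>
        <pattern>%d{dd/MM/YY HH:mm:ss.SSS} - %-5level[%-5thread] - %logger{32} - %msg%n</pattern>
    </encoder>
</appender>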

Get rsyslog forwarding messages after remote server restart

I have rsyslog successfully forwarding logs to an upstream server like so:
$MainMsgQueueType LinkedList
$MainMsgQueueSize 10000
$MainMsgQueueDiscardMark 8000
$MainMsgQueueDiscardSeverity 1
$MainMsgQueueSaveOnShutdown off
$MainMsgQueueTimeoutEnqueue 0
$ActionQueueType LinkedList # in memory queue
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueSize 10000 # Only allow 10000 elements in the queue
$ActionQueueDiscardMark 8000 # Only allow 8000 elements in the queue before dropping msgs
$ActionQueueDiscardSeverity 1 # Discard Alert,Critical,Error,Warning,Notice,Info,Debug, NOT Emergency
$ActionQueueSaveOnShutdown off # do not save messages to disk on shutdown
$ActionQueueTimeoutEnqueue 0
$ActionResumeRetryCount -1 # infinite retries if host is down
$RepeatedMsgReduction off
*.* @@remoteserver.mynetwork.com:5544
On the remoteserver I have something that talks syslog and listens on that port. To test, I have a simple log client that logs 100 messages a second to syslog.
This all works fine, and I have configured the queues above so that in the event that the remoteserver is unavailable, the queues start filling up, and then eventually messages get discarded, thus safeguarding syslog from blocking its logging clients.
When I stop the remote log sink on remoteserver:5544, rsyslog is still stable (queues filling up / full), but when I restart the remote log sink a while later, rsyslog detects the server again and re-establishes a TCP connection.
HOWEVER, rsyslog only forwards 1 message to it, despite the queue having many thousands of messages in it and the logging client continuing to log 100 messages a second.
How can I make rsyslog start forwarding messages again once it has detected that the remote server is back up? (Without restarting rsyslog.)
I am using rsyslog 4.6.2-2.
I am using, and want to use, TCP.
The problem, in case anybody comes across this, was that the work directory was set to:
$WorkDirectory /var/spool/rsyslog
And the above config does this:
$ActionQueueFileName fwdRule1
Even though it's supposed to be an in-memory queue. Because of this, when the queue reached 800 (bizarrely, not 8000), disk-assisted mode was activated and rsyslog attempted to write messages to /var/spool/rsyslog. This directory didn't exist. Randomly (hence a race condition, and a bug in rsyslog, must exist), after continually trying to open a queue file on disk in that directory, rsyslog got into a twisted state, gave up, and kept queueing messages in memory until it hit the 10,000 high mark. Restarting the downstream log server failed to make it recover.
Taking out all references to ActionQueueFileName and making sure WorkDirectory exists fixed the issue.
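In config terms, the fix amounts to something like the sketch below: either keep the queue purely in memory by dropping the file name, or make the spool directory actually exist (mkdir -p /var/spool/rsyslog) before enabling disk assistance.
$WorkDirectory /var/spool/rsyslog   # must exist if any queue may go disk-assisted

$ActionQueueType LinkedList         # in-memory queue
# $ActionQueueFileName fwdRule1     # removed: naming a file makes the queue disk-assisted
$ActionQueueSize 10000
$ActionQueueDiscardMark 8000
$ActionQueueDiscardSeverity 1
$ActionResumeRetryCount -1

*.* @@remoteserver.mynetwork.com:5544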

Linux Syslog Server Format

I am creating a syslog-formatted message according to RFC 3164 and sending it to my default Linux syslog server, which is listening on port 514.
The message I am sending is:
<187>Nov 19 02:58:57 nms-server6 %cgmesh-2-outage: Outage detected on this device
I open a socket, make a datagram packet and send this packet on that socket.
Now, in /var/log/syslog.log, which I have configured to receive all the syslog messages as
*.* /var/log/syslog.log
I am getting this extra hostname inserted by the server automatically, as shown below:
Nov 19 02:58:57 nms-server6 nms-server6 %cgmesh-2-outage: Outage detected on this device
As you can see, nms-server6 is repeated twice while I am sending it just once, so somehow the server is inserting it by default.
Can someone share some knowledge on this?
Are you adding the hostname in your message? If so, I don't think that's necessary as the hostname will be taken from the packet - which would explain the duplication.
Also, as a side note: it's nice that you've added the %fac-sev-mnemonic: portion, but that is not part of the standard; it's used by Cisco devices.
Here's a link to a good whitepaper that covers Cisco Mnemonics (and syslog management):
Building Scalable Syslog Management Solutions:
http://www.cisco.com/en/US/technologies/collateral/tk869/tk769/white_paper_c11-557812.html
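For comparison, one way to follow that suggestion is to leave the hostname out of the datagram entirely and let the server supply the source host (whether it does so depends on the receiving syslogd's configuration); the payload would then be just:
<187>Nov 19 02:58:57 %cgmesh-2-outage: Outage detected on this device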
