Application Logback configuration -
<appender name="stash"
class="net.logstash.logback.appender.LogstashAccessTcpSocketAppender">
<destination>localhost:5001</destination>
<!-- encoder is required -->
<encoder>
<pattern>%d{dd/MM/YY HH:mm:ss.SSS} - %-5level[%-5thread] - %logger{32} - %msg%n</pattern>
</encoder>
</appender>
The Logstash input is the TCP plugin and the output is Elasticsearch.
Initially the Logstash server is down while the application generates logs continuously. When viewed in Kibana, no new logs are being added. After some time Logstash is started, and when the logs are viewed in Kibana again, all the logs that were generated while Logstash was down appear to have been flushed to ES and can be viewed.
I checked ss | grep 5001 while the Logstash server was down: port 5001 was in CLOSE-WAIT state and the queues were empty.
What can be the reason for this?
The appender net.logstash.logback.appender.LogstashAccessTcpSocketAppender extends [net.logstash.logback.appender.AbstractLogstashTcpSocketAppender](https://github.com/logstash/logstash-logback-encoder/blob/master/src/main/java/net/logstash/logback/appender/AbstractLogstashTcpSocketAppender.java), which has an internal ring buffer that buffers the log events. Buffering is required to achieve non-blocking behavior; otherwise the appender would block your code while writing events to the TCP socket.
The ring buffer holds 8192 events by default. If the buffer fills up before the events can be sent to the socket, the appender starts dropping events. The buffer size and many other properties can be configured on the appender.
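If the defaults do not fit your outage window, the buffering can be tuned directly in the Logback XML. The following is only a sketch based on properties documented for logstash-logback-encoder's TCP appenders; the concrete values are illustrative, not recommendations:

<appender name="stash"
          class="net.logstash.logback.appender.LogstashAccessTcpSocketAppender">
    <destination>localhost:5001</destination>
    <!-- enlarge the in-memory ring buffer (default 8192 events) so more events
         survive a short Logstash outage; the size must be a power of two -->
    <ringBufferSize>65536</ringBufferSize>
    <!-- how long to wait before trying to reconnect after the connection drops -->
    <reconnectionDelay>5 seconds</reconnectionDelay>
    <encoder>
        <pattern>%d{dd/MM/yy HH:mm:ss.SSS} - %-5level[%-5thread] - %logger{32} - %msg%n</pattern>
    </encoder>
</appender>

Keep in mind that the ring buffer lives in memory only: events that overflow it during a longer outage are dropped rather than persisted to disk.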
So, I got the task of transmitting all logs made by one particular Cisco switch to our dedicated Syslog Server. Via Cisco IOS I did the following:
schu-ebd-sw-vt14-11#configure terminal
schu-ebd-sw-vt14-11(config)#logging 10.254.1.103
schu-ebd-sw-vt14-11(config)#logging on
schu-ebd-sw-vt14-11(config)#logging host 10.254.1.103 transport udp port 514
schu-ebd-sw-vt14-11(config)#logging trap debugging
schu-ebd-sw-vt14-11(config)#logging facility local5
10.254.1.103 is the IP of our Syslog server. It has the alias cldlog001. Entering show log now shows the following:
schu-ebd-sw-vt14-11#show log
Syslog logging: enabled (0 messages dropped, 1 messages rate-limited, 0 flushes, 0 overruns, xml disabled, filtering disabled)
No Active Message Discriminator.
No Inactive Message Discriminator.
    Console logging: level debugging, 224 messages logged, xml disabled,
                     filtering disabled
    Monitor logging: level debugging, 0 messages logged, xml disabled,
                     filtering disabled
    Buffer logging: level debugging, 226 messages logged, xml disabled,
                    filtering disabled
    Exception Logging: size (4096 bytes)
    Count and timestamp logging messages: disabled
    File logging: disabled
    Persistent logging: disabled
No active filter modules.
    Trap logging: level debugging, 112 message lines logged
        Logging to 10.254.1.103 (udp port 514, audit disabled,
            link up),
            110 message lines logged,
            0 message lines rate-limited,
            0 message lines dropped-by-MD,
            xml disabled, sequence number disabled
            filtering disabled
        Logging Source-Interface: VRF Name:
I can confirm via tcpdump that our Syslog server is receiving messages on port 514 from the Cisco device.
[root@cldlog001 remote]# tcpdump -vv -i any port 514 | grep schu-ebd-sw-vt14-11
dropped privs to tcpdump
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
schu-ebd-sw-vt14-11.switch.schu.64118 > cldlog001.cld.schu.syslog: [udp sum ok] SYSLOG, length: 99
However, no logs are written by cldlog001. Here are the important bits of the config file (/etc/rsyslog.conf).
#### TEMPLATES ####
$template CiscoLog, "/var/log/remote/%HOSTNAME%/cisco.log"
# Log all the mail messages in one place.
#mail.* -/var/log/maillog
local5.* -?CiscoLog
I tried restarting rsyslog but it didn't work.
Any ideas?
You need to add log reception. The imudp module provides the ability to receive syslog messages via UDP.
module(load="imudp")
input(type="imudp" port="514")
Also, when creating a dynamic file, you probably want to use RainerScript, which is the most recent scripting language for rsyslog. This could look like the following:
# Rsyslog uses templates to generate dynamic files
template(name="DynaFile" type="string"
         string="/var/log/remote/%hostname%/cisco.log")
# Custom template to generate the log folder dynamically based on the client's hostname.
action(type="omfile" template="someMessageTemplate" dynaFile="DynaFile")
Note: You'll also have to make sure that you (or rather rsyslog) have the permissions needed to create the folders and files.
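Putting the pieces together, a minimal /etc/rsyslog.conf fragment for this setup could look roughly like the following. This is a sketch that assumes a reasonably recent rsyslog (v7+) where RainerScript is available; the message template CiscoFmt and its format string are made up for illustration, and the facility match relies on the switch logging with local5 as configured above:

# Receive syslog over UDP on port 514
module(load="imudp")
input(type="imudp" port="514")

# Dynamic file name built from the sending host's name
template(name="DynaFile" type="string"
         string="/var/log/remote/%hostname%/cisco.log")

# Illustrative layout for each log line
template(name="CiscoFmt" type="string"
         string="%timegenerated% %hostname% %syslogtag%%msg%\n")

# Write everything arriving with facility local5 to the per-host file,
# letting rsyslog create missing directories along the way
if ($syslogfacility-text == "local5") then {
    action(type="omfile" dynaFile="DynaFile" template="CiscoFmt"
           dirCreateMode="0755" fileCreateMode="0644")
}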
I wrote a simple server application. In that application, I created a server socket and put it into the listening state with a listen call.
After that, I did not write any code to accept the incoming connection request; I simply waited for termination with a pause call.
I want to figure out, in practice, how many bytes are buffered on the server side if the connection is not accepted, and then validate that number against the theory of TCP.
To do that,
First, I started my server application.
Then I used "dd" and "netcat" to send the data from client to server. Here is the command:
$> dd if=/dev/zero count=1 bs=100000 | nc 127.0.0.1 45001
Then I opened Wireshark and waited for the zero-window message.
From the last properly acknowledged TCP frame, the client side could successfully send 64559 bytes of data to the server.
Then I executed the dd/netcat command above to create another client and send data again.
In this case, from the last successfully acknowledged TCP frame in the Wireshark output, I understand that the client application could successfully send 72677 bytes to the server.
So it seems that the size of the related buffer can change at runtime, or I am misinterpreting the Wireshark output.
How can I determine the size of the related receive buffer? What is the correct term for that receive buffer? How can I show its default size?
Note that the port number of the tcp server is "45001".
Thank you!
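For what it's worth, the buffer being filled in this experiment is, in TCP terms, the socket receive buffer (SO_RCVBUF) of the connection that sits un-accepted in the listener's accept queue, and Linux auto-tunes that buffer at runtime within configured bounds, which would explain why the two runs could absorb different byte counts. A quick way to look at those bounds and at the live socket on Linux (a sketch; port 45001 is taken from the question):

# Receive-buffer bounds in bytes (min, default, max); the kernel auto-tunes
# each socket's buffer within these limits
sysctl net.ipv4.tcp_rmem
sysctl net.core.rmem_default net.core.rmem_max

# Inspect the un-accepted connection itself; with -m, ss prints an skmem
# field whose rb value is that socket's current receive-buffer limit
ss -tmn 'sport = :45001'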
We have an upstream application that at times generates functionally invalid transaction sets.
I'm trying to push the message bodies of the failed transactions from the interchange, along with the associated 999s, to a send port or some other logging mechanism, while forwarding the valid transaction sets to the downstream mapping process.
Any ideas on accomplishing this would be helpful.
First check "Enable Routing for failed messages" on the Receive Port
Then add a filter to your send port to subscribe to those messages.
e.g.
ErrorReport.ReceivePortName = <your port name> AND
ErrorReport.FailureCode Exists
If you have an existing filter, you need to put an OR on the last line of that filter.
<existing filter line1> AND
<existing filter line2> OR
ErrorReport.ReceivePortName = <your port name> AND
ErrorReport.FailureCode Exists
The following is my understanding:
A .NET Core API with a Serilog sink for ELK can send logs directly to ELK.
Logstash and Fluentd are needed only if we want to ship a log file (massaging the data along the way) to ELK.
My questions are:
Why do I need Logstash or Fluentd if I can send my logs directly to ELK using a Serilog sink in my API?
If I send to ELK directly using the Serilog sink, what happens if the connection to ELK is down? Will it save the logs temporarily and resend them?
I read in an article that Fluentd uses a persistent queue and Logstash doesn't, but why exactly is this queue needed? Say my app has one log file and it gets updated every second. Does Logstash send the whole file to ELK every second? Even if it fails, it can resend my log file to ELK, right? So why is a persistent queue needed here in the Fluentd vs. Logstash comparison?
I'd appreciate a clear explanation of this.
Why do I need Logstash or Fluentd if I can send my logs directly to ELK using a Serilog sink in my API?
If I send to ELK directly using the Serilog sink, what happens if the connection to ELK is down? Will it save the logs temporarily and resend them?
Question 2 answers question 1 here. FluentD has a battle-tested buffering mechanism to deal with ELK outages. Moreover, you don't want to use the app's threads for a task completely unrelated to the app itself, namely log shipping. That complicates your app and reduces portability.
I read in an article that Fluentd uses a persistent queue and Logstash doesn't, but why exactly is this queue needed? Say my app has one log file and it gets updated every second. Does Logstash send the whole file to ELK every second? Even if it fails, it can resend my log file to ELK, right? So why is a persistent queue needed here in the Fluentd vs. Logstash comparison?
Correct. FluentD has a buffer (https://docs.fluentd.org/configuration/buffer-section). It sends whatever arrived during the time period you've set in the match section (the buffer accumulates logs for that period). If the log backend (ELK) is down, it keeps storing the unsent log records in the buffer. Depending on the buffer size, this can ride out fairly severe log-backend outages. Once the log backend (ELK) is up again, all the buffered data is sent to it and you don't lose anything.
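As an illustration, a Fluentd output with a file-backed buffer could be configured roughly like this (a sketch that assumes the fluent-plugin-elasticsearch output plugin; the tag pattern, host, paths and sizes are placeholders):

<match app.**>
  @type elasticsearch
  host elk.example.com            # placeholder Elasticsearch host
  port 9200
  <buffer>
    @type file                    # file buffer survives Fluentd restarts
    path /var/log/fluentd/buffer/elk
    flush_interval 10s            # how often to try flushing to the backend
    retry_forever true            # keep retrying while the backend is down
    chunk_limit_size 8MB
    total_limit_size 1GB          # how much backlog may accumulate during an outage
  </buffer>
</match>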
Logstash's persistent queue is a similar mechanism, but they went further: in addition to the in-memory buffer you can put an external queue such as Kafka in the pipeline. FluentD is also capable of using such a queue when you use the Kafka input/output, and you still have a buffer on top of that in case Kafka itself goes down.
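For comparison, enabling Logstash's persistent queue is a settings change in logstash.yml rather than pipeline code (the path and size below are illustrative):

# logstash.yml
queue.type: persisted                  # default is "memory"
path.queue: /var/lib/logstash/queue    # directory holding the on-disk queue pages
queue.max_bytes: 1gb                   # cap on the disk space the queue may use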
I have syslog successfully forwarding logs to an upstream server like so:
$MainMsgQueueType LinkedList
$MainMsgQueueSize 10000
$MainMsgQueueDiscardMark 8000
$MainMsgQueueDiscardSeverity 1
$MainMsgQueueSaveOnShutdown off
$MainMsgQueueTimeoutEnqueue 0
$ActionQueueType LinkedList # in memory queue
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueSize 10000 # Only allow 10000 elements in the queue
$ActionQueueDiscardMark 8000 # Only allow 8000 elements in the queue before dropping msgs
$ActionQueueDiscardSeverity 1 # Discard Alert,Critical,Error,Warning,Notice,Info,Debug, NOT Emergency
$ActionQueueSaveOnShutdown off # do not save messages to disk on shutdown
$ActionQueueTimeoutEnqueue 0
$ActionResumeRetryCount -1 # infinite retries if host is down
$RepeatedMsgReduction off
*.* @@remoteserver.mynetwork.com:5544
On the remoteserver I have something that talks syslog and listens on that port. To test, I have a simple log client that logs 100 messages a second to syslog.
This all works fine, and I have configured the queues above so that in the event that the remoteserver is unavailable, the queues start filling up, and then eventually messages get discarded, thus safeguarding syslog from blocking its logging clients.
When I stop the remote log sink on remoteserver:5544, rsyslog remains stable (the queues fill up), and when I restart the remote log sink a while later, rsyslog detects the server again and re-establishes a TCP connection.
HOWEVER, rsyslog then forwards only one message to it, despite the queue holding many thousands of messages and the logging client continuing to log 100 messages a second.
How can I make rsyslog start forwarding messages again once it has detected that the remote server is back up (without restarting rsyslog)?
I am using rsyslog 4.6.2-2.
I am using, and want to keep using, TCP.
The problem, in case anybody comes across this, was that the work directory was set to:
$WorkDirectory /var/spool/rsyslog
And the above config does this:
$ActionQueueFileName fwdRule1
Even though it's supposed to be an in-memory queue. Because of this, when the queue reached 800 elements (bizarrely, not 8000), disk-assisted mode was activated and rsyslog attempted to write messages to /var/spool/rsyslog. That directory didn't exist. Randomly (so there must be a race condition, and a bug in rsyslog), after continually trying to open a queue file on disk in that directory, rsyslog got into a twisted state, gave up, and went back to queueing messages in memory until it hit the 10,000 high mark. Restarting the downstream log server failed to make it recover.
Taking out all references to ActionQueueFileName and making sure WorkDirectory exists fixed this issue.
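In other words, with rsyslog's legacy directives the working forwarding block is the earlier config minus the queue file name, or, if disk-assisted mode is genuinely wanted, the same config with a spool directory that actually exists. A sketch of both variants:

# Purely in-memory action queue: no ActionQueueFileName, so disk-assisted
# mode can never be triggered
$ActionQueueType LinkedList
$ActionQueueSize 10000
$ActionQueueDiscardMark 8000
$ActionQueueDiscardSeverity 1
$ActionQueueSaveOnShutdown off
$ActionQueueTimeoutEnqueue 0
$ActionResumeRetryCount -1
*.* @@remoteserver.mynetwork.com:5544

# Alternative: keep disk-assisted queueing, but create the spool directory
# first (mkdir -p /var/spool/rsyslog) and declare it before the queue settings
# $WorkDirectory /var/spool/rsyslog
# $ActionQueueFileName fwdRule1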