Does Syslog really have a 1KB message limit?

It seems Syslog has a 1KB message limit. Is this hardcoded into the Syslog protocol, or is this a parameter that can be set for each server?
I am hoping the article I read was out of date, so if you have any info please share.

This is correct, as can be seen in the syslog protocol RFC. This limit, along with other deficiencies in the syslog protocol, is the reason why modern syslog daemons such as rsyslog support enhanced protocols with features such as TCP transport, encryption, etc. There was also an effort within the IETF to standardize an improved syslog protocol, which resulted in RFC 5424, RFC 5425, and RFC 5426. There, the maximum message size that an implementation is required to support is relatively small (it depends on the transport layer), but implementations are allowed to support larger messages as well.

From my reading of the syslog protocol spec (well, draft standard), message packets can't be more than 1 KiB, but messages can be, via a fragmentation feature. RFC 5424, however, says the maximum message size depends on the transport, but is at least 480 octets.

Yes, but you can increase this limit to an arbitrary length by recompiling from source.
See instructions in this blog post I found about truncated syslog messages: http://bsdpants.blogspot.com/2010/08/truncated-syslog-messages.html
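For a concrete feel, here is a minimal Python sketch that sends an over-long classic (RFC 3164 style) message over UDP; the host, port, and padding are placeholders. Whether anything past 1024 bytes survives depends entirely on the receiving daemon:

    import socket

    HOST, PORT = "127.0.0.1", 514          # placeholder syslog daemon

    # PRI = facility * 8 + severity; 20 * 8 + 5 = 165 (local4.notice).
    msg = "<165>Oct 11 22:14:15 mymachine app: " + "x" * 2000
    print(len(msg))                        # well past the traditional 1024 bytes

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg.encode("ascii"), (HOST, PORT))
    sock.close()
    # A strict RFC 3164 receiver may truncate or drop the excess;
    # rsyslog's limit is configurable (see the blog post above).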

Syslog RFC5424 Vs RFC6587

I was reading around the net, but I wasn't able to find what the differences are between these protocols when a syslog message is sent, nor a proper example of what RFC 6587 log messages look like. If someone could share some insight regarding those two questions, that would be great.
The 2 RFCs are for different purposes.
RFC 5424 defines a "modern" log format with structural elements, while RFC 6587 can be considered as transport for such a log format over TCP.
RFC 6587 defines frames around syslog messages, and it also mentions/suggests RFC 5424 as payload:
https://datatracker.ietf.org/doc/html/rfc6587#section-3.4.1
SYSLOG-MSG is defined in the syslog protocol [RFC5424] and may
also be considered to be the payload in [RFC3164]
Example for RFC 5424:
<165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] BOMAn application event log entry...
RFC 6587 is just about framing, so the example would be the same, but with the length of the message prepended: MSG-LEN SP SYSLOG-MSG.
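For illustration, a minimal Python sketch of this octet-counted framing, using the RFC 5424 example message above (frame_octet_counted is just a hypothetical helper name):

    def frame_octet_counted(syslog_msg: bytes) -> bytes:
        # MSG-LEN SP SYSLOG-MSG: decimal length, a space, then the message.
        return str(len(syslog_msg)).encode("ascii") + b" " + syslog_msg

    msg = (b"<165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog - ID47 "
           b'[exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] '
           b"BOMAn application event log entry...")

    print(frame_octet_counted(msg))  # b'175 <165>1 2003-10-11T22:14:15.003Z ...'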

Icecast transport layer protocol - TCP or UDP?

I can't seem to find an answer, so I'm asking you.
Does a stock Icecast2 server use TCP or UDP to broadcast the streaming data? I know that it uses a custom HTTP-based application-layer protocol, so one might think it's TCP, but on the other hand it is a broadcast application, so UDP would seem more logical to me. If it uses TCP nonetheless, why does it do that?
Icecast and SHOUTcast both use TCP for both the source streams and streaming to end clients. There are many reasons this is beneficial:
The codecs used by most internet radio stations do not lend themselves well to losing chunks of data. If the stream is corrupted, whether by lost or out-of-order packets, the decoder can sometimes re-sync and continue, but many decoders will simply fail.
Most internet radio stations have no real latency requirement. Nobody knows or cares if they get the audio delayed by a few seconds. It is actually typical to crank up the buffer size to allow clients to start playback quickly, causing delays of 10-30 seconds.
It is important to be compatible with HTTP. I suspect that when Nullsoft originally built SHOUTcast, their goal was to get up and running as simply as possible, so it makes sense that they mimicked HTTP. I suspect that the reason Icecast and SHOUTcast are so popular is that it is easy to write a client for them, because the protocol is essentially HTTP. Now that web-based players are a reality (with Flash and even HTML5), it is critical that the protocol be compatible with HTTP, as many browsers do not support other streaming protocols. (Flash has its own protocol, but it is not nearly as simple as HTTP to implement.) If a client can play a file streamed from an HTTP server, it can stream from Icecast (and from SHOUTcast, if it is lenient in its HTTP implementation).
You mentioned broadcast... I don't know if you meant it in the sense of UDP broadcast packets, but those do not work well in practice over the internet. The only benefit of using UDP would therefore be reduced overhead, but for the reasons above, the few bytes saved don't outweigh the benefits of TCP for this type of application.
In short, this is not a telephony application where latency matters and custom clients can be used.
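To see the HTTP-compatibility point in action, here is a hedged Python sketch that talks to an Icecast mount with nothing but a raw TCP socket (the host, port, and mount point are placeholders for illustration):

    import socket

    HOST, PORT, MOUNT = "icecast.example.com", 8000, "/stream"  # placeholders

    sock = socket.create_connection((HOST, PORT))
    request = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (MOUNT, HOST)
    sock.sendall(request.encode("ascii"))

    # An ordinary HTTP status line and headers come back, then raw audio bytes.
    reply = sock.recv(4096)
    print(reply.split(b"\r\n\r\n")[0].decode("latin-1"))
    sock.close()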

IP header checksum: 0x0000

I have a JAX-RS web service which is secured via TLS. Since encryption is very important, I decided to check the network traffic with RawCap and analyze it with Wireshark. Doing this, I stumbled over the following message:
Header checksum: 0x0000 [incorrect, should be 0xac15 (may be caused by "IP checksum offload"?)]
What is the reason for this message?
Are there any further consequences?
I'm pretty sure that it isn't a problem with my RESTEasy client, because retrieving a resource via Firefox causes the same message.
This doesn't come from your application - it is caused by the TCP/IP stack. Many implementations do not (or do not always) fill in the header checksum, leaving it at 0x0000.
As Wireshark indicated, one reason for this is that some combinations of OS and NIC driver make the OS think the checksum will be filled in by the NIC (hardware-accelerated), when in fact it will not be.
This is not a real problem, as long as your transmission path is reliable. AFAIK it is not a security risk.
Was this an outgoing packet?
As the error message suggests, IP checksum offload is enabled. This means that the computer's TCP/IP stack does not calculate the checksum; instead, the NIC hardware does the calculation before sending the packet out.
This is not a real error. You can safely ignore it.
In this case, the checksum field has been ignored with no consequences. In general, however, the checksum field is intended to verify the integrity of a packet. An incorrect checksum generally indicates errors (possibly EMI) or loss of integrity, and may indicate a compromise.
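For reference, the IPv4 header checksum is just the ones' complement of the ones'-complement sum of the header's 16-bit words (RFC 1071). A small Python sketch, applied to a well-known sample header:

    def ip_checksum(header: bytes) -> int:
        if len(header) % 2:
            header += b"\x00"              # pad to whole 16-bit words
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]
        while total > 0xFFFF:              # fold carries back into the low 16 bits
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    # Sample header with its checksum field zeroed out:
    hdr = bytes.fromhex("45000073000040004011" + "0000" + "c0a80001c0a800c7")
    print(hex(ip_checksum(hdr)))  # 0xb861 -- what Wireshark would say it "should be"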

TCP Persistent Connections with HTTP?

So I thought that with HTTP 1.1 your TCP connection is sustained for as long as you are communicating with that server? How does it actually work - how does the TCP connection know when you are done writing into the socket? Any information would be awesome; I have done research, but I can't find what I'm looking for short of reading the RFC.
The typical implementation is that the HTTP server will have a timeout (typically called KeepAliveTimeout or such) after which it will close an idle connection.
On a server which reserves a thread or an entire process per connection (such as Apache with the usual mpm_prefork or mpm_worker), keepalives are usually disabled entirely or kept quite short (a few seconds). For an event-based server such as nginx, which uses much less memory per connection, the keepalive timeout can be left at a much higher value (typically a minute or so).
See section 8.1 of RFC 2616. Basically, HTTP 1.1 treats all connections as persistent, but the language of the RFC doesn't mandate this behaviour, since it uses the word "SHOULD". If it were mandated, it would use "MUST".
However, the RFC does not specify in detail how an implementation does this. As can be seen from the HTTP Persistent Connection page on Wikipedia, Apache's default timeout (beyond which it reclaims persistent connections for other uses) may be as low as five seconds (though this is almost certainly configurable, given all the other knobs and dials that Apache provides).
In other words, it's meant for numerous requests to the same address within a short time frame, so as to not waste time opening and closing a bucket-load of sessions where one will do. Increasing this timeout is not a "free ride", since resources are tied up while the connection is held open. In an environment where you expect lots of incoming clients, tying up these resources can be fatal to performance.
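To see persistence from the client side, here is a minimal Python sketch (the host is a placeholder); http.client speaks HTTP/1.1 and reuses the one TCP connection for both requests:

    import http.client

    conn = http.client.HTTPConnection("example.com", 80)  # placeholder host

    for i in range(2):
        conn.request("GET", "/")
        resp = conn.getresponse()
        resp.read()              # drain the body so the connection can be reused
        print(i, resp.status, resp.getheader("Connection"))

    conn.close()                 # the client closes when it is done writing;
                                 # the server would otherwise close the idle
                                 # connection after its keepalive timeout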

How much network overhead does TLS add compared to a non-encrypted connection?

(Approximately) how many more bits of data must be transferred over the network during an encrypted connection compared to an unencrypted connection?
IIUC, once the TLS handshake has completed, the number of bits transferred is equal to those transferred during an unencrypted connection. Is this accurate?
As a follow up, is transferring a large file over https significantly slower than transferring that file over http, given fast processors and the same (ideal) network conditions?
I've gotten this question a few times, so I decided to write up a small explanation of the overhead with some sample numbers based on a common case. You can read it on my blog at http://netsekure.org/2010/03/tls-overhead/.
Summary from blog post:
The total overhead to establish a new TLS session comes to about 6.5k bytes on average.
The total overhead to resume an existing TLS session comes to about 330 bytes on average.
The total overhead of the encrypted data is about 40 bytes.
The short answer is: Your Mileage May Vary (YMMV) - it all depends on your traffic pattern. There are a number of factors to take into account:
The additional handshakes and certificates will add 4-6 KB to the TCP stream, which will also result in more Ethernet frames going across the wire.
Clients should download the certificate revocation list. Some tools like cURL omit this step. The CRL may be cached by the browser; however, it usually doesn't have a long age associated with it. Verisign sets theirs to expire after four minutes. In my testing, I saw Safari on Windows downloading the same 91 KB file twice.
TLS session resumption can avoid the public key-exchange part of the handshake, as well as the certificate verification (see the sketch after this list).
HTTP keep-alives will keep the socket open, just as with plain http, but the savings are greater when the socket is TLS, since repeated handshakes are avoided.
SSL compression support is starting to show up server side, but AFAIK, most browsers aren't implementing this yet. Additionally, if you are already compressing at the http layer, not much will be gained here. Potentially large gains could be had if the client is sending large amounts of text to the server, which ordinarily isn't compressed at the http layer.
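On the session-resumption point above, a hedged Python sketch using the standard ssl module (the host is a placeholder; whether the second handshake is actually abbreviated depends on the TLS version and on the server handing out a session or ticket):

    import socket, ssl

    HOST = "example.com"                      # placeholder host
    ctx = ssl.create_default_context()

    with ctx.wrap_socket(socket.create_connection((HOST, 443)),
                         server_hostname=HOST) as first:
        session = first.session              # session/ticket from the server

    with ctx.wrap_socket(socket.create_connection((HOST, 443)),
                         server_hostname=HOST, session=session) as second:
        print("resumed:", second.session_reused)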
In 2020, TLS 1.2 and 1.3 are more typical, with AES-GCM being a low-overhead AEAD cipher mode.
See https://tools.ietf.org/id/draft-mattsson-uta-tls-overhead-01.xml#rfc.section.3.
Per packet, the overhead for AES-GCM is 29 bytes. The TCP MSS may be as large as 1460 bytes (https://blog.apnic.net/2014/12/15/ip-mtu-and-tcp-mss-missmatch-an-evil-for-network-performance/). So for a large download (where the maximum MSS is used), the overhead is 29:1431, or about 2.03%.
(Handshake overhead is separate, being a one-off cost.)
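Spelling out that arithmetic (the figures are the ones cited above):

    mss = 1460                 # maximum TCP payload per segment
    tls_overhead = 29          # per-record AES-GCM overhead cited above
    payload = mss - tls_overhead
    print("%.2f%%" % (100 * tls_overhead / payload))   # -> 2.03%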
An order of magnitude. This is not too significant if the information being protected is worth securing. And remember that processor speeds only go up, so performance will keep getting better.
