Does the QUIC transfer protocol need TLS? - tcp

If I don't need secure data,
can I use TLS without QUIC? Why must TLS be used with QUIC?
Thanks.

I assume you meant:
Can I use QUIC without TLS?
Yes and no.
Both RFC 9000 and RFC 9001 (the QUIC standard) require QUIC connections to be secured, specifically with TLS 1.3. It is also highly unlikely that you will ever see a QUIC server on the internet that is not encrypted.
On the other hand, nothing actually stops you from doing whatever you want in your own code, including running QUIC without encryption, and there already are implementations that allow it.
That being said, I encourage you to always use encryption, even for testing or development; it is good practice. Encryption is mandatory for production anyway (without it the protocol is not safe, unless you run it in an unusual setup such as purely locally with no network), and keeping your environments as close to each other as possible helps you avoid subtle issues.
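A common middle ground for local development is to keep TLS but relax certificate verification on the client side. A rough sketch, assuming the third-party aioquic Python library and a QUIC test server on localhost:4433 (the port and ALPN value are illustrative; check aioquic's documentation for exact signatures):

```
import asyncio
import ssl

from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def main() -> None:
    # TLS 1.3 is still used (QUIC requires it); we only skip certificate
    # verification, which is acceptable for a local test server only.
    config = QuicConfiguration(is_client=True, alpn_protocols=["hq-interop"])
    config.verify_mode = ssl.CERT_NONE

    async with connect("localhost", 4433, configuration=config) as client:
        await client.ping()  # round-trip over the encrypted QUIC connection
        print("connected and pinged over QUIC + TLS 1.3")

asyncio.run(main())
```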

Related

Using websocket over raw tcp even when no web browser is involved: good idea?

After reviewing the differences between raw TCP and websocket, I am thinking of using websocket, even though it will be a client/server system with no web browser in the picture. My motivation stems from:
websocket is message-oriented, so I do not have to design a protocol on top of the TCP layer to delimit messages myself.
The initial handshake of websocket fits my use case well, as I can authenticate/authorize the user in this initial request-response exchange.
Performance matters a lot here, though. I am wondering whether, excluding the websocket handshake, there would be a loss of performance with websocket messages versus a custom protocol on raw TCP. If not, then websocket is the most convenient choice for me, even if I don't use the benefits related to the "web" part.
Also would using wss change the answer to the above question?
You are basically asking whether using an already-implemented library, which fits your requirements and even offers secure connections (wss), is better than designing and implementing your own message-based protocol on top of TCP, assuming that performance and overhead are not relevant for your use case.
If you rephrase your question this way, the answer should be obvious: using an existing implementation that fits your purpose saves you a lot of time and hassle in design, implementation, and testing. It is also easier to train developers to use the protocol, and easier to debug problems, since common tools like Wireshark already understand it.
Apart from this, websockets have an established mechanism for working with proxies and use a common protocol, so they pass through firewalls more easily, etc. You will therefore likely run into fewer problems when rolling out your application.
In other words: I can see no reason why you should not use websockets if they fit your purpose.
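For comparison, the hand-rolled alternative mentioned above, delimiting messages yourself on raw TCP, looks roughly like this. A minimal length-prefixed framing sketch using only the Python standard library; the helper names are illustrative, not from any existing codebase:

```
import socket
import struct

# Each message is sent as a 4-byte big-endian length header followed by
# the payload bytes, so the receiver knows where one message ends.

def send_message(sock: socket.socket, payload: bytes) -> None:
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

This is exactly the kind of framing, plus handshake, error handling, and keep-alive logic, that a websocket library already provides.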

How to add HTTP/2 in G-WAN

I would like to know if it's possible to make G-WAN 100% compatible with HTTP/2, for example by using the nghttp2 library (https://nghttp2.org).
Sorry for the late answer - for some reason Stack Overflow did not notify us of this question, and I only found it because a more recent one was notified.
I have not looked at this library so I can't tell for sure if it can be used without modifications, but it could certainly be used as the basis of an event-based G-WAN protocol handler.
But, from a security point of view, there are severe issues with HTTP/2, and this is why we have not implemented it in G-WAN: HTTP/2 lets different servers share the same TCP connection - even if they weren't listed in the original TLS certificate.
That may be handy for legitimate applications, but it is a problem for security: DoH (DNS over HTTPS) prevents users from blocking (or even detecting) unwanted hosts at the traditionally used DNS-request level (the "hosts" file in various operating systems).
In fact, this new HTTP standard defeats the purpose of SSL certificates, as well as domain-name monitoring and blacklisting.
Is it purely a theoretical threat?
Google ads have been used in the past to inject malware designed to attack both the client and server sides.

Varnish to be used for https

Here's the situation: I have clients on a secured network (HTTPS) that talk to multiple backends. Now I want to set up a reverse proxy, mainly for load balancing (based on header data or cookies) and a little caching, so I thought Varnish could be of use.
But Varnish does not support SSL connections. As I've read in many places, quoting, "Varnish does not support SSL termination natively". However, I want every connection, i.e. client-to-Varnish and Varnish-to-backend, to be over HTTPS. I cannot have plaintext data anywhere on the network (there are restrictions), so nothing else can be used as an SSL terminator (or can it?).
So, here are the questions:
Firstly, what does it mean (if someone can explain in simple terms) that "Varnish does not support SSL termination natively"?
Secondly, is this scenario a good fit for Varnish?
And finally, if Varnish is not a good contender, should I switch to some other reverse proxy? If yes, which one would suit this scenario (HAProxy, Nginx, etc.)?
what does this mean (if someone can explain in simple terms) that "Varnish does not support SSL termination natively"
It means Varnish has no built-in support for SSL. It can't operate in a path with SSL unless the SSL is handled by separate software.
This is an architectural decision by the author of Varnish, who discussed his thoughts on integrating SSL into Varnish back in 2011.
He based this on a number of factors, not least of which was wanting to do it right if it were done at all, while observing that the de facto standard SSL library, OpenSSL, is a labyrinthine collection of over 300,000 lines of code, and he was confident neither in that code base nor in the likelihood of a favorable cost/benefit ratio.
His conclusion at the time was, in a word, "no."
That is not one of the things I dreamt about doing as a kid and if I dream about it now I call it a nightmare.
https://www.varnish-cache.org/docs/trunk/phk/ssl.html
He revisited the concept in 2015.
His conclusion, again, was "no."
Code is hard, crypto code is double-plus-hard, if not double-squared-hard, and the world really don't need another piece of code that does an half-assed job at cryptography.
...
When I look at something like Willy Tarreau's HAProxy I have a hard time to see any significant opportunity for improvement.
No, Varnish still won't add SSL/TLS support.
Instead in Varnish 4.1 we have added support for Willys PROXY protocol which makes it possible to communicate the extra details from a SSL-terminating proxy, such as HAProxy, to Varnish.
https://www.varnish-cache.org/docs/trunk/phk/ssl_again.html
This enhancement could simplify integrating Varnish into an environment with encryption requirements, because it provides another mechanism for preserving the original client's identity in an offloaded SSL setup.
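To make that concrete, the PROXY protocol (version 1) is just a single text line that the SSL-terminating proxy sends at the very start of the backend TCP connection, before any HTTP bytes, carrying the original client's address. A small Python sketch of what that preamble looks like (addresses and ports are illustrative):

```
def proxy_v1_header(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> bytes:
    # PROXY protocol v1, IPv4 form: one ASCII line terminated by CRLF,
    # prepended to the TCP stream by the terminating proxy (e.g. HAProxy).
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode("ascii")

# What the proxy would send for a browser at 203.0.113.7:51534 reaching the
# proxy's public address 198.51.100.10 on port 443:
print(proxy_v1_header("203.0.113.7", "198.51.100.10", 51534, 443))
# b'PROXY TCP4 203.0.113.7 198.51.100.10 51534 443\r\n'
```

Varnish, when configured to accept the PROXY protocol, reads this line and then treats the rest of the stream as a normal request from the original client address.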
is this scenario good to implement using varnish?
If you need Varnish, use it, being aware that SSL must be handled separately. Note, though, that this does not necessarily mean unencrypted traffic has to traverse your network... it does, however, make for a more complicated and CPU-hungry setup.
nothing else can be used as SSL-Terminator (or can be?)
SSL can be offloaded on the front side of Varnish and re-established on the back side, all on the same machine running Varnish but in separate processes, using HAProxy, stunnel, nginx, or other solutions in front of and behind Varnish. Any traffic in the clear then stays within the confines of a single host, so it is arguably not a point of vulnerability if the host itself is secure, since it never leaves the machine.
if varnish is not a good contender, should I switch to some other reverse proxy
This is entirely dependent on what you want and need in your stack, its cost/benefit to you, your level of expertise, the availability of resources, and other factors. Each option has its own set of capabilities and limitations, and it's certainly not unheard-of to use more than one in the same stack.

Other common protocols besides HTTP?

I usually pass data between my web servers (in different locations) using HTTP requests (sometimes over SSL if the data is sensitive). I was wondering if there are any lighter protocols that I could swap in for HTTP(S) that would also support public/private keys, like SSH or something.
I've used PHP sockets to build an SMTP client before, so I wouldn't mind doing that if required.
There are lots and lots and lots of protocols. Lots. Start here for a list.
http://en.wikipedia.org/wiki/Internet_Protocol_Suite
SFTP is fun for passing data around. It works well. You'll find that it's not much better than HTTP, however, because HTTP is pretty simple.
http://en.wikipedia.org/wiki/SSH_file_transfer_protocol
SMTP would work. http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol
SNMP can be made to work. http://en.wikipedia.org/wiki/Simple_Network_Management_Protocol You have to really push the envelope.
All of these, however, involve TCP/IP sockets, which carry a fair amount of overhead because of connection negotiation and packet acknowledgement.
If you want real fun with very low overhead, use UDP.
http://en.wikipedia.org/wiki/User_Datagram_Protocol
You might want to use Reliable UDP if you're worried about messages getting dropped.
http://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol
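If you do go the UDP route, the basic exchange is only a few lines with the standard library. A minimal sketch (the loopback address and port are arbitrary); note there are no delivery or ordering guarantees, which is exactly where the low overhead comes from:

```
import socket

# Receiver: bind a datagram socket; no listen/accept, no connection state.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))

# Sender: fire off a datagram; no handshake beforehand.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", 9999))

data, addr = server.recvfrom(4096)
print(data, addr)  # b'hello' and the sender's (ip, port)

client.close()
server.close()
```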
I'd like to mention XMPP in addition to protocols already listed in other answers.
It's lightweight, and it is used in some "realtime" communication systems (for example, in GTalk).
WebSocket is a good option if you are interested in keeping a connection open to pass multiple messages back and forth. It's useful for issuing updates from the server to clients in real time, for example.
Why don't you simply use FTPS:
http://en.wikipedia.org/wiki/FTPS
or SFTP
http://en.wikipedia.org/wiki/SSH_file_transfer_protocol

Still need checksum in application protocol when tcp/ip already has it?

I am designing an application protocol, and I am wondering if I still need to include a checksum in the protocol since TCP/IP already has one.
What's your opinion?
The BitTorrent protocol has a heavy amount of additional error correction and detection layered on top of TCP, so clearly the protocol designers saw the need for it.
The TCP checksum is quite weak (a 16-bit ones'-complement sum), so you probably want an application-level one if you are at all worried about reliability.
In particular, the TCP checksum is not a secure hash and there is no signature, so if you're worried about malicious changes, you need to add that security yourself.
To add to the other answers, you should probably look into Message Authentication Codes (MACs). A MAC is a more robust way to detect errors than a simple TCP checksum.
If you want something robust, take a look at HMAC. HMAC provides both error detection and authentication (via shared keys).
If you want something quick and dirty, why not use SHA-1 hashes?
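A minimal HMAC sketch using Python's standard library, following the suggestion above (the key and message are placeholders); unlike a bare SHA-1 digest or the TCP checksum, the tag cannot be recomputed by an attacker who does not hold the shared key:

```
import hashlib
import hmac

KEY = b"shared-secret-key"  # placeholder; distribute out of band in practice

def sign(message: bytes) -> bytes:
    # HMAC-SHA256 tag over the message
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer 100 units to account 42"
tag = sign(msg)
assert verify(msg, tag)
assert not verify(msg + b"0", tag)  # any modification invalidates the tag
```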
