Suppose there is encrypted communication between A and B over an insecure medium, where A and B have shared a secret key using the DH protocol.
If A sends an encrypted message and the hash/MAC/HMAC of this message to B, wouldn't it be easy for an eavesdropper to just intercept the hash/MAC/HMAC, change some bits in it, and send it to B?
B wouldn't be able to verify the integrity of any message sent by A and would therefore discard every message he gets from A, right?
B would then effectively become unavailable?
Thank you
The process you describe is just a very specific form of corrupting the data. If an attacker can corrupt the data, then of course the attacker can prevent A from speaking to B. The attacker could just drop the packets on the ground. That would also prevent A from speaking to B.
Any data corruption, not just modifying the HMAC, will cause this same situation. If I modify the authenticated stream, then the (unmodified) HMAC won't match and it will be discarded.
The point of an HMAC is to ensure integrity. It has nothing to do with availability. Any Man-in-the-Middle can always trivially destroy availability in any system as long as the connection goes through them. (If they can't, they're not a MitM.)
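To make that concrete, here is a minimal sketch of what the receiver does (PHP-style; recv_message(), discard(), process() and decrypt() are hypothetical placeholders). Flipping bits in the ciphertext or in the HMAC makes no difference, the message is rejected either way:
// receiver side: recompute the HMAC over what actually arrived
[$ciphertext, $received_mac] = recv_message();                       // hypothetical helper
$expected_mac = hash_hmac('sha256', $ciphertext, $shared_key, true); // shared secret key from DH
if (!hash_equals($expected_mac, $received_mac)) {                    // constant-time comparison
    discard();            // integrity check failed; availability is simply not the HMAC's job
} else {
    process(decrypt($ciphertext));
}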
I need to secure a database that receives data from several clients at once. To maintain integrity while the data crosses the network, the messages need to be encrypted in some way. If the database server knows the encryption method, then it decrypts and stores the data; that is not acceptable here, because if a breach happens the message might be leaked somewhere along the network.
So I need some scheme like SSH: the client sends the data locked, the database server adds its own lock and sends it back, the client then removes its own lock, and finally the database server holds the message under its own lock only, which it can remove to read the data.
I don't care if the database server is breached. All I want is to maintain the integrity of the message while it travels through the network.
Any suggestions or references to achieve this will be appreciated.
I have two ESP8266 microcontroller boards:
Board A is running a HTTP server and is able to switch a relay by GET request from Board B, which is the HTTP client.
To ensure that only Board B, and nobody else, can switch the relay on Board A, I want to implement some kind of challenge-response authentication.
My idea was the following:
Board B asks Board A to switch the relay
Board A sends some random bytes as a challenge
Board B encrypts these raw bytes with the XTEA algorithm and returns the result to Board A
Board A deciphers the response from Board B and compares it with its own result. If the response arrives too late (e.g. after one second) or the response is invalid, the authentication will be aborted and a new challenge will be generated next time. If the response is valid the relay will switch and there will also be a new challenge for the next attempt.
So if an attacker is sniffing network communication, he will receive both the raw bytes and the encrypted ones.
My questions to you:
Is it (easily) possible to calculate the XTEA key if the attacker knows both the raw bytes and the encrypted ones?
Is the described method a reasonable solution for my problem?
Thanks in advance,
Chris
DISCLAIMER: I am not a cryptography expert.
Is it (easily) possible to calculate the XTEA key if the attacker knows both the raw bytes and the encrypted ones?
Nope, as far as I know you would still have to brute-force the key (and the number of rounds used), at least if you're using 19 rounds or more: the only known cryptographic attacks on XTEA, as of 2009, affect 18 rounds or fewer. Given that the default and recommended number of rounds is 32, that shouldn't be an issue unless you use a custom, low number of rounds... like 18.
Is the described method a reasonable solution for my problem?
Your protocol is vulnerable to bit-flipping attacks by a MITM attacker, and it provides no protection against snooping/monitoring: a MITM attacker will know what command you're giving and will be able to change it. Both problems can be avoided fairly easily.
I think it would be better if the client just asks for the random bytes as a token and then sends the actual command together with the token, encrypted. This protects your command from snooping: a MITM attacker cannot deduce what command you sent even IF they know how the protocol works, because the token now serves as a salt for the encrypted command. But you are still vulnerable to bit-flipping from a MITM attacker even if they don't know the key, so you should also add a checksum to make sure the ciphertext has not been tampered with. For the client, how about something like:
// pre-shared 128-bit XTEA key, known only to client and server
$data = encrypt("switch_relay(5);", $key); // or whatever the actual command is

function encrypt(string $command, string $key) {
    // because of XTEA length padding, the server needs to know the inner command length,
    // so prepend a big-endian 16-bit size header (pack('n', ...) = unsigned 16-bit big-endian)
    $data = pack('n', strlen($command)) . $command;
    // get a unique one-time token; this serves as a salt AND protects against replay attacks
    $token = fetchToken();
    // prepend the token
    $data = $token . $data;
    // calculate a checksum to protect against bit-flipping attacks; any checksum strong enough
    // to detect random bit-flips from attackers who can't decrypt-modify-encrypt (they don't know
    // the key) will do, see https://en.wikipedia.org/wiki/Malleability_(cryptography) and
    // https://en.wikipedia.org/wiki/Bit-flipping_attack
    $checksum = hash('adler32', $data, true); // raw 4-byte checksum
    // prepend the checksum
    $data = $checksum . $data;
    // encrypt everything with XTEA (random padding, 32 rounds)
    $data = XTEA::encrypt($data, $key, XTEA::PAD_RANDOM, 32);
    return $data;
}
After this I would normally add another size header so the server knows how many bytes to read for the entire packet, but since you say you're using HTTP, I assume the Content-Length: X header will act as the outer size header. (If you don't, you should probably do another $data = pack('n', strlen($data)) . $data; after XTEA-encrypting it.)
And for the server, something like:
function decrypt(string $data, string $key) {
    // expected plaintext layout: 4-byte checksum | 8-byte token | 2-byte inner length | command | padding
    if (strlen($data) < (4 + 8 + 2) || strlen($data) % 8 !== 0) {
        // can't be an XTEA-encrypted command, wrong length
        return ERR_INVALID_LENGTH;
    }
    $data = XTEA::decrypt($data, $key, 32);
    $checksum = substr($data, 0, 4);
    $data = substr($data, 4);
    if (hash('adler32', $data, true) !== $checksum) {
        // checksum failure: not an XTEA-encrypted command, or it was corrupted or tampered with
        return ERR_INVALID_CHECKSUM;
    }
    $token = substr($data, 0, 8);
    $data = substr($data, 8);
    if (!is_valid_token($token)) {
        return ERR_INVALID_TOKEN;
    }
    // unpack('n', ...) reads the big-endian unsigned 16-bit inner size header
    $inner_size = unpack('n', substr($data, 0, 2))[1];
    $data = substr($data, 2);
    if (strlen($data) < $inner_size) {
        return ERR_INVALID_INNER_SIZE;
    }
    // strip the random padding bytes added at encryption time
    $data = substr($data, 0, $inner_size);
    return $data; // the actual decrypted command
}
Something like that?
(I still see three potential issues with this. 1: forward secrecy is not provided; for that you'd need something much more complex, I think. 2: an attacker could maybe DoS you by requesting one-time tokens until you run out of RAM, preventing legitimate clients from getting tokens, but given the token lifetime of one second it would have to be a continuous, active attack, and it stops working once the attacker is blocked/removed. 3: if your commands can be larger than 65535 bytes you may want to switch to a 32-bit size header, or to a 64-bit one if they can be over 4 GB, and so on; but if your commands are small, a 16-bit size header capped at 65535 bytes should suffice.)
I understand the end-to-end principle from the classic MIT paper, which states that executing a function between two remote nodes should not depend on the state of the nodes in between.
But what is end-to-end encryption, end-to-end guarantees, end-to-end protocols, etc...? I couldn't find a precise definition of end-to-end. The term seems to be over-used.
In other words, when one describes a system property X as end-to-end, what does it mean? What is the opposite of end-to-end?
I don't think end-to-end is over-used. It merely says that the property holds from one end to the other. An "end" can be a node or a layer in the computing stack.
Consider three nodes: A, B and C. Node A wants to talk with C. B sits between A and C and forwards messages between them. B is, for example, a load balancer or a gateway.
Encryption is end-to-end if B cannot read or tamper with messages sent from A to C. A concrete example: A is your laptop and C is a remote machine in your network at home or at work, while B is a VPN gateway. The encryption here is not end-to-end, because only the link between A and B is actually encrypted. An attacker sitting between B and C would be able to read the clear text. That might be fine in practice, but it is not end-to-end.
Another example. Say we don't care about encryption, but about reliable message transmission. You know that the network might corrupt bits of messages. Therefore, TCP and other protocols have a checksum field that is checked whenever messages are received. But the guarantees of these checksums are not necessarily end-to-end.
If A sends a message m to C relying on TCP's checksum, a node B sitting in the middle could corrupt the message in an undetectable way. Abstracting away most details, node B basically (1) receives m, (2) checks m's checksum, (3) finds the route to C and creates a new message with m's payload, (4) calculates a new checksum for m, and (5) sends m (with the new checksum) to C. Now, if node B corrupts the message after (2) but before (4), the resulting message arriving at C is corrupted, but that cannot be detected by looking at m's checksum! Therefore, such a checksum is not end-to-end. Node B does not even have to be malicious; such corruption can be caused by hardware errors or, more likely, by bugs in node B. This has happened more than once in Amazon's S3 service, for example.
The solution is, obviously, to use application-level checksums, which are end-to-end. Here, a checksum of m's payload is appended to the payload before calculating the lower layer checksum.
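A minimal sketch of that idea in PHP (get_payload(), send_over_tcp(), recv_over_tcp() and reject() are hypothetical placeholders; if B could be actively malicious you would want a MAC rather than a bare hash):
// sender (node A): append an application-level checksum to the payload itself,
// so it survives any re-framing and re-checksumming done by intermediate nodes like B
$payload = get_payload();
$packet  = $payload . hash('sha256', $payload, true);   // 32 raw bytes appended
send_over_tcp($packet);                                  // TCP still adds its own hop-by-hop checksum

// receiver (node C): verify the end-to-end checksum independently of TCP's
$data     = recv_over_tcp();
$payload  = substr($data, 0, -32);
$checksum = substr($data, -32);
if (!hash_equals(hash('sha256', $payload, true), $checksum)) {
    reject();   // corruption introduced anywhere between A and C is caught here
}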
Good morning everyone.
I've been reading (most of it here on Stack Overflow) about how to implement secure password authentication (hashing n times, using a salt, etc.), but I'm in doubt about how to actually implement it in my TCP client-server architecture.
I have already implemented and tested the methods I need (using jasypt digester), but my doubt is where to do the hashing and its verification.
From what I read, a good practice is to avoid transmitting the password. In this case, the server would send the hashed password and the client would compare it with the one entered by the user. After that I would have to tell the server whether the authentication was successful or not. OK, this won't work, because anyone who connects to the socket the server is reading and sends an "authentication ok" will be logged in.
The other option is to send the password's hash to the server. In this case I don't see any actual benefit from hashing, since the "attacker" would just have to send the same hash to authenticate.
Probably I'm not getting some details, so, can anyone give me a light on this?
The short answer to your question: do the verification on the side that permanently stores the hashes of the passwords.
The long answer: hashing passwords only prevents an attacker with read-only access to your password storage (e.g. the database) from escalating to higher privilege levels, and prevents you from knowing the actual secret password, which matters because lots of users reuse the same password across multiple services. That is why you need to do the validation on the storage side (because otherwise, as you've mentioned, the attacker would just send a "validation ok" message and that's it).
However, if you want to implement a truly secure connection, simple password hashing is not enough (as you've also mentioned, an attacker could sniff the TCP traffic and recover the hash). For this purpose you need to establish a secure connection, which is much harder than just hashing passwords (in the web world, a page where you enter your password should always be served over HTTPS). SSL/TLS should be used for this; however, these protocols sit on top of TCP, so you might need another solution (in general, you need a trusted certificate source, you need to validate the server certificate, generate a shared symmetric encryption key, and then encrypt all the data you send). Once you've established a secure encrypted connection, the encrypted data is useless to sniff and the attacker will never learn the hash of the password.
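As a concrete sketch of "do the validation on the storage side": PHP's built-in password_hash()/password_verify() handle the salting and hashing for you (grant_session() and reject_login() are placeholders for your own session handling; the transport still has to be TLS-protected as described above):
// at registration time (server side): store only the salted hash, never the password
$stored_hash = password_hash($password_from_client, PASSWORD_DEFAULT);

// at login time (server side): the client sends the password over the encrypted
// connection and the SERVER does the comparison, so a client can never just
// claim "authentication ok" on its own
if (password_verify($password_from_client, $stored_hash)) {
    grant_session();
} else {
    reject_login();
}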
I have an application in which I have to send several small pieces of data per second over the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt this data and ensure that what I am doing is as secure as possible.
Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated.
Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (prevent dictionary attack on the passphrase), and of course use IVs (to prevent statistical analysis for packets).
However, I am concerned about a few things:
Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (making it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt helps, but I have to send the salt over the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this might take some time, once the key is cracked all the stored data can be decrypted, which would be a real problem for my application.
So my question is, what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)?
Is my way the best way to do it? Is it flawed? Overkill?
Please note that I am not asking for a 100% secure solution, as there is no such thing.
You have several choices. You can use DTLS, which is a version of TLS adapted for datagrams. It is specified in an RFC and implemented in the OpenSSL library. You can also use the IKE/IPsec protocol with a UDP encapsulation of the IPsec portion; IPsec is usually available at the OS level. You can also use OpenVPN, which looks to be a hybrid of TLS for key exchange and its own UDP-based packet encryption protocol.
If your problem is that the data is too small, how about extending the data with random bytes? This will make the plaintext much harder to guess.
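For example (a sketch only; a length prefix lets the receiver strip the padding again, and the random fill hides how much real data a packet carries, but it is no substitute for authenticated encryption):
// pad a short plaintext (say two packed 32-bit integers) with random bytes up to a
// fixed size before encrypting, so every packet body has the same length and is
// much harder to guess
$plain  = pack('NN', $value1, $value2);                                    // 8 bytes of real data
$padded = pack('n', strlen($plain)) . $plain . random_bytes(64 - 2 - strlen($plain));
// $padded is always 64 bytes; this is what gets encrypted and sent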
This question is a little old, but what about using a one-time-pad type approach? You could use a secure, reliable transport mechanism (like HTTPS) to transmit the one-time keys from the server to your client. There could be two sets of keys: one for client to server, and one for server to client. Each datagram would then include a sequence number (used to identify the one-time key) followed by the encrypted message. Because each key is used for only one datagram, you shouldn't be exposed to the small-data problem. That said, I'm not an expert at this stuff, so definitely check this idea out before using it...
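A rough sketch of that idea (next_sequence_number() and the pre-shared $otp_keys array are placeholders; this is only safe if every key is truly random, at least as long as the message, and never reused):
// sender: XOR the message with the one-time key selected by the sequence number
$seq      = next_sequence_number();
$cipher   = $message ^ substr($otp_keys[$seq], 0, strlen($message));   // byte-wise XOR in PHP
$datagram = pack('N', $seq) . $cipher;                                  // sequence number travels in the clear
// receiver: $seq = unpack('N', substr($datagram, 0, 4))[1];
//           $message = substr($datagram, 4) ^ substr($otp_keys[$seq], 0, strlen($datagram) - 4);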
Use an ECDH key exchange (with a password used to encrypt the client's private key, which stays on the client) instead of a password. This gives you a very strong key.
AES-CBC does not help you: the messages are too short and you want to prevent replay attacks. Pad your 64-bit message (two integers) with a 64-bit counter (starting at 0); 64 bits means 2^64 messages can be sent. Encrypt the block twice (AES-ECB) and send e(k; m|count) | e(k; e(k; m|count)). The receiver only accepts monotonically increasing counts where the second block is the encryption of the first. These are 32-byte messages that fit fine in a UDP packet.
If 2^64 messages is too few, see whether your message could be smaller (3-byte integers mean the counter can be 80 bits), or go back to step 1 (new private keys for at least one side) once you are close to the limit (say at 2^64 - 2^32).
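A sketch of that construction in PHP with the OpenSSL extension (this is the scheme described above, not a standard AEAD mode; OPENSSL_ZERO_PADDING just disables padding, which is fine because the block is exactly 16 bytes):
// one 16-byte block: two 32-bit integers plus a 64-bit monotonically increasing counter
$block = pack('NN', $int1, $int2) . pack('J', $counter);   // 'J' = unsigned 64-bit big-endian
$c1 = openssl_encrypt($block, 'aes-128-ecb', $key, OPENSSL_RAW_DATA | OPENSSL_ZERO_PADDING);
$c2 = openssl_encrypt($c1,    'aes-128-ecb', $key, OPENSSL_RAW_DATA | OPENSSL_ZERO_PADDING);
$packet = $c1 . $c2;   // 32 bytes: the receiver decrypts, checks that c2 == e(k; c1)
                       // and that the counter is strictly greater than the last one seen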
You could always generate a fresh pair of IVs and send them alongside the packet.
These days a good stream cipher is the way to go. ChaCha20 generates its keystream directly, so there is nothing to pad; block ciphers are the ones that need padding.
Still, that's only part of the picture. Don't roll your own crypto. DTLS is probably a mature option. Also consider QUIC, which is now emerging for general availability on the web.
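If you do end up encrypting individual datagrams yourself rather than using DTLS or QUIC, a ready-made AEAD such as libsodium's ChaCha20-Poly1305 (shown here via PHP's sodium extension purely as an illustration) is the kind of primitive to reach for; the sketch assumes the key was already shared out of band, and the nonce must never repeat for a given key:
// per-datagram authenticated encryption with ChaCha20-Poly1305 (libsodium)
$key   = sodium_crypto_aead_chacha20poly1305_ietf_keygen();
$nonce = random_bytes(SODIUM_CRYPTO_AEAD_CHACHA20POLY1305_IETF_NPUBBYTES);   // 12 bytes, unique per packet
$ct    = sodium_crypto_aead_chacha20poly1305_ietf_encrypt($payload, '', $nonce, $key);
$datagram = $nonce . $ct;   // ship the nonce with the packet; any tampering makes decryption fail
// receiver:
// $pt = sodium_crypto_aead_chacha20poly1305_ietf_decrypt(substr($datagram, 12), '', substr($datagram, 0, 12), $key);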
Consider using ECIES stateless encryption, https://cryptopp.com/wiki/Elliptic_Curve_Integrated_Encryption_Scheme, where your sending devices use the public key of the central system together with an ephemeral key to derive a shared symmetric key, run it through a KDF, and then encrypt with AES-256-GCM. You end up with modest-size packets that are stateless and self-contained. There is no need for an out-of-band key agreement protocol.
There are good examples on the internet, for example: https://github.com/insanum/ecies/blob/master/ecies_openssl.c
I am using such a system to deliver telemetry from mobile devices over an insecure channel.
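PHP doesn't ship an ECIES implementation, but libsodium's sealed boxes (shown here via PHP's sodium extension) give a very similar shape, an ephemeral key plus KDF plus authenticated encryption with no per-device state, if you don't need the exact NIST-curve construction from the links above. A sketch:
// central system, once: generate a long-term keypair and distribute only the public key to devices
$keypair    = sodium_crypto_box_keypair();
$public_key = sodium_crypto_box_publickey($keypair);

// sending device: ephemeral key agreement + authenticated encryption, nothing to store afterwards
$packet = sodium_crypto_box_seal($telemetry, $public_key);

// central system: open with the full keypair; returns false if the packet was tampered with
$telemetry = sodium_crypto_box_seal_open($packet, $keypair);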