I'm using encryption (Blowfish, a symmetric cipher) to send a packet. Is it a bad idea, from a security point of view, to have a header at the beginning of the packet (encrypted along with the rest of the packet) that I can use to verify that the packet is valid?
Pseudo code example:
byte[] verificationHeader = [1, 2, 3, 4, 5];
receive(packet);
decrypt(packet);
if (packet.getData().beginsWith(verificationHeader)) {
    // assume the packet is good; try to do something with it
} else {
    // drop the packet
}
I want to verify it because any other application could be broadcasting in my group and I don't want to get mixed up with other stuff.
Could it potentially help a hacker decrypt my packet?
If it is a bad idea, can you suggest an alternative?
At least in theory, it's a pretty bad idea -- it gives somebody doing a brute-force attack a known "target", so when/if they get the right key they know it (and quickly at that).
At least from a viewpoint of security, it would be much better to leave that part in plaintext. It might be more practical as well -- it saves you from decrypting something if it's not going to be useful anyway.
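For what it's worth, here is a minimal sketch of that plaintext-marker variant (my illustration, not the poster's code; MAGIC and handle_packet are made-up names), written in Python for brevity:
MAGIC = bytes([1, 2, 3, 4, 5])

def handle_packet(packet: bytes) -> None:
    if not packet.startswith(MAGIC):
        return                              # not ours: drop it without decrypting
    ciphertext = packet[len(MAGIC):]
    # hand ciphertext to your Blowfish decrypt() here and process the result
The marker costs nothing to check, and packets from other applications never reach the decryption step.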
I'm not sure whether it's a bad idea or not, but if you decide it is, maybe you could send a random number followed by that random number XORed with the header; on the other side, you XOR the received value with the random number again to recover the header. That way you aren't sending the same plaintext every time, and it is all encrypted in any case. I think my math is right, but the idea holds either way.
i.e.
value = random ^ header;
send(random, value);
header = random ^ value;
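A rough Python sketch of that nonce-masking idea (my illustration; HEADER stands in for the poster's verification header, and the nonce/masked pair would travel inside the encrypted packet):
import os

HEADER = bytes([1, 2, 3, 4, 5])

def mask_header():
    nonce = os.urandom(len(HEADER))
    masked = bytes(n ^ h for n, h in zip(nonce, HEADER))
    return nonce, masked                    # send both, inside the encrypted payload

def unmask(nonce, masked):
    return bytes(n ^ m for n, m in zip(nonce, masked))

nonce, masked = mask_header()
assert unmask(nonce, masked) == HEADER
Because XOR is its own inverse, the receiver recovers the header exactly; with a plain OR that step would not be reversible.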
Why can't I use a one-time pad (OTP) to encrypt more than one message if, after the XOR, I apply something like a substitution/Caesar cipher to the ciphertext?
Reusing a one-time-pad is bad because it gives you information about the key.
p: a plaintext message to be encrypted: p_1 p_2 ... p_n
e_i: encryption of p_i with key k_i
otp: e_i = p_i ^ k_i for i in 1..n
If you encrypt multiple messages with the same pad and XOR the resulting ciphertexts together, you get something like
e1_1 ^ e2_1 = p1_1 ^ k_1 ^ k_1 ^ p2_1
and since k_1 ^ k_1 cancels, that becomes
e1_1 ^ e2_1 = p1_1 ^ p2_1
So you instantly learn information about the messages, but if you happened to know something about the input, you also learn something about the key.
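A quick Python demonstration of that cancellation (illustration only; the messages and pad are made up):
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(16)
p1 = b"meet me at noon."
p2 = b"the cake is lie!"
c1, c2 = xor(p1, pad), xor(p2, pad)

# the pad cancels: an eavesdropper learns p1 ^ p2 without ever seeing the key
assert xor(c1, c2) == xor(p1, p2)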
By something like Caesar cipher you might mean
e2_1 = p2_1 ^ (k_1 + 13)
That's assuming a 26-letter alphabet for your key and message space.
Unfortunately after 2 messages, your key wraps again, and you're back to the same problem you had before.
(there are other big problems too)
More generally, whatever simple thing you do, you give away information about the messages and typically about the key. The attacker can often set up a big matrix of equations and use linear algebra to solve for the key once you give them enough information.
However, if you take the simple thing you're doing and make it more and more complex, and eventually get to a point where
k_n: the key for the nth message
k_n = f(k, n) for some function f
such that an attacker cannot learn significant information about f(k, n) given f(k, m) for n != m, you've invented a stream cipher.
People do use stream ciphers all the time; they are not as secure as a true OTP, but they are a cornerstone of internet security.
The trick, of course, is figuring out a good function f; describing how to do that is beyond the margin of this question. (And besides, I don't actually have that skill.)
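To make the f(k, n) idea concrete, here is a toy sketch in Python that derives a fresh keystream for each message from a master key, using HMAC as a pseudo-random function. It is an illustration of the structure, not a vetted cipher; real designs such as AES-CTR or ChaCha20 do essentially this with a dedicated, heavily analysed primitive.
import hashlib, hmac

def keystream(master_key, n, length):
    # key material for message n, generated block by block
    out = b""
    counter = 0
    while len(out) < length:
        msg = n.to_bytes(8, "big") + counter.to_bytes(8, "big")
        out += hmac.new(master_key, msg, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(master_key, n, plaintext):
    ks = keystream(master_key, n, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))
Decryption is the same XOR with the same keystream; the crucial property is that no key material is ever reused, as long as n never repeats.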
I've done some research into this, but I'm still not sure why this cannot be implemented. Provided we share an initial OTP, possibly via USB or some other physically secure method, surely we can include the next one in the messages that follow.
[Edit: More specifically, if I were to take a pad of double length, split it into x and y, use x to encrypt the message, and use y twice to encrypt the next pad, would that be insecure?]
You have to pair each bit of the message with a bit of OTP, and there is only a limited amount of OTP.
If you pair up all of the OTP bits with bits for the next OTP...
a b c d e ...
q w e r t ...
There's no room for a message. And if you keep spending your OTP transferring another OTP, there never will be room for a message.
You can't compress the OTP, because the strength of the OTP is that it's completely random - that's what makes it impossible for codebreakers, because there's no pattern to latch onto.
Compression is a technology that works by finding patterns and replacing them with shorter "that large repetitive block goes here and here and there" signals - and by definition there are no patterns in complete randomness, so OTPs are not compressible.
If you can compress it a bit, you could do this, but it isn't right to describe it as an OTP anymore; it's weak, and also massively wasteful of bandwidth. If you can compress it a lot, throw your random number generator away, because it's terrible.
Quick test demonstration of the concept on a Linux machine:
$ dd if=/dev/urandom of=/tmp/test count=10k
-> ~5 MB file of randomness
$ bzip2 -k /tmp/test
-> 5.1 MB file
$ gzip -k /tmp/test
-> 5.1 MB file
Compressing a pad makes it bigger, by adding all the bzip/gzip file format information and doing nothing else.
What makes a one-time pad strong is, in addition to the complete lack of pattern, the fact that there is no way to tell that the key used was the correct key. A message could be decrypted to reveal some "take over the world" scenario, but for a ciphertext of that length there is a key that "decrypts" it to literally any other message of the same length, word for word. This means you could have the actual decrypted message and the correct key, yet it would be impossible to know that this is the case, because literally any message (and I do mean literally) of that length could be the result.
Even rubber-hose decryption won't work: even if the person being "persuaded" gives up the correct key, there's no way to be sure. It's even common practice for people to keep fake keys that decrypt a message to something an investigator isn't looking for, but that even a completely innocent person would plausibly want to hide. An OTP hiding confidential information could, for instance, have a fake key that reveals someone bad-mouthing their commanding officer.
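That claim is easy to demonstrate: given a ciphertext, you can manufacture a key for any candidate plaintext of the same length. A small Python illustration (the messages and key are invented for the example):
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

real_msg = b"attack at dawn!"
real_key = bytes.fromhex("9f1c42a8d3706be45512fa88c203e7")   # stands in for a truly random pad
ciphertext = xor(real_msg, real_key)

fake_msg = b"buy more bread!"            # any message of the same length will do
fake_key = xor(ciphertext, fake_msg)     # looks just as plausible as the real key
assert xor(ciphertext, fake_key) == fake_msg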
I'm currently sitting with the problem of passing messages that might contain different data over a network. I have created a prototype of my game, and now I'm busy implementing networking for my game.
I want to send different types of messages, as I think it would be silly to constantly send all the information every network-tick and I would rather send different messages that contain different data. What would be the best way to distinguish what message is received on the receiving side?
Currently I have a system where I prepend a string which distinguishes a certain type of message. My message is then sent through my own message parser class where it determines the type, and deserializes it to the correct type.
What I would like to know is whether there is a better way of doing this. It seems like a fairly common problem, so there should be a simpler solution, unless I'm already doing it the standard way.
Thanks!
I have read your question again carefully, and now I do not understand what your problem is. You say: "Currently I have a system where I prepend a string which distinguishes a certain type of message. My message is then sent through my own message parser class where it determines the type, and deserializes it to the correct type."
That looks OK; you may be able to reduce the size of your messages with the boost::serialization approach below, but the principle stays the same.
Prepending a type tag is the right way to go for asynchronous communication. If you communicate synchronously, though, you know that when you send message A you will receive answer B, so you do not have to prepend a string that distinguishes the message; you just have to take care not to send another message before you have received the answer to the previous one.
So if you know how the answer is formatted, you do not need any identification bytes: for example, you know that the first four bytes are an integer, followed by a float in eight bytes, and so on.
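As a concrete (hypothetical) example of such a fixed format, sketched in Python's struct notation: if both sides agree that the reply is a 4-byte integer followed by an 8-byte float, no type tag is needed at all.
import struct

REPLY_FORMAT = "!id"     # network byte order: 4-byte int, then 8-byte double

def pack_reply(count, value):
    return struct.pack(REPLY_FORMAT, count, value)

def unpack_reply(data):
    return struct.unpack(REPLY_FORMAT, data)    # -> (count, value)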
Use boost::serialization: typically you serialize your structures, even ones containing pointers, into a plain byte buffer, send that buffer over the network, and the other side deserializes it.
This example shows how Boost.Serialization can be used with asio to encode and decode structures for transmission over a socket.
Even though that example uses boost::asio, you could easily extract just the serialization part.
Will any encryption scheme safely allow me to encrypt the same integer repeatedly, with different random material prepended each time? It seems like the kind of operation that might get me in hot water.
I want to prevent spidering of items at my web application, but still have persistent item IDs/URLs so content links don't expire over time. My security requirements aren't really high for this, but I'd rather not do something totally moronic that obviously compromises the secret.
// performed on each ID before transmitting item search results to the client
public int64 encryptWithRandomPadding(int32 id) {
int32 randomPadding = getNextRandomInt32();
return encrypt(((int64)randomPadding << 32) + id, SECRET);
}
// performed on an encrypted/padded ID for which the client requests details
public int32 decryptAndRemoveRandomPadding(int64 idToDecrypt) {
int64 idWithPadding = decrypt(idToDecrypt, SECRET);
return (int32)idWithPadding;
}
static readonly string SECRET = "thesecret";
Generated IDs/URLs are permanent, the encrypted IDs are sparsely populated (fewer than 1 in uint32.Max of the possible values is a valid ID, and I could add another constant padding to reduce the likelihood that a guess hits an existing item), and the client may run the same search and get the same results with different representative IDs each time. I think this meets my requirements, unless there's a blatant cryptographic issue.
Example:
encrypt(rndA + item1) -> tokenA
encrypt(rndB + item1) -> tokenB
encrypt(rndC + item2) -> tokenC
encrypt(rndD + item175) -> tokenD
Here, there is no way to identify that tokenA and tokenB both point to identical items; this prevents a spider from removing duplicate search results without retrieving them (while retrieving increments the usage meter). Additionally, item2 may not exist.
Knowing that re-running a search will return the same int32 padded multiple ways with the same secret, can I do this safely with any popular crypto algorithms? Thanks, crypto experts!
note: this is a follow-up to a question that didn't work out as I'd hoped: Encrypt integer with a secret and shared salt
If your encryption is secure, then random padding makes cracking neither easier nor harder. For a message this short, a single block long, either everything is compromised or nothing is. Even with a stream cipher, you'd still need the key to get any further; the point of good encryption is that you don't need extra randomness. Zero padding or other known messages at least a block long at the beginning are obviously to be avoided if possible, but that's not the issue here. It's pure noise, and once someone discovered that, they'd just skip ahead and start cracking from there.
Keep in mind that with a stream cipher, you can add all the randomness you like at the beginning and the later bytes will still encrypt to the same values under the same key. The padding only actually does anything for a block cipher; with a stream cipher you'd have to XOR the random bits into the real value to get any use out of them.
However, you might be better off using a MAC as padding: with proper encryption, the encrypted mac won't give any information away, but it looks semi-randomish and you can use it to verify that there were no errors or malicious attacks during decryption. Any hash function you like can create the MAC, even a simple CRC-32, without giving anything away after encryption.
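A minimal sketch of that MAC-as-padding idea (my code, not the poster's; MAC_KEY, mac32 and the helper names are invented), building the 64-bit block that would then go through the existing encrypt/decrypt:
import hashlib, hmac

MAC_KEY = b"separate-mac-key"           # made-up key, kept distinct from the cipher key

def mac32(item_id):
    digest = hmac.new(MAC_KEY, item_id.to_bytes(4, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")     # truncate the MAC to 32 bits

def pad_with_mac(item_id):
    return (mac32(item_id) << 32) | item_id      # 64-bit block: MAC | id

def unpad_and_verify(block):
    item_id = block & 0xFFFFFFFF
    if (block >> 32) != mac32(item_id):
        raise ValueError("bad token")            # wrong key, corruption, or tampering
    return item_id
Note that this makes the padding deterministic, so the same id always yields the same token; if you still want per-search variation you would have to keep a few random bits in the block as well.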
(A cryptographer might find a way to shave a bit or two off due to the relatedness, given tons of plaintexts and prior knowledge of how they were related, but that's still far beyond practicality.)
As you asked before, you can safely throw an unencrypted salt in front of every message; a salt can only compromise an encrypted value if the implementation is broken or the key is compromised, as long as the salt is properly mixed into the key, particularly if you can mix it into the expanded key schedule before decryption. Modern hash algorithms with lots of bits are really good at that, but even mixing it into a regular input key leaves you with at least the security of the key alone.
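A sketch of that salt-into-key mixing (my illustration; derive_key is not a library API), reusing the SECRET from the question:
import hashlib, os

def derive_key(secret, salt):
    # mix the public salt into the secret with a modern hash so each message
    # is effectively encrypted under a different key
    return hashlib.sha256(secret + salt).digest()

salt = os.urandom(16)                    # sent in the clear alongside the ciphertext
key = derive_key(b"thesecret", salt)     # feed this into the cipher's key setup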
Scenario:
I send encrypted information to a client program.
I want the information to be displayed 1 year later.
No further information will be sent by me.
If the user of the client program can analyze the program's binary, is it possible to prevent the information from being revealed early?
In general, such a thing is not possible. If the program is able to decrypt the data without further interaction, it must possess the key.
Therefore, even with signed timestamping, you cannot prevent someone from reverse-engineering your program, taking the key, and doing the decryption.
EDIT: Though you could at least in theory implement something like this indirectly, by requiring a computationally intensive puzzle to be solved to retrieve the key (one that takes a year on average!), this is unreliable at best (faster/slower hardware) and will certainly not find acceptance among your users/customers. Be prepared to receive hate mail if you do that :-)
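For illustration only, a Python sketch of such a puzzle: derive the key by hashing a public seed a huge number of times in sequence (the seed, iteration count, and timing estimate are all assumptions on my part). This is exactly the kind of scheme whose duration depends on the client's hardware.
import hashlib

def slow_key(seed, iterations):
    # each hash depends on the previous one, so the work is inherently sequential
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

# if one hash takes roughly 100 ns, on the order of 3 * 10**14 iterations
# amounts to about a year of computation:
# key = slow_key(b"public-seed", 3 * 10**14)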
Interesting question. I think it is not possible as you described, unless the server stores part of the secret and delivers it to the client at the right moment.
If you've sent the client all the info they need to do the decryption, there is no way to force them to wait a year before doing so.
You could always use a time machine and base your key on a hash of, say, the Dow Jones index one year from now, along with some other data that can't be pre-calculated. So unless you have some inside info that only you know about the day the decryption should occur, I think you're facing a quite impossible task.