File Delimiters on AES-256 Encrypted Fields

I have a requirement for one of my projects in which I expect a few of the incoming fields to be AES-256 encrypted when sent to us by the upstream system. The incoming file is comma delimited. Is there a possibility that the AES-encrypted fields may contain a ",", throwing the values off into different fields? What about if the file is pipe delimited or uses some other delimiter?
Also, what should the datatype of these encrypted fields be in order to read them with an ETL tool?
Thanks in Advance

AES as a block cipher is a family of permutations selected by the key. The output is expected to look random (more precisely, we believe that AES is a pseudo-random permutation).
AES (like any block cipher) outputs binary data, usually as a byte array, and each byte can take any value between 0 and 255 with equal probability.
You are not alone; transmitting binary data can create problems, especially in protocols that are designed to deal with textual data. Many of the encryption-related programming errors on Stack Overflow are due to sending binary data over text-based protocols. Most of the time this works, but occasionally it fails and the coders wonder about the problem: the binary data corrupts the network protocol. To avoid this altogether, we don't transmit raw binary data.
Therefore hex, Base64, or similar encodings are necessary to mitigate this. Base64 is not totally URL-safe, but one can make it URL-safe with a little work.
And note that this has nothing to do with security; it is about visibility and interoperability.
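A minimal sketch of the usual fix, in Java (the field value, key handling, and cipher mode here are illustrative assumptions, not a recommendation):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EncryptedFieldDemo {
    public static void main(String[] args) throws Exception {
        // A field value that will travel in a comma-delimited file.
        String field = "value,with,commas";

        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        // ECB keeps the sketch short; production code should use an
        // authenticated mode such as AES/GCM/NoPadding with a random IV.
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] raw = cipher.doFinal(field.getBytes(StandardCharsets.UTF_8));

        // Raw ciphertext bytes can land on 0x2C (',') or 0x7C ('|');
        // Base64 output uses only A-Z, a-z, 0-9, '+', '/', '=', so it can
        // never collide with a comma or pipe delimiter.
        String safeField = Base64.getEncoder().encodeToString(raw);
        System.out.println(safeField);
    }
}
```

Since the Base64 form is plain ASCII, the encrypted field can be read by an ETL tool as an ordinary string (e.g. VARCHAR) column.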

Related

Sending raw bytes over network. Bad?

This answer to the question "What is base 64 encoding used for?" says:
When you have some binary data that you want to ship across a network, you generally don't do it by just streaming the bits and bytes over the wire in a raw format. Why? because some media are made for streaming text. You never know -- some protocols may interpret your binary data as control characters (like a modem), or your binary data could be screwed up because the underlying protocol might think that you've entered a special character combination (like how FTP translates line endings).
I've used sockets in Java a hundred times to send binary data over networks, and as far as I know it is very common to send binary data over networks, especially if you have big data. I don't see why some devices would interpret binary data wrongly, since it is wrapped in TCP headers etc.
SOAP MTOM also sends binary data over networks.
Am I misunderstanding something? I'm confused, because this post has many upvotes and is accepted.
The answer you link to isn't incorrect, it just fails to explicitly mention some examples. The answer is in the quote as well:
because some media are made for streaming text
Sockets deal in bytes, they don't care what they transport. It is the higher-level protocols, or the message formats they transport, that do.
It's when this binary data is wrapped in envelopes of such protocols or formats that they can wreak havoc. A less than (<) character in image bytes is perfectly valid, but when used in an XML message, it will break the XML. Other characters, like control characters, can have an influence on how further data is to be interpreted by a protocol handler.
So base64 is used to wrap binary data in a safe-for-transport way where that would otherwise not be safe.
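A small illustration in Java of why the wrapping matters (the byte values and element name are made up for the demo):

```java
import java.util.Arrays;
import java.util.Base64;

public class XmlSafeDemo {
    public static void main(String[] args) {
        // A binary payload that happens to contain '<', '&' and control bytes,
        // all of which would break an XML parser if embedded raw.
        byte[] payload = { 0x3C, 0x26, 0x00, (byte) 0xFF, 0x0A };

        // The Base64 form uses only characters that are inert in XML.
        String b64 = Base64.getEncoder().encodeToString(payload);
        System.out.println("<attachment>" + b64 + "</attachment>");

        // The receiver reverses the encoding to recover the exact bytes.
        byte[] roundTrip = Base64.getDecoder().decode(b64);
        System.out.println(Arrays.equals(payload, roundTrip)); // true
    }
}
```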

How to distribute init. vector with data encrypted via a symmetric cipher?

I want to provide users with a service that encrypts some data to a file via a symmetric cipher. The user simply provides a key, and may also provide an initialization vector for the cipher.
Is there a standard for what the file should look like? It would make sense to fill the file with the encrypted data and show the corresponding initialization vector in a dialog window. It may seem reasonable to someone else that the initialization vector should be stored in the file together with the encrypted data.
The important thing for me is that the result is useful for the user and that he/she won't need to bother with adjusting it.
Thanks for any comments!
It is common practice to provide the IV as the first block of the ciphertext file. That way the receiver just treats the first 8 bytes (DES) or 16 bytes (AES) as the IV and the rest of the file as the actual ciphertext.
Use the same format for the IV as you are using for the ciphertext: Base64, hex, raw bytes or whatever.
In principle, you can use any format you want, as long as the decrypting part of the program knows how to read it. For efficiency, having the initialization vector before the data seems a good idea.
If you want to encrypt files, a good idea would be to not create your own format (which leads to you having to make decisions like the one here), but to use an existing file format (which then is also a cryptographic protocol).
I recommend the OpenPGP message format, as defined in RFC 4880 (or some subset thereof, if you don't need all the features). This also has the advantage that your clients can then decrypt your files using any OpenPGP implementation (like pgp or gpg) if your program somehow ceases to work (of course, only if they have the key/password).
you should be fine if you store the IV together with the encrypted data in the file ...
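A sketch of the IV-as-first-block convention in Java (the key handling and sample plaintext are placeholders):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

public class IvFirstBlockDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[16];                 // one AES block
        new SecureRandom().nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ct = enc.doFinal("secret data".getBytes(StandardCharsets.UTF_8));

        // File layout: [16-byte IV][ciphertext]
        ByteArrayOutputStream file = new ByteArrayOutputStream();
        file.write(iv);
        file.write(ct);
        byte[] stored = file.toByteArray();

        // Receiver: split off the first block as the IV, decrypt the rest.
        byte[] iv2 = Arrays.copyOfRange(stored, 0, 16);
        byte[] ct2 = Arrays.copyOfRange(stored, 16, stored.length);
        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv2));
        System.out.println(new String(dec.doFinal(ct2), StandardCharsets.UTF_8));
    }
}
```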

Dictionary based SMS encryption

I am trying to create an end-to-end SMS encryption application, but I don't want to use a standard encryption algorithm. The idea is to convert the text of the message to completely different, meaningful text so that over the network it doesn't look encrypted. I am assuming messages are in English only.
For the implementation, I have first compressed the message using Huffman encoding, which gives me a compressed stream of bits. For the encryption step I don't have any idea. Is it possible to build a dictionary of some random text, or what other way could be used to get such an encryption? Decryption at the receiving end is also required.
Are you asking for a codebook? If you have access to an English-language dictionary of 65536 entries, for example, you can take every 16 bits of your message as an index into this table to get a word. Good luck converting this into a real cryptosystem.
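A toy Java sketch of that codebook idea (the four-word list is a stand-in for a real 65536-entry dictionary, where each 16-bit index would map to a unique word and decoding would be a reverse lookup):

```java
import java.util.List;

public class CodebookSketch {
    // Stand-in for a 65536-entry word list; with a full list, every 16-bit
    // value maps to a distinct word and the mapping is reversible.
    static final List<String> WORDS = List.of("alpha", "bravo", "charlie", "delta");

    // Read each consecutive 16-bit chunk of the data as an index.
    static String encode(byte[] data) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i + 1 < data.length; i += 2) {
            int index = ((data[i] & 0xFF) << 8) | (data[i + 1] & 0xFF);
            // Modulo is needed only because the demo list is tiny.
            out.append(WORDS.get(index % WORDS.size())).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(encode(new byte[] { 0, 1, 0, 3 })); // "bravo delta"
    }
}
```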

Difference between encoding and encryption

What is the difference between encoding and encryption?
Encoding transforms data into another format using a scheme that is publicly available so that it can easily be reversed.
Encryption transforms data into another format in such a way that only specific individual(s) can reverse the transformation.
In summary:
Encoding is for maintaining data usability and uses schemes that are publicly available.
Encryption is for maintaining data confidentiality, and thus the ability to reverse the transformation (the keys) is limited to certain people.
More details in the source linked below.
Encoding:
Purpose: The purpose of encoding is to transform data so that it can be properly (and safely) consumed by a different type of system.
Used for: Maintaining data usability i.e., to ensure that it is able to be properly consumed.
Data Retrieval Mechanism: No key and can be easily reversed provided we know what algorithm was used in encoding.
Algorithms Used: ASCII, Unicode, URL Encoding, Base64.
Example: Binary data being sent over email, or viewing special characters on a web page.
Encryption:
Purpose: The purpose of encryption is to transform data in order to keep it secret from others.
Used for: Maintaining data confidentiality i.e., to ensure the data cannot be consumed by anyone other than the intended recipient(s).
Data Retrieval Mechanism: Original data can be obtained if we know the key and encryption algorithm used.
Algorithms Used: AES, Blowfish, RSA.
Example: Sending someone a secret letter that only they should be able to read, or securely sending a password over the Internet.
Reference URL: http://danielmiessler.com/study/encoding_vs_encryption/
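A short Java illustration of the key difference in the breakdown above: decoding needs no secret, decrypting does (the cipher mode and sample string are arbitrary choices for the demo):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EncodeVsEncrypt {
    public static void main(String[] args) throws Exception {
        String text = "hello";

        // Encoding: fully reversible by anyone who knows the scheme; no key.
        String encoded = Base64.getEncoder().encodeToString(text.getBytes(StandardCharsets.UTF_8));
        String decoded = new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
        System.out.println(encoded + " -> " + decoded);

        // Encryption: reversing it requires the secret key.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding"); // short demo, not a recommendation
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ct = cipher.doFinal(text.getBytes(StandardCharsets.UTF_8));
        cipher.init(Cipher.DECRYPT_MODE, key);
        System.out.println(new String(cipher.doFinal(ct), StandardCharsets.UTF_8));
    }
}
```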
Encoding is the process of transforming data so that it may be transmitted without danger over a communication channel or stored without danger on a storage medium. For instance, computer hardware does not manipulate text, it merely manipulates bytes, so a text encoding is a description of how text should be transformed into bytes. Similarly, HTTP does not allow all characters to be transmitted safely, so it may be necessary to encode data using base64 (uses only letters, numbers and two safe characters).
When encoding or decoding, the emphasis is placed on everyone having the same algorithm, and that algorithm is usually well-documented, widely distributed and fairly easily implemented. Anyone is eventually able to decode encoded data.
Encryption, on the other hand, applies a transformation to a piece of data that can only be reversed with specific (and secret) knowledge of how to decrypt it. The emphasis is on making it hard for anyone but the intended recipient to read the original data. An encoding algorithm that is kept secret is a form of encryption, but quite a vulnerable one (it takes skill and time to devise any kind of strong encryption, and by definition you can't have someone else create such an encoding algorithm for you, or you would have to kill them). Instead, the most widely used encryption methods use secret keys: the algorithm is well known, but encryption and decryption both require having the same key, and the key is kept secret. Decrypting encrypted data is only possible with the corresponding key.
Encoding is the process of putting a sequence of characters into a special format for transmission or storage purposes.
Encryption is the process of translating data into a secret code. Encryption is the most effective way to achieve data security. To read an encrypted file, you must have access to a secret key or password that enables you to decrypt it. Unencrypted data is called plain text; encrypted data is referred to as cipher text.
Encoding is for maintaining data usability and can be reversed by employing the same algorithm that encoded the content, i.e. no key is used.
Encryption is for maintaining data confidentiality and requires the use of a key (kept secret) in order to return to plaintext.
There are also two other terms that bring confusion in the world of security: hashing and obfuscation.
Hashing is for validating the integrity of content, by detecting any modification of it via changes to the hash output.
Obfuscation is used to prevent people from understanding the meaning of something, and is often applied to computer code to help prevent successful reverse engineering and/or theft of a product's functionality.
Read more in the Daniel Miessler article linked above.
See encoding as a way to store or communicate data between different systems. For example, if you want to store text on a hard drive, you're going to have to find a way to convert your characters to bits. Alternatively, if all you have is a flash light, you might want to encode your text using Morse. The result is always "readable", provided you know how it's stored.
Encryption means you want to make your data unreadable by encrypting it using an algorithm. For example, Caesar did this by substituting each letter with another. The result here is unreadable unless you know the secret "key" with which it was encrypted.
I'd say that both operations transform information from one form to another, the difference being:
Encoding means transforming information from one form to another, in most cases it is easily reversible
Encryption means that the original information is obscured and involves encryption keys which must be supplied to the encryption / decryption process to do the transformation.
So, if it involves (symmetric or asymmetric) keys (aka a "secret"), it's encryption, otherwise it's encoding.
Encoding: example data is 16. Its encoded form might be 10000 (binary), or an ASCII or Unicode representation, etc. It can be read by any system easily, and its real meaning is easy to understand.
Encryption: example data is 16. Its encrypted form might be 3t57, or anything else, depending on which algorithm is used for the encryption. It can still be read by any system, BUT its real meaning can only be understood by someone who has the decryption key.
These are a little different from each other. Encoding is used when we want to convert text into a specific computer representation, whereas with encryption we hide data behind a specific key.
Encoding is the process of transforming a given set of characters into a relevant accepted format. Take this question's URL.
This is what we see -->
https://stackoverflow.com/questions/4657416/difference-between-encoding-and-encryption
Over transmission this will be transformed to -->
https%3A%2F%2Fstackoverflow.com%2Fquestions%2F4657416%2Fdifference-between-encoding-and-encryption
This is an example of URL encoding using the ASCII character set, where:
: = %3A
/ = %2F
The reverse of encoding is decoding back to the original form, using the same ASCII standard.
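The same transformation can be reproduced with Java's standard library (URLEncoder applies form-style percent-encoding, which covers the ':' and '/' shown above):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UrlEncodingDemo {
    public static void main(String[] args) {
        String url = "https://stackoverflow.com/questions/4657416/difference-between-encoding-and-encryption";

        // ':' becomes %3A and '/' becomes %2F, exactly as shown above.
        String encoded = URLEncoder.encode(url, StandardCharsets.UTF_8);
        System.out.println(encoded);
    }
}
```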
Encryption is the process of converting plain text to cipher text so that only an authorized party can decipher it.
For example, a simple HELLO is encrypted into KHOOR if each character is just shifted by 3.
p.s. Encoding (to code in some form) is a form of encryption. :)
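The HELLO → KHOOR shift mentioned above is easy to sketch in Java:

```java
public class CaesarDemo {
    // Shift each letter A-Z forward by 'shift' positions, wrapping at Z.
    static String caesar(String s, int shift) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            out.append(c >= 'A' && c <= 'Z'
                    ? (char) ('A' + (c - 'A' + shift) % 26)
                    : c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(caesar("HELLO", 3)); // prints KHOOR
    }
}
```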
Encryption converts data to a non-readable format (possibly containing special non-printable characters).
Encoding converts that data to a readable format (characters) so that it can be stored for future use, i.e. so it can be decoded back when it is time to decrypt.

What encryption algorithm is best for small strings?

I have a string of 10-15 characters and I want to encrypt that string. The problem is that I want the encrypted string to be as short as possible. I will also want to decrypt the string back to its original form.
Which encryption algorithm fits best to this situation?
AES uses a 16-byte block size; it is admirably suited to your needs if your limit of 10-15 characters is firm. The PKCS#7 padding scheme would add 1-6 bytes to the data (6 for a 10-character input, 1 for a 15-character input) and generate an output of exactly 16 bytes. You don't really need to use a chaining mode (such as CBC) since you're only encrypting one block. There is an issue of how you'd be handling the keys, but there is always an issue of how you handle encryption keys.
If you must go with shorter data lengths for shorter strings, then you probably need to consider AES in CTR mode. This uses the key and a counter to generate a byte stream which is XOR'd with the bytes of the string. It would leave your encrypted string at the same length as the input plaintext string.
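A brief Java sketch of the CTR behaviour (key and nonce handling are simplified for the demo; the nonce must never repeat under the same key):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class CtrLengthDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] nonce = new byte[16];           // initial counter block
        new SecureRandom().nextBytes(nonce);   // must be unique per key

        Cipher ctr = Cipher.getInstance("AES/CTR/NoPadding");
        ctr.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(nonce));

        byte[] plain = "shortstring".getBytes(StandardCharsets.UTF_8); // 11 bytes
        byte[] ct = ctr.doFinal(plain);

        // CTR turns AES into a stream cipher: no padding, so the
        // ciphertext is exactly as long as the plaintext.
        System.out.println(plain.length + " -> " + ct.length); // 11 -> 11
    }
}
```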
You'll be hard pressed to find a general purpose compression algorithm that reliably reduces the length of such short strings, so compressing before encrypting is barely an option.
If it's just one short string, you could use a one-time pad, which provides mathematically perfect secrecy.
http://en.wikipedia.org/wiki/One-time_pad
Just be sure you don't use the key more than one time.
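A one-time pad is just XOR with a truly random key of the same length, used once (sketch below; in practice the pad must come from a high-quality random source and be exchanged securely):

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class OneTimePadDemo {
    public static void main(String[] args) {
        byte[] msg = "short secret".getBytes(StandardCharsets.UTF_8);

        // The pad: truly random, as long as the message, used exactly once.
        byte[] pad = new byte[msg.length];
        new SecureRandom().nextBytes(pad);

        byte[] ct = new byte[msg.length];
        for (int i = 0; i < msg.length; i++) ct[i] = (byte) (msg[i] ^ pad[i]);

        // XOR with the same pad recovers the plaintext; reusing the pad
        // for a second message destroys the perfect secrecy.
        byte[] back = new byte[ct.length];
        for (int i = 0; i < ct.length; i++) back[i] = (byte) (ct[i] ^ pad[i]);
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```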
If the main goal is shortening, I would look for a compression library that allows a fixed dictionary built on a corpus of common strings.
Personally I do not have experience with that, but I would bet LZMA can do it.
