Random access encrypted file - encryption

I'm implementing a web based file storage service (C#). The files will be encrypted when stored on the server, but the challenge is how to implement the decryption functionality.
The files can be of any size, from a few KB to several GB. Data transfer is done in chunks, so that users download the data from, say, offset 50000, then 75000, and so on. This works fine for unencrypted files, but if encryption is used, the file has to be decrypted from the beginning before each chunk can be read at its offset.
So, I am looking at how to solve this. My research so far shows that ECB and CBC can be used. ECB is the most basic (and least secure) mode, with each block encrypted separately. The way ECB works is pretty much what I am looking for, but security is a concern. CBC is similar, but you need the previous block decrypted before decrypting the current block. This is OK as long as the file is read from beginning to end, and you keep the data as you decrypt it on the server, but at the end of the day it's not much better than decrypting the entire file server side before transferring.
Does anyone know of any other alternatives I should consider? I have no code at this point, because I am still only doing theoretical research.

Do not use ECB (Electronic Code Book). Any patterns in your plaintext will appear as patterns in the ciphertext. CBC (Cipher Block Chaining) allows random read access (the calling code knows the key, and the IV for a block is simply the previous ciphertext block), but writing a block requires rewriting all following blocks.
A better mode is Counter (CTR). In effect, each block uses the same key, and the counter block for each block is the sum of an initial IV and that block's offset from a defined start; for example, the counter for block n is IV + n. CTR mode is described in detail in NIST SP 800-38A (page 15). Guidance on key and IV derivation can be found in NIST SP 800-108.
There are a few other similar modes, such as Galois/Counter Mode (GCM).
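The asker's scenario (decrypt a chunk starting at an arbitrary byte offset) maps directly onto CTR. Below is a minimal C# sketch, assuming the counter block for block n is simply the initial IV plus n (big-endian addition), as described above. .NET has no built-in CTR transform, so the keystream is produced by running AES-ECB over the counter blocks; the method name DecryptChunk and its parameter layout are illustrative, not a standard API.

// Sketch only: CTR built from AES-ECB over counter blocks.
using System;
using System.Security.Cryptography;

static class AesCtrRandomAccess
{
    private const int BlockSize = 16;

    // Decrypts 'count' bytes of ciphertext that start at absolute file offset 'fileOffset'.
    public static byte[] DecryptChunk(byte[] key, byte[] initialIv, byte[] ciphertext,
                                      long fileOffset, int count)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.Mode = CipherMode.ECB;      // encrypting the counter block is how CTR works
        aes.Padding = PaddingMode.None;
        using var enc = aes.CreateEncryptor();

        var plaintext = new byte[count];
        var counter = new byte[BlockSize];
        var keystream = new byte[BlockSize];

        for (int i = 0; i < count; )
        {
            long absolute = fileOffset + i;
            long blockIndex = absolute / BlockSize;      // which counter block we are in
            int withinBlock = (int)(absolute % BlockSize);

            // counter = initial IV + block index (big-endian addition into the low bytes)
            Buffer.BlockCopy(initialIv, 0, counter, 0, BlockSize);
            ulong carry = (ulong)blockIndex;
            for (int b = BlockSize - 1; b >= 0 && carry != 0; b--)
            {
                ulong sum = counter[b] + (carry & 0xFF);
                counter[b] = (byte)sum;
                carry = (carry >> 8) + (sum >> 8);
            }

            // One AES-ECB encryption of the counter gives 16 keystream bytes.
            enc.TransformBlock(counter, 0, BlockSize, keystream, 0);

            int take = Math.Min(BlockSize - withinBlock, count - i);
            for (int j = 0; j < take; j++)
                plaintext[i + j] = (byte)(ciphertext[i + j] ^ keystream[withinBlock + j]);
            i += take;
        }
        return plaintext;
    }
}

Because CTR turns AES into a stream cipher, the same routine works for encryption; the only hard requirement is that the initial IV (nonce) is never reused with the same key for a different file.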

Related

Does the IV of AES-128-cbc need to be random during encryption and decryption?

I am using node and the crypto module to encrypt and decrypt a large binary file. I encrypt the file using crypto.createCipheriv and decrypt it using crypto.createDecipheriv.
For the encryption I use a random IV as follows:
const crypto = require('crypto');
const iv = crypto.randomBytes(16);
const encrypt = crypto.createCipheriv('aes-128-cbc', key, iv);
What I don't understand is: do I need to pass a random IV for createDecipheriv as well? The SO answer here says:
The IV needs to be identical for encryption and decryption.
Can the IV be static? And if it can't, is it considered to be a secret? Where would I store the IV? In the payload?
If I use different random IVs for encryption and decryption, my payload gets decrypted but the first 16 bytes are corrupt. So it looks like the IV needs to be the same, but then from a security perspective there also doesn't seem to be much value in it, since the payload decrypts correctly except for those 16 bytes.
Can anyone elaborate what the go-to approach is? Thanks for your help!
The Key+IV pair must never be duplicated on two encryptions using CBC. Doing so leaks information about the first block (in all cases) and creates duplicate ciphertexts (which is a problem if you ever encrypt the same message prefix twice).
So, if your key changes for every encryption, then your IV could be static. But no one does that. They have a key they reuse. So the IV must change.
There is no requirement that it be random. It just shouldn't repeat and it must not be predictable (in cases where the attacker can control the messages). Random is the easiest way to do that. Anything other than random requires a lot of specialized knowledge to get right, so use random.
Reusing a Key+IV pair in CBC weakens the security of the cipher but does not destroy it, as it does in CTR. An IV reused with CTR can lead to trivial decryption; in CBC, it generally just leaks information. It's a serious problem, but it is not catastrophic. (Not all insecure configurations are created equal.)
The IV is not a secret. Everyone can know it. So it is typically prepended to the ciphertext.
For security reasons, the IV needs to be chosen to meet cryptographic randomness requirements (i.e. use crypto.randomBytes() in node). This was shown in Phil Rogaway's research paper. The summary is in Figure 1.2 of the paper, which I transcribe here:
CBC (SP 800-38A): An IV-based encryption scheme, the mode is secure as a probabilistic encryption scheme, achieving indistinguishability from random bits, assuming a random IV. Confidentiality is not achieved if the IV is merely a nonce, nor if it is a nonce enciphered under the same key used by the scheme, as the standard incorrectly suggests to do.
The normal way to implement this is to include the IV prepended to the ciphertext. The receiving party extracts the IV and then decrypts the ciphertext. The IV is not a secret, instead it is just used to bring necessary security properties into the mode of operation.
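To make the convention concrete, here is a hedged sketch in C# (matching the file-storage question at the top of this page; the node crypto equivalent follows the same pattern): a fresh random IV is generated per message, written in the clear in front of the ciphertext, and read back out before decryption. The class and method names are made up for illustration.

using System;
using System.IO;
using System.Security.Cryptography;

static class CbcWithPrependedIv
{
    public static byte[] Encrypt(byte[] key, byte[] plaintext)
    {
        using var aes = Aes.Create();          // CBC mode and PKCS7 padding are the defaults
        aes.Key = key;
        aes.GenerateIV();                      // fresh random IV for every message

        using var ms = new MemoryStream();
        ms.Write(aes.IV, 0, aes.IV.Length);    // the IV goes in the clear, up front
        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
        {
            cs.Write(plaintext, 0, plaintext.Length);
        }
        return ms.ToArray();                   // IV || ciphertext
    }

    public static byte[] Decrypt(byte[] key, byte[] ivAndCiphertext)
    {
        using var aes = Aes.Create();
        aes.Key = key;

        var iv = new byte[aes.BlockSize / 8];
        Buffer.BlockCopy(ivAndCiphertext, 0, iv, 0, iv.Length);   // read the IV back out
        aes.IV = iv;

        using var ms = new MemoryStream(ivAndCiphertext, iv.Length,
                                        ivAndCiphertext.Length - iv.Length);
        using var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Read);
        using var result = new MemoryStream();
        cs.CopyTo(result);
        return result.ToArray();
    }
}

This also answers the original question: decryption uses the same IV that encryption used, and it is simply retrieved from the front of the payload rather than being regenerated or kept secret.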
However, be aware that encryption with CBC does not prevent people from tampering with the data. If an attacker fiddles with ciphertext bits within a block, it affects exactly two plaintext blocks, one of them in a very controlled way.
To make a very long story short, GCM is a better mode to use to prevent such abuses. In that case, you do not need a random IV, but instead you must never let the IV repeat (in cryptography, we call this property a "nonce"). Luke Park gives an example of how to implement it, here. He uses randomness for the nonce, which achieves the nonce property for all practical purposes (unless you are encrypting 2^48 texts, which is crazy large).
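For reference, a minimal C# sketch of the same idea using the built-in AesGcm class (.NET Core 3.0 or later); the 12-byte random nonce and the nonce || ciphertext || tag output layout are conventional choices made for this example, not something mandated by the API.

using System;
using System.Security.Cryptography;

static class GcmExample
{
    public static byte[] Encrypt(byte[] key, byte[] plaintext)
    {
        var nonce = new byte[12];
        RandomNumberGenerator.Fill(nonce);    // never reuse a nonce with the same key
        var ciphertext = new byte[plaintext.Length];
        var tag = new byte[16];

        using var gcm = new AesGcm(key);
        gcm.Encrypt(nonce, plaintext, ciphertext, tag);

        // Pack nonce + ciphertext + tag so the receiver has everything it needs.
        var output = new byte[nonce.Length + ciphertext.Length + tag.Length];
        Buffer.BlockCopy(nonce, 0, output, 0, nonce.Length);
        Buffer.BlockCopy(ciphertext, 0, output, nonce.Length, ciphertext.Length);
        Buffer.BlockCopy(tag, 0, output, nonce.Length + ciphertext.Length, tag.Length);
        return output;
    }
}

Decryption reverses the packing and calls AesGcm.Decrypt, which throws if either the ciphertext or the tag has been tampered with.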
But whatever mode you do, you must never repeat an IV for a given key, which is a very common mistake.

Cryptography: Mixing CBC and CTR?

I have some offline files that have to be password-protected. My strategy is as follows:
Cipher Algorithm: AES, 128-bit block, 256-bit key (derived with PBKDF2-SHA-256, 10000 iterations, with a random salt stored plainly elsewhere)
Whole file is divided into pages with page size 1024 bytes
For a complete page, CBC is used
For an incomplete page,
Use CBC with ciphertext stealing if it has at least one block
Use CTR if it has less than one block
With this setup, we can keep the same file size
The IV or nonce will be based on the salt and will be deterministic. Since this is not for network communication, I reckon we don't need to worry about replay attacks?
Question: Will this kind of mixing lower the security? Would we be better off just using CTR throughout the whole file?
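For concreteness, a minimal C# sketch of the key-derivation step described in the question (Rfc2898DeriveBytes is .NET's PBKDF2 implementation); the parameter choices simply mirror the question's setup and are not a recommendation beyond that.

using System.Security.Cryptography;

static class PageCipherKey
{
    public static (byte[] Key, byte[] Salt) Derive(string password)
    {
        var salt = new byte[16];
        RandomNumberGenerator.Fill(salt);     // store this salt plainly elsewhere

        // PBKDF2-SHA-256, 10000 iterations, 256-bit output key
        using var kdf = new Rfc2898DeriveBytes(password, salt, 10000, HashAlgorithmName.SHA256);
        return (kdf.GetBytes(32), salt);
    }
}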
You're better off just using CTR for the entire file. Otherwise, you're adding a lot of extra work, in supporting multiple modes (CBC, CTR, and CTS) and determining which mode to use. It's not clear there's any value in doing so, since CTR is perfectly fine for encrypting a large amount of data.
Are you planning on reusing the same IV for each page? You should expand a bit on what you mean by a page, but I'd recommend unique IVs for each page. Are these pages addressable somehow? You might want to look into some of the new disk encryption modes for ideas on generating unique IVs.
You also really need to MAC your data. In CTR, for example, if someone flips a bit of the ciphertext, it'll flip that bit when you decrypt, and you'll never know it was tampered with. You can use HMAC, or if you want to simplify your entire scheme, use AES-GCM mode, which combines CTR for encryption and GMAC for integrity.
There are a few things you need to know about CTR mode. After you know them all you could happily apply a stream cipher in your situation:
never, ever reuse a data key with the same nonce;
as above, not even at a later point in time;
be aware that CTR mode reveals the exact size of the encrypted data; always encrypting full blocks can hide this somewhat (in general a full 1024-byte block takes as much space as one containing only a single byte, if the file-system boundaries are honored);
CTR mode in itself does not provide authentication (noted for completeness, as this was already discussed);
If you don't keep to the first two rules, an attacker will immediately see the place of an edit and will be able to retrieve data directly related to the plaintext.
On a positive note:
you can happily use the offset (in, e.g., blocks) in the file as part of the nonce;
it is very easy to seek in files, buffer ciphertext and create multi-threaded code around CTR.
And in general:
it pays off to use data-specific keys for specific sets of files, so that if a key is compromised or changed you don't have to re-encrypt everything (one way to derive such keys is sketched after this list);
think very well about how your keys are used, stored, backed up, etc. Key management is the hardest part.
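As an illustration of the per-data-key point above, here is a hedged C# sketch that derives an independent key per file from one master key using HKDF (System.Security.Cryptography.HKDF, available in .NET 5 and later); the "file-key:" labelling scheme is an assumption made for this example.

using System.Security.Cryptography;
using System.Text;

static class PerFileKeys
{
    public static byte[] DeriveFileKey(byte[] masterKey, string fileId)
    {
        // The file identifier goes into the HKDF "info" parameter, so each file
        // gets an independent 256-bit key derived from the same master key.
        return HKDF.DeriveKey(
            HashAlgorithmName.SHA256,
            masterKey,
            32,
            null,
            Encoding.UTF8.GetBytes("file-key:" + fileId));
    }
}

If one file's key leaks, the others stay safe, and re-keying a single file does not force re-encrypting everything else.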

AES Encryption - Key versus IV

The application I am working on lets the user encrypt files. The files could be of any format (spreadsheet, document, presentation, etc.).
For the specified input file, I create two output files - an encrypted data file and a key file. You need both these files to obtain your original data. The key file must work only on the corresponding data file. It should not work on any other file, either from the same user or from any other user.
The AES algorithm requires two different parameters for encryption: a key and an initialization vector (IV).
I see three choices for creating the key file:
Embed hard-coded IV within the application and save the key in the key file.
Embed hard-coded key within the application and save the IV in the key file.
Save both the key and the IV in the key file.
Note that it is the same application that is used by different customers.
It appears all three choices would achieve the same end goal. However, I would like to get your feedback on what the right approach should be.
As you can see from the other answers, having a unique IV per encrypted file is crucial, but why is that?
First - let's review why a unique IV per encrypted file is important. (Wikipedia on IV). The IV adds randomness to the start of the encryption process. When using a chained block encryption mode (where one block of encrypted data incorporates the prior block of encrypted data), we're left with a problem regarding the first block, which is where the IV comes in.
If you had no IV and used chained block encryption with just your key, two files that begin with identical text will produce identical first blocks. If the input files diverged midway through, then the two encrypted files would begin to look different from that point through to the end of the encrypted file. If someone noticed the similarity at the beginning and knew what one of the files began with, he could deduce what the other file began with. Knowing what the plaintext file began with and what its corresponding ciphertext is hands that person a known plaintext/ciphertext pair to work with, and the leak alone - which files share content - already undermines the confidentiality you wanted from encrypting the files.
Now add the IV - if each file used a random IV, their first block would be different. The above scenario has been thwarted.
Now what if the IV were the same for each file? Well, we have the problem scenario again. The first block of each file will encrypt to the same result. Practically, this is no different from not using the IV at all.
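A small self-contained C# demonstration of the scenario above: with the same key and the same IV, two plaintexts that share their first 16 bytes encrypt to an identical first ciphertext block. The sample strings and helper names are, of course, just for illustration.

using System;
using System.Security.Cryptography;
using System.Text;

class IvDemo
{
    static byte[] EncryptCbc(byte[] key, byte[] iv, string text)
    {
        using var aes = Aes.Create();      // CBC mode by default
        aes.Key = key;
        aes.IV = iv;
        using var enc = aes.CreateEncryptor();
        var data = Encoding.UTF8.GetBytes(text);
        return enc.TransformFinalBlock(data, 0, data.Length);
    }

    static void Main()
    {
        using var aes = Aes.Create();
        aes.GenerateKey();
        aes.GenerateIV();

        var a = EncryptCbc(aes.Key, aes.IV, "CONFIDENTIAL REPORT: project Alpha");
        var b = EncryptCbc(aes.Key, aes.IV, "CONFIDENTIAL REPORT: project Bravo");

        // The first 16 bytes (one AES block) are identical because the key, the IV
        // and the first 16 plaintext bytes all match - exactly the leak described above.
        Console.WriteLine(BitConverter.ToString(a, 0, 16) == BitConverter.ToString(b, 0, 16)); // True
    }
}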
So now let's get to your proposed options:
Option 1. Embed hard-coded IV within the application and save the key in the key file.
Option 2. Embed hard-coded key within the application and save the IV in the key file.
These options are pretty much identical. If two files that begin with the same text produce encrypted files that begin with identical ciphertext, you're hosed. That would happen in both of these options. (Assuming there's one master key used to encrypt all files).
Option 3. Save both the key and the IV in the key file.
If you use a random IV for each key file, you're good. No two key files will be identical, and each encrypted file must have its own key file. A different key file will not work.
PS: Once you go with option 3 and random IVs, start looking into how you'll determine whether decryption was successful. Take a key file from one file and try using it to decrypt a different encrypted file. You may discover that decryption proceeds and produces garbage results. If this happens, begin research into authenticated encryption.
The important thing about an IV is you must never use the same IV for two messages. Everything else is secondary - if you can ensure uniqueness, randomness is less important (but still a very good thing to have!). The IV does not need to be (and indeed, in CBC mode cannot be) secret.
As such, you should not save the IV alongside the key - that would imply you use the same IV for every message, which defeats the point of having an IV. Typically you would simply prepend the IV to the encrypted file, in the clear.
If you are going to be rolling your own cipher modes like this, please read the relevant standards. The NIST has a good document on cipher modes here: http://dx.doi.org/10.6028/NIST.SP.800-38A IV generation is documented in Appendix C. Cryptography is a subtle art. Do not be tempted to create variations on the normal cipher modes; 99% of the time you will create something that looks more secure, but is actually less secure.
When you use an IV, the most important thing is that the IV should be as unique as possible, so in practice you should use a random IV. This means embedding it in your application is not an option. I would save the IV in the data file, as it does not harm security as long as the IV is random/unique.
Key/IV pairs are probably the most confused concept in the world of encryption. Simply put, the key and IV together play the role a password would: you need the matching key and IV to decrypt an encrypted message. The internet sometimes implies you only need the IV to encrypt and can toss it away, but it is also required to decrypt. The reason for splitting the key and IV is to make it possible to encrypt the same message with the same key but a different IV and get unequal ciphertexts. So, Encrypt("message", key, iv) != Encrypt("message", key, differentIv). The idea is to use a new random IV every time a message is encrypted. But how do you manage an ever-changing IV value? There are a million possibilities, but the most common way is to embed the 16-byte IV within the encrypted message itself, so encrypted = iv + encryptedMessage. This way the constantly changing IV can be pulled off the front of the encrypted message and the rest decrypted: decryptedMessage = Decrypt(messageWithoutIv, key, ivFromEncryptedMessage). Alternatively, if encrypted messages are stored in a database, the IV could be stored in a field there. Although the IV travels alongside the ciphertext, it is tiny in comparison to the 32-byte key and is never reused, so it is safe to expose publicly. Keep in mind that the IV has nothing to do with hiding the encryption itself; it has to do with masking the fact that messages have the same content.
The IV is used to increase security via randomness, but that does not mean it is used by every algorithm.
The tricky thing is how long the IV should be. Usually it is the same as the block size or cipher size; for example, AES uses a 16-byte IV. Besides, the IV type can also be selected, e.g. eseqiv, seqiv, chainiv...

Which encryption mode ECB, CBC, CFB

My PHP script and my C# application will pass a hash string to each other that is 32 chars long; what is the best mode for this? I thought ECB but am unsure, as it says not to use it with more than one block. How do I know how big a block is?
They will occasionally pass a large text file as well; which mode would be best for encrypting this... CBC?
Any good useful reads welcome...
Thanks
One problem of ECB (among many other problems) is that it encrypts deterministically. That is, each time you encrypt the same ID, you get the same ciphertext. Thus this mode does not prevent traffic analysis. An attacker may not be able to learn the IDs that are encrypted; however, he can still determine when and how frequently the same ID is sent.
CBC and OFB, when used properly, use a new random IV for each encryption, thus encrypting the same ID differently each time. Since you also make sure that all IDs have the same length, the result should be ciphertexts where the attacker cannot distinguish repeating IDs from non-repeating ones.
ECB is the simplest mode and really not recommended (it is considerably less secure than the other modes).
Personally I'd use CBC.
In addition to Accipitridae's answer: you will also need to supply the IV to the decryption procedure, which adds a small overhead in the case of CBC or OFB.

How to choose an AES encryption mode (CBC ECB CTR OCB CFB)?

Which of them are preferred in which circumstances?
I'd like to see a list of evaluation criteria for the various modes, and maybe a discussion of the applicability of each criterion.
For example,
I think one of the criteria is "size of the code" for encryption and decryption, which is important for micro-code embedded systems, like 802.11 network adapters. If the code required to implement CBC is much smaller than that required for CTR (I don't know whether this is true, it's just an example), then I could understand why the mode with the smaller code would be preferred. But if I am writing an app that runs on a server, and the AES library I am using implements both CBC and CTR anyway, then this criterion is irrelevant.
See what I mean by "list of evaluation criteria and applicability of each criterion" ??
This isn't really programming related but it is algorithm related.
Please consider long and hard if you can't get around implementing your own cryptography
The ugly truth of the matter is that if you are asking this question you will probably not be able to design and implement a secure system.
Let me illustrate my point: Imagine you are building a web application and you need to store some session data. You could assign each user a session ID and store the session data on the server in a hash map mapping session ID to session data. But then you have to deal with this pesky state on the server and if at some point you need more than one server things will get messy. So instead you have the idea to store the session data in a cookie on the client side. You will encrypt it of course so the user cannot read and manipulate the data. So what mode should you use? Coming here you read the top answer (sorry for singling you out myforwik). The first one covered - ECB - is not for you, you want to encrypt more than one block, the next one - CBC - sounds good and you don't need the parallelism of CTR, you don't need random access, so no XTS and patents are a PITA, so no OCB. Using your crypto library you realize that you need some padding because you can only encrypt multiples of the block size. You choose PKCS7 because it was defined in some serious cryptography standards. After reading somewhere that CBC is provably secure if used with a random IV and a secure block cipher, you rest at ease even though you are storing your sensitive data on the client side.
Years later after your service has indeed grown to significant size, an IT security specialist contacts you in a responsible disclosure. She's telling you that she can decrypt all your cookies using a padding oracle attack, because your code produces an error page if the padding is somehow broken.
This is not a hypothetical scenario: Microsoft had this exact flaw in ASP.NET until a few years ago.
The problem is there are a lot of pitfalls regarding cryptography and it is extremely easy to build a system that looks secure for the layman but is trivial to break for a knowledgeable attacker.
What to do if you need to encrypt data
For live connections use TLS (be sure to check the hostname of the certificate and the issuer chain). If you can't use TLS, look for the highest level API your system has to offer for your task and be sure you understand the guarantees it offers and more important what it does not guarantee. For the example above a framework like Play offers client side storage facilities, it does not invalidate the stored data after some time, though, and if you changed the client side state, an attacker can restore a previous state without you noticing.
If there is no high level abstraction available use a high level crypto library. A prominent example is NaCl and a portable implementation with many language bindings is Sodium. Using such a library you do not have to care about encryption modes etc. but you have to be even more careful about the usage details than with a higher level abstraction, like never using a nonce twice. For custom protocol building (say you want something like TLS, but not over TCP or UDP) there are frameworks like Noise and associated implementations that do most of the heavy lifting for you, but their flexibility also means there is a lot of room for error, if you don't understand in depth what all the components do.
If for some reason you cannot use a high level crypto library, for example because you need to interact with existing system in a specific way, there is no way around educating yourself thoroughly. I recommend reading Cryptography Engineering by Ferguson, Kohno and Schneier. Please don't fool yourself into believing you can build a secure system without the necessary background. Cryptography is extremely subtle and it's nigh impossible to test the security of a system.
Comparison of the modes
Encryption only:
Modes that require padding:
As in the example above, padding can generally be dangerous because it opens up the possibility of padding oracle attacks. The easiest defense is to authenticate every message before decryption. See below.
ECB encrypts each block of data independently and the same plaintext block will result in the same ciphertext block. Take a look at the ECB encrypted Tux image on the ECB Wikipedia page to see why this is a serious problem. I don't know of any use case where ECB would be acceptable.
CBC has an IV and thus needs randomness every time a message is encrypted, changing a part of the message requires re-encrypting everything after the change, transmission errors in one ciphertext block completely destroy the plaintext and change the decryption of the next block, decryption can be parallelized / encryption can't, the plaintext is malleable to a certain degree - this can be a problem.
Stream cipher modes: These modes generate a pseudo-random stream of data that may or may not depend on the plaintext. Similarly to stream ciphers generally, the generated pseudo-random stream is XORed with the plaintext to generate the ciphertext. As you can use as many bits of the random stream as you like, you don't need padding at all. The disadvantage of this simplicity is that the encryption is completely malleable, meaning that the decryption can be changed by an attacker in any way he likes: for a plaintext p1, a ciphertext c1 and a pseudo-random stream r, an attacker can choose a difference d such that the decryption of a ciphertext c2=c1⊕d is p2 = p1⊕d, as p2 = c2⊕r = (c1 ⊕ d) ⊕ r = d ⊕ (c1 ⊕ r). Also, the same pseudo-random stream must never be used twice, as for two ciphertexts c1=p1⊕r and c2=p2⊕r an attacker can compute the XOR of the two plaintexts as c1⊕c2=p1⊕r⊕p2⊕r=p1⊕p2. That also means that changing the message requires complete re-encryption, if the original message could have been obtained by an attacker. All of the following stream cipher modes only need the encryption operation of the block cipher, so depending on the cipher this might save some (silicon or machine code) space in extremely constrained environments.
CTR is simple: it creates a pseudo-random stream that is independent of the plaintext; different pseudo-random streams are obtained by counting up from different nonces/IVs, which are multiplied by a maximum message length so that overlap is prevented; using nonces, message encryption is possible without per-message randomness; decryption and encryption are completely parallelizable; transmission errors only affect the corresponding bits and nothing more
OFB also creates a pseudo-random stream independent of the plaintext; different pseudo-random streams are obtained by starting with a different nonce or random IV for every message; neither encryption nor decryption is parallelizable; as with CTR, using nonces makes message encryption possible without per-message randomness; as with CTR, transmission errors only affect the corresponding bits and nothing more
CFB's pseudo-random stream depends on the plaintext; a different nonce or random IV is needed for every message; like with CTR and OFB, using nonces makes message encryption possible without per-message randomness; decryption is parallelizable / encryption is not; transmission errors completely destroy the following block, but only affect the corresponding bits in the current block
Disk encryption modes: These modes are specialized to encrypt data below the file system abstraction. For efficiency reasons, changing some data on the disk must only require the rewrite of at most one disk block (512 bytes or 4 KiB). They are out of scope of this answer as they have vastly different usage scenarios than the others. Don't use them for anything except block-level disk encryption. Some members: XEX, XTS, LRW.
Authenticated encryption:
To prevent padding oracle attacks and changes to the ciphertext, one can compute a message authentication code (MAC) on the ciphertext and only decrypt it if it has not been tampered with. This is called encrypt-then-MAC and should be preferred to any other order. Except for very few use cases, authenticity is as important as confidentiality (the latter of which is the aim of encryption). Authenticated encryption schemes (with associated data, AEAD) combine the two-part process of encryption and authentication into one block cipher mode that also produces an authentication tag in the process. In most cases this results in a speed improvement.
CCM is a simple combination of CTR mode and a CBC-MAC. Using two block cipher encryptions per block it is very slow.
OCB is faster but encumbered by patents. For free (as in freedom) or non-military software the patent holder has granted a free license, though.
GCM is a very fast but arguably complex combination of CTR mode and GHASH, a MAC over the Galois field with 2^128 elements. Its wide use in important network standards like TLS 1.2 is reflected by a special instruction Intel has introduced to speed up the calculation of GHASH.
Recommendation:
Considering the importance of authentication I would recommend the following two block cipher modes for most use cases (except for disk encryption purposes): If the data is authenticated by an asymmetric signature use CBC, otherwise use GCM.
ECB should not be used if encrypting more than one block of data with the same key.
CBC, OFB and CFB are similar; however, OFB/CFB are better because you only need encryption and not decryption, which can save code space.
CTR is used if you want good parallelization (i.e. speed), instead of CBC/OFB/CFB.
XTS mode is the most common if you are encrypting randomly accessible data (like a hard disk or RAM).
OCB is by far the best mode, as it allows encryption and authentication in a single pass. However there are patents on it in USA.
The only thing you really have to know is that ECB is not to be used unless you are only encrypting 1 block. XTS should be used if you are encrypting randomly accessed data and not a stream.
You should ALWAYS use unique IVs every time you encrypt, and they should be random. If you cannot guarantee they are random, use OCB, as it only requires a nonce, not an IV, and there is a distinct difference: a nonce does not lose security if people can guess the next one, while a predictable IV can cause this problem.
A formal analysis has been done by Phil Rogaway in 2011, here. Section 1.6 gives a summary that I transcribe here, adding my own emphasis in bold (if you are impatient, then his recommendation is use CTR mode, but I suggest that you read my paragraphs about message integrity versus encryption below).
Note that most of these require the IV to be random, which means unpredictable and therefore generated with cryptographic security. However, some require only a "nonce", which does not demand that property but instead only requires that it is not re-used. Therefore designs that rely on a nonce are less error prone than designs that do not (and believe me, I have seen many cases where CBC is not implemented with proper IV selection). So you will see that I have added bold where Rogaway says something like "confidentiality is not achieved when the IV is a nonce": it means that if you choose your IV cryptographically securely (unpredictably), then there is no problem, but if you do not, then you lose the good security properties. Never re-use an IV for any of these modes.
Also, it is important to understand the difference between message integrity and encryption. Encryption hides data, but an attacker might be able to modify the encrypted data, and the results can potentially be accepted by your software if you do not check message integrity. While the developer will say "but the modified data will come back as garbage after decryption", a good security engineer will find the probability that the garbage causes adverse behaviour in the software, and then he will turn that analysis into a real attack. I have seen many cases where encryption was used but message integrity was really needed more than the encryption. Understand what you need.
I should say that although GCM has both encryption and message integrity, it is a very fragile design: if you re-use an IV, you are screwed -- the attacker can recover your key. Other designs are less fragile, so I personally am afraid to recommend GCM based upon the amount of poor encryption code that I have seen in practice.
If you need both, message integrity and encryption, you can combine two algorithms: usually we see CBC with HMAC, but no reason to tie yourself to CBC. The important thing to know is encrypt first, then MAC the encrypted content, not the other way around. Also, the IV needs to be part of the MAC calculation.
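A hedged C# sketch of that encrypt-then-MAC layout: CBC for confidentiality, HMAC-SHA256 computed over the IV plus the ciphertext, and the MAC verified before any decryption is attempted. The two independent keys and the iv || ciphertext || tag packing are conventions chosen for this example, not a fixed standard.

using System;
using System.Linq;
using System.Security.Cryptography;

static class EncryptThenMac
{
    public static byte[] Seal(byte[] encKey, byte[] macKey, byte[] plaintext)
    {
        using var aes = Aes.Create();          // CBC + PKCS7 by default
        aes.Key = encKey;
        aes.GenerateIV();
        byte[] ciphertext;
        using (var enc = aes.CreateEncryptor())
            ciphertext = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);

        using var hmac = new HMACSHA256(macKey);
        var ivAndCt = aes.IV.Concat(ciphertext).ToArray();
        var tag = hmac.ComputeHash(ivAndCt);   // the MAC covers the IV as well
        return ivAndCt.Concat(tag).ToArray();  // iv || ciphertext || tag
    }

    public static byte[] Open(byte[] encKey, byte[] macKey, byte[] sealedData)
    {
        const int ivLen = 16, tagLen = 32;
        var ivAndCt = sealedData[..^tagLen];
        var tag = sealedData[^tagLen..];

        // Verify integrity first; only then touch the cipher.
        using var hmac = new HMACSHA256(macKey);
        var expected = hmac.ComputeHash(ivAndCt);
        if (!CryptographicOperations.FixedTimeEquals(expected, tag))
            throw new CryptographicException("MAC mismatch: data was tampered with");

        using var aes = Aes.Create();
        aes.Key = encKey;
        aes.IV = ivAndCt[..ivLen];
        using var dec = aes.CreateDecryptor();
        return dec.TransformFinalBlock(ivAndCt, ivLen, ivAndCt.Length - ivLen);
    }
}

Because the MAC check happens before decryption, a tampered message is rejected before any padding is examined, which closes the padding oracle described earlier.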
I am not aware of IP issues.
Now to the good stuff from Professor Rogaway:
Block cipher modes, encryption but not message integrity
ECB: A blockcipher, the mode enciphers messages that are a multiple of n bits by separately enciphering each n-bit piece. The security properties are weak, the method leaking equality of blocks across both block positions and time. Of considerable legacy value, and of value as a building block for other schemes, but the mode does not achieve any generally desirable security goal in its own right and must be used with considerable caution; ECB should not be regarded as a “general-purpose” confidentiality mode.
CBC: An IV-based encryption scheme, the mode is secure as a probabilistic encryption scheme, achieving indistinguishability from random bits, assuming a random IV. Confidentiality is not achieved if the IV is merely a nonce, nor if it is a nonce enciphered under the same key used by the scheme, as the standard incorrectly suggests to do. Ciphertexts are highly malleable. No chosen ciphertext attack (CCA) security. Confidentiality is forfeit in the presence of a correct-padding oracle for many padding methods. Encryption inefficient from being inherently serial. Widely used, the mode’s privacy-only security properties result in frequent misuse. Can be used as a building block for CBC-MAC algorithms. I can identify no important advantages over CTR mode.
CFB: An IV-based encryption scheme, the mode is secure as a probabilistic encryption scheme, achieving indistinguishability from random bits, assuming a random IV. Confidentiality is not achieved if the IV is predictable, nor if it is made by a nonce enciphered under the same key used by the scheme, as the standard incorrectly suggests to do. Ciphertexts are malleable. No CCA-security. Encryption inefficient from being inherently serial. Scheme depends on a parameter s, 1 ≤ s ≤ n, typically s = 1 or s = 8. Inefficient for needing one blockcipher call to process only s bits. The mode achieves an interesting “self-synchronization” property; insertion or deletion of any number of s-bit characters into the ciphertext only temporarily disrupts correct decryption.
OFB: An IV-based encryption scheme, the mode is secure as a probabilistic encryption scheme, achieving indistinguishability from random bits, assuming a random IV. Confidentiality is not achieved if the IV is a nonce, although a fixed sequence of IVs (eg, a counter) does work fine. Ciphertexts are highly malleable. No CCA security. Encryption and decryption inefficient from being inherently serial. Natively encrypts strings of any bit length (no padding needed). I can identify no important advantages over CTR mode.
CTR: An IV-based encryption scheme, the mode achieves indistinguishability from random bits assuming a nonce IV. As a secure nonce-based scheme, the mode can also be used as a probabilistic encryption scheme, with a random IV. Complete failure of privacy if a nonce gets reused on encryption or decryption. The parallelizability of the mode often makes it faster, in some settings much faster, than other confidentiality modes. An important building block for authenticated-encryption schemes. Overall, usually the best and most modern way to achieve privacy-only encryption.
XTS: An IV-based encryption scheme, the mode works by applying a tweakable blockcipher (secure as a strong-PRP) to each n-bit chunk. For messages with lengths not divisible by n, the last two blocks are treated specially. The only allowed use of the mode is for encrypting data on a block-structured storage device. The narrow width of the underlying PRP and the poor treatment of fractional final blocks are problems. More efficient but less desirable than a (wide-block) PRP-secure blockcipher would be.
MACs (message integrity but not encryption)
ALG1–6: A collection of MACs, all of them based on the CBC-MAC. Too many schemes. Some are provably secure as VIL PRFs, some as FIL PRFs, and some have no provable security. Some of the schemes admit damaging attacks. Some of the modes are dated. Key-separation is inadequately attended to for the modes that have it. Should not be adopted en masse, but selectively choosing the “best” schemes is possible. It would also be fine to adopt none of these modes, in favor of CMAC. Some of the ISO 9797-1 MACs are widely standardized and used, especially in banking. A revised version of the standard (ISO/IEC FDIS 9797-1:2010) will soon be released [93].
CMAC: A MAC based on the CBC-MAC, the mode is provably secure (up to the birthday bound) as a (VIL) PRF (assuming the underlying blockcipher is a good PRP). Essentially minimal overhead for a CBCMAC-based scheme. Inherently serial nature a problem in some application domains, and use with a 64-bit blockcipher would necessitate occasional re-keying. Cleaner than the ISO 9797-1 collection of MACs.
HMAC: A MAC based on a cryptographic hash function rather than a blockcipher (although most cryptographic hash functions are themselves based on blockciphers). Mechanism enjoys strong provable-security bounds, albeit not from preferred assumptions. Multiple closely-related variants in the literature complicate gaining an understanding of what is known. No damaging attacks have ever been suggested. Widely standardized and used.
GMAC: A nonce-based MAC that is a special case of GCM. Inherits many of the good and bad characteristics of GCM. But nonce-requirement is unnecessary for a MAC, and here it buys little benefit. Practical attacks if tags are truncated to ≤ 64 bits and extent of decryption is not monitored and curtailed. Complete failure on nonce-reuse. Use is implicit anyway if GCM is adopted. Not recommended for separate standardization.
authenticated encryption (both encryption and message integrity)
CCM: A nonce-based AEAD scheme that combines CTR mode encryption and the raw CBC-MAC. Inherently serial, limiting speed in some contexts. Provably secure, with good bounds, assuming the underlying blockcipher is a good PRP. Ungainly construction that demonstrably does the job. Simpler to implement than GCM. Can be used as a nonce-based MAC. Widely standardized and used.
GCM: A nonce-based AEAD scheme that combines CTR mode encryption and a GF(2128)-based universal hash function. Good efficiency characteristics for some implementation environments. Good provably-secure results assuming minimal tag truncation. Attacks and poor provable-security bounds in the presence of substantial tag truncation. Can be used as a nonce-based MAC, which is then called GMAC. Questionable choice to allow nonces other than 96-bits. Recommend restricting nonces to 96-bits and tags to at least 96 bits. Widely standardized and used.
Anything but ECB.
If using CTR, it is imperative that you use a different IV for each message; otherwise the attacker can take two ciphertexts and derive the XOR of the two plaintexts. The reason is that CTR mode essentially turns a block cipher into a stream cipher, and the first rule of stream ciphers is to never use the same Key+IV twice.
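A tiny demonstration of that failure mode: the random bytes below stand in for an AES-CTR keystream that was (wrongly) reused for two messages, and the attacker recovers the XOR of the plaintexts without ever touching the key. The sample messages are invented for illustration.

using System;
using System.Security.Cryptography;
using System.Text;

class KeystreamReuse
{
    static byte[] Xor(byte[] a, byte[] b)
    {
        var r = new byte[a.Length];
        for (int i = 0; i < a.Length; i++) r[i] = (byte)(a[i] ^ b[i]);
        return r;
    }

    static void Main()
    {
        var p1 = Encoding.ASCII.GetBytes("Transfer $100 to Alice");
        var p2 = Encoding.ASCII.GetBytes("Transfer $999 to Mallo");
        var keystream = new byte[p1.Length];
        RandomNumberGenerator.Fill(keystream);   // same Key+IV => same keystream for both

        var c1 = Xor(p1, keystream);
        var c2 = Xor(p2, keystream);

        // The attacker never sees the keystream, yet c1 xor c2 equals p1 xor p2.
        Console.WriteLine(Convert.ToBase64String(Xor(c1, c2)) ==
                          Convert.ToBase64String(Xor(p1, p2)));  // True
    }
}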
There really isn't much difference in how difficult the modes are to implement. Some modes only require the block cipher to operate in the encrypting direction. However, most block ciphers, including AES, don't take much more code to implement decryption.
For all cipher modes, it is important to use different IVs for each message if your messages could be identical in the first several bytes, and you don't want an attacker knowing this.
Have you started by reading the information on this on Wikipedia - Block cipher modes of operation? Then follow the reference link on Wikipedia to NIST: Recommendation for Block Cipher Modes of Operation.
You might want to choose based on what is widely available. I had the same question and here are the results of my limited research.
Hardware limitations
STM32L (low-energy ARM cores) from ST Micro supports ECB, CBC, CTR, and GCM
CC2541 (Bluetooth Low Energy) from TI supports ECB, CBC, CFB, OFB, CTR, and CBC-MAC
Open source limitations
Original rijndael-api source - ECB, CBC, CFB1
OpenSSL - command line CBC, CFB, CFB1, CFB8, ECB, OFB
OpenSSL - C/C++ API CBC, CFB, CFB1, CFB8, ECB, OFB and CTR
EFAES lib [1] - ECB, CBC, PCBC, OFB, CFB, CRT (sic; CTR misspelled)
OpenAES [2] - ECB, CBC
[1] http://www.codeproject.com/Articles/57478/A-Fast-and-Easy-to-Use-AES-Library
[2] https://openaes.googlecode.com/files/OpenAES-0.8.0.zip
There are new timing vulnerabilities in the CBC mode of operation.
https://learn.microsoft.com/en-us/dotnet/standard/security/vulnerabilities-cbc-mode
Generally, the sole existence of a chaining mode already reduces the theoretical security, as chaining widens the attack surface and also makes certain kinds of attacks more feasible. On the other hand, without chaining you can encrypt at most 16 bytes (128 bits) securely, as that's the block size of AES (also of AES-192 and AES-256), and if your input data exceeds that block size, what else would you do than use chaining? Just encrypt the data block by block? That would be ECB, and ECB has the worst security to begin with. Anything is more secure than ECB.
Most security analyses recommend that you always use either CBC or CTR, unless you can name a reason why you cannot use one of these two modes. And out of these two modes, they recommend CBC if security is your main concern and CTR if speed is your main concern. That's because CTR is slightly less secure than CBC in the sense that it has a higher likelihood of IV (initialization vector) collision, since the presence of the CTR counter reduces the IV value space, and attackers can flip some ciphertext bits to flip exactly the same bits in the plaintext (which can be an issue if the attacker knows exact bit positions in the data). On the other hand, CTR can be fully parallelized (encryption and decryption) and requires no padding of the data to a multiple of the block size.
That said, they still claim CFB and OFB to be secure, though slightly less so, and they have no real advantages to begin with. CFB shares the same weaknesses as CBC and on top of that isn't protected against replay attacks. OFB shares the same weaknesses as CTR but cannot be parallelized at all. So CFB is like CBC without padding but less secure, and OFB is like CTR but without its speed benefits and with a wider attack surface.
There is only one special case where you may want to use OFB, and that's if you need to decrypt data in realtime (e.g. a stream of incoming data) on hardware that is actually too weak to do so, yet you know the decryption key well ahead of time. In that case, you can pre-calculate all the XOR blocks in advance, store them somewhere, and when the real data arrives the entire decryption is just XORing the incoming data with the stored XOR blocks, which requires very little computational power. That's the one thing you can do with OFB that you cannot do with any other chaining mode.
For performance analysis, see this paper.
For a detailed evaluation, including security, see this paper.
I know one aspect: although CBC gives better security by chaining each block to the previous ciphertext block, it's not applicable to randomly accessed encrypted content (like an encrypted hard disk).
So, use CBC (and the other chained modes) for sequential streams, and a random-access-friendly mode such as CTR or XTS for random access.
