C++: OpenSSL, AES CFB encryption [duplicate]

I tried to implement a "very" simple encryption/decryption example. I need it for a project where I would like to encrypt some user information. I can't encrypt the whole database, only some fields in a table.
The database and most of the rest of the project work, except the encryption:
Here is a simplified version of it:
#include <openssl/aes.h>
#include <openssl/evp.h>
#include <iostream>
#include <string.h>
using namespace std;
int main()
{
/* ckey and ivec are the two 128-bit keys necessary to
en- and decrypt your data. Note that ckey can be
192 or 256 bits as well
*/
unsigned char ckey[] = "helloworldkey";
unsigned char ivec[] = "goodbyworldkey";
int bytes_read;
unsigned char indata[AES_BLOCK_SIZE];
unsigned char outdata[AES_BLOCK_SIZE];
unsigned char decryptdata[AES_BLOCK_SIZE];
/* data structure that contains the key itself */
AES_KEY keyEn;
/* set the encryption key */
AES_set_encrypt_key(ckey, 128, &keyEn);
/* set where on the 128 bit encrypted block to begin encryption*/
int num = 0;
strcpy( (char*)indata , "Hello World" );
bytes_read = sizeof(indata);
AES_cfb128_encrypt(indata, outdata, bytes_read, &keyEn, ivec, &num, AES_ENCRYPT);
cout << "original data:\t" << indata << endl;
cout << "encrypted data:\t" << outdata << endl;
AES_cfb128_encrypt(outdata, decryptdata, bytes_read, &keyEn, ivec, &num, AES_DECRYPT);
cout << "input data was:\t" << decryptdata << endl;
return 0;
}
But the output of the "decrypted" data is just random characters, and they are the same after every execution of the code, while outdata changes with every execution...
I tried to debug and searched for a solution, but I couldn't find anything that fixes my problem.
Now my question: what is going wrong here? Or do I completely misunderstand the provided functions?

The problem is that AES_cfb128_encrypt modifies the ivec (it has to, in order to allow for chaining). The solution is to create a copy of the ivec and initialize it before each call to AES_cfb128_encrypt, as follows:
const char ivecstr[AES_BLOCK_SIZE] = "goodbyworldkey\0";
unsigned char ivec[AES_BLOCK_SIZE];
memcpy( ivec , ivecstr, AES_BLOCK_SIZE);
Then repeat the memcpy before your second call to AES_cfb128_encrypt.
Note 1: Your initial vector was a byte too short, so I put an explicit additional \0 at the end of it. You should make sure all of your strings are of the correct length when copying or passing them.
Note 2: Any code which uses encryption should REALLY avoid using strcpy or any other copy of unchecked length. It's a hazard.
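For reference, here is a minimal sketch of the full round trip with that fix applied. The key and IV strings are kept from the question purely for illustration (both are padded out to a full 16 bytes here; real keys should come from a random source or a KDF), and strcpy is replaced by a bounded copy per Note 2:

#include <openssl/aes.h>
#include <cstring>
#include <iostream>

int main()
{
    /* pad the illustrative key/IV strings to a full 16 bytes */
    unsigned char ckey[16] = "helloworldkey";
    const char ivecstr[AES_BLOCK_SIZE] = "goodbyworldkey\0";

    unsigned char ivec[AES_BLOCK_SIZE];
    unsigned char indata[AES_BLOCK_SIZE] = {0};
    unsigned char outdata[AES_BLOCK_SIZE];
    unsigned char decryptdata[AES_BLOCK_SIZE] = {0};

    AES_KEY keyEn;
    AES_set_encrypt_key(ckey, 128, &keyEn);

    strncpy((char*)indata, "Hello World", sizeof(indata) - 1);
    int bytes = sizeof(indata);
    int num = 0;

    /* fresh IV copy for encryption */
    memcpy(ivec, ivecstr, AES_BLOCK_SIZE);
    AES_cfb128_encrypt(indata, outdata, bytes, &keyEn, ivec, &num, AES_ENCRYPT);

    /* reset the IV copy (and the offset counter) before decrypting */
    memcpy(ivec, ivecstr, AES_BLOCK_SIZE);
    num = 0;
    AES_cfb128_encrypt(outdata, decryptdata, bytes, &keyEn, ivec, &num, AES_DECRYPT);

    std::cout << "round trip:\t" << decryptdata << std::endl;
    return 0;
}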

Related

In-memory file to intercept stdout on function call

I've inherited this function that I have to call from my code. The function is
from a bizarre library in an arcane programming language -- so I can assume
almost nothing about it, except for the fact that it prints some useful
information to stdout.
Let me simulate its effect with
void black_box(int n)
{
for(int i=0; i<n; i++) std::cout << "x";
std::cout << "\n";
}
I want to intercept and use the stuff it outputs. To that end I redirect stdout
to a temporary file, call black_box, then restore stdout and read the
output back from the temporary file:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <string>
#include <iostream>
int main(void){
int fd = open( "outbuff", O_RDWR | O_TRUNC | O_CREAT, 0600);
// Redirect stdout to fd
int tmp = dup(1);
dup2( fd, 1);
// Execute
black_box(100);
std::cout << std::flush;
// Restore old stdout
dup2(tmp, 1);
// Read output from the outbuff file
struct stat st;
fstat(fd, &st);
std::string buf;
buf.resize(st.st_size);
lseek(fd, 0, SEEK_SET);
read(fd, &buf[0], st.st_size);
close(fd);
std::cout << "Captured: " << buf << "\n";
return 0;
}
This works. But creating a file on disk for such a task is not something I'm
proud of. Can I make something like a file, but in-memory?
Before suggesting a pipe, please consider what would happen if
black_box overflows its buffer. And no, I need it single-threaded --
starting an extra process/thread defeats the whole purpose of what I'm trying
to achieve.
I want to intercept and use the stuff it outputs.
[...] please consider what would happen if black_box overflows its buffer.
I see two alternatives.
If you know the maximum size of the output, and the size is not too excessive, use a socketpair instead of a pipe. Unlike pipes, sockets allow you to change the size of the egress/ingress buffers.
Use a temporary file on /tmp. In the normal case it will not touch the disk (unless the system is swapping). There are a few functions for this purpose, for example mkstemp (or tmpfile).
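A minimal sketch of the second alternative, assuming a Linux-like system where /tmp is backed by tmpfs: only the creation of fd changes, and the rest of the dup2/fstat/read logic from the question stays the same. The name make_capture_fd is made up for illustration:

#include <cstdlib>
#include <unistd.h>

// Create a temporary file under /tmp (usually tmpfs, so effectively in-memory)
// and unlink its name right away; the descriptor stays usable until closed.
int make_capture_fd()
{
    char tmpl[] = "/tmp/captureXXXXXX";  // template required by mkstemp
    int fd = mkstemp(tmpl);
    if (fd != -1)
        unlink(tmpl);                    // no file is left behind on disk
    return fd;
}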

How to read an unsigned int from a std::unique_ptr<unsigned char[]>?

So basically I'm working on a file reader and the binary file gets loaded into a std::unique_ptr<unsigned char[]> containing all the bytes from the file.
I'm trying to read an unsigned int from the start of it. Usually, if it were just a raw pointer (unsigned char*) it would be as follows:
unsigned int magic = *(reinterpret_cast<unsigned int*>(buffer));
However, I'm now trying to do the same where buffer is the smart pointer. So far I've come up with this:
unsigned int magic = *(reinterpret_cast<unsigned int*>(classFile_.get()));
Upon outputting magic like this:
std::cout << std::hex << magic;
I get 1, where I should be getting 0xbebafeca (this is a Java class file reader; 0xCAFEBABE is the unsigned int magic number).
Any ideas as to why it's not working? I'm also not sure if storing a smart pointer for the unsigned char* is good practice rather than doing something like storing a raw pointer and deleting the allocated array in the destructor.
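For what it's worth, a small self-contained sketch of reading the first four bytes out of such a buffer; the buffer contents below are made up to stand in for a loaded class file, and memcpy is used instead of reinterpret_cast to sidestep aliasing and alignment issues (byte order still has to be handled separately):

#include <cstdint>
#include <cstring>
#include <iostream>
#include <memory>

int main()
{
    // Hypothetical buffer standing in for the loaded class file.
    std::unique_ptr<unsigned char[]> buffer(
        new unsigned char[4]{0xCA, 0xFE, 0xBA, 0xBE});

    std::uint32_t magic = 0;
    std::memcpy(&magic, buffer.get(), sizeof(magic)); // copy the raw bytes

    // On a little-endian machine this prints bebafeca; the class-file format
    // is big-endian, so the bytes still need swapping to obtain cafebabe.
    std::cout << std::hex << magic << "\n";
    return 0;
}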

checking EOF on unix cp program

I'm writing a unix cp program, but I'm unclear about checking for EOF. The code I have is:
int main(int argc, const char * argv[]) {
int in, out;
char buf[BUFFER_SIZE];
if (argc != 3)
cout << "Error: incorrect number of params" << endl;
if ((in = open(argv[1], O_RDONLY, 0666)) == -1)
cout << "Error: cannot open input file" << endl;
if ((out = open(argv[2], O_WRONLY | O_CREAT, 0666)) == -1)
cout << "Cannot create output file" << endl;
else
while ((read(in, buf, BUFFER_SIZE)) != -1)
write(out, buf, BUFFER_SIZE);
return 0;
}
It reads and writes fine, but writes past EOF when writing the output file. So I get a couple lines of gibberish past the end of the file. Am I just not checking for EOF correctly? I appreciate the input.
You should read the man page for the read function.
On end-of-file, read returns 0. It returns -1 only if there's an error.
read can read fewer bytes than you asked to (and it must do so if there aren't that many bytes remaining to be read). Your write call assumes that read actually read BUFFER_SIZE bytes.
You need to save the result returned by read and write only that many bytes -- and you need to terminate the loop when read returns 0 (indicating end-of-file) or -1 (indicating an error). In the latter case, you should probably do something to handle the error, or at least inform the user.
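A corrected copy loop along those lines might look like this (a sketch only, with the error handling kept minimal):

#include <unistd.h>

// Copy everything from fd `in` to fd `out`, writing only as many bytes
// as each read actually returned; returns the byte count or -1 on error.
ssize_t copy_fd(int in, int out, char *buf, size_t bufsize)
{
    ssize_t total = 0;
    ssize_t nread;
    while ((nread = read(in, buf, bufsize)) > 0) {
        if (write(out, buf, nread) != nread)
            return -1;                   // short write or write error
        total += nread;
    }
    return nread == 0 ? total : -1;      // 0 means EOF, -1 means read error
}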
Incidentally, you don't need the 0666 mode argument when calling open to open the file for reading; that applies only with O_CREAT. Since open is actually a variadic function (like printf), you don't have to supply all the arguments.
The man page is not clear on this point; it pretends that there are two different forms of the open function:
int open(const char *pathname, int flags);
int open(const char *pathname, int flags, mode_t mode);
but in fact that's not legal in C. The POSIX description correctly shows the declaration as:
int open(const char *path, int oflag, ...);

Assign pair of raw pointers returned by a function to unique_ptr

I've looked around a little bit but couldn't find an answer to this.
I have a function returning a pair of pointers to objects; the situation can be simplified to:
#include <iostream>
#include <utility>
#include <memory>
std::pair<int *, int *> shallow_copy()
{
int *i = new int;
int *j = new int;
*i = 5;
*j = 7;
return std::make_pair(i, j);
}
int main(int argc, char *argv[])
{
std::pair<int *, int *> my_pair = shallow_copy();
std::cout << "a = " << my_pair.first << " b = " << *my_pair.second << std::endl;
// This is just creating a newpointer:
std::unique_ptr<int> up(my_pair.first);
std::cout << "a = " << &up << std::endl;
delete my_pair.first;
delete my_pair.second;
return 0;
}
I cannot change the return value of the function. From std::cout << "a = " << &up << std::endl; I can see that the address of the smart pointer is different from the address of the raw pointer.
Is there a way to capture the std::pair returned by the function in a std::unique_ptr and prevent memory leaks without calling delete explicitly?
NB: The question has been edited to better state the problem and make me look smarter!
You're doing it the right way, but testing it the wrong way. You're comparing the address in first with the address of up. If you print up.get() instead (the address stored in up), you'll find they're equal.
In addition, your code has a double-delete problem. You do delete my_pair.first;, which deallocates the memory block pointed to by my_pair.first and also by up. Then, the destructor of up will deallocate it again when up goes out of scope, resulting in a double delete.
You also asked how to capture both pointers in smart pointers. Since the constructor of std::unique_ptr taking a raw pointer is explicit, you cannot directly do this with a simple std::pair<std::unique_ptr<int>, std::unique_ptr<int>>. You can use a helper function, though:
std::pair<std::unique_ptr<int>, std::unique_ptr<int>> wrapped_shallow_copy()
{
auto orig = shallow_copy();
std::pair<std::unique_ptr<int>, std::unique_ptr<int>> result;
result.first.reset(orig.first);
result.second.reset(orig.second);
return result;
}
Now, use wrapped_shallow_copy() instead of shallow_copy() and you will never leak memory from the call.
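A quick usage sketch of the wrapper (building on the includes from the question; the pair cleans itself up when it goes out of scope):

int main()
{
    auto owned = wrapped_shallow_copy();
    std::cout << "a = " << *owned.first << " b = " << *owned.second << std::endl;
    // no explicit delete needed: both unique_ptrs free their ints automatically
    return 0;
}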

Failing to convert raw binary/hex to int interpretation

I'm trying to convert raw hex/binary data to different file types.
#include <QByteArray>
#include <QDebug>
int main(int argc, char *argv[])
{
QByteArray package;
package.append( QByteArray::fromHex("a1"));
// "a1" is what is written to the memory, not the string representation of "a1"
qDebug() << package.toHex(); // "a1"
qDebug() << package; // "�"
qDebug() << package.toInt(); // 0
}
Why is the int representation 0 and not 161?
toInt has a totally different purpose: it parses a string representation of an integer. If you want an integer representing the value of the first byte of the array, use package[0]. It has char type; I don't remember how qDebug() represents char, but if you have any problems with it, just static_cast it to unsigned int.
QByteArray::toInt expects the QByteArray to contain a string of characters (probably ASCII), not the binary representation of the number.
If you want to convert binary representation to integer you can use reinterpret_cast:
int i = *reinterpret_cast<quint8*>(package.constData());
Or better use qFromBigEndian/qFromLittleEndian:
int i = qFromLittleEndian<quint8>((const uchar*)package.constData());
In both cases you must know exactly in what format the number is stored and use proper type and endianness.
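Putting those suggestions together, a small sketch; the cast to quint8 is what keeps 0xa1 from being sign-extended to a negative value:

#include <QByteArray>
#include <QDebug>
#include <QtEndian>

int main()
{
    QByteArray package = QByteArray::fromHex("a1");

    // First byte as an unsigned value: 0xa1 == 161.
    int fromByte = static_cast<quint8>(package.at(0));

    // The same via qFromLittleEndian (trivial for a single byte).
    int fromEndian = qFromLittleEndian<quint8>(
        reinterpret_cast<const uchar *>(package.constData()));

    qDebug() << fromByte << fromEndian; // 161 161
}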
