I am working on an e-mail security project that encrypts message text and attachments.
I use a 128-bit AES key. The problem is that it takes a significantly long time to encrypt large files (> 3 MB). For text files I can compress and then encrypt, but for binary files (PDF, JPG, EXE) compression doesn't help (the compressed file is still >= 75% of the original size).
So I am thinking of encrypting just the header of the binary file. How do I find the header size of a binary file on Windows?
.NET has built-in AES support (the System.Security.Cryptography.Aes class). Maybe you were using it in the wrong way; used correctly, AES is not slow enough to notice on a 3 MB file.
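As a quick sanity check that AES itself isn't the bottleneck, here is a rough throughput test. It's sketched in Python with the `cryptography` package purely for illustration, but any correct AES implementation, .NET included, behaves similarly: a 3 MB attachment should encrypt in a small fraction of a second.

```python
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)        # AES-128
data = os.urandom(8 * 1024 * 1024)              # 8 MB stand-in "attachment"

enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
start = time.perf_counter()
ciphertext = enc.update(data) + enc.finalize()
elapsed = time.perf_counter() - start

print(f"{len(data) / 2**20 / elapsed:.0f} MB/s")  # typically hundreds of MB/s
```

If your .NET code is orders of magnitude slower than this, the time is likely going into something other than the cipher (per-block object creation, unbuffered I/O, etc.), not AES itself.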
I was hit by a ransomware infection that encrypts the first 512 bytes at the top of each file and moves them to the bottom. Looking at the encrypted text, it seems to be some type of XOR cipher. I know the complete plaintext of one of the files that was encrypted, so I figured that in theory I should be able to XOR it against the ciphertext to get the key and decrypt the rest of my files. I am having a very hard time with this because I don't understand how the creator actually applied the XOR. I'm thinking he would use a BinaryReader to read the first 512 bytes into an array, XOR it, and write it back. But does that mean he XOR'ed it in hex, or decimal? I'm quite confused at this point, but I believe I am simply missing something.
I have tried Xor Tool with Python, and everything it attempts to crack looks like nonsense. I also tried a Python script called Unxor that you give the known plaintext to, but the dump file it outputs is always blank.
Good header file dump: Good-Header.bin
Encrypted header file dump: Enc-Header.bin
This may not be the best example file for seeing the XOR pattern, but it's the only file where I also have 100% of the original header from before encryption. In other headers, where there are more changes, the encrypted header changes with them.
Any advice on a method I should try, or an application I should use to take this further? Thanks so much for your help!
P.S. Stack Overflow yelled at me when I tried to post four links because I'm so new, so if you would rather see the hex dumps on Pastebin than download the header files, please let me know. The files are in no way malicious; they are only the extracted 512 bytes, not a whole file.
To recover the keystream, XOR the plaintext bytes with the ciphertext bytes. Do this with two different files so you can see whether the ransomware is using the same keystream or a different keystream for each file.
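A minimal sketch of that recovery in Python, using the file names from your dumps. Note that XOR operates on raw byte values, so "hex" versus "decimal" is only a matter of how the bytes are displayed, not how the cipher works:

```python
# K = P XOR C: recover the keystream from a known plaintext/ciphertext pair.
# XOR works on raw bytes; hex and decimal are just ways of printing them.
with open("Good-Header.bin", "rb") as f:
    plain = f.read(512)
with open("Enc-Header.bin", "rb") as f:
    cipher = f.read(512)

keystream = bytes(p ^ c for p, c in zip(plain, cipher))
print(keystream.hex())
# Repeat with a second known file pair: if both keystreams match, XOR-ing any
# encrypted header with this keystream will decrypt it.
```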
If it is using the same keystream (unlikely) then your problem is solved. If the keystreams are different, then your easiest solution is to restore the affected files from backups. You did keep backups, didn't you? Alternatively research the particular infection you have got and see if anyone else has broken that particular variant, so you can derive the key(s) they used and hence regenerate the required keystreams.
If you have a lot of money then a data recovery firm might be able to help you, but they will certainly charge.
A rule of thumb for telling a decent cipher from a toy cipher is to encrypt a highly compressible file and then try to compress the encrypted result: a dumb cipher produces output with an entropy level similar to the original's, so the encrypted file compresses about as well as the original; a good cipher (even without an initialization vector), on the other hand, produces output that looks like random garbage and will not compress at all.
When I compressed your Enc-Header.bin of 512 bytes with PKZIP, the output was also 512 bytes, so the cipher is not as dumb as you expected — bad luck. (But it does not mean that the malware has no weak spots at all.)
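You can run the same rule-of-thumb check yourself; here is a small sketch using Python's zlib (any general-purpose compressor will do):

```python
import zlib

def compression_ratio(path):
    data = open(path, "rb").read()
    return len(zlib.compress(data, 9)) / len(data)

# A toy cipher leaves structure behind, so its output still compresses
# (ratio well below 1.0); a decent cipher's output looks random and the
# "compressed" file ends up the same size or slightly larger (ratio ~1.0).
print(compression_ratio("Good-Header.bin"))
print(compression_ratio("Enc-Header.bin"))
```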
I am developing a scanner application in C++. Currently I am able to scan documents and get the images in file transfer mode, but all the scanned documents have the same file size, even though their contents differ.
FileFormat: TWFF_TIFF
Pixel flavor: TWPF_CHOCOLATE
XResolution: 75
YResolution: 75
ICAP_UNITS: TWUN_INCHES
ICAP_PIXELTYPE: TWPT_GRAY
ICAP_BRIGHTNESS:0
ICAP_CONTRAST:0
ICAP_BITDEPTH: 8
Every scanned image comes out at 327 KB. Why would this be?
Also, how can I set JPEG compression? Does file transfer mode support JPEG compression?
Probably your scanner/driver is writing uncompressed TIFF files, so the file size depends only on the dimensions of the image. If each image is the same width & height, the resulting files will be the same size.
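You can sanity-check that arithmetic yourself: for uncompressed 8-bit grayscale, the pixel data is simply width × height bytes plus a small fixed header, so identical scan settings give identical file sizes. A hypothetical sketch (Python, paper size assumed for illustration):

```python
# Uncompressed 8-bit grayscale TIFF: pixel data = width_px * height_px bytes
# (plus a small fixed header), regardless of what's actually on the page.
def expected_pixel_bytes(width_inches, height_inches, dpi=75, bytes_per_pixel=1):
    width_px = int(width_inches * dpi)
    height_px = int(height_inches * dpi)
    return width_px * height_px * bytes_per_pixel

# Same paper size + same settings -> same file size on every scan.
print(expected_pixel_bytes(8.5, 11))
```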
All the file-transfer stuff in TWAIN is implemented by the driver (not TWAIN itself) and all the features are optional. So you need to check if your scanner/driver supports JPEG compression when transferring TIFF files. It might, it might not.
You can try setting ICAP_COMPRESSION to TWCP_JPEG, after setting ICAP_IMAGEFILEFORMAT to TWFF_TIFF. Probably if both succeed you will get JPEG compression in your TIFFs, although it might be either "Old Style" JPEG or "New Style" JPEG. If you don't know what that means, you probably should find out.
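In raw C++ TWAIN you would set these via the DG_CONTROL / DAT_CAPABILITY / MSG_SET triplet; the sequence is easier to see through the pytwain wrapper, so here is an untested Python sketch (call names and capability support vary by pytwain version and, as noted above, by driver):

```python
import twain  # pytwain wrapper; untested sketch, API names may differ by version

sm = twain.SourceManager(0)
src = sm.OpenSource()

# Request file transfer, TIFF container, then JPEG compression, in that order.
# Any SetCapability call can fail if the driver doesn't support the value.
src.SetCapability(twain.ICAP_XFERMECH, twain.TWTY_UINT16, twain.TWSX_FILE)
src.SetCapability(twain.ICAP_IMAGEFILEFORMAT, twain.TWTY_UINT16, twain.TWFF_TIFF)
src.SetCapability(twain.ICAP_COMPRESSION, twain.TWTY_UINT16, twain.TWCP_JPEG)
```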
I wrote a tool for this kind of experimenting years ago; it's still maintained and available free from Atalasoft: Twirl TWAIN Probe.
Caution: Many scanners don't support File Transfer Mode (it is optional) and those that do may not support the TIFF file format (the only required file format is BMP!) If you need to support a wide variety of scanners, you'll have to use TWAIN's Native Transfer Mode or Memory Transfer Mode, and write the images to file yourself e.g. using LibTiff.
I have an original video coded at 20Mbps, 1920x1080, 30fps and want to convert it down to be 640x480 30fps at a range of (3 different) bitrates for use by Adobe Live Streaming.
Should I use ffmpeg to resize and encode at the three bitrates and then use f4fpackager to create the f4m, f4f and f4x files, or just use ffmpeg to reduce the resolution and then f4fpackager to encode the relevant bitrates?
I've made several attempts so far, but the encoded videos seem to play at a much higher bitrate than the one they were encoded at. For example, if I set up OSMF to play from my web server, I'd expect my best encoded video to play at 1,500 kbps, but it's way above that.
Has anyone had any experience of encoding for use like this?
I'm using the following options to f4fpackager
--bitrate=1428 --segment-duration 30 --fragment-duration 2
f4fpackager doesn't do any encoding; it does two things:
- fragment the mp4 files (mp4 -> f4f)
- generate a manifest (f4m) file referencing all your fragmented files (f4f)
So the process is:
- transcode your source file into each size/bitrate combination you want to provide (e.g. 1920x1080 @ 4 Mbps, 1280x720 @ 2 Mbps, etc.)
- use f4fpackager to convert each mp4 to f4f (this is the fragmentation step)
- use f4fpackager to generate the Manifest.f4m referencing the files generated in the previous step
The --bitrate option of f4fpackager should match the value you used with ffmpeg; this parameter is only used to generate the manifest file with the correct bitrate value for each quality.
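Here is a sketch of the whole pipeline, driving the CLI tools from Python. The flag values are illustrative, and note one likely cause of your "plays above its bitrate" symptom: plain `-b:v` only targets an average bitrate, so peaks can go well above it unless you also cap them with `-maxrate`/`-bufsize`.

```python
import subprocess

SRC = "master_1920x1080_20mbps.mp4"  # hypothetical source file name

# 1) Transcode one output per quality level; -maxrate/-bufsize cap the peaks
#    so actual playback stays near the advertised bitrate.
renditions = [("out_1500k.mp4", "1500k"),
              ("out_1000k.mp4", "1000k"),
              ("out_500k.mp4",  "500k")]
for out, rate in renditions:
    subprocess.run(["ffmpeg", "-i", SRC, "-s", "640x480", "-r", "30",
                    "-b:v", rate, "-maxrate", rate, "-bufsize", rate,
                    "-c:a", "copy", out], check=True)

# 2) Fragment each mp4 into f4f, then 3) build the manifest, e.g.:
#    f4fpackager --input-file=out_1500k.mp4 --bitrate=1500 ...
```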
I would like to encrypt a file with the most secure algorithm that also meets the following requirement.
Let's say we have a text file of 100 bytes and we encrypt it.
Now we change 1 byte in the original file and encrypt it again.
If we diff the two encrypted files, an ideal encryption algorithm should produce the shortest diff possible, e.g. 1 byte.
(Essentially I want to do incremental backups of encrypted files and minimize bandwidth requirements.)
If you use CTR (counter) mode, I believe you will get the result you require. CTR turns the block cipher into a stream cipher: the ciphertext is just the plaintext XORed with a keystream derived from the key and nonce, so as long as you reuse the same key and nonce for both versions of the file, changing one plaintext byte changes exactly one ciphertext byte at the same offset.
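A quick demonstration of that property (Python with the `cryptography` package). Be aware that the deliberate key/nonce reuse across versions is exactly what makes the diff small, and it is also what leaks information about the change:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(16), os.urandom(16)  # AES-128 in CTR mode

def encrypt(data: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(data) + enc.finalize()

v1 = bytearray(os.urandom(100))   # the 100-byte file
v2 = bytearray(v1)
v2[42] ^= 0xFF                    # change a single byte

c1, c2 = encrypt(bytes(v1)), encrypt(bytes(v2))
print([i for i in range(100) if c1[i] != c2[i]])  # -> [42]
# Caveat: reusing key+nonce reveals *where* the file changed, and the XOR of
# the two ciphertexts equals the XOR of the two plaintexts at those positions.
```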
I have got an ActiveX control that gets an image from a fingerprint device as a base64 string. The ActiveX control works great, and I can transfer the returned base64 string to the server, where it is converted to binary data and saved to a database. I use ASP.NET as the server-side technology and JavaScript on the client side. The problem is that the base64 string is too large, and it takes 30 to 40 seconds to transfer to the server. My question is: is there any way to compress this base64 string on the client (browser) and inflate it back on the server?
If the base64 image is really a jpeg or some other compressed image format, I wouldn't expect you to be able to get much extra compression out of it in the first place. You'd also need to work out a way of posting the binary compressed data afterwards in a portable way, which may not be terribly easy. (You may be able to pretend it's a file or something like that.)
Just how large is this image? 30 seconds is a very long time...
On my Linux system, using the bzip2 utility (which applies the Burrows-Wheeler transform and then compresses), I reduced a JPEG encoded in base64 from 259.6 KB to 194.5 KB.
Using gzip (which uses an LZ-family algorithm), I reduced it to 194.4 KB.
So yes, you can compress it. The question is why you would want to; it sounds as though your problem really lies elsewhere.
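That ~25% saving is mostly just base64's 4/3 expansion being undone, not the image itself shrinking. You can reproduce the effect with a short sketch (Python, zlib):

```python
import base64, os, zlib

raw = os.urandom(100_000)       # stand-in for an already-compressed JPEG
b64 = base64.b64encode(raw)     # ~133 KB: base64 adds one-third overhead
packed = zlib.compress(b64, 9)  # ~100 KB: the compressor reclaims that overhead

print(len(raw), len(b64), len(packed))
# The underlying JPEG data barely shrinks; only the encoding overhead does.
```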
Base64 is a format that is usually only used to get around technical limitations of sending binary data. For example, a base64 string can be used in a GET request, like "website.com/page?q=BASE64ENCODED".
You have two options:
Figure out how to send/receive binary data in a POST request, then send the data in a different form and convert it appropriately on the server.
-OR-
Since this is a fingerprint device, I assume you're using it as a password, so you don't actually have to send the raw fingerprint data every time. Just make a SHA-1 hash of it and compare it to a stored hash on the server. This is just as secure and will take a fraction of a second to upload.
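A minimal sketch of that idea (Python for illustration; `template_bytes` stands in for whatever raw bytes your device returns, and this assumes the device emits an identical byte string on every scan, which real fingerprint readers often don't):

```python
import hashlib

def fingerprint_digest(template_bytes: bytes) -> str:
    # 40 hex characters to upload instead of a huge base64 blob.
    return hashlib.sha1(template_bytes).hexdigest()

# Enrollment: store fingerprint_digest(template) on the server.
# Login: compare fingerprint_digest(new_template) against the stored value.
```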
Hope I helped!