Decrypting a XOR encrypted file

I'm trying to decrypt a XOR-encrypted file. After running the key-length test using xortool, I got this key: "fallen".
# python xortool.py -c 00 /cygdrive/c/Users/Me/Desktop/ch3.bmp
The most probable key lengths:
1: 10.6%
3: 11.6%
6: 18.5%
9: 8.8%
12: 13.8%
15: 6.6%
18: 10.4%
24: 8.1%
30: 6.4%
36: 5.2%
Key-length can be 3*n
1 possible key(s) of length 6:
fallen
Anyway, is there a way to decipher the file (a BMP file) and get the original one, using tools like OpenSSL or GPG? Do they have a XOR operation?

Neither OpenSSL nor GPG has such XOR functionality that I'm aware of; however, writing a program to do it yourself should be trivial.
Given that you know the file is a .bmp, you should be able to use this fact to decrypt the file quite easily, especially given that .bmp files have a well-defined structure. For example, the first two bytes when decrypted should be 0x42, 0x4D (that's ASCII BM), and the following 4 bytes are the (little-endian) size of the entire file in bytes, so you should be able to recover at least 6 bytes of the key immediately.
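For illustration, here is a minimal Python sketch of that known-plaintext idea; the input file name is taken from the question, and it assumes the key repeats from the very first byte of the file.
import struct

with open('ch3.bmp', 'rb') as f:
    ct = f.read()

# The first 6 plaintext bytes of any BMP: 'BM' plus the little-endian file size.
# XOR preserves length, so the plaintext size equals len(ct).
known = b'BM' + struct.pack('<I', len(ct))
key = bytes(c ^ p for c, p in zip(ct, known))
print('recovered key:', key)  # expected to be b'fallen' here

# Apply the repeating key to recover the whole file.
pt = bytes(c ^ key[i % len(key)] for i, c in enumerate(ct))
with open('decoded.bmp', 'wb') as f:
    f.write(pt)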

Since you already have xortool, just use xortool-xor from the xortool distribution:
python xortool/xortool-xor -s fallen /cygdrive/c/Users/Me/Desktop/ch3.bmp > decoded.bmp
Also note that xortool itself saves the decoded output in the xortool_out folder, so after using xortool to find the key, you could just do:
mv xortool_out/0_fallen decoded.bmp

API for ldd (or objdump)?

I need to programmatically inspect the library dependencies of a given executable. Is there a better way than running the ldd (or objdump) commands and parsing their output? Is there an API available which gives the same results as ldd?
I need to programmatically inspect the library dependencies of a given executable.
I am going to assume that you are using an ELF system (probably Linux).
Dynamic library dependencies of an executable or a shared library are encoded as a table of Elf{32,64}_Dyn entries in the PT_DYNAMIC segment of the library or executable. ldd (indirectly, but that's an implementation detail) interprets these entries and then uses various details of system configuration and/or the LD_LIBRARY_PATH environment variable to locate the needed libraries.
You can print the contents of PT_DYNAMIC with readelf -d a.out. For example:
$ readelf -d /bin/date
Dynamic section at offset 0x19df8 contains 26 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x000000000000000c (INIT) 0x3000
0x000000000000000d (FINI) 0x12780
0x0000000000000019 (INIT_ARRAY) 0x1a250
0x000000000000001b (INIT_ARRAYSZ) 8 (bytes)
0x000000000000001a (FINI_ARRAY) 0x1a258
0x000000000000001c (FINI_ARRAYSZ) 8 (bytes)
0x000000006ffffef5 (GNU_HASH) 0x308
0x0000000000000005 (STRTAB) 0xb38
0x0000000000000006 (SYMTAB) 0x358
0x000000000000000a (STRSZ) 946 (bytes)
0x000000000000000b (SYMENT) 24 (bytes)
0x0000000000000015 (DEBUG) 0x0
0x0000000000000003 (PLTGOT) 0x1b000
0x0000000000000002 (PLTRELSZ) 1656 (bytes)
0x0000000000000014 (PLTREL) RELA
0x0000000000000017 (JMPREL) 0x2118
0x0000000000000007 (RELA) 0x1008
0x0000000000000008 (RELASZ) 4368 (bytes)
0x0000000000000009 (RELAENT) 24 (bytes)
0x000000006ffffffb (FLAGS_1) Flags: PIE
0x000000006ffffffe (VERNEED) 0xf98
0x000000006fffffff (VERNEEDNUM) 1
0x000000006ffffff0 (VERSYM) 0xeea
0x000000006ffffff9 (RELACOUNT) 170
0x0000000000000000 (NULL) 0x0
This tells you that the only library needed for this binary is libc.so.6 (the NEEDED entry).
If your real question is "what other libraries does this ELF binary require", then that is pretty easy to obtain: just look for the DT_NEEDED entries in the PT_DYNAMIC segment. Doing this programmatically is rather easy:
Locate the table of program headers (the e_phoff field of the ELF file header tells you where it starts).
Iterate over them to find the one whose .p_type is PT_DYNAMIC.
That segment contains a set of fixed-size Elf{32,64}_Dyn records.
Iterate over them, looking for ones with .d_tag == DT_NEEDED.
Voila.
P.S. There is a bit of a complication: the strings, such as libc.so.6 are not part of the PT_DYNAMIC. But there is a pointer to where they are in the .d_tag == DT_STRTAB entry. See this answer for example code.
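As a rough illustration, here is a self-contained Python sketch of those steps for a 64-bit little-endian ELF (constants taken from elf.h; error handling mostly omitted), including the virtual-address-to-file-offset mapping needed for DT_STRTAB:
import struct

PT_LOAD, PT_DYNAMIC = 1, 2
DT_NULL, DT_NEEDED, DT_STRTAB = 0, 1, 5

def needed_libraries(path):
    with open(path, 'rb') as f:
        data = f.read()
    assert data[:4] == b'\x7fELF' and data[4] == 2, 'not a 64-bit ELF'
    e_phoff, = struct.unpack_from('<Q', data, 0x20)
    e_phentsize, e_phnum = struct.unpack_from('<HH', data, 0x36)
    loads, dynamic = [], None
    for i in range(e_phnum):
        # Elf64_Phdr starts: p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz
        p_type, _, p_offset, p_vaddr, _, p_filesz = struct.unpack_from(
            '<IIQQQQ', data, e_phoff + i * e_phentsize)
        if p_type == PT_LOAD:
            loads.append((p_vaddr, p_offset, p_filesz))
        elif p_type == PT_DYNAMIC:
            dynamic = (p_offset, p_filesz)
    assert dynamic, 'no PT_DYNAMIC (statically linked?)'
    # Walk the Elf64_Dyn records (16 bytes each: d_tag, d_un).
    needed, strtab_vaddr = [], None
    for off in range(dynamic[0], dynamic[0] + dynamic[1], 16):
        d_tag, d_val = struct.unpack_from('<qQ', data, off)
        if d_tag == DT_NULL:
            break
        if d_tag == DT_NEEDED:
            needed.append(d_val)
        elif d_tag == DT_STRTAB:
            strtab_vaddr = d_val
    # DT_STRTAB holds a virtual address; map it to a file offset via PT_LOAD.
    strtab_off = next(o + (strtab_vaddr - v)
                      for v, o, sz in loads if v <= strtab_vaddr < v + sz)
    return [data[strtab_off + n:data.index(b'\x00', strtab_off + n)].decode()
            for n in needed]

print(needed_libraries('/bin/date'))  # e.g. ['libc.so.6']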

Parsing Hex dump

I recently came across Kaitai Struct for dealing with arbitrary binary formats. The thing is, I have a hex dump: a file that I want to parse whose contents are in hex format. When I use the visualizer in the Kaitai Web IDE to map the data, it converts the hex data into hex again. Is there any way I can convert the data from hex so that I get the exact hex data when I use the visualizer?
For example, consider this:
3335363330
It then maps it to 33 33 33 35 33 36 33 33 33 30.
Thanks in advance.
Currently, neither the Kaitai Web IDE nor the console visualizer (ksv) supports reading hex-encoded files, only raw binary files.
The solution is to convert the hex-encoded (text) file to a binary one first and then load the binary file into Kaitai.
You can do this by calling xxd -r -p <input_file >output_file on Linux, or e.g. by calling this small Python script: python -c "open('output_file','wb').write(open('input_file','r').read().strip().decode('hex'))". The latter works on any machine where Python 2 is installed.
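On Python 3, where str.decode('hex') no longer exists, an equivalent one-liner (with the same placeholder file names) would be: python3 -c "open('output_file','wb').write(bytes.fromhex(open('input_file','r').read().strip()))".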

encrypt a hexadecimal key with another key of the same length

I need to encrypt an input of 16 bytes with a plain key of 16 bytes, using 2 methods of encryption: CBC and ECB.
I have 2 files:
1.txt : CB18D7B3101924314051647990A9C4E1
2.txt : 0000000000000000F05FBEFD564A164D
After encrypting the second value with the first one as the key, I need a 3.txt file with a result like:
3.txt : D057E1DB9458102CFD06AFCA5504E598
Thanks for helping me with a program or command to do that.
It seems that the question is how to encrypt data 2 with key 1 to obtain the encrypted data 3. Further, it looks like the encryption method is CAST; see CAST.
Where:
1: CB18D7B3101924314051647990A9C4E1
2: 0000000000000000F05FBEFD564A164D
3: D057E1DB9458102CFD06AFCA5504E598
The methods ECB and CBC cannot both be used at the same time. See Block cipher mode of operation.
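If you want to test the CAST hypothesis yourself, here is a small sketch using the third-party pycryptodome package; the choice of CAST-128 in ECB mode is an assumption, so compare the printed value with the expected 3.txt contents.
from Crypto.Cipher import CAST  # pip install pycryptodome

key = bytes.fromhex('CB18D7B3101924314051647990A9C4E1')   # 1.txt
data = bytes.fromhex('0000000000000000F05FBEFD564A164D')  # 2.txt

# CAST-128 (CAST5) in ECB mode; 16 bytes of input = two 8-byte blocks, no padding needed.
ct = CAST.new(key, CAST.MODE_ECB).encrypt(data)
print(ct.hex().upper())  # compare against D057E1DB9458102CFD06AFCA5504E598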

Why is a hex file used when burning a program into a microcontroller?

Whenever we program a microcontroller, we convert the C file into a hex file and then burn that into the controller.
My question is: why a hex file? Is that hex file a hexadecimal version of the binary executable?
If yes, then why don't we use a binary file instead?
If you are talking about an "Intel hex" file, the reason is that it is ASCII, which makes it easy to examine and parse. True, it is inefficient in one way, but compared to a raw binary it might be smaller. With a raw binary you only have one address associated with the file, if any: the starting address (not embedded in the file). A hex file, or a Motorola S-record (a similar and often-used format), is basically lines of ASCII/hex numbers that represent a record type, a starting address, a length, data, and a checksum. There are non-data lines in there, but much of it will be data. So if your program has a few bytes at address 0x1000 and a few bytes at 0x80000000, then a .bin file would at its smallest be 0x80000000 - 0x1000 plus a few bytes, but would typically be 0x80000000 plus a few bytes (right, 2 gigabytes), whereas an ihex or srec file would be in the dozens of bytes total. The ihex and srec formats also have built-in checksums to help protect against corrupt files; not perfect, of course, but better than nothing at all...
Since then, ELF and COFF and other formats have become popular. These are also based on blocks of data, not a complete memory image. They are binary rather than ASCII formats, but they are not just a memory image: chunks of data are provided with an address, type, etc.
Because ihex and srec are so simple to create and parse, they will continue to be used for a long time; it does not take a lot of resources in a bootloader, for example, to handle receiving an ihex or srec file. (Same with a binary, of course, but the binary has a lot of fill data in it, costing a lot of unnecessary transmission time.)
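To make the format concrete, here is a small Python sketch that parses a single Intel hex record and verifies its checksum (the sample record is the well-known one from the format documentation):
def parse_ihex_record(line):
    assert line.startswith(':')
    raw = bytes.fromhex(line[1:])
    count, rtype = raw[0], raw[3]
    addr = int.from_bytes(raw[1:3], 'big')
    # All bytes, including the trailing checksum, must sum to 0 mod 256.
    assert sum(raw) & 0xFF == 0, 'corrupt record'
    return rtype, addr, raw[4:4 + count]

# A type-00 (data) record: 16 data bytes destined for address 0x0100.
print(parse_ihex_record(':10010000214601360121470136007EFE09D2190140'))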

Can I determine if the terminal interprets the C1 control codes?

ISO/IEC 2022 defines the C0 and C1 control codes. The C0 set are the familiar codes between 0x00 and 0x1f in ASCII, ISO-8859-1 and UTF-8 (e.g. ESC, CR, LF).
Some VT100 terminal emulators (e.g. screen(1), PuTTY) support the C1 set, too. These are the values between 0x80 and 0x9f (so, for example, 0x84 moves the cursor down a line).
I am displaying user-supplied input. I do not wish the user input to be able to alter the terminal state (e.g. move the cursor). I am currently filtering out the character codes in the C0 set; however, I would like to conditionally filter out the C1 set too, if the terminal will interpret them as control codes.
Is there a way of getting this information from a database like termcap?
The only way to do it that I can think of is using C1 requests and testing the return value:
$ echo `echo -en "\x9bc"`
^[[?1;2c
$ echo `echo -e "\x9b5n"`
^[[0n
$ echo `echo -e "\x9b6n"`
^[[39;1R
$ echo `echo -e "\x9b0x" `
^[[2;1;1;112;112;1;0x
The above ones are:
CSI c Primary DA; request Device Attributes
CSI 5 n DSR; Device Status Report
CSI 6 n CPR; Cursor Position Report
CSI 0 x DECREQTPARM; Request Terminal Parameters
The terminfo/termcap database that ESR maintains has a couple of these requests in user strings 7 and 9 (user7/u7, user9/u9):
# INTERPRETATION OF USER CAPABILITIES
#
# The System V Release 4 and XPG4 terminfo format defines ten string
# capabilities for use by applications, .... In this file, we use
# certain of these capabilities to describe functions which are not covered
# by terminfo. The mapping is as follows:
#
# u9 terminal enquire string (equiv. to ANSI/ECMA-48 DA)
# u8 terminal answerback description
# u7 cursor position request (equiv. to VT100/ANSI/ECMA-48 DSR 6)
# u6 cursor position report (equiv. to ANSI/ECMA-48 CPR)
#
# The terminal enquire string should elicit an answerback response
# from the terminal. Common values for u9 will be ^E (on older ASCII
# terminals) or \E[c (on newer VT100/ANSI/ECMA-48-compatible terminals).
#
# The cursor position request (u7) string should elicit a cursor position
# report. A typical value (for VT100 terminals) is \E[6n.
#
# The terminal answerback description (u8) must consist of an expected
# answerback string. The string may contain the following scanf(3)-like
# escapes:
#
# %c Accept any character
# %[...] Accept any number of characters in the given set
#
# The cursor position report (u6) string must contain two scanf(3)-style
# %d format elements. The first of these must correspond to the Y coordinate
# and the second to the X coordinate. If the string contains the sequence %i, it is
# taken as an instruction to decrement each value after reading it (this is
# the inverse sense from the cup string). The typical CPR value is
# \E[%i%d;%dR (on VT100/ANSI/ECMA-48-compatible terminals).
#
# These capabilities are used by tack(1m), the terminfo action checker
# (distributed with ncurses 5.0).
Example:
$ echo `tput u7`
^[[39;1R
$ echo `tput u9`
^[[?1;2c
Of course, if you only want to prevent display corruption, you can use the approach less takes and let the user switch between displaying and not displaying control characters (the -r and -R options in less). Also, if you know your output charset, the ISO-8859 charsets have the C1 range reserved for control codes (so they have no printable characters in that range).
Actually, PuTTY does not appear to support C1 controls.
The usual way of testing this feature is with vttest, which provides menu entries for separately changing the input and output to use 8-bit controls. PuTTY fails the sanity check for each of those menu entries, and if the check is disabled, the result confirms that PuTTY does not honor those controls.
I don't think there's a straightforward way to query whether the terminal supports them. You can try nasty hacky workarounds (like print them and then query the cursor position) but I really don't recommend anything along these lines.
I think you could just filter out these C1 codes unconditionally. Unicode declares the U+0080..U+009F range as control characters anyway; I don't think you should ever use them for anything different.
(Note: you used the example 0x84 for cursor down. It's in fact U+0084, encoded in whichever encoding the terminal uses, e.g. 0xC2 0x84 in UTF-8.)
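(A quick check in Python: print('\u0084'.encode('utf-8')) prints b'\xc2\x84'.)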
Doing it 100% automatically is challenging at best. Many, if not most, Unix interfaces are smart (xterms and whatnot), but you don't actually know whether you're connected to an ASR33 or a PC running MSDOS.
You could try some of the terminal interrogation escape sequences and time out if there is no reply. But then you might have to fall back and ask the user what kind of terminal they are using.
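If you do try the interrogation route, here is a rough POSIX-only Python sketch of the probe-and-timeout idea; the half-second timeout and the reply check are assumptions, not a definitive implementation.
import os, select, sys, termios, tty

def answers_8bit_da(timeout=0.5):
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)  # raw mode, so the reply is neither echoed nor line-buffered
        os.write(sys.stdout.fileno(), b'\x9bc')  # 8-bit CSI + 'c' = Primary DA request
        r, _, _ = select.select([fd], [], [], timeout)
        reply = os.read(fd, 64) if r else b''
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)
    # Any DA-style answer means the terminal interpreted the 8-bit CSI.
    return reply.startswith(b'\x1b[?') or reply.startswith(b'\x9b')

print('C1 CSI interpreted:', answers_8bit_da())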
