Equivalent of objdump for dex files

Is there any equivalent to objdump for dex files?
I don't want to decompile them, just to be able to view their contents.

Just in case it helps someone in the future:
$ sudo apt install dexdump
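Once installed, a minimal sketch of using it (assuming a dex file named classes.dex in the current directory):
$ dexdump -f classes.dex    # summary information from the file header
$ dexdump -d classes.dex    # disassemble the code sections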

baksmali has a dump command that generates an annotated hex dump of a dex file:
baksmali dump helloworld.dex > helloworld.txt
|-----------------------------
|code_item section
|-----------------------------
|
|[0] code_item: LHelloWorld;->main([Ljava/lang/String;)V
0001c4: 0200 | registers_size = 2
0001c6: 0100 | ins_size = 1
0001c8: 0200 | outs_size = 2
0001ca: 0000 | tries_size = 0
0001cc: 0000 0000 | debug_info_off = 0x0
0001d0: 0800 0000 | insns_size = 0x8
| instructions:
0001d4: 6200 0000 | sget-object v0, Ljava/lang/System;->out:Ljava/io/PrintStream;
0001d8: 1a01 0000 | const-string v1, "Hello World!"
0001dc: 6e20 0100 1000 | invoke-virtual {v0, v1}, Ljava/io/PrintStream;->println(Ljava/lang/String;)V
0001e2: 0e00 | return-void
It is also able to list the items in some of the various constant pools:
baksmali list classes helloworld.dex
LHelloWorld;
baksmali list methods helloworld.dex
LHelloWorld;->main([Ljava/lang/String;)V
Ljava/io/PrintStream;->println(Ljava/lang/String;)V
baksmali list fields helloworld.dex
Ljava/lang/System;->out:Ljava/io/PrintStream;
baksmali list strings helloworld.dex
"Hello World!"
"LHelloWorld;"
"Ljava/io/PrintStream;"
"Ljava/lang/Object;"
"Ljava/lang/String;"
"Ljava/lang/System;"
"V"
"VL"
"[Ljava/lang/String;"
"main"
"out"
"println"

Decrypt Word document knowing part of its content

I have an encrypted .docx document I would like to recover, and I don't remember the password. I'm trying to brute-force it, but it's taking way too long, so that's not going to be an option. However, I know the exact content of part of it (296 characters). Any help?
Unfortunately, knowing part of the document won't help.
To get to the cleartext, any cracker would still need to crack the password hash exported from the document; with your approach it would have to decrypt the file, interpret its content, and compare it to the known cleartext for every candidate. No such functionality exists, especially for specialized document formats.
Here is an example of how to approach it:
Document: encrypted_doc.docx
Password: 123horse123
You will have to use office2john to export the hash to be cracked from the document.
wget https://raw.githubusercontent.com/magnumripper/JohnTheRipper/bleeding-jumbo/run/office2john.py
python office2john.py encrypted_doc.docx > doc_pass_hash.txt
cat doc_pass_hash.txt
encrypted_doc.docx:$office$*2013*100000*256*16*e77e386a8e68462d2a0a703718febbc9*08ee275ccf4946ae0e5922e9ff3114b7*0ab5fc00964f7ed4be9e45c77a33b441b2c4874d28e4bc30f38e99bfb169fcf4
If you remember some information about the password (complexity, any chosen words, character set, etc.), a mask attack can make the search much more effective.
Run hashcat --help to see which hash mode matches the document format you are dealing with:
9700 | MS Office <= 2003 $0/$1, MD5 + RC4 | Documents
9710 | MS Office <= 2003 $0/$1, MD5 + RC4, collider #1 | Documents
9720 | MS Office <= 2003 $0/$1, MD5 + RC4, collider #2 | Documents
9800 | MS Office <= 2003 $3/$4, SHA1 + RC4 | Documents
9810 | MS Office <= 2003 $3, SHA1 + RC4, collider #1 | Documents
9820 | MS Office <= 2003 $3, SHA1 + RC4, collider #2 | Documents
9400 | MS Office 2007 | Documents
9500 | MS Office 2010 | Documents
9600 | MS Office 2013 | Documents
Based on what you can recall from the password, you can choose from the following:
- [ Attack Modes ] -
# | Mode
===+======
0 | Straight
1 | Combination
3 | Brute-force
6 | Hybrid Wordlist + Mask
7 | Hybrid Mask + Wordlist
Here are the charset options hashcat offers for specifying the password mask (a custom-charset sketch follows the list):
?l = abcdefghijklmnopqrstuvwxyz
?u = ABCDEFGHIJKLMNOPQRSTUVWXYZ
?d = 0123456789
?h = 0123456789abcdef
?H = 0123456789ABCDEF
?s = «space»!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
?a = ?l?u?d?s
?b = 0x00 - 0xff
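If you remember the character set, you can also define custom charsets with -1 through -4 and reference them as ?1 to ?4 in the mask. A sketch (hypothetical: an 8-character password of lowercase letters and digits):
hashcat -m 9600 -a 3 -1 ?l?d doc_pass_hash.txt ?1?1?1?1?1?1?1?1 --force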
If you remember at least part of the password, you can also create your own dictionary, which will then be used when generating candidate passwords. This can be the most efficient approach.
So in my example, let's run a brute-force attack with a mask (3 digits, 5 lowercase letters, and another 3 digits):
hashcat -m 9600 -a 3 doc_pass_hash.txt --username -o cracked_pass.txt ?d?d?d?l?l?l?l?l?d?d?d --force
You can hit [s] for status:
[s]tatus [p]ause [b]ypass [c]heckpoint [q]uit => s
Session..........: hashcat
Status...........: Running
Hash.Type........: MS Office 2013
Hash.Target......: $office$*2013*100000*256*16*e77e386a8e68462d2a0a703...69fcf4
Time.Started.....: Sat May 30 16:59:30 2020 (3 mins, 41 secs)
Time.Estimated...: Next Big Bang (17614 years, 157 days)
Guess.Mask.......: ?d?d?d?l?l?l?l?l?d?d?d [11]
Guess.Queue......: 1/1 (100.00%)
Speed.#1.........: 21 H/s (7.50ms) @ Accel:128 Loops:32 Thr:1 Vec:8
Recovered........: 0/1 (0.00%) Digests, 0/1 (0.00%) Salts
Progress.........: 4608/11881376000000 (0.00%)
Rejected.........: 0/4608 (0.00%)
Restore.Point....: 0/1188137600000 (0.00%)
Restore.Sub.#1...: Salt:0 Amplifier:9-10 Iteration:24672-24704
Candidates.#1....: 623anane123 -> 612kerin123
As you can see, this one doesn't seem to be very effective (Time.Estimated...: Next Big Bang (17614 years, 157 days)); adding a wordlist, however, is a good idea:
cat wordlist.dict
dog
horse
cat
hashcat -m 9600 -a 6 doc_pass_hash.txt wordlist.dict ?d?d?d?l?l?l?l?l?d?d?d --username -o cracked_pass.txt --force
Session..........: hashcat
Status...........: Running
Hash.Type........: MS Office 2013
Hash.Target......: $office$*2013*100000*256*16*e77e386a8e68462d2a0a703...69fcf4
Time.Started.....: Sat May 30 17:15:34 2020 (1 min, 25 secs)
Time.Estimated...: Next Big Bang (734631 years, 226 days)
Guess.Base.......: File (wordlist.dict), Left Side
Guess.Mod........: Mask (?d?d?d?l?l?l?l?l?d?d?d) [11], Right Side
Guess.Queue.Base.: 1/1 (100.00%)
Guess.Queue.Mod..: 1/1 (100.00%)
Speed.#1.........: 2 H/s (0.47ms) @ Accel:128 Loops:32 Thr:1 Vec:8
Recovered........: 0/1 (0.00%) Digests, 0/1 (0.00%) Salts
Progress.........: 129/35644128000000 (0.00%)
Rejected.........: 0/129 (0.00%)
Restore.Point....: 0/3 (0.00%)
Restore.Sub.#1...: Salt:0 Amplifier:43-44 Iteration:32000-32032
Candidates.#1....: dog360verin123 -> cat360verin123
As we can see, this is not yet correct: in hybrid mode -a 6 the wordlist candidates are placed before the mask, so the leading digits are never tried. This needs some more tweaking; one workaround is sketched below.
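A sketch (assuming the small wordlist above and a 3-digit prefix): pre-expand the wordlist with the leading digits in the shell, then let the mask supply only the trailing digits:
while read -r w; do
  for d in $(seq -w 0 999); do printf '%s%s\n' "$d" "$w"; done
done < wordlist.dict > expanded.dict
hashcat -m 9600 -a 6 doc_pass_hash.txt expanded.dict ?d?d?d --username -o cracked_pass.txt --force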
Within a mask you can also define literal characters, for instance:
hashcat -m 9600 -a 3 doc_pass_hash.txt ?d?d?dhorse?d?d?d --username -o cracked_pass.txt --force
Session..........: hashcat
Status...........: Cracked
Hash.Type........: MS Office 2013
Hash.Target......: $office$*2013*100000*256*16*e77e386a8e68462d2a0a703...69fcf4
Time.Started.....: Sat May 30 17:24:32 2020 (28 secs)
Time.Estimated...: Sat May 30 17:25:00 2020 (0 secs)
Guess.Mask.......: ?d?d?dhorse?d?d?d [11]
Guess.Queue......: 1/1 (100.00%)
Speed.#1.........: 18 H/s (8.21ms) @ Accel:128 Loops:32 Thr:1 Vec:8
Recovered........: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.........: 512/1000000 (0.05%)
Rejected.........: 0/512 (0.00%)
Restore.Point....: 0/100000 (0.00%)
Restore.Sub.#1...: Salt:0 Amplifier:0-1 Iteration:99968-100000
Candidates.#1....: 123horse123 -> 112horse778
cat cracked_pass.txt
$office$*2013*100000*256*16*e77e386a8e68462d2a0a703718febbc9*08ee275ccf4946ae0e5922e9ff3114b7*0ab5fc00964f7ed4be9e45c77a33b441b2c4874d28e4bc30f38e99bfb169fcf4:123horse123
The cracked password is at the end of the line: 123horse123
There is more to read about rules, cracking with increasing password length (--increment), and combined attacks, but you get the idea.
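As a hedged sketch of --increment (these flags grow an all-?a mask from 6 up to 8 characters; adjust to what you can recall):
hashcat -m 9600 -a 3 --increment --increment-min 6 --increment-max 8 doc_pass_hash.txt ?a?a?a?a?a?a?a?a --force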
Here are the official basic examples to get you started:
- [ Basic Examples ] -
Attack-           | Hash- |
Mode              | Type  | Example command
==================+=======+==================================================================
Wordlist          | $P$   | hashcat -a 0 -m 400 example400.hash example.dict
Wordlist + Rules  | MD5   | hashcat -a 0 -m 0 example0.hash example.dict -r rules/best64.rule
Brute-Force       | MD5   | hashcat -a 3 -m 0 example0.hash ?a?a?a?a?a?a
Combinator        | MD5   | hashcat -a 1 -m 0 example0.hash example.dict example.dict

How to get file name and total count in zip (Linux)

How do I get output with the zip file name at the beginning, followed by the total count of files? When I run the command line below, it only shows the number of files, not the file names.
I would like to have the zip file name and the number of documents in the zip. Thanks
The output I want:
IAD1.zip 30000 files
IAD2.zip 24000 files
IAD3.zip 32000 files
.....
The command line:
zipinfo IAD${count}.zip | grep ^- | wc -l >> TotalCount.txt
With the command above, the result shows only the number of documents in the zip files:
30000
24000
32000
.....
zipinfo -h <file name> | tr '\n' ':' | awk -F':' '{print $2 , $5 , "files"}'
explanation:
zipinfo -h -- list the header line only. The archive name, actual size (in bytes) and total number of files are printed.
tr '\n' ':' -- Replace newlines with ":"
awk -F':' '{print $2 , $5 , "files"}' -- Read the input as ":"-delimited and print the 2nd and 5th fields
Demo:
:>zipinfo test.zip
Archive: test.zip
Zip file size: 2798 bytes, number of entries: 7
-rw-r--r-- 3.0 unx 18 tx stor 20-Mar-10 13:00 file1.dat
-rw-r--r-- 3.0 unx 32 tx defN 20-Mar-10 13:00 file2.dat
-rw-r--r-- 3.0 unx 16 tx stor 20-Mar-10 12:26 file3.dat
-rw-r--r-- 3.0 unx 1073 tx defN 20-Mar-12 05:24 join1.txt
-rw-r--r-- 3.0 unx 114 tx defN 20-Mar-12 05:25 join2.txt
-rw-r--r-- 3.0 unx 254 tx defN 20-Mar-11 09:39 sample.txt
-rw-r--r-- 3.0 unx 1323 bx stor 20-Mar-14 09:14 test,zip.zip
7 files, 2830 bytes uncompressed, 1746 bytes compressed: 38.3%
:>zipinfo -h test.zip | tr '\n' ':' | awk -F':' '{print $2 , $5 , "files"}'
test.zip 7 files
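To produce the full list from the question, a small loop works (a sketch, assuming the archives are named IAD1.zip, IAD2.zip, ...):
for f in IAD*.zip; do
  zipinfo -h "$f" | tr '\n' ':' | awk -F':' '{print $2 , $5 , "files"}'
done > TotalCount.txt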

Drop 4 first columns

I have a command that drops the first 4 columns, but unfortunately, if the 2nd and 4th column names are similar it truncates at the 2nd column; only if the 2nd and 4th column names differ does it truncate at the 4th column. Is there anything wrong with my command?
awk -F"|" 'NR==1 {h=substr($0, index($0,$5)); next}
{file= path ""$1""$2"_"$3"_"$4"_03042017.csv"; print (a[file]++?"": "DETAILS 03042017" ORS h ORS) substr($0, index($0,$5)) > file}
END{for(file in a) print "EOF " a[file] > file}' filename
Input:
Account Num | Name | Card_Holder_Premium | Card_Holder| Type_Card | Balance | Date_Register
01 | 02 | 03 | 04 | 05 | 06 | 07
Output:
_Premium | Card_Holder| Type_Card | Balance | Date_Register
04 | 05 | 06 | 07
My desired output:
Card_Holder| Type_Card | Balance | Date_Register
05 | 06 | 07
Is this all you're trying to do?
$ sed -E 's/([^|]+\| ){4}//' file
April | May | June
05 | 06 | 07
$ awk '{sub(/([^|]+\| ){4}/,"")}1' file
April | May | June
05 | 06 | 07
The method you use to remove columns, based on index, is not correct. As you have figured out, index can be confused and match an earlier field when that field contains the same text as a later field.
The correct way is the one advised by Ed Morton.
In this online test, the code below, based on Ed Morton's suggestion, gives you the output you expect:
awk -F"|" 'NR==1 {sub(/([^|]+\|){3}/,"");h=$0;next} \
{file=$1$2"_"$3"_"$4"_03042017.csv"; sub(/([^|]+\|){3}/,""); \
print (a[file]++?"": "DETAILS 03042017" ORS h ORS) $0 > file} \
END{for(file in a) print "EOF " a[file] > file}' file1.csv
#Output
DETAILS 03042017
Card_Holder| Type_Card | Balance | Date_Register
04 | 05 | 06 | 07
EOF 1
Due to the whitespace that you have included in your fields, the filename of the generated file appears as 01 02 _ 03 _ 04 _03042017.csv. With your real data this filename should appear correct.
In any case, I just adapted Ed Morton's answer to your code. If you are happy with this solution you should accept Ed Morton's answer.
PS: I removed a space from Ed Morton's answer since it seems to work a bit better with your not-so-clean data.
Ed suggested:
awk '{sub(/([^|]+\| ){4}/,"")}1' file
#Mind this space ^
With this space, it might fail to match your data if there is no space after a field (i.e. April|May).
By removing this space, on the other hand, the solution correctly matches fields in either format, April | May or April|May; a quick check is sketched below.
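A quick check with hypothetical sample data that mixes both separator styles:
printf 'Account | Name | Premium | Holder| Type | Balance | Date\n01|02|03|04|05|06|07\n' |
  awk '{sub(/([^|]+\|){4}/,"")}1'
# prints " Type | Balance | Date" and "05|06|07"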

Sort history on number of occurrences

Basically I want to print the 10 most used commands that are stored in the bash history, but they still have to be preceded by the number that indicates when each was used. I got this far:
history | cut -f 2 | cut -d ' ' -f 3,5 | sort -k 2 -n
This should sort on the second column, the number of occurrences of the command in that row... but it doesn't do that. I know I can add head -10 to the end of the pipe to take the top ten, but I'm kind of stuck on the sorting part.
The 10 most used commands stored in your history:
history | sed -e 's/ *[0-9][0-9]* *//' | sort | uniq -c | sort -rn | head -10
This gives you the most used command line entries by removing the history number (sed), counting (sort | uniq -c), sorting by frequency (sort -rn) and showing only the top ten entries.
If you just want the commands alone:
history | awk '{print $2;}' | sort | uniq -c | sort -rn | head -10
Both of these strip the history number; currently I have no idea how to keep it in a one-liner.
If you want to find the top used commands in your history file, you will have to count the instances in your history. awk can be used for this. In the following code, the awk segment builds a hashtable with commands as keys and the number of times they appear as values. This is printed out with the last history number for each command and sorted:
history | cut -f 2 | cut -d ' ' -f 3,5 | awk '{a[$2]++;b[$2]=$1} END{for (i in a) {print b[i], i, a[i]}}' | sort -k3 -rn | head -n 10
Output looks like:
975 cd 142
972 vim 122
990 ls 118
686 hg 90
974 mvn 51
939 bash 39
978 tac 32
958 cat 28
765 echo 27
981 exit 17
If you don't want the last column you could pipe the output through cut -d' ' -f1,2.
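For example, the same pipeline with the trailing count trimmed off:
history | cut -f 2 | cut -d ' ' -f 3,5 | awk '{a[$2]++;b[$2]=$1} END{for (i in a) {print b[i], i, a[i]}}' | sort -k3 -rn | head -n 10 | cut -d' ' -f1,2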

Conversion between binary and decimal

How do I convert between decimal and binary? I'm working on a Solaris 10 platform.
Decimal to Binary
4000000002 -> 100000000000000000000000000010
Binary to Decimal
100000000000000000000000000010 -> 4000000002
I used the following commands on Unix, but it takes a lot of time. I have 20 million records like this.
For decimal to binary, set obase to 2:
echo 'obase=2;4000000002' | bc
For binary to decimal, set ibase to 2:
echo 'ibase=2;100000000000000000000000000010' | bc
If you are running bc once for each number, that will be slow.
Can you not arrange for the data to be delivered from a file and converted in one go?
Here's a simple illustration, starting with your numbers in the file called input.txt:
# To binary
$ ( echo 'obase=2;ibase=16;'; cat input.txt ) | bc | paste input.txt - > output.txt
# To hex
$ ( echo 'obase=16;ibase=2;'; cat input.txt ) | bc | paste input.txt - > output.txt
The results are written to the file output.txt.
The paste is included to produce a tab-separated output like
07 111
1A 11010
20 100000
2B 101011
35 110101
80 10000000
FF 11111111
showing input value versus output value.
If you just want the results you can omit the paste, e.g.:
$ ( echo 'obase=2;ibase=16;'; cat input.txt ) | bc > output.txt
Note that you probably have to set ibase as well as obase for the conversion to be correct.
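For the decimal-and-binary case from the question itself, the same batching idea applies (a sketch, assuming one number per line in input.txt):
# Decimal to binary (ibase defaults to 10)
$ ( echo 'obase=2'; cat input.txt ) | bc | paste input.txt - > output.txt
# Binary to decimal (obase defaults to 10)
$ ( echo 'ibase=2'; cat input.txt ) | bc | paste input.txt - > output.txt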