Encoding/Decoding with power series - encryption

Intro
I have a string original, which was encoded (using the procedure below), then encrypted with RSA, and then decoded again, so I'm left with a ciphertext s.
To get back to the original plaintext, I'd encode s, then decrypt it, and then decode again.
Encoding
Each character in s gets encoded (using the function x) like this:
x(A)=0, x(B)=1, ..., x(Z)=25
Then the message, consisting of k characters, gets encoded (using the function y) like this:
encoded_msg = y(s) = x(s0)*26^0 + x(s1)*26^1 + x(s2)*26^2 + ... + x(s(k-1))*26^(k-1)
The problem
Now, if I do this for original="ABCD", for example, that leads to
y(x(original)) = 0*1 + 1*26 + 2*676 + 3*17576 = 54106.
(encrypt → decrypt → 54106)
Decoding
My question is: if all I have are the functions x and y and a result 54106, how do I decode back to "ABCD"?
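For concreteness, here is the encoding as defined above as a small Python sketch (the function bodies are my reading of the definitions, not code from the original):

def x(ch):
    # x(A)=0, x(B)=1, ..., x(Z)=25
    return ord(ch) - ord('A')

def y(s):
    # y(s) = x(s0)*26^0 + x(s1)*26^1 + ... + x(s(k-1))*26^(k-1)
    return sum(x(c) * 26**i for i, c in enumerate(s))

print(y("ABCD"))  # 54106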

Related

Hex string to base58

Does anyone know of a package that supports converting a base58 string to a hex string, or the other way round, from a hex string to base58 encoding?
Below is an example of a Python implementation.
https://www.reddit.com/r/Tronix/comments/ja8khn/convert_my_address/
This hex string: "4116cecf977b1ecc53eed37ee48c0ee58bcddbea5e"
should result in this: "TC3ockcvHNmt7uJ8f5k3be1QrZtMzE8MxK"
here is a link to be used for verification: https://tronscan.org/#/tools/tron-convert-tool
I kept looking and was able to write functions that produce the desired result.
import base58

def hex_to_base58(hex_string):
    if hex_string[:2] in ["0x", "0X"]:
        # Swap the "0x" prefix for Tron's mainnet version byte, 0x41
        hex_string = "41" + hex_string[2:]
    bytes_str = bytes.fromhex(hex_string)
    base58_str = base58.b58encode_check(bytes_str)
    return base58_str.decode("UTF-8")

def base58_to_hex(base58_string):
    asc_string = base58.b58decode_check(base58_string)
    return asc_string.hex().upper()
They are useful if you want to convert the public keys (hex) found in transactions to wallet addresses (base58).
public_key_hex = "0x4ab99740bdf786204e57c00677cf5bf8ee766476"
address = hex_to_base58(public_key_hex)
print(address)
# TGnKLLBQyCo6QF911j65ipBz5araDSYQAD
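And for the reverse direction, base58_to_hex gives back the decoded payload with Tron's 41 version byte still at the front (expected output per the round-trip):

print(base58_to_hex(address))
# 414AB99740BDF786204E57C00677CF5BF8EE766476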

What is an encoding error, i.e. a UTF-8 error, while encrypting?

from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes
import hashlib
import base64

def decrypt(enc, key_hash):  # To decrypt data
    print("\nIn Decryption method\n")
    unpad = lambda s: s[:-ord(s[-1:])]
    enc = base64.b64decode(enc)
    iv = enc[:AES.block_size]
    cipher = AES.new(key_hash, AES.MODE_CFB, iv)
    ciper_text = cipher.decrypt(enc[AES.block_size:])
    ciper_text = ciper_text.decode('utf-16')
    ciper_text = unpad(ciper_text)
    return ciper_text

def encrypt(ID, temperature, key_hash):  # To encrypt data
    print("\nIn Encryption method\n")
    BS = AES.block_size
    pad = lambda s: s + (BS - len(s) % BS) * chr(BS - len(s) % BS)
    ID = pad(ID)
    ID = ID.encode('utf-16')
    temperature = pad(temperature)
    temperature = temperature.encode('utf-16')
    iv = get_random_bytes(AES.block_size)
    cipher = AES.new(key=key_hash, mode=AES.MODE_CFB, iv=iv)
    ID_cipher = base64.b64encode(iv + cipher.encrypt(ID))
    temperature_cipher = base64.b64encode(iv + cipher.encrypt(temperature))
    print("Id cipher is '{0}'".format(ID_cipher))
    print("temp cipher is '{0}'".format(temperature_cipher))
    return (ID_cipher, temperature_cipher)

no = int(input("enter no of records"))
key_hash = hashlib.sha256(b"charaka").digest()  # Creating key for cipher
for i in range(no):
    ID = input("enter ID\n")
    temperature = input("enter temperature\n")
    (ID_cipher, temperature_cipher) = encrypt(ID, temperature, key_hash)
    print("Decyrpted ID is '{0}'".format((decrypt(ID_cipher, key_hash))))
    print("Decyrpted temp is '{0}'".format((decrypt(temperature_cipher, key_hash))))
When I enter a record, i.e. "ID, temperature", and try to decrypt both, the ID decrypts fine but the temperature does not. Sometimes it produces a UTF-16 error, i.e.
ciper_text = ciper_text.decode('utf-16')
UnicodeDecodeError: 'utf-16-le' codec can't decode bytes in position 4-5: illegal UTF-16 surrogate
Sometimes the output is not displayed properly:
In Decryption method
Decyrpted temp is '勼⼋'
My doubt is: if the ID decrypts fine, why do I get a problem decrypting the temperature value? I have explored other tutorials about encoding techniques, but the problem stays the same.
I used the pycryptodome library to encrypt the strings with AES.
Thank you
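A likely cause (my reading, not a confirmed answer): CFB is a stream mode, and the same cipher object is reused for both encrypt() calls, so the temperature is encrypted with a keystream that continues where the ID left off, while decrypt() builds a fresh cipher that restarts from the IV. The ID therefore round-trips fine, but the temperature is decrypted against the wrong keystream into garbage bytes that are often invalid UTF-16. A minimal sketch of the usual fix, giving each message its own IV and cipher object:

from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes
import base64

def encrypt_one(plaintext, key_hash):
    # A fresh IV and a fresh cipher per message, so each ciphertext
    # can later be decrypted independently, starting from its own IV.
    iv = get_random_bytes(AES.block_size)
    cipher = AES.new(key_hash, AES.MODE_CFB, iv)
    return base64.b64encode(iv + cipher.encrypt(plaintext.encode('utf-16')))

Since CFB is a stream mode, no padding is needed; if pad() is dropped on the encrypt side, drop unpad() in decrypt() as well.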

Need help understanding how gsub and tonumber are used to encode Lua source code?

I'm new to Lua, but I've figured out that gsub is a global substitution function and tonumber is a conversion function. What I don't understand is how the two functions are used together to produce an encoded string.
I've already tried reading parts of PIL (Programming in Lua) and the reference manual, but I'm still a bit confused.
local L0_0, L1_1
function L0_0(A0_2)
    return (A0_2:gsub("..", function(A0_3)
        return string.char((tonumber(A0_3, 16) + 256 - 13 + 255999744) % 256)
    end))
end
encodes = L0_0
L0_0 = gg
L0_0 = L0_0.toast
L1_1 = "__loading__\226\128\166"
L0_0(L1_1)
L0_0 = encodes
L1_1 = --"The Encoded String"
L0_0 = L0_0(L1_1)
L1_1 = load
L1_1 = L1_1(L0_0)
pcall(L1_1)
I removed the encoded string where I put the comment because of how long it was. If needed I can upload the encoded string as well.
gsub is being used to grab 2-character sections of A0_2. This means the string A0_3 is a 2-digit hexadecimal number, but it is not in a number format, so we cannot perform math on the value. That A0_3 is a hex number can be inferred from how tonumber is used.
tonumber from Lua 5.1 Reference Manual:
Tries to convert its argument to a number. If the argument is already a number or a string convertible to a number, then tonumber returns this number; otherwise, it returns nil.
An optional argument specifies the base to interpret the numeral. The base may be any integer between 2 and 36, inclusive. In bases above 10, the letter 'A' (in either upper or lower case) represents 10, 'B' represents 11, and so forth, with 'Z' representing 35. In base 10 (the default), the number can have a decimal part, as well as an optional exponent part (see §2.1). In other bases, only unsigned integers are accepted.
So tonumber(A0_3, 16) means we expect A0_3 to be a base-16 (hexadecimal) number; for example, tonumber("48", 16) returns 72.
Once we have the number value of A0_3, we do some math and finally convert it to a character.
function L0_0(A0_2)
    return (A0_2:gsub("..", function(A0_3)
        return string.char((tonumber(A0_3, 16) + 256 - 13 + 255999744) % 256)
    end))
end
This block of code takes a string of hex digits and converts it into characters; tonumber is used so the values can be manipulated arithmetically.
Here is an example of how this works with Hello World:
local str = "Hello World"
local hex_str = ''
for i = 1, #str do
    -- %02x keeps each byte two digits wide so gsub("..") stays aligned
    hex_str = hex_str .. string.format("%02x", str:byte(i, i))
end

function L0_0(A0_2)
    return (A0_2:gsub("..", function(A0_3)
        return string.char((tonumber(A0_3, 16) + 256 - 13 + 255999744) % 256)
    end))
end

local encoded = L0_0(hex_str)
print(encoded)
Output
;X__bJbe_W
And taking it back to the original string:
function decode(A0_2)
    return (A0_2:gsub("..", function(A0_3)
        return string.char((tonumber(A0_3, 16) + 13) % 256)
    end))
end

hex_string = ''
for i = 1, #encoded do
    hex_string = hex_string .. string.format("%02x", encoded:byte(i, i))
end
print(decode(hex_string))

In Lua, how to insert numbers as 32 bits at the front of a binary sequence?

I'm new to Lua, which I began using with OpenResty. I want to output an image and its x, y coordinates together as one binary sequence to the clients, like this: x_int32_bits y_int32_bits image_raw_data. At the client, I know the first 32 bits are x, the second 32 are y, and the rest is the image raw data. I have some questions:
How to convert a number to 32 binary bits in Lua?
How to merge two 32-bit values into one 64-bit sequence?
How to insert the 64 bits at the front of the image raw data? And what is the fastest way?
file:read("*a") gives a string-typed result; is that result a raw byte sequence, or a string like "000001110000001..."?
What I'm thinking of is shown below; I don't know how to convert 32 bits to the same string format as the file:read("*a") result.
@EgorSkriptunoff thank you, you opened a window for me. I wrote some new code; would you take a look? I also have another question: is the string merge method .. inefficient and expensive, especially when one of the strings is very large? Is there an alternative way to merge the byte strings?
NEW CODE UNDER @EgorSkriptunoff's GUIDANCE
function _M.number_to_int32_bytes(num)
    return ffi.string(ffi.new("int32_t[1]", num), 4)
end

local x, y = unpack(v)
local file, err = io.open(image_path, "rb")
if nil ~= file then
    local image_raw_data = file:read("*a")
    if nil == image_raw_data then
        ngx.log(ngx.ERR, "read file error:", err)
    else
        -- Is the .. method inefficient and expensive? The image raw data may be large,
        -- so will .. copy all the data to a new string? Is there an alternative way to merge the byte strings?
        output = utils.number_to_int32_bytes(x) .. utils.number_to_int32_bytes(y) .. image_raw_data
        ngx.print(output)
        ngx.flush(true)
    end
    file:close()
end
OLD CODE:
function table_merge(t1, t2)
    for k, v in ipairs(t2) do
        table.insert(t1, v)
    end
    return t1
end

function numToBits(num, bits)
    -- returns a table of bits
    local t = {} -- will contain the bits
    for b = bits, 1, -1 do
        rest = math.fmod(num, 2)
        t[b] = rest
        num = (num - rest) / 2
    end
    if num == 0 then return t else return {'Not enough bits to represent this number'} end
end

-- Need to insert x, y as 32 bits each at the front of the image binary sequence
function output()
    local x, y = 1, 3
    local file, err = io.open("/storage/images/1.png", "rb")
    if nil ~= file then
        local d = file:read("*a") ------- type(d) is string, why?
        if nil == d then
            ngx.log(ngx.ERR, "read file error:", err)
        else
            -- WHAT WAY I'M THINKING -----------------
            -- Convert x, y to bit tables, then merge them into one bit table
            data = table_merge(numToBits(x, 32), numToBits(y, 32))
            -- Convert data from bit table to string
            data = convert_binary_table_to_string(data) -- HOW TO DO THAT? --------
            -- Insert the x, y data at the front of the image data; is data .. d inefficient?
            data = data .. d
            -------------------------------------------
            ngx.print(data)
            ngx.flush(true)
        end
        file:close()
    end
end
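On the client side, the sequence can then be split apart again at fixed offsets. A quick illustration in Python (an assumption on my part: the client reads the two int32s in the server's native little-endian order, which is what the ffi code above produces on x86):

import struct

def parse_payload(payload):
    # First 4 bytes: x, next 4 bytes: y (little-endian int32);
    # everything from offset 8 on is the raw image data.
    x, y = struct.unpack('<ii', payload[:8])
    return x, y, payload[8:]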

UTF-8 file appending in vbscript/classicasp - can it be done?

My current knowledge:
If you are trying to write text files in VBScript / ASP land, you have two options:
the Scripting.FileSystemObject
the ADODB.Stream object
Scripting.FileSystemObject does not support UTF-8. It will write in ASCII, or in Unicode (UTF-16, wasting two bytes for most of your characters).
ADODB.Stream does not support appending (AFAIK). I can't figure out how to make the ADODB.Stream object actually open a file and write to it when I call stream.Write. There is a SaveToFile function, but it outputs the entire stream.
If you need to write 1 GB files, you would have to fill the whole 1 GB into the stream before you could write it out.
Is there a special trick to get the ADODB.Stream object to link to a file on disk? I tried:
stream.Open "URL=file:///c:\test.txt"
but that gave an error.
In this scenario, I would probably create a COM component that takes a string, and runs it through WideCharToMultiByte to convert it to UTF-8.
In case you really want to stay within VBScript, I just hacked up a very quick and very dirty UTF-8 conversion...
Function Utf8(ByVal c)
    Dim b1, b2, b3
    If c < 128 Then
        Utf8 = chr(c)
    ElseIf c < 2048 Then
        b1 = c Mod 64
        b2 = (c - b1) / 64
        Utf8 = chr(&hc0 + b2) & chr(&h80 + b1)
    ElseIf c < 65536 Then
        b1 = c Mod 64
        b2 = ((c - b1) / 64) Mod 64
        b3 = (c - b1 - (64 * b2)) / 4096
        Utf8 = chr(&he0 + b3) & chr(&h80 + b2) & chr(&h80 + b1)
    End If
End Function
Just open a file with Scripting.FileSystemObject using the system default encoding, then pass every character of the string you want to write through this function.
NOTE that the function expects a Unicode code point, so be sure to use AscW() instead of plain Asc():
For i = 1 To Len(s)
    file.Write Utf8(AscW(Mid(s, i, 1)))
Next
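For reference, the function's output can be sanity-checked against a known-good UTF-8 encoder, e.g. in Python:

# Expected UTF-8 bytes for a few code points, for comparison with Utf8()
for ch in ['A', 'é', '€']:
    print(ch, ch.encode('utf-8').hex())
# A 41
# é c3a9
# € e282ac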
