Transposition Cipher returning wrong results - encryption

This is the source code I got from https://inventwithpython.com/hacking
import math, pyperclip

def main():
    myMessage = 'Cenoonommstmme oo snnio. s s c'
    myKey = 8
    plaintext = decryptMessage(myKey, myMessage)
    print(plaintext + '|')
    pyperclip.copy(plaintext)

def decryptMessage(key, message):
    numOfColumns = math.ceil(len(message) / key)
    numOfRows = key
    numOfShadedBoxes = (numOfColumns * numOfRows) - len(message)
    plaintext = [''] * int(numOfColumns)
    col = 0
    row = 0
    for symbol in message:
        plaintext[col] += symbol
        col += 1
        if (col == numOfColumns) or (col == numOfColumns - 1 and row >= numOfRows - numOfShadedBoxes):
            col = 0
            row += 1
    return ''.join(plaintext)

if __name__ == '__main__':
    main()
What this should be returning is:
Common sense is not so common.|
What I'm getting back is:
Coosmosi seomteonos nnmm n. c|
I can't figure out where the code is failing to return the correct phrase.

The code is OK. The problem is that you're using the wrong version of Python. As the 'Installation' chapter of that website says:
Important Note! Be sure to install Python 3, and not Python 2. The
programs in this book use Python 3, and you’ll get errors if you try
to run them with Python 2. It is so important, I am adding a cartoon
penguin telling you to install Python 3 so that you do not miss this
message:
You are using Python 2 to run the program.
The result is incorrect because the program depends on a feature that behaves differently in Python 2 than in Python 3. Specifically, dividing two integers in Python 3 produces a floating-point result but in Python 2 it produces a rounded-down integer result. So this expression:
(len(message) / key)
produces 3.75 in Python 3 but produces 3 in Python 2, and therefore this expression:
math.ceil(len(message) / key)
produces 4 (3.75 rounded up is 4) in Python 3 but produces 3 (3 rounded up is 3) in Python 2. This means that your numOfColumns is incorrect and therefore the decryption procedure produces an incorrect result.
You can fix this specific issue by changing (len(message) / key) to (float(len(message)) / key) to force Python 2 to treat that calculation as a floating-point division that will give the desired 3.75 result. But the real solution is to switch to using Python 3, because these differences in behaviour between Python 3 and Python 2 are just going to keep causing trouble as you proceed through the rest of the book.
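A minimal sketch of the difference, using the message length of 30 and the key of 8 from the question (the prints are just for illustration):

import math

message_length = 30  # len('Cenoonommstmme oo snnio. s s c')
key = 8

# Python 3 prints 3.75 here; Python 2 prints 3 (floored integer division).
print(message_length / key)

# Python 3 prints 4; Python 2 prints 3.0, so numOfColumns ends up one column short.
print(math.ceil(message_length / key))

# Forcing float division gives 3.75 -> 4 under both versions.
print(math.ceil(float(message_length) / key))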

Related

How to interpolate multiple variables in a Markdown string in Julia

In the following Julia 1.5 code:
a, b = 4, 5
"a=$(a), b=($b)" # "a=4, b=5"
using Markdown
md"a=$(a), b=($b)" # a=a, b=b
# but...
Markdown.parse("a=$(a), b=($b)") # "a=4, b=5"
It seems the Markdown macro treats the two $ as delimiting a math expression, but Markdown.parse handles it fine.
Can someone explain this? Is there a way to use the md"..." form for this?
It's not obvious in my opinion, but I think a $ immediately preceded by a non-space character is interpreted as a closing LaTeX $ when an opening $ appears earlier in the string.
Some suggestions:
If you're OK with spaces around your = sign, then this works:
julia> md"a = $a, b = $b"
a = 4, b = 5
Or you could make it a list:
julia> md"""
- a=$a
- b=$b
"""
• a=4
• b=5

Reverse order recursion

I want to print out the elements in the list in reverse order recursively.
def f3(alist):
    if alist == []:
        print()
    else:
        print(alist[-1])
        f3(alist[:-1])
I know it is working well, but I don't know the difference between
return f3(alist[:-1])
and
f3(alist[:-1])
Actually both are working well.
My inputs are like these.
f3([1,2,3])
f3([])
f3([3,2,1])
There is a difference between the two, although it isn't noticeable in this program. Look at the following example, where all I am doing is passing a value as an argument and incrementing it, returning the value once it reaches 10 or greater:
from sys import exit

a = 0

def func(a):
    a += 1
    if a >= 10:
        return a
        exit(1)
    else:
        # Modifications are made to the following line
        return func(a)

g = func(3)
print(g)
Here the output is 10
Now if I re-write the code the second way, without the "return" keyword, like this:
from sys import exit

a = 0

def func(a):
    a += 1
    if a >= 10:
        return a
        exit(1)
    else:
        # Modifications are made to the following line
        func(a)

g = func(3)
print(g)
The output is "None".
This is because we are not returning any value to handle.
In short, return passes the result back up through the chain of recursive calls so the caller can use it; without it, the recursion still runs, but the result is discarded and the outermost call returns None.
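To tie this back to the original f3: because f3 only prints and never uses its return value, return f3(alist[:-1]) and f3(alist[:-1]) behave the same there. The difference matters as soon as the function is supposed to hand a result back to its caller, as in this hypothetical variant that builds the reversed list instead of printing it:

def reversed_list(alist):
    # Base case: an empty list reverses to an empty list.
    if alist == []:
        return []
    # The return here is essential: it passes the partial result
    # back up through every level of the recursion.
    return [alist[-1]] + reversed_list(alist[:-1])

print(reversed_list([1, 2, 3]))  # [3, 2, 1]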

Porting to Python3: PyPDF2 mergePage() gives TypeError

I'm using Python 3.4.2 and PyPDF2 1.24 (also using reportlab 3.1.44 in case that helps) on windows 7.
I recently upgraded from Python 2.7 to 3.4, and am in the process of porting my code. This code is used to create a blank pdf page with links embedded in it (using reportlab) and merge it (using PyPDF2) with an existing pdf page. I had an issue with reportlab in that saving the canvas used StringIO which needed to be changed to BytesIO, but after doing that I ran into this error:
Traceback (most recent call last):
File "C:\cms_software\pdf_replica\builder.py", line 401, in merge_pdf_files
input_page.mergePage(link_page)
File "C:\Python34\lib\site-packages\PyPDF2\pdf.py", line 2013, in mergePage
self.mergePage(page2)
File "C:\Python34\lib\site-packages\PyPDF2\pdf.py", line 2059, in mergePage
page2Content = PageObject._pushPopGS(page2Content, self.pdf)
File "C:\Python34\lib\site-packages\PyPDF2\pdf.py", line 1973, in _pushPopGS
stream = ContentStream(contents, pdf)
File "C:\Python34\lib\site-packages\PyPDF2\pdf.py", line 2446, in __init
stream = BytesIO(b_(stream.getData()))
File "C:\Python34\lib\site-packages\PyPDF2\generic.py", line 826, in getData
decoded._data = filters.decodeStreamData(self)
File "C:\Python34\lib\site-packages\PyPDF2\filters.py", line 326, in decodeStreamData
data = ASCII85Decode.decode(data)
File "C:\Python34\lib\site-packages\PyPDF2\filters.py", line 264, in decode
data = [y for y in data if not (y in ' \n\r\t')]
File "C:\Python34\lib\site-packages\PyPDF2\filters.py", line 264, in
data = [y for y in data if not (y in ' \n\r\t')]
TypeError: 'in <string>' requires string as left operand, not int
Here is the line the traceback mentions, together with the lines just above it:
link_page = self.make_pdf_link_page(pdf, size, margin, scale_factor, debug_article_links)
if link_page != None:
    input_page.mergePage(link_page)
Here are the relevant parts of that make_pdf_link_page function:
packet = io.BytesIO()
can = canvas.Canvas(packet, pagesize=(size['width'], size['height']))
....# left out code here is just reportlab specifics for size and url stuff
can.linkURL(url, r1, thickness=1, color=colors.green)
can.rect(x1, y1, width, height, stroke=1, fill=0)
# create a new PDF with Reportlab that has the url link embedded
can.save()
packet.seek(0)
try:
    new_pdf = PdfFileReader(packet)
except Exception as e:
    logger.exception('e')
    return None
return new_pdf.getPage(0)
I'm assuming it's a problem with using BytesIO, but I can't create the page with reportlab with StringIO. This is a critical feature that used to work perfectly with Python 2.7, so I'd appreciate any kind of feedback on this. Thanks!
UPDATE:
I've also tried changing from using BytesIO to just writing to a temp file, then merging. Unfortunately I got the same error.
Here is tempfile version:
import tempfile
temp_dir = tempfile.gettempdir()
temp_path = os.path.join(temp_dir, "tmp.pdf")
can = canvas.Canvas(temp_path, pagesize=(size['width'], size['height']))
....
can.showPage()
can.save()
try:
    new_pdf = PdfFileReader(temp_path)
except Exception as e:
    logger.exception('e')
    return None
return new_pdf.getPage(0)
UPDATE:
I found an interesting bit of information on this. It seems if I comment out the can.rect and can.linkURL calls it will merge. So drawing anything on a page, then trying to merge it with my existing pdf is causing the error.
After digging into the PyPDF2 library code, I was able to find my own answer. For Python 3 users, old libraries can be tricky. Even if they say they support Python 3, they don't necessarily test everything. In this case, the problem was with the class ASCII85Decode in filters.py in PyPDF2. For Python 3, this class needs to return bytes. I borrowed the code for this same type of function from pdfminer3k, which is a Python 3 port of pdfminer. If you exchange the ASCII85Decode class for this code, it will work:
import struct

class ASCII85Decode(object):
    def decode(data, decodeParms=None):
        if isinstance(data, str):
            data = data.encode('ascii')
        n = b = 0
        out = bytearray()
        for c in data:
            if ord('!') <= c and c <= ord('u'):
                n += 1
                b = b*85+(c-33)
                if n == 5:
                    out += struct.pack(b'>L',b)
                    n = b = 0
            elif c == ord('z'):
                assert n == 0
                out += b'\0\0\0\0'
            elif c == ord('~'):
                if n:
                    for _ in range(5-n):
                        b = b*85+84
                    out += struct.pack(b'>L',b)[:n-1]
                break
        return bytes(out)
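As a side note on why the original filter only breaks under Python 3 (a small illustrative sketch of my own, not part of the fix): iterating over a bytes object yields ints, so a membership test against a str such as ' \n\r\t' raises exactly the TypeError from the traceback.

data = b'abc \n'

# In Python 3, iterating over bytes yields ints (97, 98, 99, 32, 10).
for y in data:
    print(y, isinstance(y, int))

# So `y in ' \n\r\t'` fails with "requires string as left operand, not int".
# A bytes-safe version compares against bytes instead:
whitespace = b' \n\r\t'
filtered = bytes(y for y in data if y not in whitespace)
print(filtered)  # b'abc'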

How do you use matrices in Nimrod?

I found this project on GitHub; it was the only result returned for the search term "nimrod matrix". I took the bare bones of it and changed it a little so that it compiled without errors, then added the last two lines to build a simple matrix and output a value, but the "getter" function isn't working for some reason. I adapted the instructions for adding properties found here, but something isn't right.
Here is my code so far. I'd like to use the GNU Scientific Library from within Nimrod, and I figured that this was the first logical step.
type
  TMatrix*[T] = object
    transposed: bool
    dataRows: int
    dataCols: int
    data: seq[T]

proc index[T](x: TMatrix[T], r,c: int): int {.inline.} =
  if r<0 or r>(x.rows()-1):
    raise newException(EInvalidIndex, "matrix index out of range")
  if c<0 or c>(x.cols()-1):
    raise newException(EInvalidIndex, "matrix index out of range")
  result = if x.transposed: c*x.dataCols+r else: r*x.dataCols+c

proc rows*[T](x: TMatrix[T]): int {.inline.} =
  ## Returns the number of rows in the matrix `x`.
  result = if x.transposed: x.dataCols else: x.dataRows

proc cols*[T](x: TMatrix[T]): int {.inline.} =
  ## Returns the number of columns in the matrix `x`.
  result = if x.transposed: x.dataRows else: x.dataCols

proc matrix*[T](rows, cols: int, d: openarray[T]): TMatrix[T] =
  ## Constructor. Initializes the matrix by allocating memory
  ## for the data and setting the number of rows and columns
  ## and sets the data to the values specified in `d`.
  result.dataRows = rows
  result.dataCols = cols
  newSeq(result.data, rows*cols)
  if len(d)>0:
    if len(d)<(rows*cols):
      raise newException(EInvalidIndex, "insufficient data supplied in matrix constructor")
    for i in countup(0,rows*cols-1):
      result.data[i] = d[i]

proc `[][]`*[T](x: TMatrix[T], r,c: int): T =
  ## Element access. Returns the element at row `r` column `c`.
  result = x.data[x.index(r,c)]

proc `[][]=`*[T](x: var TMatrix[T], r,c: int, a: T) =
  ## Sets the value of the element at row `r` column `c` to
  ## the value supplied in `a`.
  x.data[x.index(r,c)] = a

var m = matrix( 2, 2, [1,2,3,4] )
echo( $m[0][0] )
This is the error I get:
c:\program files (x86)\nimrod\config\nimrod.cfg(36, 11) Hint: added path: 'C:\Users\H127\.babel\libs\' [Path]
Hint: used config file 'C:\Program Files (x86)\Nimrod\config\nimrod.cfg' [Conf]
Hint: system [Processing]
Hint: mat [Processing]
mat.nim(48, 9) Error: type mismatch: got (TMatrix[int], int literal(0))
but expected one of:
system.[](a: array[Idx, T], x: TSlice[Idx]): seq[T]
system.[](a: array[Idx, T], x: TSlice[int]): seq[T]
system.[](s: string, x: TSlice[int]): string
system.[](s: seq[T], x: TSlice[int]): seq[T]
Thank you guys!
I'd like to first point out that the matrix library you refer to is three years old. For a programming language still in development that's a long time, and indeed it no longer compiles with the current Nimrod git version:
$ nimrod c matrix
...
private/tmp/n/matrix/matrix.nim(97, 8) Error: ']' expected
It fails on the double array accessor, which seems to have changed syntax. I guess your attempt to create a double [][] accessor is problematic because it could be ambiguous: are you invoking the object's double-bracket accessor, or indexing into the nested array returned by the first pair of brackets? I had to change the proc to the following:
proc `[]`*[T](x: TMatrix[T], r,c: int): T =
After that change you also need to change the way to access the matrix. Here's what I got:
for x in 0 .. <2:
  for y in 0 .. <2:
    echo "x: ", x, " y: ", y, " = ", m[x,y]
Basically, instead of specifying two bracket accesses you pass all the parameters inside a single bracket. That code generates:
x: 0 y: 0 = 1
x: 0 y: 1 = 2
x: 1 y: 0 = 3
x: 1 y: 1 = 4
With regard to finding software for Nimrod, I would recommend using Nimble, Nimrod's package manager. Once you have it installed you can search the available and maintained packages. The command nimble search math shows two potential packages: linagl and extmath. I'm not sure if they are what you are looking for, but at least they seem fresher.

How to read Base64 VLQ code?

I'm trying to understand how css source map works. I've created a very simple scss file.
#navbar {
  color: black;
}
When I compile the above scss, I get the following map file.
{
  "version": "3",
  "mappings": "AAAA,OAAQ;EACP,KAAK,EAAE,KAAK",
  "sources": ["test.scss"],
  "file": "test.css"
}
When I decode "mappings", I get the following values:
0) [0,0,0,0], [7,0,0,8]
1) [2,0,1,-7], [5,0,0,5], [2,0,0,2], [5,0,0,5]
What are those values?
I found an example at http://www.thecssninja.com/javascript/source-mapping, under the section "Base64 VLQ and keeping the source map small".
The above diagram AAgBC once processed further would return 0, 0, 32, 16, 1 – the 32 being the continuation bit that helps build the following value of 16. B purely decoded in Base64 is 1. So the important values that are used are 0, 0, 16, 1. This then lets us know that line 1 (lines are kept count by the semi colons) column 0 of the generated file maps to file 0 (array of files 0 is foo.js), line 16 at column 1.
Even after reading the answers, the explanations were still not clear to me. Here is an explanation in plain English in case it helps someone:
something like ;;AAAA,IAAM,WAAW,SAAX;... means <line0 info>;<line1 info>;...
so for ;;AAAA,IAAM,WAAW,SAAX;..., lines 0 and 1 don't have any important info (empty spaces etc.)
then for line 2 we have AAAA,IAAM,WAAW,SAAX
we convert each of these groups to binary using the base64 character mapping:
BASE64_ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
so we basically find each character's index in the BASE64_ALPHABET above and convert that index to 6-bit binary (6 bits because it's base64). E.g. the index of A is 0, so in 6-bit binary it's 000000. So AAAA would be 000000 000000 000000 000000
then if we do this with IAAM we get: 001000 000000 000000 001100.
then this bit representation is the VLQ-encoded version of 4 numbers. For each number we start from the left block, remove the continuation bit and the sign bit, and keep the value bits. We keep appending bits from the following blocks as long as the continuation bit is 1.
e.g. 001000 is (cont)0100(sign)
so cont = 0 (no other block will be added to this number)
sign = 0 (it's positive)
bits = 0100 --> so it is 4 in decimal
-- note that we only remove the sign bit from the first block. So if we had
101000 001000
we would say
0100 (cont=1, sign=0) 01000 (cont=0)
and since later blocks carry the more significant bits, we would have had +01000 0100 = +010000100 = 132
when we keep doing this, we will get these 4 numbers (continuation bit should be 0 exactly 4 times).
AAAA would map to (0,0,0,0)
IAAM would map to (4,0,0,6)
WAAW would map to (11,0,0,11)
...
now, each of these mean relative numbers. so we correct those:
AAAA actually points to: (0,0,0,0)
IAAM actually points to: (0+4, 0+0, 0+0, 0+6) = (4,0,0,6)
WAAW actually points to: (4+11, 0+0, 0+0, 6+11) = (15,0,0,17) // we added it where IAAM was actually pointing to
...
so numbers (n1, n2, n3, n4) here stand for
n1: column in generated code
n2: corresponding source file index in "sources" array of sourceMapping output
n3: line number in original code
n4: column number in original code
we already knew which line this referred to from the beginning. So using the information we have above, we learned:
AAAA: line 2, column 0 of generated code points to sources[0], line 0, column 0
IAAM: line 2, column 4 of generated code points to sources[0], line 0, column 6
WAAW: line 2, column 15 of generated code points to sources[0], line 0, column 17
...
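To make the steps above concrete, here is a small decoder sketch in Python (written for this explanation, not taken from any library) that turns one group such as IAAM into its list of numbers:

BASE64_ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'

def decode_vlq_segment(segment):
    # Decode one comma-separated segment, e.g. 'IAAM' -> [4, 0, 0, 6].
    numbers = []
    value = 0
    shift = 0
    for char in segment:
        digit = BASE64_ALPHABET.index(char)    # 6-bit value of this character
        continuation = digit & 0b100000        # top bit: more blocks follow
        value += (digit & 0b011111) << shift   # later blocks carry higher-order bits
        shift += 5
        if not continuation:
            sign = value & 1                   # lowest bit of the finished value is the sign
            value >>= 1
            numbers.append(-value if sign else value)
            value = 0
            shift = 0
    return numbers

print(decode_vlq_segment('AAAA'))  # [0, 0, 0, 0]
print(decode_vlq_segment('IAAM'))  # [4, 0, 0, 6]
print(decode_vlq_segment('WAAW'))  # [11, 0, 0, 11]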
two good sources about this:
more on VLQ encoding
more on how to interpret/decode
Despite the examples I could find, it took me quite a while to understand how the coding / decoding really works. So I thought I'd learn best by trying to make something myself in a very explicit, step-by-step way. I started out with the explanation of VLQ at this blog.
I use the following Python functor to generate sourcemaps for Transcrypt.
The code is simple and I think gives good insight into how the coding/decoding works in principle.
To achieve speed despite its simplicity, it caches the first 256 numbers, which are used most often in generating a v3 sourcemap.
import math

class GetBase64Vlq:
    def __init__ (self):
        self.nBits32 = 5
        self.encoding = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
        self.prefabSize = 256
        self.prefab = [self (i, True) for i in range (self.prefabSize)]

    def __call__ (self, anInteger, init = False):
        if not init and 0 < anInteger < self.prefabSize:
            return self.prefab [anInteger]
        else:
            signed = bin (abs (anInteger)) [2 : ] + ('1' if anInteger < 0 else '0')
            nChunks = math.ceil (len (signed) / float (self.nBits32))
            padded = (self.nBits32 * '0' + signed) [-nChunks * self.nBits32 : ]
            chunks = [('1' if iChunk else '0') + padded [iChunk * self.nBits32 : (iChunk + 1) * self.nBits32] for iChunk in range (nChunks - 1, -1, -1)]
            return ''.join ([self.encoding [int (chunk, 2)] for chunk in chunks])

getBase64Vlq = GetBase64Vlq ()
Example of use:
while (True):
    print (getBase64Vlq (int (input ('Give number:'))))
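As a quick cross-check against the explanation in the previous answer (my own addition, not part of the original usage example): encoding the relative fields 4, 0, 0, 6 reproduces the segment IAAM.

# Encode the relative fields one by one and join them into a segment.
print(''.join(getBase64Vlq(n) for n in (4, 0, 0, 6)))  # prints: IAAM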
