How to read Base64 VLQ code? - css

I'm trying to understand how css source map works. I've created a very simple scss file.
#navbar {
  color: black;
}
When I compile the above scss, I get the following map file.
{
  "version": "3",
  "mappings": "AAAA,OAAQ;EACP,KAAK,EAAE,KAAK",
  "sources": ["test.scss"],
  "file": "test.css"
}
When I decode "mappings", I get the following values.
0) [0,0,0,0], [7,0,0,8]
1) [2,0,1,-7], [5,0,0,5], [2,0,0,2], [5,0,0,5]
What are those values?

I found an example at http://www.thecssninja.com/javascript/source-mapping, under the section "Base64 VLQ and keeping the source map small".
The above diagram AAgBC once processed further would return 0, 0, 32, 16, 1 – the 32 being the continuation bit that helps build the following value of 16. B purely decoded in Base64 is 1. So the important values that are used are 0, 0, 16, 1. This then lets us know that line 1 (lines are kept count by the semi colons) column 0 of the generated file maps to file 0 (array of files 0 is foo.js), line 16 at column 1.

Even after reading the answers, the explanations were still not so clear to me. Here is an explanation in plain English in case it helps someone:
something like ;;AAAA,IAAM,WAAW,SAAX;... means <line0 info>;<line1 info>;...
so for ;;AAAA,IAAM,WAAW,SAAX;..., line 0 and line 1 don't have any important info (empty spaces etc.)
then for line 2 we have AAAA,IAAM,WAAW,SAAX
we convert each of these groups to binary using the base64 character mapping:
BASE64_ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
so we basically find the index in this BASE64_ALPHABET above, and convert the index to 6-bit binary (6 bits, because 64 = 2^6). e.g. the index of A is 0, so in 6-bit binary it's 000000. so AAAA would be 000000 000000 000000 000000
then if we do this with IAAM we get: 001000 000000 000000 001100.
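as a quick check, here is a tiny Python sketch (mine, not part of the original answer) that prints those 6-bit groups:
BASE64_ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
print(' '.join(format(BASE64_ALPHABET.index(c), '06b') for c in 'IAAM'))
# prints: 001000 000000 000000 001100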
then this bit representation is the VLQ-encoded version of 4 numbers. we read the blocks from left to right: in each block the first bit is the continuation bit, and in the first block of a number the last bit is the sign bit. we keep the remaining bits, and keep consuming blocks for the same number while the continuation bit is 1.
eg. 001000 is (cont)0100(sign)
so cont = 0 (no other block will be added to this number)
sign=0 (its positive)
bits = 0100 --> so it is 4 in decimal
-- note that we only remove the sign bit from the first block; the blocks that follow carry the more significant bits. so if we had
101000 001000
we would say
0100 (cont=1, sign=0) 01000 (cont=0)
and since later blocks are more significant, the value is 01000 followed by 0100, i.e. +010000100 = 132
when we keep doing this, we will get these 4 numbers (continuation bit should be 0 exactly 4 times).
AAAA would map to (0,0,0,0)
IAAM would map to (4,0,0,6)
WAAW would map to (11,0,0,11)
...
now, each of these are relative numbers. so we accumulate them to get the actual (absolute) values:
AAAA actually points to: (0,0,0,0)
IAAM actually points to: (0+4, 0+0, 0+0, 0+6) = (4,0,0,6)
WAAW actually points to: (4+11, 0+0, 0+0, 6+11) = (15,0,0,17) // we added it to where IAAM was actually pointing
...
so numbers (n1, n2, n3, n4) here stand for
n1: column in generated code
n2: corresponding source file index in "sources" array of sourceMapping output
n3: line number in original code
n4: column number in original code
we already knew which line this referred to from the beginning. so using the information we have above, we learned:
AAAA: line 2, column 0 of generated code points to sources[0], line 0, column 0
IAAM: line 2, column 4 of generated code points to sources[0], line 0, column 6
WAAW: line 2, column 15 of generated code points to sources[0], line 0, column 17
...
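putting all of the above into code: here is a minimal Python sketch (my own illustration, not taken from any source map library) that decodes a whole "mappings" string. decode_segment reproduces the relative 4-tuples shown above, and decode_mappings then applies the running deltas and prints absolute positions. running it on the mapping from the question gives back exactly the values listed there ([0,0,0,0], [7,0,0,8], [2,0,1,-7], ...):
BASE64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'

def decode_segment(segment):
    # decode one comma-separated segment (e.g. 'OAAQ') into its relative numbers
    values, shift, current = [], 0, 0
    for char in segment:
        digit = BASE64.index(char)
        current |= (digit & 31) << shift          # the lower 5 bits carry the value
        shift += 5
        if not digit & 32:                        # bit 5 (continuation) is clear: number complete
            value, sign = current >> 1, current & 1   # bit 0 of the whole number is the sign
            values.append(-value if sign else value)
            shift, current = 0, 0
    return values

def decode_mappings(mappings):
    src, src_line, src_col = 0, 0, 0              # these deltas carry over across generated lines
    for gen_line, line in enumerate(mappings.split(';')):
        gen_col = 0                               # the generated column resets on every ';'
        for segment in filter(None, line.split(',')):
            fields = decode_segment(segment)
            gen_col += fields[0]
            if len(fields) > 1:
                src += fields[1]
                src_line += fields[2]
                src_col += fields[3]
            print('generated %d:%d -> sources[%d] %d:%d'
                  % (gen_line, gen_col, src, src_line, src_col))

decode_mappings('AAAA,OAAQ;EACP,KAAK,EAAE,KAAK')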
two good sources about this:
more on VLQ encoding
more on how to interpret/decode

Despite the examples I could find, it took me quite a while to understand how the coding / decoding really works. So I thought I'd learn best by trying to make something myself in a very explicit, step-by-step way. I started out with the explanation of VLQ at this blog.
I use the following Python functor to generate sourcemaps for Transcrypt.
The code is simple and I think gives good insight in how the coding/decoding works in principle.
To achieve speed despite its simplicity, it caches the first 256 numbers, which are used most often in generating a v3 sourcemap.
import math

class GetBase64Vlq:
    def __init__ (self):
        self.nBits32 = 5    # value bits per chunk; bit 5 of each chunk is the continuation flag
        self.encoding = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
        self.prefabSize = 256
        self.prefab = [self (i, True) for i in range (self.prefabSize)]    # cache the first 256 encodings

    def __call__ (self, anInteger, init = False):
        if not init and 0 < anInteger < self.prefabSize:
            return self.prefab [anInteger]    # cached value
        else:
            # append the sign bit to the absolute value, pad to a multiple of 5 bits, then emit
            # 5-bit chunks least significant first, setting the continuation bit on all but the last chunk
            signed = bin (abs (anInteger)) [2 : ] + ('1' if anInteger < 0 else '0')
            nChunks = math.ceil (len (signed) / float (self.nBits32))
            padded = (self.nBits32 * '0' + signed) [-nChunks * self.nBits32 : ]
            chunks = [('1' if iChunk else '0') + padded [iChunk * self.nBits32 : (iChunk + 1) * self.nBits32] for iChunk in range (nChunks - 1, -1, -1)]
            return ''.join ([self.encoding [int (chunk, 2)] for chunk in chunks])

getBase64Vlq = GetBase64Vlq ()
Example of use:
while (True):
    print (getBase64Vlq (int (input ('Give number:'))))
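For reference, here are a few values it produces; they match the segments hand-decoded above, and 'gB' shows the two-chunk case where the continuation bit is set:
print (getBase64Vlq (0))     # A
print (getBase64Vlq (4))     # I
print (getBase64Vlq (-7))    # P
print (getBase64Vlq (16))    # gB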

Related

Maximum input number to URL shortener

Given the following code which encodes a number, how can I calculate the maximum number if I want to limit the length of my generated keys? E.g. setting the max length of the result of encode(num) to some fixed value, say 10.
var alphabet = <SOME SET OF KEYS>,
    base = alphabet.length;
this.encode = function(num) {
    var str = '';
    while (num > 0) {
        str = alphabet.charAt(num % base) + str;
        num = Math.floor(num / base);
    }
    return str;
};
You are constructing num's representation in base base, with some arbitrary set of characters as numerals (alphabet).
For n characters we can represent numbers 0 through base^n - 1, so the answer to your question is base^10 - 1. For example, using the decimal system, with 5 digits we can represent numbers from 0 to 99999 (10^5 - 1).
It's worth noting that you will never actually use some of the shorter-than-n strings, such as '001' or '0405' (using the decimal system numerals) - that is, any string starting with the equivalent of 0, except '0' itself.
I imagine that, for the purpose of a URL shortener that is allowed variable length, this might be considered a waste. By using all combinations of length up to n (including the ones with "leading zeros") you could represent roughly (base^(n+1) - 1) / (base - 1) different values rather than base^n, but it wouldn't be as straightforward as your scheme.
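As a quick sanity check, here is a small Python sketch of the same scheme; the 62-character alphanumeric alphabet is only an assumption for illustration, substitute whatever your alphabet actually is:
import string

alphabet = string.digits + string.ascii_lowercase + string.ascii_uppercase   # 62 symbols, assumed for the example
base = len(alphabet)

def encode(num):
    # same scheme as the JavaScript above
    s = ''
    while num > 0:
        s = alphabet[num % base] + s
        num //= base
    return s

max_num = base ** 10 - 1            # largest number whose encoding fits in 10 characters
print(max_num)                      # 839299365868340223 for base 62
print(len(encode(max_num)))         # 10
print(len(encode(max_num + 1)))     # 11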

skip missing values in a for loop

I have a folder with 1000 tifs. I'd like to read 4 tifs, do some calculation and read the next 4 tifs. The problem is that, for example, tif number 500 is missing, so my current program stops right before tif number 500.
So my idea is to check whether the path exists with FILE_TEST and /DIRECTORY and skip all missing values in the for loop.
Directory: Set this keyword to return 1 (true) if File exists and is a directory. true =1, false = 0
for j = 0, 1102 do begin
PathEx = File_test(e:\Meteosat\Tiff\2016\06\17\MSG_201606170100_B4_L.tif', directory)
if PathEx = 1 then
B = READ_TIFF(e:\Meteosat\Tiff\2016\06\17\MSG_201606170100_B4_L.tif, GEOTIFF=tags)
if PathEx = 0 then
print, 'missing' and continue
end
I want to skip all missing paths, but I don't know how to do this. I also read something about
.CONTINUE
but I have no clue how that works either.
thank you!
pampi
I am not sure you are using the correct syntax. Your code should be something like the following:
FOR j=0L, 1102L DO BEGIN
  PathEx = FILE_TEST('e:\Meteosat\Tiff\2016\06\17\MSG_201606170100_B4_L.tif', /DIRECTORY)
  IF (PathEx[0] EQ 0) THEN CONTINUE ;; This will jump to the next index
  b = READ_TIFF('e:\Meteosat\Tiff\2016\06\17\MSG_201606170100_B4_L.tif',GEOTIFF=tags)
ENDFOR
You should probably have a list of file names and then index those. Suppose you call them something like filenames and this is a [N]-element string array. Then you would do something like the following within the above loop:
file = filenames[j]
PathEx = FILE_TEST(file[0],/DIRECTORY)
IF (PathEx[0] EQ 0) THEN CONTINUE ;; This will jump to the next index
b = READ_TIFF(file[0],GEOTIFF=tags)
I should also mention that the variable b should either be an array or you should have an array defined outside the loop that you can store data into, otherwise all your operations within the FOR loop will be lost.

Transposition Cipher returning wrong results

This is the source code I got from https://inventwithpython.com/hacking
import math, pyperclip

def main():
    myMessage = 'Cenoonommstmme oo snnio. s s c'
    myKey = 8
    plaintext = decryptMessage(myKey, myMessage)
    print(plaintext + '|')
    pyperclip.copy(plaintext)

def decryptMessage(key, message):
    numOfColumns = math.ceil(len(message) / key)
    numOfRows = key
    numOfShadedBoxes = (numOfColumns * numOfRows) - len(message)
    plaintext = [''] * int(numOfColumns)
    col = 0
    row = 0
    for symbol in message:
        plaintext[col] += symbol
        col += 1
        if (col == numOfColumns) or (col == numOfColumns - 1 and row >= numOfRows - numOfShadedBoxes):
            col = 0
            row += 1
    return ''.join(plaintext)

if __name__ == '__main__':
    main()
What this should be returning is
Common sense is not so common.|
What I'm getting back is
Coosmosi seomteonos nnmm n. c|
I can't figure out where the code is failing to send back the phrase.
The code is OK. The problem is that you're using the wrong version of Python. As the 'Installation' chapter of that website says:
Important Note! Be sure to install Python 3, and not Python 2. The
programs in this book use Python 3, and you’ll get errors if you try
to run them with Python 2. It is so important, I am adding a cartoon
penguin telling you to install Python 3 so that you do not miss this
message:
You are using Python 2 to run the program.
The result is incorrect because the program depends on a feature that behaves differently in Python 2 than in Python 3. Specifically, dividing two integers in Python 3 produces a floating-point result but in Python 2 it produces a rounded-down integer result. So this expression:
(len(message) / key)
produces 3.75 in Python 3 but produces 3 in Python 2, and therefore this expression:
math.ceil(len(message) / key)
produces 4 (3.75 rounded up is 4) in Python 3 but produces 3 (3 rounded up is 3) in Python 2. This means that your numOfColumns is incorrect and therefore the decryption procedure produces an incorrect result.
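To see the difference in one place, here is a small Python 3 snippet; // is Python 3's floor division, which is what / does for two integers in Python 2:
import math

message_length = 30                        # len('Cenoonommstmme oo snnio. s s c')
key = 8
print(message_length / key)                # 3.75 in Python 3
print(message_length // key)               # 3, which is what Python 2's / gives for two integers
print(math.ceil(message_length / key))     # 4 columns -> correct decryption
print(math.ceil(message_length // key))    # 3 columns -> the garbled output you saw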
You can fix this specific issue by changing (len(message) / key) to (float(len(message)) / key) to force Python 2 to treat that calculation as a floating-point division that will give the desired 3.75 result. But the real solution is to switch to using Python 3, because these differences in behaviour between Python 3 and Python 2 are just going to keep causing trouble as you proceed through the rest of the book.

How to convert a group of Hexadecimal to Decimal (Visual Studio)

I want to retrieve the values in decimal, like in Pic2 (hardcoded for visual understanding).
This is the code to convert hex to dec for 16 bits:
string H;
int D;
H = txtHex.Text;
D = Convert.ToInt16(H, 16);
txtDec.Text = Convert.ToString(D);
However, it doesn't work for a whole group.
So the hex you are looking at does not refer to a decimal number. If it did refer to a single number that number would be far too large to store in any integral type. It might actually be too large to store in floating point types.
That hex you are looking at represents the binary data of a file. Each set of two characters represents one byte (because 16^2 = 2^8).
Take each pair of hex characters and convert it to a value between 0 and 255. You can accomplish this easily by converting each character to its numerical value. In case you don't have a complete understanding of what hex is, here's a map.
'0' = 0
'1' = 1
'2' = 2
'3' = 3
'4' = 4
'5' = 5
'6' = 6
'7' = 7
'8' = 8
'9' = 9
'A' = 10
'B' = 11
'C' = 12
'D' = 13
'E' = 14
'F' = 15
If the character on the left evaluates to n and the character on the right evaluates to m then the decimal value of the hex pair is (n x 16) + m.
You can use this method to get your values between 0 and 255. You then need to store each value in an unsigned char (this is a C/C++/ObjC term - I have no idea what the C# or VBA equivalent is, sorry). You then concatenate these unsigned char's to create the binary of the file. It is very important that you use an 8 bit type to store these values. You should not store these values in 16 bit integers, as you do above, or you will get corrupted data.
I don't know what you're meant to output in your program but this is how you get the data. If you provide a little more information I can probably help you use this binary.
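If you want to see the (n x 16) + m idea in action, here is a small Python sketch (just an illustration of the arithmetic, not code for your project) that turns hex pairs into byte values:
HEX_DIGITS = '0123456789ABCDEF'

def hex_pair_to_byte(pair):
    n = HEX_DIGITS.index(pair[0].upper())    # value of the left character
    m = HEX_DIGITS.index(pair[1].upper())    # value of the right character
    return (n * 16) + m                      # one byte, 0..255

print([hex_pair_to_byte(p) for p in 'B9 D1 00 FF'.split()])   # [185, 209, 0, 255]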
You will need to split the contents into separate hex-number pairs ("B9", "D1" and so on). Then you can convert each into its "byte" value and add it to a result list.
Something like this, although you may need to adjust the "Split" (now it uses single spaces, returns, newlines and tabs as separator):
var byteList = new List<byte>();
foreach(var bytestring in txtHex.Text.Split(new[] {' ', '\r', '\n', '\t'},
                                            StringSplitOptions.RemoveEmptyEntries))
{
    byteList.Add(Convert.ToByte(bytestring, 16));
}
byte[] bytes = byteList.ToArray(); // further processing usually needs a byte-array instead of a List<byte>
What you then do with those "bytes" is up to you.

How do you use matrices in Nimrod?

I found this project on GitHub; it was the only search result returned for "nimrod matrix". I took the bare bones of it and changed it a little bit so that it compiled without errors, and then I added the last two lines to build a simple matrix and output a value, but the "getter" function isn't working for some reason. I adapted the instructions for adding properties found here, but something isn't right.
Here is my code so far. I'd like to use the GNU Scientific Library from within Nimrod, and I figured that this was the first logical step.
type
  TMatrix*[T] = object
    transposed: bool
    dataRows: int
    dataCols: int
    data: seq[T]

proc index[T](x: TMatrix[T], r,c: int): int {.inline.} =
  if r<0 or r>(x.rows()-1):
    raise newException(EInvalidIndex, "matrix index out of range")
  if c<0 or c>(x.cols()-1):
    raise newException(EInvalidIndex, "matrix index out of range")
  result = if x.transposed: c*x.dataCols+r else: r*x.dataCols+c

proc rows*[T](x: TMatrix[T]): int {.inline.} =
  ## Returns the number of rows in the matrix `x`.
  result = if x.transposed: x.dataCols else: x.dataRows

proc cols*[T](x: TMatrix[T]): int {.inline.} =
  ## Returns the number of columns in the matrix `x`.
  result = if x.transposed: x.dataRows else: x.dataCols

proc matrix*[T](rows, cols: int, d: openarray[T]): TMatrix[T] =
  ## Constructor. Initializes the matrix by allocating memory
  ## for the data and setting the number of rows and columns
  ## and sets the data to the values specified in `d`.
  result.dataRows = rows
  result.dataCols = cols
  newSeq(result.data, rows*cols)
  if len(d)>0:
    if len(d)<(rows*cols):
      raise newException(EInvalidIndex, "insufficient data supplied in matrix constructor")
    for i in countup(0,rows*cols-1):
      result.data[i] = d[i]

proc `[][]`*[T](x: TMatrix[T], r,c: int): T =
  ## Element access. Returns the element at row `r` column `c`.
  result = x.data[x.index(r,c)]

proc `[][]=`*[T](x: var TMatrix[T], r,c: int, a: T) =
  ## Sets the value of the element at row `r` column `c` to
  ## the value supplied in `a`.
  x.data[x.index(r,c)] = a

var m = matrix( 2, 2, [1,2,3,4] )
echo( $m[0][0] )
This is the error I get:
c:\program files (x86)\nimrod\config\nimrod.cfg(36, 11) Hint: added path: 'C:\Users\H127\.babel\libs\' [Path]
Hint: used config file 'C:\Program Files (x86)\Nimrod\config\nimrod.cfg' [Conf]
Hint: system [Processing]
Hint: mat [Processing]
mat.nim(48, 9) Error: type mismatch: got (TMatrix[int], int literal(0))
but expected one of:
system.[](a: array[Idx, T], x: TSlice[Idx]): seq[T]
system.[](a: array[Idx, T], x: TSlice[int]): seq[T]
system.[](s: string, x: TSlice[int]): string
system.[](s: seq[T], x: TSlice[int]): seq[T]
Thanks you guys!
I'd like to first point out that the matrix library you refer to is three years old. For a programming language still in development that's a long time, and the library doesn't compile any more with the current Nimrod git version:
$ nimrod c matrix
...
private/tmp/n/matrix/matrix.nim(97, 8) Error: ']' expected
It fails on the double array accessor, which seems to have changed syntax. I guess your attempt to create a double [][] accessor is problematic: it could be ambiguous whether you are accessing the double array accessor of the object or the nested array returned by the first brackets. I had to change the proc to the following:
proc `[]`*[T](x: TMatrix[T], r,c: int): T =
After that change you also need to change the way to access the matrix. Here's what I got:
for x in 0 .. <2:
  for y in 0 .. <2:
    echo "x: ", x, " y: ", y, " = ", m[x,y]
Basically, instead of specifying two bracket accesses you pass all the parameters inside a single bracket. That code generates:
x: 0 y: 0 = 1
x: 0 y: 1 = 2
x: 1 y: 0 = 3
x: 1 y: 1 = 4
With regards to finding software for Nimrod, I would like to recommend you using Nimble, Nimrod's package manager. Once you have it installed you can search available and maintained packages. The command nimble search math shows two potential packages: linagl and extmath. Not sure if they are what you are looking for, but at least they seem more fresh.
