We can do the following to convert values to integers:
var a = "129.13"|0, // becomes 129
var b = 11.12|0; // becomes 11
var c = "112"|0; // becomes 112
This seems to work, but I'm not sure if it is a standard JS feature. Does anyone know if this is safe to use for converting strings and decimals to integers?
Yes, it is standard behavior. Bitwise operators only operate on integers, so they convert whatever number they're given to a signed 32-bit integer.
This means that the largest value that converts correctly is the maximum of a signed 32-bit integer, 2^31 - 1 = 2147483647.
(Math.pow(2, 32) / 2 - 1)|0; // 2147483647
(Math.pow(2, 32) / 2)|0; // -2147483648 (wrong result)
I'm new to Lua but figured out that gsub is a global substitution function and tonumber is a conversion function. What I don't understand is how the two functions are used together to produce an encoded string.
I've already tried reading parts of PIL (Programming in Lua) and the reference manual, but I'm still a bit confused.
local L0_0, L1_1
function L0_0(A0_2)
return (A0_2:gsub("..", function(A0_3)
return string.char((tonumber(A0_3, 16) + 256 - 13 + 255999744) % 256)
end))
end
encodes = L0_0
L0_0 = gg
L0_0 = L0_0.toast
L1_1 = "__loading__\226\128\166"
L0_0(L1_1)
L0_0 = encodes
L1_1 = --"The Encoded String"
L0_0 = L0_0(L1_1)
L1_1 = load
L1_1 = L1_1(L0_0)
pcall(L1_1)
I removed the encoded string where I put the comment because of how long it was. If needed I can upload the encoded string as well.
gsub is being used to grab two-character sections of A0_2. This means the string A0_3 is a two-digit hexadecimal number, but it is a string rather than a number, so we cannot perform math on it directly. That A0_3 is a hex number can be inferred from how tonumber is used.
tonumber from Lua 5.1 Reference Manual:
Tries to convert its argument to a number. If the argument is already a number or a string convertible to a number, then tonumber returns this number; otherwise, it returns nil.
An optional argument specifies the base to interpret the numeral. The base may be any integer between 2 and 36, inclusive. In bases above 10, the letter 'A' (in either upper or lower case) represents 10, 'B' represents 11, and so forth, with 'Z' representing 35. In base 10 (the default), the number can have a decimal part, as well as an optional exponent part (see §2.1). In other bases, only unsigned integers are accepted.
So tonumber(A0_3, 16) means we expect A0_3 to be a base-16 (hexadecimal) number.
Once we have the numeric value of A0_3 we do some math and finally convert it back to a character. Note that 255999744 is 999999 * 256, so (value + 256 - 13 + 255999744) % 256 reduces to (value - 13) % 256: each byte is simply shifted down by 13, which is why the decode shown later adds 13 back.
function L0_0(A0_2)
return (A0_2:gsub("..", function(A0_3)
return string.char((tonumber(A0_3, 16) + 256 - 13 + 255999744) % 256)
end))
end
This block of code takes a string of hex digits and converts each pair into a character. tonumber is used so that math can be performed on the values.
Here is an example of how this works with Hello World:
local str = "Hello World"
local hex_str = ''
for i = 1, #str do
hex_str = hex_str .. string.format("%02x", str:byte(i,i)) -- %02x keeps each byte as two hex digits
end
function L0_0(A0_2)
return (A0_2:gsub("..", function(A0_3)
return string.char((tonumber(A0_3, 16) + 256 - 13 + 255999744) % 256)
end))
end
local encoded = L0_0(hex_str)
print(encoded)
Output
;X__bJbe_W
And taking it back to the original string:
function decode(A0_2)
return (A0_2:gsub("..", function(A0_3)
return string.char((tonumber(A0_3, 16) + 13) % 256)
end))
end
local hex_string = ''
for i = 1, #encoded do
hex_string = hex_string .. string.format("%02x", encoded:byte(i,i))
end
print(decode(hex_string))
Given the following code, which encodes a number, how can I calculate the maximum number if I want to limit the length of my generated keys? E.g. setting the max length of the result of encode(num) to some fixed value, say 10.
var alphabet = <SOME SET OF KEYS>,
base = alphabet.length;
this.encode = function(num) {
var str = '';
while (num > 0) {
str = alphabet.charAt(num % base) + str;
num = Math.floor(num / base);
}
return str;
};
You are constructing num's representation in base base, with some arbitrary set of characters as numerals (alphabet).
For n characters we can represent numbers 0 through base^n - 1, so the answer to your question is base^10 - 1. For example, using the decimal system, with 5 digits we can represent numbers from 0 to 99999 (10^5 - 1).
It's worth noting that you will never use certain strings of length below n, such as '001' or '0405' (using the decimal numerals): any string starting with the equivalent of 0, except '0' itself.
I imagine that, for the purpose of a URL shortener that is allowed variable length, this might be considered a waste. By using all combinations (including those with leading zeros) you could represent numbers 0 through base + base^2 + ... + base^n - 1, i.e. (base^(n+1) - base) / (base - 1) - 1, but it wouldn't be as straightforward as your scheme.
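As a quick sanity check of that bound, here is a small C sketch; the 62-character alphabet size is just an assumption for illustration, your alphabet.length may differ:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Assumption for illustration: a 62-character alphabet (0-9, a-z, A-Z). */
    const uint64_t base = 62;
    const int max_length = 10;

    /* Largest number representable in max_length digits is base^max_length - 1. */
    uint64_t max_num = 1;
    for (int i = 0; i < max_length; i++)
        max_num *= base;            /* 62^10 fits comfortably in 64 bits */
    max_num -= 1;

    printf("%llu\n", (unsigned long long)max_num);  /* 839299365868340223 */
    return 0;
}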
Some cases of periodic boundary conditions (PBC) can be imposed very efficiently on integers by simply doing:
myWrappedWithinPeriodicBoundary = myUIntValue & mask
This works when the boundary is the half open range [0, upperBound), where the (exclusive) upperBound is 2^exp so that
mask = (1 << exp) - 1
For example:
let pbcUpperBoundExp = 2 // so the periodic boundary will be [0, 4)
let mask = (1 << pbcUpperBoundExp) - 1
for x in -7 ... 7 { print(x & mask, terminator: " ") }
(in Swift) will print:
1 2 3 0 1 2 3 0 1 2 3 0 1 2 3
Question: Is there any (roughly similar) efficient method for imposing (some cases of) PBCs on floating-point numbers (32 or 64-bit IEEE-754)?
There are several reasonable approaches:
fmod(x,1)
modf(x,&dummy) — has the advantage of knowing its divisor statically, but in my testing comes from libc.so.6 even with -ffast-math
x-floor(x) (suggested by Jens in a comment) — supports negative inputs directly
Manual bit-twiddling direct implementation
Manual bit-twiddling implementation of floor
The first two preserve the sign of their input; you can add 1 if it's negative.
The two bit manipulations are very similar: you identify which significand bits correspond to the integer portion, and mask them (for the direct implementation) or the rest (to implement floor) off. The direct implementation can be completed either with a floating-point division or with a shift to reassemble the double manually; the former is 28% faster even given hardware CLZ. The floor implementation can immediately reconstitute a double: floor never changes the exponent of its argument unless it returns 0. About 20 lines of C are required.
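For reference, here is a minimal C sketch of the x - floor(x) approach from the list above (not the manual bit-twiddling implementations timed below); the clamp guards against tiny negative inputs rounding up to exactly 1.0, an issue discussed further down this page:
#include <math.h>

/* Wrap x into [0, 1) via x - floor(x); clamp because e.g. x = -1e-20
   gives x - floor(x) == 1.0 after rounding. */
static inline double wrap_unit(double x)
{
    double f = x - floor(x);
    return f < 1.0 ? f : nextafter(1.0, 0.0);
}

/* Generalization to an arbitrary half-open range [lo, hi). */
static inline double wrap_range(double x, double lo, double hi)
{
    double measure = hi - lo;
    return lo + wrap_unit((x - lo) / measure) * measure;
}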
The following timing is with double and gcc -O3, with timing loops over representative inputs into which the operative code was inlined.
fmod: 41.8 ns
modf: 19.6 ns
floor: 10.6 ns
With -ffast-math:
fmod: 26.2 ns
modf: 30.0 ns
floor: 21.9 ns
Bit manipulation:
direct: 18.0 ns
floor: 20.6 ns
The manual implementations are competitive, but the floor technique is the best. Oddly, two of the three library functions perform better without -ffast-math: that is, as a PLT function call than as an inlined builtin function.
I'm adding this answer to my own question since it describes the best solution I have found at the time of writing. It's in Swift 4.1 (it should be straightforward to translate into C) and it has been tested in various use cases:
extension BinaryFloatingPoint {
/// Returns the value after restricting it to the periodic boundary
/// condition [0, 1).
/// See https://forums.swift.org/t/why-no-fraction-in-floatingpoint/10337
@_transparent
func wrappedToUnitRange() -> Self {
let fract = self - self.rounded(.down)
// Have to clamp to just below 1 because very small negative values
// will otherwise return an out of range result of 1.0.
// Turns out this:
if fract >= 1.0 { return Self(1).nextDown } else { return fract }
// is faster than this:
//return min(fract, Self(1).nextDown)
}
@_transparent
func wrapped(to range: Range<Self>) -> Self {
let measure = range.upperBound - range.lowerBound
let recipMeasure = Self(1) / measure
let scaled = (self - range.lowerBound) * recipMeasure
return scaled.wrappedToUnitRange() * measure + range.lowerBound
}
@_transparent
func wrappedIteratively(to range: Range<Self>) -> Self {
var v = self
let measure = range.upperBound - range.lowerBound
while v >= range.upperBound { v = v - measure }
while v < range.lowerBound { v = v + measure }
return v
}
}
On my MacBook Pro with a 2 GHz Intel Core i7,
a hundred million (probably inlined) calls to wrapped(to range:) on random (finite) Double values take 0.6 seconds, which is about 166 million calls per second (not multithreaded). Whether the range is statically known or not, or has bounds or a measure that is a power of two, etc., can make some difference, but not as much as one might have thought.
wrappedToUnitRange() takes about 0.2 seconds, meaning 500 million calls per second on my system.
Given the right scenario, wrappedIteratively(to range:) is as fast as wrappedToUnitRange().
The timings have been made by comparing a baseline test (which doesn't wrap the value but still uses it to compute e.g. a simple xor checksum) to the same test where the value is wrapped. The difference in time between these is the time I have given for the wrapping calls.
I have used the Swift development toolchain 2018-02-21, compiling with -O -whole-module-optimization -static-stdlib -gnone. Care has been taken to make the tests relevant, i.e. preventing dead code removal, using truly random input of different distributions, etc. Writing the wrapping functions generically, like this extension on BinaryFloatingPoint, turned out to be optimized into code equivalent to separate specialized versions for e.g. Float and Double.
It would be interesting to see someone more skilled than me investigating this further (C or Swift or any other language doesn't matter).
EDIT:
For anyone interested, here are some versions for simd float2:
extension float2 {
@_transparent
func wrappedInUnitRange() -> float2 {
return simd.fract(self)
}
@_transparent
func wrappedToMinusOneToOne() -> float2 {
let scaled = (self + float2(1, 1)) * float2(0.5, 0.5)
let scaledFract = scaled - floor(scaled)
let wrapped = simd_muladd(scaledFract, float2(2, 2), float2(-1, -1))
// Note that we have to make sure the result is not out of bounds, like
// simd fract does:
let oneNextDown = Float(bitPattern:
0b0_01111110_11111111111111111111111)
let oneNextDownFloat2 = float2(oneNextDown, oneNextDown)
return simd.min(wrapped, oneNextDownFloat2)
}
@_transparent
func wrapped(toLowerBound lowerBound: float2,
upperBound: float2) -> float2
{
let measure = upperBound - lowerBound
let recipMeasure = simd_precise_recip(measure)
let scaled = (self - lowerBound) * recipMeasure
let scaledFract = scaled - floor(scaled)
// Note that we have to make sure the result is not out of bounds, like
// simd fract does:
let wrapped = simd_muladd(scaledFract, measure, lowerBound)
let maxX = upperBound.x.nextDown // For some reason, this won't be
let maxY = upperBound.y.nextDown // optimized even when upperBound is
// statically known, and there is no similar simd function available.
let maxValue = float2(maxX, maxY)
return simd.min(wrapped, maxValue)
}
}
I asked some simd-related questions here which might be of interest.
EDIT2:
As can be seen in the above Swift Forums thread:
// Note that tiny negative values like:
let x: Float = -1e-08
// May produce results outside the [0, 1) range:
let wrapped = x - floor(x)
print(wrapped < 1.0) // false
// which may result in out-of-bounds table accesses
// in common usage, so it's probably better to use:
let correctlyWrapped = simd_fract(x)
print(correctlyWrapped < 1.0) // true
I have since updated the code to account for this.
I read that there is a computer that uses only subtraction.
How is that possible? For the plus operation it's pretty easy.
The logical operators, I think, can be made using subtraction with a constant.
What do you guys think?
Plus +
is easy as you already have minus implemented so:
x + y = x - (0-y)
NOT !
In a standard ALU it is usual to compute subtraction by addition:
-x = !x + 1
So from this, bitwise NOT is:
!x = -1 - x
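As a quick sanity check in C (where bitwise NOT is written ~ rather than !), the identity holds on a two's-complement machine:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t x = 0x5A;                 /* 90 */
    printf("%d %d\n", ~x, -1 - x);    /* both print -91 */
    return 0;
}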
AND &,OR |,XOR ^
Sorry, I have no clue about efficient AND, OR, XOR implementations without more info about the architecture, other than testing each bit individually from MSB to LSB. So first you need to extract the bit values from a number. Let's assume 4-bit unsigned integers for simplicity, so x = (x3, x2, x1, x0), where x3 is the MSB and x0 is the LSB.
if (x>=8) { x3=1; x-=8; } else x3=0;
if (x>=4) { x2=1; x-=4; } else x2=0;
if (x>=2) { x1=1; x-=2; } else x1=0;
if (x>=1) { x0=1; x-=1; } else x0=0;
And this is how to get the number back
x=0
if (x0) x+=1;
if (x1) x+=2;
if (x2) x+=4;
if (x3) x+=8;
or like this:
x=15
if (!x0) x-=1;
if (!x1) x-=2;
if (!x2) x-=4;
if (!x3) x-=8;
Now we can do the AND, OR, XOR operations:
z=x&y // AND
z0=(x0+y0==2);
z1=(x1+y1==2);
z2=(x2+y2==2);
z3=(x3+y3==2);
z=x|y // OR
z0=(x0+y0>0);
z1=(x1+y1>0);
z2=(x2+y2>0);
z3=(x3+y3>0);
z=x^y // XOR
z0=(x0+y0==1);
z1=(x1+y1==1);
z2=(x2+y2==1);
z3=(x3+y3==1);
PS: the comparison is just subtraction plus Carry and Zero flag examination. Also, all the + operations can be rewritten and optimized to use - to better suit this weird architecture.
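To tie these pieces together, here is a small C sketch of AND for two 4-bit values, built only from the bit extraction and per-bit tests above (the + used to reassemble the result could itself be rewritten with subtraction, as noted):
unsigned and4(unsigned x, unsigned y)
{
    /* Extract the bits of x and y by repeated subtraction, as above. */
    unsigned x3 = 0, x2 = 0, x1 = 0, y3 = 0, y2 = 0, y1 = 0, z = 0;
    if (x >= 8) { x3 = 1; x -= 8; }
    if (x >= 4) { x2 = 1; x -= 4; }
    if (x >= 2) { x1 = 1; x -= 2; }
    if (y >= 8) { y3 = 1; y -= 8; }
    if (y >= 4) { y2 = 1; y -= 4; }
    if (y >= 2) { y1 = 1; y -= 2; }
    /* x and y now hold x0 and y0; set each result bit where both bits are 1. */
    if (x3 + y3 == 2) z += 8;
    if (x2 + y2 == 2) z += 4;
    if (x1 + y1 == 2) z += 2;
    if (x  + y  == 2) z += 1;
    return z;    /* e.g. and4(13, 11) == 9, i.e. 1101 & 1011 = 1001 */
}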
bit shift <<,>>
z=x>>1
z0=x1;
z1=x2;
z2=x3;
z3=0;
z=x<<1
z0=0;
z1=x0;
z2=x1;
z3=x2;
I want to retrieve the values in decimal, like in Pic2 (hardcoded for visual understanding).
This is the code to convert hex to decimal for a 16-bit value:
string H;
int D;
H = txtHex.Text;
D = Convert.ToInt16(H, 16);
txtDec.Text = Convert.ToString(D);
However, it doesn't work for a whole group of hex values.
So the hex you are looking at does not refer to a decimal number. If it did refer to a single number that number would be far too large to store in any integral type. It might actually be too large to store in floating point types.
That hex you are looking at represents the binary data of a file. Each set of two characters represents one byte (because 16^2 = 2^8).
Take each pair of hex characters and convert it to a value between 0 and 255. You can accomplish this easily by converting each character to its numerical value. In case you don't have a complete understanding of what hex is, here's a map.
'0' = 0
'1' = 1
'2' = 2
'3' = 3
'4' = 4
'5' = 5
'6' = 6
'7' = 7
'8' = 8
'9' = 9
'A' = 10
'B' = 11
'C' = 12
'D' = 13
'E' = 14
'F' = 15
If the character on the left evaluates to n and the character on the right evaluates to m then the decimal value of the hex pair is (n x 16) + m.
You can use this method to get your values between 0 and 255. You then need to store each value in an unsigned char (this is a C/C++/ObjC term; I have no idea what the C# or VBA equivalent is, sorry). You then concatenate these unsigned chars to recreate the binary of the file. It is very important that you use an 8-bit type to store these values. You should not store them in 16-bit integers, as you do above, or you will get corrupted data.
I don't know what you're meant to output in your program but this is how you get the data. If you provide a little more information I can probably help you use this binary.
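If it helps, here is a small C sketch of the (n x 16) + m rule above (C rather than C#, since this answer speaks in C terms; it assumes the input characters are valid hex digits):
/* Value of a single hex character, 0..15 (assumes valid input). */
int hexval(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return c - 'A' + 10;
}

/* (n x 16) + m for a pair of hex characters gives one byte, 0..255. */
unsigned char hexpair(char hi, char lo)
{
    return (unsigned char)(hexval(hi) * 16 + hexval(lo));
}

/* e.g. hexpair('B', '9') == 185 */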
You will need to split the contents into separate hex-number pairs ("B9", "D1" and so on). Then you can convert each into its "byte" value and add it to a result list.
Something like this, although you may need to adjust the "Split" (it currently uses single spaces, returns, newlines and tabs as separators):
var byteList = new List<byte>();
foreach(var bytestring in txtHex.Text.Split(new[] {' ', '\r', '\n', '\t'},
StringSplitOptions.RemoveEmptyEntries))
{
byteList.Add(Convert.ToByte(bytestring, 16));
}
byte[] bytes = byteList.ToArray(); // further processing usually needs a byte-array instead of a List<byte>
What you then do with those "bytes" is up to you.