Maximum input number to URL shortener - math

Given the following code, which encodes a number - how can I calculate the maximum number if I want to limit the length of my generated keys, e.g. setting the max length of the result of encode(num) to some fixed value, say 10?
var alphabet = <SOME SET OF KEYS>,
    base = alphabet.length;

this.encode = function(num) {
    var str = '';
    while (num > 0) {
        str = alphabet.charAt(num % base) + str;
        num = Math.floor(num / base);
    }
    return str;
};

You are constructing num's representation in base base, with some arbitrary set of characters as numerals (alphabet).
For n characters we can represent numbers 0 through base^n - 1, so the answer to your question is base^10 - 1. For example, using the decimal system, with 5 digits we can represent numbers from 0 to 99999 (10^5 - 1).
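For instance, a quick sanity check in JavaScript (just a sketch - the 62-character alphabet here is an assumption, substitute your own):
var alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', // assumed alphabet
    base = alphabet.length, // 62
    maxLength = 10;
// largest number that still encodes to at most maxLength characters: base^maxLength - 1
var maxNum = Math.pow(base, maxLength) - 1; // ~8.39e17 - above Number.MAX_SAFE_INTEGER, so only approximate
var exactNum = BigInt(base) ** BigInt(maxLength) - 1n; // 839299365868340223n, if the exact value matters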
It's worth noting that you will never produce some sub-n-length strings such as '001' or '0405' (using the decimal system numerals) - that is, any string starting with the equivalent of 0, except '0' itself.
I imagine that, for the purpose of a URL shortener that is allowed variable length, this might be considered a waste. By using all combinations (every non-empty string of length at most n, leading 'zeros' included) you could represent numbers 0 through base + base^2 + ... + base^n - 1, i.e. (base^(n+1) - base)/(base - 1) - 1, but it wouldn't be as straightforward as your scheme.

Related

How to find the filter coefficients for a DVBS2 shaping SRRC?

In the DVBS2 standard the SRRC filter is defined as:
How can I find the filter's time-domain coefficients for implementation? The inverse Fourier transform of this is not clear to me.
For a DVBS2 signal you can use an RRC matched filter before timing recovery. For the matched filter, you can use this expression:
For example, for n_ISI = 32 and roll-off factor = 0.25, with any number of samples per symbol, you can use this Matlab code:
SPS = 4;          % samples per symbol (for example)
n_ISI = 32;       % filter span in symbols
rolloff = 0.25;   % roll-off factor
n = linspace(-n_ISI/2, n_ISI/2, n_ISI*SPS+1);   % time axis in symbol periods
rrcFilt = zeros(size(n));
for iter = 1:length(n)
    if n(iter) == 0
        % value at t = 0
        rrcFilt(iter) = 1 - rolloff + 4*rolloff/pi;
    elseif abs(n(iter)) == 1/4/rolloff
        % singular points at t = +/- 1/(4*rolloff)
        rrcFilt(iter) = rolloff/sqrt(2)*((1+2/pi)*sin(pi/4/rolloff)+(1-2/pi)*cos(pi/4/rolloff));
    else
        rrcFilt(iter) = (4*rolloff/pi)/(1-(4*rolloff*n(iter)).^2) * (cos((1+rolloff)*pi*n(iter)) + sin((1-rolloff)*pi*n(iter))/(4*rolloff*n(iter)));
    end
end
But if you want to use an SRRC filter, there are two ways: 1. You can use its frequency-domain representation if you do the filtering in the frequency domain; for the implementation, you can use the expression that you've noted. 2. For time-domain filtering, you should define the FIR filter from its time-domain (impulse-response) sequence. The time representation of such SRRC pulses is known to take the following form:
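For reference, the commonly quoted time-domain (impulse-response) form of the root-raised-cosine pulse - the same form the Matlab snippet above implements - is, with t measured in symbol periods and roll-off factor β:
$$h(t) = \frac{\sin\big(\pi t(1-\beta)\big) + 4\beta t\,\cos\big(\pi t(1+\beta)\big)}{\pi t\big(1-(4\beta t)^2\big)}$$
with the special cases
$$h(0) = 1-\beta+\frac{4\beta}{\pi}, \qquad h\left(\pm\frac{1}{4\beta}\right) = \frac{\beta}{\sqrt{2}}\left[\left(1+\frac{2}{\pi}\right)\sin\frac{\pi}{4\beta} + \left(1-\frac{2}{\pi}\right)\cos\frac{\pi}{4\beta}\right]$$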

Usage of the pipe " | " in a less calculation [duplicate]

We can do the following to convert:
var a = "129.13"|0; // becomes 129
var b = 11.12|0;    // becomes 11
var c = "112"|0;    // becomes 112
This seems to work, but I'm not sure if this is a standard JS feature. Does anyone have any idea if this is safe to use for converting strings and decimals to integers?
Yes, it is standard behavior. Bitwise operators only operate on integers, so they convert whatever number they're given to a signed 32-bit integer.
This means that the maximum value is that of a signed 32-bit integer, 2^31 - 1, which is 2147483647.
(Math.pow(2, 32) / 2 - 1)|0; // 2147483647
(Math.pow(2, 32) / 2)|0; // -2147483648 (wrong result)
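A few more cases that fall out of that same conversion (quick sketch):
(-12.9)|0;      // -12 - truncates toward zero, unlike Math.floor(-12.9) which gives -13
"abc"|0;        // 0 - anything that doesn't parse as a number becomes 0
"12.9px"|0;     // 0 - the whole string has to be numeric
12345678901|0;  // -539222987 - values outside the 32-bit range wrap around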

Need help understanding how gsub and tonumber are used to encode lua source code?

I'm new to LUA but figured out that gsub is a global substitution function and tonumber is a converter function. What I don't understand is how the two functions are used together to produce an encoded string.
I've already tried reading parts of PIL (Programming in Lua) and the reference manual but still, am a bit confused.
local L0_0, L1_1
function L0_0(A0_2)
    return (A0_2:gsub("..", function(A0_3)
        return string.char((tonumber(A0_3, 16) + 256 - 13 + 255999744) % 256)
    end))
end
encodes = L0_0
L0_0 = gg
L0_0 = L0_0.toast
L1_1 = "__loading__\226\128\166"
L0_0(L1_1)
L0_0 = encodes
L1_1 = --"The Encoded String"
L0_0 = L0_0(L1_1)
L1_1 = load
L1_1 = L1_1(L0_0)
pcall(L1_1)
I removed the encoded string where I put the comment because of how long it was. If needed I can upload the encoded string as well.
gsub is being used to grab 2-character sections of A0_2. This means the string A0_3 is a 2-digit hexadecimal number, but it is a string rather than a number, so we cannot perform math on the value directly. That A0_3 is a hex number can be inferred from how tonumber is used.
tonumber from Lua 5.1 Reference Manual:
Tries to convert its argument to a number. If the argument is already a number or a string convertible to a number, then tonumber returns this number; otherwise, it returns nil.
An optional argument specifies the base to interpret the numeral. The base may be any integer between 2 and 36, inclusive. In bases above 10, the letter 'A' (in either upper or lower case) represents 10, 'B' represents 11, and so forth, with 'Z' representing 35. In base 10 (the default), the number can have a decimal part, as well as an optional exponent part (see §2.1). In other bases, only unsigned integers are accepted.
So tonumber(A0_3, 16) means we are expecting for A0_3 to be a base 16 number (hexadecimal).
Once we have the number value of A0_3 we do some math - since 256 and 255999744 are both multiples of 256, the whole expression boils down to (value - 13) % 256 - and finally convert the result back to a character.
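For example, for the hex pair "48" (the letter 'H'): tonumber("48", 16) is 72, and (72 + 256 - 13 + 255999744) % 256 = 59, and string.char(59) is ';'.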
function L0_0(A0_2)
    return (A0_2:gsub("..", function(A0_3)
        return string.char((tonumber(A0_3, 16) + 256 - 13 + 255999744) % 256)
    end))
end
This block of code takes a string of hex digits and converts them into chars. tonumber is being used to allow for the manipulation of the values.
Here is an example of how this works with Hello World:
local str = "Hello World"
local hex_str = ''
for i = 1, #str do
hex_string = hex_string .. string.format("%x", str:byte(i,i))
end
function L0_0(A0_2)
    return (A0_2:gsub("..", function(A0_3)
        return string.char((tonumber(A0_3, 16) + 256 - 13 + 255999744) % 256)
    end))
end
local encoded = L0_0(hex_str)
print(encoded)
Output
;X__bJbe_W
And taking it back to the original string:
function decode(A0_2)
    return (A0_2:gsub("..", function(A0_3)
        return string.char((tonumber(A0_3, 16) + 13) % 256)
    end))
end
hex_string = ''
for i = 1, #encoded do
    hex_string = hex_string .. string.format("%x", encoded:byte(i,i))
end
print(decode(hex_string))

How to convert a group of Hexadecimal to Decimal (Visual Studio)

I want to retrieve the values in decimal, like in Pic2 (hardcoded for visual understanding).
This is the code to convert Hex to Dec for 16 bits:
string H;
int D;
H = txtHex.Text;
D = Convert.ToInt16(H, 16);
txtDec.Text = Convert.ToString(D);
However, it doesn't work for a whole group of values.
So the hex you are looking at does not refer to a decimal number. If it did refer to a single number that number would be far too large to store in any integral type. It might actually be too large to store in floating point types.
That hex you are looking at represents the binary data of a file. Each set of two characters represents one byte (because 16^2 = 2^8).
Take each pair of hex characters and convert it to a value between 0 and 255. You can accomplish this easily by converting each character to its numerical value. In case you don't have a complete understanding of what hex is, here's a map.
'0' = 0
'1' = 1
'2' = 2
'3' = 3
'4' = 4
'5' = 5
'6' = 6
'7' = 7
'8' = 8
'9' = 9
'A' = 10
'B' = 11
'C' = 12
'D' = 13
'E' = 14
'F' = 15
If the character on the left evaluates to n and the character on the right evaluates to m then the decimal value of the hex pair is (n x 16) + m.
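For example, the pair "B9" gives n = 11 and m = 9, so its byte value is (11 x 16) + 9 = 185.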
You can use this method to get your values between 0 and 255. You then need to store each value in an unsigned char (this is a C/C++/ObjC term - I have no idea what the C# or VBA equivalent is, sorry). You then concatenate these unsigned chars to create the binary of the file. It is very important that you use an 8-bit type to store these values. You should not store these values in 16-bit integers, as you do above, or you will get corrupted data.
I don't know what you're meant to output in your program but this is how you get the data. If you provide a little more information I can probably help you use this binary.
You will need to split the contents into separate hex-number pairs ("B9", "D1" and so on). Then you can convert each into its "byte" value and add it to a result list.
Something like this, although you may need to adjust the "Split" (here it uses single spaces, carriage returns, newlines and tabs as separators):
var byteList = new List<byte>();
foreach (var bytestring in txtHex.Text.Split(new[] { ' ', '\r', '\n', '\t' },
                                             StringSplitOptions.RemoveEmptyEntries))
{
    byteList.Add(Convert.ToByte(bytestring, 16));
}
byte[] bytes = byteList.ToArray(); // further processing usually needs a byte-array instead of a List<byte>
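For example, if txtHex.Text contained "B9 D1 3C" (just an illustration), byteList would end up holding 185, 209 and 60.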
What you then do with those "bytes" is up to you.

Get index of first "true" in vector

How do I efficiently calculate the index of the first "true" value in an OpenCL vector:
float4 f = (float4)(1, 2, 3, 4);
int i = firstTrue(f > 2);
In the example I would like to get i=2 because 3 is the first value greater than 2.
I have looked at all functions in http://www.khronos.org/registry/cl/sdk/1.2/docs/man/xhtml/ but have found nothing.
Is this such an uncommon operation?
How do I calculate this (on my own) without much branching/code duplication?
I'm not aware of a built-in function that does exactly what you want, but I have some ideas on how you could do it. There might be a simpler solution, but I've only had one cup of coffee so far. The idea is to leverage the "count leading zeros" function "clz". You just need to convert the results of your conditional into bit positions in an integer.
Create a boolean vector with true/false state set by the comparison
Do a dot product of that against an integer vector with pre-defined values that correspond to bit positions.
The first bit set will correspond to the index you're asking for. Use clz() or a bithack to find that bit index.
In code, something like this (untested and might need adjusting):
float4 f = (float4)(1, 2, 3, 4);
int4 greater = (f > 2);
int4 bits = (int4)(8, 4, 2, 1);
int sum = dot(greater, bits); // maybe this needs to use float
int index = clz(sum); // might need offset applied
You'll need to offset or invert the result from clz to get 0,1,2,3 but that's just addition or subtraction.
Working Code
int firstTrue(int4 v) {
    return 4 - (clz(0) - clz((v.x & 8) | (v.y & 4) | (v.z & 2) | (v.w & 1)));
}
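As a quick sanity check with the question's example: for f = (1, 2, 3, 4) the comparison f > 2 yields (0, 0, -1, -1), so the packed value is (0 & 8) | (0 & 4) | (-1 & 2) | (-1 & 1) = 3. With 32-bit ints, clz(3) = 30 and clz(0) = 32, so the function returns 4 - (32 - 30) = 2, the expected index.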
