I need help with this question. I've been trying to understand how the decryption time is calculated, using an example from my lecture notes. The question is as follows.
Given a key of length n there are 2^n possible keys to try in a brute-force attack. A key of length 128 bits would require 5,257,322,061,209,440,000,000 years to crack, assuming 8,000,000 guesses/second.
I've tried working it out, but my result doesn't match.
n = 128, therefore 2^n = 2^128 = 3.402823669 x 10^38
8,000,000 guesses a second for a year should give 8,000,000 * 365 * 24 * 3600
therefore guesses per year = 2.52288 x 10^14
But the main problem is that dividing the total keyspace by the guesses per year does not give the quoted answer.
I assume time to crack = total keys / guesses per year, which is:
3.402823669 x 10^38 / 2.52288 x 10^14 = 1.348785384 x 10^24
but the given answer says it would require 5.257322061 x 10^21.
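Redoing that arithmetic in code confirms my figure, so the division itself seems right; note that even the usual average-case halving (2^127 guesses) doesn't produce the quoted 5.257 x 10^21 either. A quick Python sanity check:

guesses_per_second = 8_000_000
seconds_per_year = 365 * 24 * 3600                         # 31,536,000
guesses_per_year = guesses_per_second * seconds_per_year   # 2.52288e14
keyspace = 2 ** 128                                        # ~3.4028e38
print(keyspace / guesses_per_year)                         # ~1.3488e24 years (worst case)
print(keyspace / 2 / guesses_per_year)                     # ~6.7439e23 years (average case)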
Related
If x is an n-bit integer, what is the size (in bits) of x^2?
I think the answer is O(n); is that correct? The way I thought about it: adding a number to itself that many times means there will be n operations, therefore O(n). Is my understanding correct?
Let's suppose x has n bits. This means x = Θ(2^n). Therefore, x^2 = Θ(2^n · 2^n) = Θ(2^{2n}), so the number now has about twice as many bits as before. This means that if there were n bits to begin with, there are now about 2n = Θ(n) bits.
While the answer you gave of O(n) is correct, your reasoning is invalid. Note that the question isn't asking how long it takes to compute x^2, but rather how many bits it contains. The time to compute x^2 is a different question.
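If you want to convince yourself empirically, here's a quick check with Python's int.bit_length(); an n-bit x (i.e. 2^(n-1) <= x < 2^n) always squares to a value with 2n-1 or 2n bits, since 2^(2n-2) <= x^2 < 2^(2n):

for x in (0b1000, 0b1111, 2 ** 31, 2 ** 31 + 12345):
    # prints the bit length of x and of x*x: 4 7, 4 8, 32 63, 32 63
    print(x.bit_length(), (x * x).bit_length())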
Hope this helps!
I've got a math problem. I am trying to calculate the maximum number of samples when the response time is zero. My test has 3 samples (HTTP Request). The total test wait time is 11 seconds. The test runs for 15 minutes and 25 seconds. The ramp-up is 25 seconds, meaning that 2 users are created every second until we reach 50.
Normally you have to wait for the server to respond, but I am trying to calculate the maximum number of samples (i.e. the response time is zero). How do I do this? I can't simply do ((15 * 60 + 25) / 11) * 50, because of the ramp-up.
Any ideas?
EDIT:
Maybe I should translate this problem into something generic and not specific to JMeter. So consider this (maybe it will make sense to me as well ;)).
50 people are walking laps around the park. Each lap takes exactly 11 seconds. We have 15 minutes and 25 seconds to walk as many laps as possible. We cannot all start at the same time, but we can start 2 every second (25 seconds until we are all walking). How many laps can we complete?
What I ended up doing was manually adding it all up...
Since it takes 25s to get up to full speed, 2 people can walk for 900s, 2 people can walk for 901s, 2 people can walk for 902s, and so on, up to the total of 50 people.
Adding those together should give me my number, I think.
If I am doing something wrong or working from a wrong assumption, I'd like to hear your opinion ;). Or if somebody can see a formula.
Thanks in advance
I have no idea about JMeter, but I do understand your question about people walking round the park :-).
If you want an exact answer that ignores partial laps, you'll need (in C/Java terminology) a for loop to work it out. This is because ignoring partial laps requires rounding down the number of possible laps, and there isn't a simple formula that takes the rounding down into account. Doing that in Excel, I calculate that 4012 complete laps are possible by the 50 people.
However, if you're happy to include partial laps, you just need to work out the total number of seconds available (taking account of the ramp up), then divide by the number of people starting each second, and finally divide by how many seconds it takes to run the lap. The total number of seconds available is an arithmetic progression.
To write down the formula that includes partial laps, some notation is needed:
T = Total number of seconds (i.e. 900, given that there are 15 minutes)
P = number of People (i.e. 50)
S = number of people who can start at the Same time (i.e. 2)
L = time in seconds for a Lap (i.e. 11)
Then the formula for the total number of laps, including partial laps is
Number of Laps = P * (2 * T - (P/S - 1)) / (2*L)
which in this case equals 4036.36.
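Here's that calculation as a quick Python sketch; it assumes the model implied by the numbers above, where starting pair i (i = 0..24) walks T - i seconds:

T, P, S, L = 900, 50, 2, 11
# exact count, ignoring partial laps: pair i walks T - i seconds
full_laps = sum(S * ((T - i) // L) for i in range(P // S))
print(full_laps)                                 # 4012
# closed form including partial laps
print(P * (2 * T - (P // S - 1)) / (2 * L))      # 4036.36...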
Assume we're given:
T = total seconds = 925
W = walkers = 50
N = number of walkers that can start together = 2
S = stagger (seconds between starting groups) = 1
L = lap time = 11
G = number of starting groups = ceiling(W/N) = 25
Where all are positive, W and N are integers, and T >= S*(G-1) (i.e. all walkers have a chance to start). I am assuming the first group starts walking at time 0, not S seconds later.
We can break up the time into the ramp period:
Ramp laps = summation(integer i, 0 <= i < G, N*S*(G-i-1)/L)
= N*S*G*(G-1)/(2*L)
and the steady state period (once all the walkers have started):
Steady state laps = W * (T - S*(G-1))/L
Adding these two together and simplifying a little, we get:
Laps = ( N*S*G*(G-1)/2 + W*(T-S*(G-1)) ) / L
This works out to be 4150 laps.
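As a quick check, the same accounting in Python (using the variable names above):

T, W, N, S, L = 925, 50, 2, 1, 11
G = -(-W // N)                           # ceiling(W/N) = 25 starting groups
ramp = N * S * G * (G - 1) / (2 * L)     # laps accrued while groups are still starting
steady = W * (T - S * (G - 1)) / L       # everyone walking for the remaining time
print(ramp + steady)                     # 4150, within floating-point rounding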
There is a closed form solution if you're only interested in full laps. If that's the case, just let me know.
I'm writing something that reads bytes (just a List<int>) from a remote random number generation source that is extremely slow. Because of that, and for my own requirements, I want to retrieve as few bytes from the source as possible.
Now I am trying to implement a method whose signature looks like:
int getRandomInteger(int min, int max)
I have two ideas for how I can fetch bytes from my random source and convert them to an integer.
Approach #1 is naive: fetch (max - min) / 256 bytes and add them up. It works, but it's going to fetch a lot of bytes from the slow random number generator source I have. For example, if I want a random integer between zero and a million, it would fetch almost 4000 bytes... that's unacceptable.
Approach #2 sounds ideal to me, but I'm unable to come up with the algorithm. It goes like this:
Let's take min: 0, max: 1000 as an example.
Calculate ceil(rangeSize / 256), which in this case is ceil(1000 / 256) = 4. Now fetch one (1) byte from the source.
Scale this one byte from the 0-255 range to the 0-3 range (or 1-4) and let it determine which group we use. E.g. if the byte was 250, we would choose the 4th group (which represents the last 250 numbers, 750-1000, in our range).
Now fetch another byte, scale it from 0-255 to 0-250, and let it determine the position within the group. So if this second byte is e.g. 120, then our final integer is 750 + 120 = 870.
In that scenario we only needed to fetch 2 bytes in total. However, it gets much more complex when our range is 0-1000000 and we need several "groups".
How do I implement something like this? I'm okay with Java/C#/JavaScript code or pseudo code.
I'd also like the result to not lose entropy/randomness, so I'm slightly worried about scaling integers.
Unfortunately your Approach #1 is broken. For example, if min is 0 and max is 510, you'd add 2 bytes. There is only one way to get a result of 0: both bytes zero, with probability (1/256)^2. However, there are many ways to get other values, say 100 = 100+0, 99+1, 98+2, ... so the chance of getting 100 is much larger: 101 · (1/256)^2.
The more-or-less standard way to do what you want is to:
Let R = max - min + 1 -- the number of possible random output values
Let N = 2^k >= mR, m>=1 -- a power of 2 at least as big as some multiple of R that you choose.
loop
b = a random integer in 0..N-1 formed from k random bits
while b >= mR -- reject b values that would bias the output
return min + floor(b/m)
This is called the method of rejection. It throws away randomly selected binary numbers that would bias the output. If max - min + 1 happens to be a power of 2, then you'll have zero rejections.
If you have m = 1 and max - min + 1 is just one more than a biggish power of 2, then rejections will be near half. In this case you'd definitely want a bigger m.
In general, bigger m values lead to fewer rejections, but of course they require slightly more bits per number. There is a probabilistically optimal algorithm to pick m.
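Here's a minimal Python sketch of the rejection loop; os.urandom merely stands in for your slow byte source:

import os

def rand_range(lo, hi, m=1):
    # R possible outputs; draw the smallest k bits with 2**k >= m*R,
    # reject draws >= m*R (they would bias the output), map down by floor division
    R = hi - lo + 1
    k = (m * R - 1).bit_length()                 # smallest k with 2**k >= m*R
    nbytes = (k + 7) // 8
    while True:
        b = int.from_bytes(os.urandom(nbytes), "big") >> (nbytes * 8 - k)
        if b < m * R:
            return lo + b // m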
Some of the other solutions presented here have problems, but I'm sorry right now I don't have time to comment. Maybe in a couple of days if there is interest.
3 bytes (together) give you random integer in range 0..16777215. You can use 20 bits from this value to get range 0..1048575 and throw away values > 1000000
For a range 1 to r:
first find the smallest a such that 256^a >= r
get a bytes from the source into an array A[]
num = 0
for i = 0 to len(A)-1
    num += A[i] * 256^i
next
random number = (num mod r) + 1
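In Python the same idea is only a few lines. Note that the final mod step is slightly biased whenever 256^a is not an exact multiple of r; the rejection method above avoids that:

import os

def rand_mod(r):
    a = 1
    while 256 ** a < r:                  # smallest a with 256**a >= r
        a += 1
    num = int.from_bytes(os.urandom(a), "little")   # combine bytes as base-256 digits
    return (num % r) + 1                 # 1..r; slightly biased unless 256**a % r == 0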
Your random source gives you 8 random bits per call. For an integer in the range [min,max] you would need ceil(log2(max-min+1)) bits.
Assume that you can get random bytes from the source using some function:
bool RandomBuf(BYTE* pBuf, size_t nLen); // fill the buffer with nLen random bytes
Now you can use the following function to generate a random value in a given range:
#include <cmath>      // ceil, log
#include <limits>     // std::numeric_limits
#include <utility>    // std::swap
// BYTE, UINT and ULONGLONG are the Windows typedefs; elsewhere use e.g.
// unsigned char, unsigned int and unsigned long long.

// --------------------------------------------------------------------------
// produce a uniformly-distributed integral value in range [nMin, nMax]
// T is char/BYTE/short/WORD/int/UINT/LONGLONG/ULONGLONG
template <class T> T RandU(T nMin, T nMax)
{
    static_assert(std::numeric_limits<T>::is_integer, "RandU: integral type expected");
    if (nMin > nMax)
        std::swap(nMin, nMax);
    if (0 == (T)(nMax - nMin + 1))               // [nMin, nMax] covers the whole range of T
    {
        T nR;
        return RandomBuf((BYTE*)&nR, sizeof(T)) ? nR : nMin;
    }
    ULONGLONG nRange = (ULONGLONG)nMax - (ULONGLONG)nMin + 1;    // number of discrete values
    UINT nRangeBits = (UINT)ceil(log((double)nRange) / log(2.)); // bits needed to store nRange values
    ULONGLONG nR;
    do
    {
        if (!RandomBuf((BYTE*)&nR, sizeof(nR)))
            return nMin;
        nR = nR >> ((sizeof(nR) << 3) - nRangeBits); // keep only nRangeBits random bits
    }
    while (nR >= nRange);                            // reject values outside [0..nRange-1]
    return nMin + (T)nR;                             // [nMin..nMax]
}
Since you are always getting a multiple of 8 bits, you can save the extra bits between calls (for example, you may need only 9 bits out of 16). It requires some bit manipulation, and it is up to you to decide if it is worth the effort.
You can save even more if you use 'half bits': let's assume that you want to generate numbers in the range [1..5]. You need log2(5) = 2.32 bits for each random value. Using 32 random bits you can actually generate floor(32/2.32) = 13 random values in this range, though it requires some additional effort.
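A rough Python sketch of that 'half bits' idea, reading one 32-bit draw as base-5 digits (slightly biased, since 2^32 is not a multiple of 5^13):

import os

r = int.from_bytes(os.urandom(4), "big")   # one 32-bit draw
vals = [0] * 13
for i in range(13):                        # floor(32 / log2(5)) = 13 values
    vals[i] = 1 + r % 5                    # next base-5 digit, shifted into [1..5]
    r //= 5
print(vals)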
Suppose X is converted to a string and Y is the Base64 encoding of that string. Will Y always be alphanumeric, or can it contain + or /?
Additional details:
X is any positive integer of 6 digits or fewer.
X is left-padded with zeros to maintain a width of 6.
Please explain your answer :)
(This might be better on the Math site, but I figured it involves programming functions.)
The diagram in the German Wikipedia article on Base64 is very helpful: each group of 6 consecutive bits from the original bytes generates one Base64 value. To generate + or / (codes 62 and 63), you'd need the bit strings 111110 or 111111, so at least 5 consecutive set bits.
However, look at the ASCII codes for 0...9:
00110000
00110001
00110010
00110011
00110100
00110101
00110110
00110111
00111000
00111001
No matter how you concatenate six of those, there will never be more than 3 consecutive set bits. So it's not possible to generate a Base64 string that contains + or / this way; Y will always be alphanumeric.
EDIT: In fact, you can even rule out other Base64 values, like 000010 (C), which leads to nice follow-up puzzles like "How many of the 64 values are possible at all?".
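That follow-up is easy to brute-force, since there are only 10^6 possible inputs. A Python sketch that encodes every zero-padded value and collects the Base64 characters that actually occur:

import base64, string

seen = set()
for x in range(1, 1_000_000):
    # 6 ASCII bytes -> 48 bits -> exactly 8 Base64 characters, no padding
    y = base64.b64encode(f"{x:06d}".encode("ascii")).decode("ascii")
    seen.update(y)
print(len(seen), "".join(sorted(seen)))
print(seen <= set(string.ascii_letters + string.digits))   # True, per the argument above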
This is actually for a programming contest, but I've tried really hard and haven't got even the faintest clue how to do this.
Find the first and last k digits of n^m, where n and m can be very large, ~10^9.
For the last k digits I implemented modular exponentiation.
For the first k I thought of using the binomial theorem up to certain powers, but that involves quite a lot of computation for factorials, and I'm not sure how to find an optimal point at which n^m can be expanded as (x+y)^m.
So is there any known method to find the first k digits without performing the entire calculation?
Update: 1 <= k <= 9, and k will always be <= the number of digits in n^m.
Not sure, but the identity n^m = exp10(m·log10(n)) = exp(q·(m·log(n)/q)), where q = log(10), comes to mind, along with the fact that the first K digits of exp10(x) are the first K digits of exp10(frac(x)), where frac(x) is the fractional part of x, i.e. x - floor(x).
To be more explicit: the first K digits of n^m are the first K digits of its mantissa = exp(frac(m·log(n)/q) · q), where q = log(10).
Or you could even go further in this accounting exercise and use exp((frac(m·log(n)/q) - 0.5) · q) · sqrt(10), which has the same mantissa (and hence the same first K digits), so that the argument of the exp() function is centered around 0 (and between ±0.5·log(10) = ±1.151) for speedy convergence.
(Some examples: suppose you wanted the first 5 digits of 2^100. This equals the first 5 digits of exp((frac(100·log(2)/q) - 0.5)·q)·sqrt(10) = 1.267650600228226. The actual value of 2^100 is 1.267650600228229e+030 according to MATLAB; I don't have a bignum library handy. For the mantissa of 2^1,000,000,000 I get 4.612976044195602, but I don't really have a way of checking... There's a page on Mersenne primes where someone's already done the hard work: 2^20996011 - 1 = 125,976,895,450..., and my formula gives 1.259768950493908 calculated in MATLAB, which fails after the 9th digit.)
I might use Taylor series (for exp and log, not for n^m) along with their error bounds, and keep adding terms until the error bounds drop below the first K digits. (Normally I don't use Taylor series for function approximation -- their error is optimized to be most accurate around a single point rather than over a desired interval -- but they do have the advantage of being mathematically simple, and you can increase accuracy to arbitrary precision simply by adding additional terms.)
For logarithms I'd use whatever your favorite approximation is.
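Putting both halves together, a minimal Python sketch: pow with a modulus handles the last k digits, and the frac(m·log10(n)) trick above handles the first k. Double precision only carries 15-16 significant digits, so for m ~ 10^9 the leading digits can go wrong after about the 9th, as observed above:

import math

def first_k_digits(n, m, k):
    frac = math.fmod(m * math.log10(n), 1.0)   # fractional part of m*log10(n)
    return int(10 ** (frac + k - 1))

def last_k_digits(n, m, k):
    return pow(n, m, 10 ** k)                  # modular exponentiation

print(first_k_digits(2, 100, 5))               # 12676
print(f"{last_k_digits(2, 100, 5):05d}")       # 05376 (2^100 = 1267650600228229401496703205376)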
Well. We want to calculate a^b and get only the first n digits.
Calculate a^b by the following iterations:
You have a^k at each step.
Calculate each a^k not exactly, but truncated.
The thing is that the relative error of a^b is less than b times the relative error of one step.
You want to get a final relative error less than 10^{-n}.
Thus the relative error on each step may be 10^{-n}/b.
Remove the last digits at each step accordingly.
For example, a = 2, b = 16, n = 1. The final relative error is 10^{-n} = 0.1.
The relative error on each step is 0.1/16 > 0.001.
Thus 3 significant digits are important on each step.
If n = 2, you must save 4 digits.
2 (1), 4 (2), 8 (3), 16 (4), 32 (5), 64 (6), 128 (7), 256 (8), 512 (9), 1024 (10) --> 102,
204 (11), 408 (12), 816 (13), 1632 (14) -> 163, 326 (15), 652 (16).
Answer: 6.
This algorithm has a complexity of O(b). But it is easy to change it to get O(log b): square the current value where possible instead of always multiplying by a (exponentiation by squaring).
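Here is the O(b) version as a short Python sketch, keeping two guard digits beyond the n wanted; it reproduces the a = 2, b = 16, n = 1 walkthrough above:

def first_digits(a, b, n):
    keep = n + 2                     # n wanted digits plus 2 guard digits
    x = a
    for _ in range(b - 1):
        x *= a
        s = str(x)
        if len(s) > keep:
            x = int(s[:keep])        # truncate, keeping `keep` significant digits
    return int(str(x)[:n])

print(first_digits(2, 16, 1))        # 6, since 2**16 = 65536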
Suppose you truncate at each step? Not sure how accurate this would be, but, e.g., take
n=11
m=some large number
and you want the first 2 digits.
recursively:
11 x 11 -> 121, truncate -> 12 (1 truncation or rounding)
then take truncated value and raise again
12 x 11 -> 132 truncate -> 13
repeat,
13 (132 truncated) x 11 -> 143.
...
and finally append a number of zeros equal to the number of truncations you've done, to restore the magnitude.
Have you taken a look at exponentiation by squaring? You might be able to modify one of the methods such that you only compute what's necessary.
In my last algorithms class we had to implement something similar to what you're doing and I vaguely remember that page being useful.
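Combining that idea with the truncation trick above gives a sketch like this: exponentiation by squaring where every intermediate product is truncated to its leading digits (fine here, since only the first digits are wanted):

def pow_first_digits(a, b, keep=12):
    # exponentiation by squaring, truncating every intermediate product
    def trunc(x):
        s = str(x)
        return int(s[:keep]) if len(s) > keep else x
    result, base = 1, a
    while b:
        if b & 1:
            result = trunc(result * base)
        base = trunc(base * base)
        b >>= 1
    return result

print(str(pow_first_digits(2, 100))[:5])   # 12676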