When we use A* with a non-admissible heuristic, we can sometimes get a non-optimal path as a result.
But when paths with zero cost are allowed, the only admissible heuristic that comes to my mind is h(x) = 0, which turns A* into a "simple" Dijkstra's algorithm.
Am I correct? Is this the only possible admissible heuristic? What is the real loss of not using an admissible heuristic? Is there another path-finding algorithm that works better with zero-cost paths?
An example:
Suppose the following graph (the numbers above edges shows the costs):
   1      1      0      1      1
S --> V1 --> V2 --> V3 --> V4 --> G
Where:
S means start vertex
V means inner vertex
G means goal vertex
By looking at the graph, we see that C(S) = 4.
What heuristic function h(x) can I use? If I use Euclidean distance I get:
f(S) = g(S) + h(S)
f(S) = 0 + 5 = 5
We can see that this heuristic over-estimates the real distance; therefore, for a more complex graph, it may not find the optimal solution.
Not true. The heuristic function h(x) has argument x consisting of the current search state. It returns an estimate of the distance from x to the goal. In a simple graph, x is a graph node.
Admissibility requires that h(x) can only be an under-estimate (or equal to the goal distance). This condition is for each particular x. (You seem to be inferring the condition is for all possible x, which is far too strong. A* would be useless if this were necessary.)
The correct statement for the case you propose is that h(x) = 0 is forced only when x is a state at distance zero from the goal; any other value would be an over-estimate. For any other x (in the same state space) whose cheapest route to the goal has total cost C > 0, we can have any h such that h(x) <= C.
Of course if x's distance to goal is zero, then x is the goal state and the search is complete. So your concern is vacuous - there are no cases where it's of interest.
Information to construct h(x) comes from your knowledge of the search space (e.g. characteristics of the graph). A bare, general graph alone doesn't provide anything useful. The best you can do is h(x) = cost of min weight outgoing edge of x for non-goal nodes and, as already discussed, h(x) = 0 for the goal. Again note this is a lower bound on distance to goal. It gives you Dijkstra!
To do better you need to know something about the graph's structure.
Edit
In your example, you are providing detailed knowledge, so making a good h is simple. You can use
       / 4 if x == S
       | 3 if x == V1
h(x) = { 2 if x == V2 or V3
       | 1 if x == V4
       \ 0 if x == G
or you can use any other function h'(x) such that h'(x) <= h(x) for all x. For example, this would be admissible:
        / 3 if x == S
        | 2 if x == V1
h'(x) = { 2 if x == V2 or V3
        | 1 if x == V4
        \ 0 if x == G
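As a rough sketch, these tables can be written directly as lookup dictionaries in Python (the node names are just the labels from the example graph above):

# Sketch only: admissible heuristic tables for the example graph.
# h is the exact remaining cost; h_prime never exceeds it, so both are admissible.
h = {'S': 4, 'V1': 3, 'V2': 2, 'V3': 2, 'V4': 1, 'G': 0}
h_prime = {'S': 3, 'V1': 2, 'V2': 2, 'V3': 2, 'V4': 1, 'G': 0}

# A* would then expand nodes in order of f(x) = g(x) + h[x].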
Addition
The OP points out that for many problems, h(x) can be hard to choose! This is precisely correct. If you can't find a good admissible heuristic, then A* is the wrong algorithm! Nonetheless, A* is very effective for problems where heuristics can be found. Examples I've tried myself:
Graphs where Euclidean distance is a good lower bound on the possible distance between any two nodes. For example, each pair of cities A and B is separated by a distance D "as the crow flies," but the road distance from A to B is at least D in length and possibly much more, i.e. its cost C is greater than or equal to D. In this case, D makes a fine heuristic because it's a low estimate.
Puzzles where "distance" to the winning state involves moving game pieces. In this case, the number of pieces currently out of position with respect to the winning state is a fine heuristic. Examples are the eight-bishops puzzle from The 7th Guest (number of bishops not yet in their final positions) and the Magic Square problem (total Manhattan distance from all pieces' current positions to their correct positions in the winning state).
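For the puzzle case, a sketch of the "pieces out of position" heuristic for a small sliding puzzle (the flat-tuple board representation is my own assumption, purely for illustration):

def misplaced(state, goal):
    # Admissible heuristic: count pieces not yet in their goal position.
    # Each move repositions only one piece, so this never over-estimates.
    return sum(1 for s, g in zip(state, goal) if s is not None and s != g)

# Flat tuples for a 2x3 sliding puzzle, None for the blank: one tile (4) is misplaced.
print(misplaced((1, 2, 3, None, 5, 4), (1, 2, 3, 4, 5, None)))   # -> 1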
I have a homework problem for my algorithms class asking me to calculate the maximum size of a problem that can be solved in a given number of operations using an O(n log n) algorithm (i.e. n log n = c). I was able to get an answer by approximating, but is there a clean way to get an exact answer?
There is no closed-form formula for this equation. Basically, you can transform the equation (taking log to be the natural logarithm):
n log n = c
log(n^n) = c
n^n = exp(c)
Then, this equation has a solution of the form:
n = exp(W(c))
where W is the Lambert W function (see especially "Example 2"). It has been proved that W cannot be expressed in terms of elementary functions.
However, f(n) = n*log(n) is a monotonically increasing function (for n ≥ 1). You can simply use bisection (here in Python):
import math

def nlogn(c):
    lower = 0.0
    upper = 10e10
    while True:
        middle = (lower + upper) / 2
        if lower == middle or middle == upper:
            return middle
        if middle * math.log(middle, 2) > c:
            upper = middle
        else:
            lower = middle
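For example, calling it (with the function above) and checking the inversion might look like this:

# Quick check of the bisection result: n * log2(n) should land very close to c.
c = 10**6
n = nlogn(c)
print(n, n * math.log(n, 2))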
The O notation only gives you the biggest term in the equation. I.e., the performance of your O(n log n) algorithm could actually be better represented by c = (n log n) + n + 53.
This means that without knowing the exact nature of the performance of your algorithm, you wouldn't be able to calculate the exact number of operations required to process a given amount of data.
But it is possible to calculate that the maximum number of operations required to process a data set of size n is more than a certain number, or conversely that the biggest problem set that can be solved, using that algorithm and that number of operations, is smaller than a certain number.
The O notation is useful for comparing two algorithms, e.g. an O(n^2) algorithm is faster than an O(n^3) algorithm, etc.
See Wikipedia for more info, and here is some help with logs.
This question on getting random values from a finite set got me thinking...
It's fairly common for people to want to retrieve X unique values from a set of Y values. For example, I may want to deal a hand from a deck of cards. I want 5 cards, and I want them to all be unique.
Now, I can do this naively, by picking a random card 5 times and trying again each time I get a duplicate, until I have 5 unique cards. This isn't so great, however, for large numbers of values from large sets. If I wanted 999,999 values from a set of 1,000,000, for instance, this method gets very bad.
The question is: how bad? I'm looking for someone to explain an O() value. Getting the xth number will take y attempts...but how many? I know how to figure this out for any given value, but is there a straightforward way to generalize this for the whole series and get an O() value?
(The question is not: "how can I improve this?" because it's relatively easy to fix, and I'm sure it's been covered many times elsewhere.)
Variables
n = the total number of items in the set
m = the number of unique values that are to be retrieved from the set of n items
d(i) = the expected number of tries needed to obtain a value in step i
i = denotes one specific step. i ∈ [0, m-1]
T(m,n) = expected total number of tries for selecting m unique items from a set of n items using the naive algorithm
Reasoning
The first step, i=0, is trivial. No matter which value we choose, we get a unique one at the first attempt. Hence:
d(0) = 1
In the second step, i=1, we need at least 1 try (the try where we pick a valid unique value). On top of this, there is a chance that we choose the wrong value. This chance is (number of previously picked items)/(total number of items), in this case 1/n. If we did pick the wrong item, there is again a 1/n chance of picking wrong on the retry, so the probability of picking wrong twice in a row is (1/n)^2. To see this, it helps to draw a decision tree. Having picked a non-unique item twice, there is again a chance we will do it a third time, which adds (1/n)^3 to the expected number of tries in step i=1, and so on. This results in:
d(1) = 1 + 1/n + (1/n)^2 + (1/n)^3 + (1/n)^4 + ...
Similarly, in the general i:th step, the chance to pick the wrong item in one choice is i/n, resulting in:
d(i) = 1 + i/n + (i/n)^2 + (i/n)^3 + (i/n)^4 + ... = sum( (i/n)^k ), where k ∈ [0, ∞]
This is a geometric series and hence it is easy to compute its sum:
d(i) = (1 - i/n)^(-1)
The overall complexity is then computed by summing the expected amount of tries in each step:
T(m,n) = sum( d(i) ), where i ∈ [0, m-1]
       = 1 + (1 - 1/n)^(-1) + (1 - 2/n)^(-1) + (1 - 3/n)^(-1) + ... + (1 - (m-1)/n)^(-1)
Multiplying the numerator and denominator of each fraction above by n, we get:
T(m,n) = n/n + n/(n-1) + n/(n-2) + n/(n-3) + ... + n/(n-m+2) + n/(n-m+1)
We can use the fact that:
n/n ≤ n/(n-1) ≤ n/(n-2) ≤ n/(n-3) ≤ ... ≤ n/(n-m+2) ≤ n/(n-m+1)
Since the series has m terms, and each term satisfies the inequality above, we get:
T(m,n) ≤ n/(n-m+1) + n/(n-m+1) + n/(n-m+1) + ... + n/(n-m+1) + n/(n-m+1) = m*n/(n-m+1)
It might be (and probably is) possible to establish a slightly stricter upper bound by using some technique to evaluate the series, instead of the rough bound of (number of terms) * (biggest term).
Conclusion
This would mean that the Big-O order is O(m*n/(n-m+1)). I see no possible way to simplify this expression from the way it is.
Looking back at the result to check if it makes sense, we see that if n is constant and m gets closer and closer to n, the result will quickly increase, since the denominator gets very small. This is what we'd expect if we, for example, consider the example given in the question about selecting "999,999 values from a set of 1,000,000". If we instead let m be constant and n grow really, really large, the complexity converges towards O(m) in the limit n → ∞. This is also what we'd expect, since while choosing a constant number of items from a "close to" infinitely sized set, the probability of choosing a previously chosen value is basically 0, i.e. we need m tries independently of n since there are no collisions.
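As a sanity check of the bound, one can simulate the naive algorithm and compare the measured average against m*n/(n-m+1); a small sketch (the function name and the chosen m, n, runs are just for illustration):

import random

def naive_tries(m, n):
    # Count the random draws the naive algorithm needs to collect m unique values out of n.
    seen = set()
    tries = 0
    while len(seen) < m:
        tries += 1
        seen.add(random.randrange(n))
    return tries

m, n, runs = 900, 1000, 200
average = sum(naive_tries(m, n) for _ in range(runs)) / runs
print(average, m * n / (n - m + 1))   # the measured average should stay below the bound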
If you already have chosen i values then the probability that you pick a new one from a set of y values is
(y-i)/y.
Hence the expected number of trials to get the (i+1)-th element is
y/(y-i).
Thus the expected number of trials to choose x unique elements is the sum
y/y + y/(y-1) + ... + y/(y-x+1)
This can be expressed using harmonic numbers as
y (H_y - H_(y-x)).
From the wikipedia page you get the approximation
H_x = ln(x) + gamma + O(1/x)
Hence the number of necessary trials to pick x unique elements from a set of y elements
is
y (ln(y) - ln(y-x)) + O(y/(y-x)).
If needed, you can get a more precise result by using a more precise approximation for H_x. In particular, when x is small it is possible to improve the result a lot.
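To illustrate (my own sketch), the exact harmonic-number expression and the logarithmic approximation can be compared directly, e.g. for dealing 5 unique cards from a 52-card deck:

import math

def expected_trials(x, y):
    # Exact value of y*(H_y - H_(y-x)) as the plain sum y/y + y/(y-1) + ... + y/(y-x+1).
    return sum(y / (y - i) for i in range(x))

x, y = 5, 52
print(expected_trials(x, y), y * (math.log(y) - math.log(y - x)))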
If you're willing to make the assumption that your random number generator will always find a unique value before cycling back to a previously seen value for a given draw, this algorithm is O(m^2), where m is the number of unique values you are drawing.
So, if you are drawing m values from a set of n values, the 1st value will require you to draw at most 1 to get a unique value. The 2nd requires at most 2 (you see the 1st value, then a unique value), the 3rd 3, ... the mth m. Hence in total you require 1 + 2 + 3 + ... + m = [m*(m+1)]/2 = (m^2 + m)/2 draws. This is O(m^2).
Without this assumption, I'm not sure how you can even guarantee the algorithm will complete. It's quite possible (especially with a pseudo-random number generator which may have a cycle), that you will keep seeing the same values over and over and never get to another unique value.
==EDIT==
For the average case:
On your first draw, you will make exactly 1 draw.
On your 2nd draw, you expect to make 1 (the successful draw) + 1/n (the "partial" draw which represents your chance of drawing a repeat)
On your 3rd draw, you expect to make 1 (the successful draw) + 2/n (the "partial" draw...)
...
On your mth draw, you expect to make 1 + (m-1)/n draws.
Thus, you will make 1 + (1 + 1/n) + (1 + 2/n) + ... + (1 + (m-1)/n) draws altogether in the average case.
This equals the sum from i=0 to (m-1) of [1 + i/n]. Let's denote that sum(1 + i/n, i, 0, m-1).
Then:
sum(1 + i/n, i, 0, m-1) = sum(1, i, 0, m-1) + sum(i/n, i, 0, m-1)
= m + sum(i/n, i, 0, m-1)
= m + (1/n) * sum(i, i, 0, m-1)
= m + (1/n)*[(m-1)*m]/2
= (m^2)/(2n) - (m)/(2n) + m
We drop the low order terms and the constants, and we get that this is O(m^2/n), where m is the number to be drawn and n is the size of the list.
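A quick numeric check of that closed form (my own sketch):

m, n = 100, 1000
direct = sum(1 + i / n for i in range(m))     # sum(1 + i/n, i, 0, m-1)
closed = m + (m - 1) * m / (2 * n)            # m + (1/n)*[(m-1)*m]/2
print(direct, closed)                         # both should print (essentially) the same value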
There's a beautiful O(n) algorithm for this. It goes as follows. Say you have n items, from which you want to pick m items. I assume the function rand() yields a random real number between 0 and 1. Here's the algorithm:
items_left = n
items_left_to_pick = m
for j = 1,...,n
    if rand() <= (items_left_to_pick / items_left)
        Pick item j
        items_left_to_pick = items_left_to_pick - 1
    end
    items_left = items_left - 1
end
It can be proved that this algorithm does indeed pick each subset of m items with equal probability, though the proof is non-obvious. Unfortunately, I don't have a reference handy at the moment.
Edit The advantage of this algorithm is that it takes only O(m) memory (assuming the items are simply integers or can be generated on-the-fly) compared to doing a shuffle, which takes O(n) memory.
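A direct Python translation of that pseudocode might look like this (a sketch, not a canonical reference implementation; it returns the chosen indices 0..n-1):

import random

def select_m_of_n(m, n):
    # Selection sampling: pick m of the items 0..n-1 uniformly at random, in one pass.
    picked = []
    items_left = n
    items_left_to_pick = m
    for j in range(n):
        if random.random() <= items_left_to_pick / items_left:
            picked.append(j)
            items_left_to_pick -= 1
        items_left -= 1
    return picked   # always exactly m items, and already in sorted order

print(select_m_of_n(5, 52))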
Your actual question is actually a lot more interesting than what I answered (and harder). I've never been any good at statistics (and it's been a while since I did any), but intuitively, I'd say that the run-time complexity of that algorithm would probably be something like exponential. As long as the number of elements picked is small enough compared to the size of the array, the collision rate will be so small that it will be close to linear time, but at some point the number of collisions will probably grow fast and the run-time will go down the drain.
If you want to prove this, I think you'd have to do something moderately clever with the expected number of collisions as a function of the wanted number of elements. It might be possible to do it by induction as well, but I think going by that route would require more cleverness than the first alternative.
EDIT: After giving it some thought, here's my attempt:
Say we are given an array of m elements and are looking for n random, distinct elements. It is then easy to see that when we want to pick the ith element, the odds of picking an element we've already visited are (i-1)/m. This is then the expected number of collisions for that particular pick. For picking n elements, the expected number of collisions will be the sum of the expected number of collisions for each pick. We plug this into Wolfram Alpha (sum (i-1)/m, i=1 to n) and we get the answer (n**2 - n)/(2m). The average number of picks for our naive algorithm is then n + (n**2 - n)/(2m).
Unless my memory fails me completely (which is entirely possible, actually), this gives an average-case run-time of O(n**2).
The worst case for this algorithm is clearly when you're choosing the full set of N items. This is equivalent to asking: On average, how many times must I roll an N-sided die before each side has come up at least once?
Answer: N * H_N, where H_N is the Nth harmonic number, a value famously approximated by log(N).
This means the algorithm in question is N log N.
As a fun example, if you roll an ordinary 6-sided die until you see one of each number, it will take on average 6 * H_6 = 14.7 rolls.
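A quick check of that figure (a throwaway sketch):

# N * H_N for a 6-sided die: 6 * (1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6)
N = 6
print(N * sum(1 / k for k in range(1, N + 1)))   # -> 14.7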
Before being able to answer this question in detail, let's define the framework. Suppose you have a collection {a_1, a_2, ..., a_n} of n distinct objects, and want to pick m distinct objects from this set, such that the probability of a given object a_j appearing in the result is equal for all objects.
If you have already picked k items, and randomly pick an item from the full set {a_1, a_2, ..., a_n}, the probability that the item has not been picked before is (n-k)/n. This means that the number of samples you have to take before you get a new object is (assuming independence of random sampling) geometric with parameter (n-k)/n. Thus the expected number of samples to obtain one extra item is n/(n-k), which is close to 1 if k is small compared to n.
Concluding, if you need m unique objects, randomly selected, this algorithm gives you
n/n + n/(n-1) + n/(n-2) + n/(n-3) + .... + n/(n-(m-1))
which, as Alderath showed, can be estimated by
m*n / (n-m+1).
You can see a little bit more from this formula:
* The expected number of samples to obtain a new unique element increases as the number of already chosen objects increases (which sounds logical).
* You can expect really long computation times when m is close to n, especially if n is large.
In order to obtain m unique members from the set, use a variant of Donald Knuth's algorithm for obtaining a random permutation. Here, I'll assume that the n objects are stored in an array.
for i = 1..m
    k = randInt(i, n)
    exchange(i, k)
end
Here, randInt samples an integer from {i, i+1, ..., n}, and exchange swaps two members of the array. You only need to shuffle m times, so the computation time is O(m), whereas the memory is O(n) (although you can adapt it to only save the entries such that a[i] <> i, which would give you O(m) in both time and memory, but with higher constants).
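In Python, a sketch of that partial shuffle could look like this (0-based indexing; the function name is my own):

import random

def sample_m(items, m):
    # Partial Fisher-Yates shuffle: the first m positions end up holding a uniform random
    # m-subset of the list. Note that this shuffles the input list in place.
    n = len(items)
    for i in range(m):
        k = random.randint(i, n - 1)              # randInt(i, n) from the pseudocode
        items[i], items[k] = items[k], items[i]   # exchange(i, k)
    return items[:m]

print(sample_m(list(range(52)), 5))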
Most people forget that checking whether a number has already been drawn also takes a while.
The number of tries necessary can, as described earlier, be evaluated from:
T(n,m) = n(H(n)-H(n-m)) ⪅ n(ln(n)-ln(n-m))
which goes to n*ln(n) for interesting values of m.
However, for each of these 'tries' you will have to do a lookup. This might be a simple O(n) run-through, or something like a binary tree. This will give you a total performance of n^2*ln(n) or n*ln(n)^2.
For smaller values of m (m < n/2), you can get a very good approximation of T(n,m) using the AM-HM inequality, yielding the formula:
2*m*n/(2*n-m+1)
As m goes to n, this gives a lower bound of O(n) tries and performance O(n^2) or O(n*ln(n)).
All the results are, however, far better than I would ever have expected, which shows that the algorithm might actually be just fine in many non-critical cases, where you can accept occasional longer running times (when you are unlucky).
I'm writing a vertex shader at the moment, and I need some random numbers. Vertex shader hardware doesn't have logical/bit operations, so I cannot implement any of the standard random number generators.
Is it possible to make a random number generator using only standard arithmetic? The randomness doesn't have to be particularly good!
If you don't mind crappy randomness, a classic method is
x[n+1] = (x[n] * x[n] + C) mod N
where C and N are constants, C != 0 and C != -2, and N is prime. This is a typical pseudorandom generator for Pollard Rho factoring. Try C = 1 and N = 8051, those work ok.
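Outside a shader, the recurrence is easy to sketch in Python just to see the sequence it produces (the seed and the scaling to [0, 1) are my own choices); in an actual vertex shader the mod itself can be written with plain arithmetic as x - N*floor(x/N):

def make_rng(seed, C=1, N=8051):
    # x[n+1] = (x[n]*x[n] + C) mod N, the Pollard-rho style recurrence described above.
    x = seed
    def next_value():
        nonlocal x
        x = (x * x + C) % N
        return x / N   # scale to [0, 1)
    return next_value

rng = make_rng(seed=12345)
print([round(rng(), 3) for _ in range(5)])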
Vertex shaders sometimes have built-in noise generators for you to use, such as cg's noise() function.
Use a linear congruential generator:
X_(n+1) = (a * X_n + c) mod m
Those aren't that strong, but at least they are well known and can have long periods. The Wikipedia page also has good recommendations:
The period of a general LCG is at most m, and for some choices of a much less than that. The LCG will have a full period if and only if:
1. c and m are relatively prime,
2. a - 1 is divisible by all prime factors of m,
3. a - 1 is a multiple of 4 if m is a multiple of 4.
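A minimal LCG sketch in Python (the constants a = 1103515245, c = 12345, m = 2^31 are widely cited textbook values, used here purely as an example):

def lcg(seed, a=1103515245, c=12345, m=2**31):
    # X_(n+1) = (a * X_n + c) mod m, yielded scaled to [0, 1).
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(42)
print([round(next(gen), 4) for _ in range(5)])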
Believe it or not, I used newx = oldx * 5 + 1 (or a slight variation of it) in several videogames. The randomness is horrible--it's more of a scrambled sequence than a random generator. But sometimes that's all you need. If I recall correctly, it goes through all numbers before it repeats.
It has some terrible characteristics. It doesn't ever give you the same number twice in a row. A few of us did a bunch of tests on variations of it and we used some variations in other games.
We used it when there was no good modulo available to us. It's just a shift by two and two adds (or a multiply by 5 and one add). I would never use it nowadays for random numbers--I'd use an LCG--but maybe it would work OK for a shader where speed is crucial and your instruction set may be limited.
Just been looking at a code golf question about generating a sorted list of 100 random integers. What popped into my head, however, was the idea that you could generate instead a list of positive deltas, and just keep adding them to a running total, thus:
deltas: 1 3 2 7 2
ints: 1 4 6 13 15
In fact, you would use floats, then normalise to fit some upper limit, and round, but the effect is the same.
Although it wouldn't make for shorter code, it would certainly be faster without the sort step. But the thing I have no real handle on is this: Would the resulting distribution of integers be the same as generating 100 random integers from a uniformly distributed probability density function?
Edit: A sample script:
import random, sys

running = 0
max = 1000
deltas = [random.random() for i in range(0, 11)]
floats = []
for d in deltas:
    running += d
    floats.append(running)
upper = floats.pop()
ints = [int(round(f / upper * max)) for f in floats]
print(ints)
Whose output (fair dice roll) was:
[24, 71, 133, 261, 308, 347, 499, 543, 722, 852]
UPDATE: Alok's answer and Dan Dyer's comment point out that using an exponential distribution for the deltas would give a uniform distribution of integers.
So you are asking if the numbers generated in this way are going to be uniformly distributed.
You are generating a series:
y_j = ∑_{i=0}^{j} ( x_i / A )
where A is the sum of all x_i, and the x_i are the (positive) deltas.
The y_j will be distributed like a sorted uniform sample if and only if the x_i are exponentially distributed (with any fixed mean). So, if the x_i are uniformly distributed, the resulting y_j will not be uniformly distributed.
Having said that, it's fairly easy to generate exponential x_i values.
One example would be:
sum := 0
for I = 1 to N do:
    X[I] = sum = sum - ln(RAND)
sum = sum - ln(RAND)
for I = 1 to N do:
    X[I] = X[I]/sum
and you will have your random numbers sorted in the range [0, 1).
Reference: Generating Sorted Lists of Random Numbers. The paper has other (faster) algorithms as well.
Of course, this generates floating-point numbers. For uniform distribution of integers, you can replace sum above by sum/RANGE in the last step (i.e., the R.H.S becomes X[I]*RANGE/sum, and then round the numbers to the nearest integer).
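In Python, the same idea might look like this (a sketch of the algorithm above, not the paper's exact code; 1 - random.random() is used only to avoid ln(0)):

import math, random

def sorted_uniform(n):
    # Cumulative sums of exponential spacings, normalised by one extra spacing.
    total = 0.0
    x = []
    for _ in range(n):
        total -= math.log(1.0 - random.random())   # sum = sum - ln(RAND)
        x.append(total)
    total -= math.log(1.0 - random.random())        # the extra spacing for normalisation
    return [v / total for v in x]                   # sorted values in [0, 1)

print(sorted_uniform(10))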
A uniform distribution has an upper and a lower bound. If you use your proposed method, and your deltas happen to be chosen large enough that you run into the upper bound before you have generated all your numbers, what would your algorithm do next?
Having said that, you may want to investigate the Poisson process, in which the interval times between random events occurring with a given average frequency are exponentially distributed.
If you take the number range as being 1 to 1000, and you have to use 100 of these numbers, the deltas will have to average at least 10, otherwise you cannot reach the 1000 mark. How about some working to demonstrate it in action...
The chance of any given number in an evenly distributed random selection is 100/1000, i.e. 1/10 - no shock there, take that as the basis.
Assume you start using deltas, with each delta drawn from 1 to 10.
The odds of getting the number 1 is 1/10 - seems fine.
The odds of getting the number 2 is 1/10 + (1/10 * 1/10) (because you could hit 2 deltas of 1 in a row, or just hit a 2 as the first delta.)
The odds of getting the number 3 is 1/10 + (1/10 * 1/10 * 1/10) + (1/10 * 1/10) + (1/10 * 1/10)
The first case was a delta of 3, the second was hitting 3 deltas of 1 in a row, the third case would be a delta of 1 followed by a 2, and the fourth case was a delta of 2 followed by a 1.
For the sake of my fingers typing, we won't generate the combinations that hit 5.
Immediately, the first few numbers have a greater percentage chance than with a straight random selection.
This could be altered by changing the delta value so the fractions are all different, but I do not believe you could find a delta that produced identical odds.
To give an analogy that might help it sink in: if you consider your delta as just 6 (a die roll) and you run that twice, it is the equivalent of throwing 2 dice - each of the deltas is independent, but you know that 7 has a higher chance of being selected than 2.
I think it will be extremely similar, but the extremes will be different because of the normalization. For example, 100 numbers chosen at random between 1 and 100 could all be 1. However, 100 numbers created using your system could all have deltas of 0.01, but when you normalize them you'll scale them up to be in the range 1 -> 100, which means you'll never get that strange possibility of a set of very low numbers.
Alok's answer and Dan Dyer's comment point out that using an exponential distribution for the deltas would give a uniform distribution of integers.
So the new version of the code sample in the question would be:
import random, sys

running = 0
max = 1000
deltas = [random.expovariate(1.0) for i in range(0, 11)]
floats = []
for d in deltas:
    running += d
    floats.append(running)
upper = floats.pop()
ints = [int(round(f / upper * max)) for f in floats]
print(ints)
Note the use of random.expovariate(1.0), a Python exponential distribution random number generator (very useful!). Here it's called with a mean of 1.0, but since the script normalises against the last number in the sequence, the mean itself doesn't matter.
Output (fair dice roll):
[11, 43, 148, 212, 249, 458, 539, 725, 779, 871]
Q: Would the resulting distribution of integers be the same as generating 100 random integers from a uniformly distributed probability density function?
A: Each delta will be uniformly distributed. The central limit theorem tells us that the distribution of a sum of a large number of such deviates (since they have a finite mean and variance) will tend to the normal distribution. Hence the later deviates in your sequence will not be uniformly distributed.
So the short answer is "no". Afraid I cannot give a simple solution without doing algebra I don't have time to do today!
The reference (1979) in Alok's answer is interesting. It gives an algorithm for generating the uniform order statistics not by addition but by successive multiplication:
max = 1.
for i = N downto 1 do
    out[i] = max = max * RAND^(1/i)
where RAND is uniform on [0,1). This way you don't have to normalize at the end, and in fact don't even have to store the numbers in an array; you could use this as an iterator.
The Exponential Distribution: Theory, Methods and Applications by N. Balakrishnan and Asit P. Basu gives another derivation of this algorithm on page 22 and credits Malmquist (1950).
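A Python sketch of that iteration (my own translation, with 0-based indexing, so out[0] ends up being the smallest value):

import random

def sorted_uniform_mult(n):
    # Sorted uniforms by successive multiplication (Malmquist, 1950); no normalisation needed.
    out = [0.0] * n
    cur = 1.0
    for i in range(n, 0, -1):                  # i = N downto 1
        cur *= random.random() ** (1.0 / i)    # max = max * RAND^(1/i)
        out[i - 1] = cur
    return out                                 # ascending order

print(sorted_uniform_mult(10))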
You can do it in two passes:
in the first pass, generate deltas between 0 and (MAX_RAND/n)
in the second pass, normalise the random numbers to be within bounds
Still O(n), with good locality of reference.