This is an AMPL model. I'm pretty new to this, and I'm working on a classical logistics problem, a network flow problem, where I have to find the least expensive way to transport the available blood donations across a network of cities with different costs on the edges. So I have to minimize the objective function (it may be easier to understand by reading the code).
I have already solved that problem, but now I'm facing the second task, where a fixed cost of 10 must be paid for each edge used for transporting blood donations (in addition to the shipping costs). From what I have understood, the question is easy: in practice I just have to add 10*numberOfEdgesUsed to the objective function. I want to do it the correct way, adding a binary variable for every edge, 1 if the edge is used and 0 if not. I'm pretty new to this kind of programming, and I don't know how to do it.
Any help is welcome. I include just the .mod code; I don't include the .dat file because it is not necessary.
This is the code for the first task, which I have to modify:
set Cities;
set Origins within (Cities);
set Destinations within (Cities);
set Link within (Cities cross Cities);
param Costs{Link};
param DemSup{Cities};
param fixedCost{(i, j) in Link} = 10;
var y{Link} binary;
var Ship{Link} >= 0, <= 1000;
minimize Total_Cost: sum{(i,j) in Link} fixedCost[i,j] * y[i,j] + sum {(i,j) in Link} (Costs[i,j] * Ship[i,j]);
subject to Supply {i in Origins}: - sum {(i,k) in Link} Ship[i,k] >= DemSup[i];
subject to Demand {i in Destinations}: sum {(j,i) in Link} Ship[j,i] - sum {(i,k) in Link} Ship[i,k] == DemSup[i];
You need to add the implication: if y = 0 then the corresponding link cannot be used. This can be formulated as a constraint over the links, for example:
subject to LinkUsed {(i,j) in Link}: Ship[i,j] <= 1000 * y[i,j];
(or better, Ship[i,j] <= Ship[i,j].ub * y[i,j], so the big-M constant follows the variable's declared upper bound)
This is a CORDIC multiply function; I came across the code on the internet.
But its output is quite different from the expected data.
How can I modify this code so that it is correct?
Update: code
for (i = 1; i <= 8; i++)
{
    if (x > 0) {
        x = x - pow(2, -i);
        z = z + y * pow(2, -i);
    }
    else {
        x = x + pow(2, -i);
        z = z - y * pow(2, -i);
    }
}
If I run it with x=7, y=8, then z=7.000, not 56.
What am I doing wrong?
Update 2
Thanks, I got the right answer; I have checked the range and it works. By the way, is there a range-extension algorithm? How can I extend the range?
It looks like you took this function from this paper (without attribution!). The code is full of obvious typos, but if you read the paragraph below the function it says:
This calculation assumes that both x and y are fractional ranging from -1 to 1. The algorithm is valid for other ranges as long as the decimal point is allowed to float. With a few extensions, this algorithm would work well with floating point data.
Take home message: always read the accompanying documentation for any code that you plan to use, especially if you don't understand how it works.
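For the range-extension question: one common approach (sketched below in C++; the helper names are made up and this is not the paper's method) is to pre-scale x into [-1, 1] by a power of two and then shift the result back by the same factor:
#include <cmath>
#include <cstdio>

// Fixed-iteration CORDIC-style multiply; assumes |x| <= 1.
// z converges to x * y to within roughly 2^-iterations.
double cordicMultiply(double x, double y, int iterations)
{
    double z = 0.0;
    for (int i = 1; i <= iterations; i++) {
        double step = std::pow(2.0, -i);
        if (x > 0) { x -= step; z += y * step; }
        else       { x += step; z -= y * step; }
    }
    return z;
}

// Hypothetical range extension: halve x until it fits in [-1, 1],
// then scale the result back up by the same power of two.
double cordicMultiplyExtended(double x, double y)
{
    int shift = 0;
    while (std::fabs(x) > 1.0) { x *= 0.5; shift++; }
    return std::ldexp(cordicMultiply(x, y, 16), shift);  // result * 2^shift
}

int main()
{
    std::printf("%f\n", cordicMultiplyExtended(7.0, 8.0));  // roughly 56
}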
My goal is to independently calculate the number of items an enemy would drop after it is killed. For example, say there are 50 potions each with a 50% chance of being dropped, I'd like to randomly return a number from 0 to 50, based on independent trials.
Currently, this is the code I'm using:
int droppedItems(int n, float probability) {
int count = 0;
for (int x = 1; x <= n; ++x) {
if (random() <= probability) {
++count;
}
}
return count;
}
Where probability is a number from 0.0 to 1.0, random() returns a value from 0.0 to 1.0, and n is the maximum number of items to be dropped. This is written as C++ code; however, I'm actually using Visual Basic 6, so there are no libraries to help with this.
This code works flawlessly. However, I'd like to optimize this so that if n happens to be 999999, it doesn't take forever (which it currently does).
Use the binomial distribution. Wiki - Binomial Distribution
Ideally, use the libraries for whatever language this pseudocode will be written in. There's no sense in reinventing the wheel unless of course you are trying to learn how to invent a wheel.
Specifically, you'll want something that will let you generate random values given a binomial distribution with a probability of success in any given trial and a number of trials.
EDIT :
I went ahead and did this (in python, since that's where I live these days). It relies on the very nice numpy library (hooray, abstraction!):
>>>import numpy
>>>numpy.random.binomial(99999,0.5)
49853
>>>numpy.random.binomial(99999,0.5)
50077
And, using timeit.Timer to check execution time:
# timing it across 10,000 iterations for 99,999 items per iteration
>>>timeit.Timer(stmt="numpy.random.binomial(99999,0.5)", setup="import numpy").timeit(10000)
0.00927[... seconds]
EDIT 2 :
As it turns out, there isn't a simple way to implement a random number generator based on the binomial distribution without library support.
There is, however, an algorithm you can implement yourself which will generate random variables from the binomial distribution. You can view it here as a PDF
My guess is that given what you want to use it for (having monsters drop loot in a game), implementing the algorithm is not worth your time. There's room for fudge factor here!
I would change your code like this (note: this is not a binomial distribution):
Use your current code for small values, say n up to 100.
For n greater than one hundred, calculate the value of count for 100 using your current algorithm and then multiply the result by n/100.
Again, if you really want to figure out how to implement the BTPE algorithm yourself, you can - I think the method I give above wins in the trade off between effort to write and getting "close enough".
As @IamChuckB pointed out already, the key word is binomial distribution. When the number of Bernoulli trials (number of items in your example) is large enough, a good approximation is the Poisson distribution, which is much simpler to calculate and draw numbers from (the exact algorithm is spelled out in the linked Wikipedia article).
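If library support is available, drawing from that Poisson approximation is a one-liner; a minimal C++ sketch (the function name is just for illustration), assuming C++11 <random>:
#include <random>

// Approximate a Binomial(n, p) draw with a Poisson(n * p) draw, as suggested
// above. The approximation is best when n is large and p is small.
int droppedItemsPoisson(int n, double p)
{
    static std::mt19937 gen(std::random_device{}());
    std::poisson_distribution<int> dist(n * p);
    int count = dist(gen);
    return count > n ? n : count;  // Poisson is unbounded, so cap at n
}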
I'm having a strange problem here, and I can't find a good explanation for it, so I thought of asking you guys:
Consider the following method:
int MathUtility::randomize(int Min, int Max)
{
qsrand(QTime::currentTime().msec());
if (Min > Max)
{
int Temp = Min;
Min = Max;
Max = Temp;
}
return ((rand()%(Max-Min+1))+Min);
}
I won't explain to you gurus what this method actually does; I'll instead explain my problem:
I realised that when I call this method in a loop, sometimes, I get the same random number over and over again... For example, this snippet...
for(int i=0; i<10; ++i)
{
int Index = MathUtility::randomize(0, 1000);
qDebug() << Index;
}
...will produce something like :
567
567
567
567...etc...
I realised too that if I don't call qsrand every time, but only once during my application's lifetime, it works perfectly...
My question : Why ?
Because if you call randomize more than once in a millisecond (which is rather likely at current CPU clock speeds), you are seeding the RNG with the same value. This is guaranteed to produce the same output from the RNG.
Random-number generators are only meant to be seeded once. Seeding them multiple times does not make the output extra random, and in fact (as you found) may make it much less random.
If you make the call fast enough the value of QTime::currentTime().msec() will not change, and you're basically re-seeding qsrand with the same seed, causing the next random number generated to be the same as the prior one.
If you call the Qt qsrand function to initialize the seed, you must call the Qt qrand function to generate a random number, not the rand function from the standard library. The seed initialization for the rand function is srand.
Sorry for the dig up.
What you see is the effect of pseudo-randomness. You seed it with the time once, and it generates a sequence of numbers. Since you are pulling a series of random numbers very quickly after each other, you are re-seeding the randomizer with the same number until the next millisecond. And while a millisecond seems like a short time, consider the amount of calculations you're doing in that time.
Modern Qt, C++11:
#include <random>
#include <QDateTime>
int getRand(int min, int max){
unsigned int ms = static_cast<unsigned>(QDateTime::currentMSecsSinceEpoch());
std::mt19937 gen(ms);
std::uniform_int_distribution<> uid(min, max);
return uid(gen);
}
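Note that this still constructs and seeds a fresh generator on every call, so two calls within the same millisecond return the same value. A minimal sketch of the seed-once pattern the other answers describe (assuming C++11 <random>; the function name is just for illustration):
#include <random>

int getRandSeededOnce(int min, int max)
{
    // The generator is constructed and seeded exactly once, on the first
    // call, and then reused; only the distribution changes per call.
    static std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> uid(min, max);
    return uid(gen);
}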
Two problems:
1. As others have pointed out, the generator is being seeded multiple times.
2. This is not a very good method for generating random numbers within a given range. (In fact it's very, very bad for most generators.)
You are assuming that the low-order bits from the generator are uniformly distributed. This is not the case with most generators. In most generators the randomness occurs in the high-order bits.
By using the remainder after division you are in effect throwing away the randomness.
You should scale using multiplication and division, not the modulo operator.
e.g.
my_number = start_required + (generator_output * range_required) / generator_maximum;
if generator_output is in [0, generator_maximum]
my_number will be in [start_required, start_required + range_required]
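A small C++ sketch of that scaling approach, using the names from the formula above (the long long intermediate is an added safeguard against overflow):
#include <cstdlib>

// Scale by multiplication and division instead of the modulo operator, so
// the high-order bits of the generator output dominate the result.
// Returns a value in [start_required, start_required + range_required].
int scaledRandom(int start_required, int range_required)
{
    long long generator_output = std::rand();  // in [0, RAND_MAX]
    return start_required
         + static_cast<int>((generator_output * range_required)
                            / static_cast<long long>(RAND_MAX));
}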
I've run into the same behaviour and solved it by using rand() instead of srand().
But I only use it for checking my application. It just runs in a cycle, so I don't need to watch for updates.
But if you are going to make some kind of game, it isn't a good approach, because your random numbers will be the same.
First off, this question is ripped out from this question. I did it because I think this part is bigger than a sub-part of a longer question. If it offends, please pardon me.
Assume that you have an algorithm that generates randomness. Now how do you test it?
Or to be more direct - Assume you have an algorithm that shuffles a deck of cards, how do you test that it's a perfectly random algorithm?
To add some theory to the problem -
A deck of cards can be shuffled in 52! (52 factorial) different ways. Take a deck of cards, shuffle it by hand and write down the order of all cards. What is the probability that you would have gotten exactly that shuffle? Answer: 1 / 52!.
What is the chance that you, after shuffling, will get A, K, Q, J ... of each suit in a sequence? Answer: 1 / 52!
So, just shuffling once and looking at the result will give you absolutely no information about your shuffling algorithm's randomness. Shuffle twice and you have more information, three times even more...
How would you black box test a shuffling algorithm for randomness?
Statistics. The de facto standard for testing RNGs is the Diehard suite (originally available at http://stat.fsu.edu/pub/diehard). Alternatively, the Ent program provides tests that are simpler to interpret but less comprehensive.
As for shuffling algorithms, use a well-known algorithm such as Fisher-Yates (a.k.a "Knuth Shuffle"). The shuffle will be uniformly random so long as the underlying RNG is uniformly random. If you are using Java, this algorithm is available in the standard library (see Collections.shuffle).
It probably doesn't matter for most applications, but be aware that most RNGs do not provide sufficient degrees of freedom to produce every possible permutation of a 52-card deck (explained here).
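For reference, a minimal Fisher-Yates sketch (written in C++ here; in Java you would simply call Collections.shuffle):
#include <random>
#include <utility>
#include <vector>

// Fisher-Yates shuffle: every permutation of the deck is equally likely,
// provided the underlying RNG is uniform.
void fisherYatesShuffle(std::vector<int>& deck, std::mt19937& rng)
{
    for (std::size_t i = deck.size() - 1; i > 0; --i) {
        std::uniform_int_distribution<std::size_t> dist(0, i);
        std::swap(deck[i], deck[dist(rng)]);
    }
}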
Here's one simple check that you can perform. It uses generated random numbers to estimate Pi. It's not proof of randomness, but poor RNGs typically don't do well on it (they will return something like 2.5 or 3.8 rather than ~3.14).
Ideally this would be just one of many tests that you would run to check randomness.
Something else that you can check is the standard deviation of the output. The expected standard deviation for a uniformly distributed population of values in the range 0..n approaches n/sqrt(12).
/**
* This is a rudimentary check to ensure that the output of a given RNG
* is approximately uniformly distributed. If the RNG output is not
* uniformly distributed, this method will return a poor estimate for the
* value of pi.
* @param rng The RNG to test.
* @param iterations The number of random points to generate for use in the
* calculation. This value needs to be sufficiently large in order to
* produce a reasonably accurate result (assuming the RNG is uniform).
* Less than 10,000 is not particularly useful. 100,000 should be sufficient.
* @return An approximation of pi generated using the provided RNG.
*/
public static double calculateMonteCarloValueForPi(Random rng,
int iterations)
{
// Assumes a quadrant of a circle of radius 1, bounded by a box with
// sides of length 1. The area of the square is therefore 1 square unit
// and the area of the quadrant is (pi * r^2) / 4.
int totalInsideQuadrant = 0;
// Generate the specified number of random points and count how many fall
// within the quadrant and how many do not. We expect the number of points
// in the quadrant (expressed as a fraction of the total number of points)
// to be pi/4. Therefore pi = 4 * ratio.
for (int i = 0; i < iterations; i++)
{
double x = rng.nextDouble();
double y = rng.nextDouble();
if (isInQuadrant(x, y))
{
++totalInsideQuadrant;
}
}
// From these figures we can deduce an approximate value for Pi.
return 4 * ((double) totalInsideQuadrant / iterations);
}
/**
* Uses Pythagoras' theorem to determine whether the specified coordinates
* fall within the area of the quadrant of a circle of radius 1 that is
* centered on the origin.
* @param x The x-coordinate of the point (must be between 0 and 1).
* @param y The y-coordinate of the point (must be between 0 and 1).
* @return True if the point is within the quadrant, false otherwise.
*/
private static boolean isInQuadrant(double x, double y)
{
double distance = Math.sqrt((x * x) + (y * y));
return distance <= 1;
}
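For the standard-deviation check mentioned above, a small sketch (in C++ for brevity, and assuming the RNG output has already been scaled into the range 0..n): compute the sample standard deviation and compare it against n/sqrt(12).
#include <cmath>
#include <vector>

// Sample standard deviation of RNG output scaled into 0..n. For a uniform
// distribution this should be close to n / sqrt(12). Assumes values is
// non-empty.
double sampleStdDev(const std::vector<double>& values)
{
    double mean = 0.0;
    for (double v : values) mean += v;
    mean /= values.size();
    double variance = 0.0;
    for (double v : values) variance += (v - mean) * (v - mean);
    variance /= values.size();
    return std::sqrt(variance);
}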
First, it is impossible to know for sure if a certain finite output is "truly random" since, as you point out, any output is possible.
What can be done, is to take a sequence of outputs and check various measurements of this sequence against what is more likely. You can derive a sort of confidence score that the generating algorithm is doing a good job.
For example, you could check the output of 10 different shuffles. Assign a number 0-51 to each card, and take the average of the card in position 6 across the shuffles. The convergent average is 25.5, so you would be surprised to see a value of 1 here. You could use the central limit theorem to get an estimate of how likely each average is for a given position.
But we shouldn't stop here! Because this algorithm could be fooled by a system that only alternates between two shuffles that are designed to give the exact average of 25.5 at each position. How can we do better?
We expect a uniform distribution (equal likelihood for any given card) at each position, across different shuffles. So among the 10 shuffles, we could try to verify that the choices 'look uniform.' This is basically just a reduced version of the original problem. You could check that the standard deviation looks reasonable, that the min is reasonable, and the max value as well. You could also check that other values, such as the closest two cards (by our assigned numbers), also make sense.
But we also can't just add various measurements like this ad infinitum, since, given enough statistics, any particular shuffle will appear highly unlikely for some reason (e.g. this is one of very few shuffles in which cards X,Y,Z appear in order). So the big question is: which is the right set of measurements to take? Here I have to admit that I don't know the best answer. However, if you have a certain application in mind, you can choose a good set of properties/measurements to test, and work with those -- this seems to be the way cryptographers handle things.
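A sketch of the position-average check described above (C++ here for consistency with the other snippets; each shuffle is assumed to be a vector of card numbers 0-51):
#include <cstddef>
#include <vector>

// Average card number observed at a fixed position across many shuffles.
// For a uniform shuffle this converges to 25.5 (the mean of 0..51).
double averageAtPosition(const std::vector<std::vector<int>>& shuffles,
                         std::size_t position)
{
    double sum = 0.0;
    for (const std::vector<int>& deck : shuffles)
        sum += deck[position];
    return sum / shuffles.size();
}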
There's a lot of theory on testing randomness. For a very simple test on a card shuffling algorithm you could do a lot of shuffles and then run a chi squared test that the probability of each card turning up in any position was uniform. But that doesn't test that consecutive cards aren't correlated so you would also want to do tests on that.
Volume 2 of Knuth's Art of Computer Programming gives a number of tests that you could use in sections 3.3.2 (Empirical tests) and 3.3.4 (The Spectral Test) and the theory behind them.
The only way to test for randomness is to write a program that attempts to build a predictive model for the data being tested, and then use that model to try to predict future data, and then showing that the uncertainty, or entropy, of its predictions tend towards maximum (i.e. the uniform distribution) over time. Of course, you'll always be uncertain whether or not your model has captured all of the necessary context; given a model, it'll always be possible to build a second model that generates non-random data that looks random to the first. But as long as you accept that the orbit of Pluto has an insignificant influence on the results of the shuffling algorithm, then you should be able to satisfy yourself that its results are acceptably random.
Of course, if you do this, you might as well use your model generatively, to actually create the data you want. And if you do that, then you're back at square one.
Shuffle a lot, and then record the outcomes (if I'm reading this correctly). I remember seeing comparisons of "random number generators". They just test them over and over, then graph the results.
If it is truly random the graph will be mostly even.
I'm not fully following your question. You say
Assume that you have an algorithm that generates randomness. Now how do you test it?
What do you mean? If you're assuming you can generate randomness, there's no need to test it.
Once you have a good random number generator, creating a random permutation is easy (e.g. Call your cards 1-52. Generate 52 random numbers assigning each one to a card in order, and then sort according to your 52 randoms) . You're not going to destroy the randomness of your good RNG by generating your permutation.
The difficult question is whether you can trust your RNG. Here's a sample link to people discussing that issue in a specific context.
Testing 52! possibilities is of course impossible. Instead, try your shuffle on smaller numbers of cards, like 3, 5, and 10. Then you can test billions of shuffles and use a histogram and the chi-square statistical test to prove that each permutation is coming up an "even" number of times.
No code so far, so I'll copy-paste the testing part from my answer to the original question.
// ...
int main() {
typedef std::map<std::pair<size_t, Deck::value_type>, size_t> Map;
Map freqs;
Deck d;
const size_t ntests = 100000;
// compute frequencies of events: card at position
for (size_t i = 0; i < ntests; ++i) {
d.shuffle();
size_t pos = 0;
for(Deck::const_iterator j = d.begin(); j != d.end(); ++j, ++pos)
++freqs[std::make_pair(pos, *j)];
}
// if Deck.shuffle() is correct then all frequencies must be similar
for (Map::const_iterator j = freqs.begin(); j != freqs.end(); ++j)
std::cout << "pos=" << j->first.first << " card=" << j->first.second
<< " freq=" << j->second << std::endl;
}
This code does not test randomness of underlying pseudorandom number generator. Testing PRNG randomness is a whole branch of science.
For a quick test, you can always try compressing it. Once it doesn't compress, then you can move onto other tests.
I've tried dieharder but it refuses to work for a shuffle. All tests fail. It is also really stodgy; it won't let you specify the range of values you want or anything like that.
Pondering it myself, what I would do is something like:
Setup (Pseudo code)
// A card has a Number 0-51 and a position 0-51
int[][] StatMatrix = new int[52][52]; // Assume all are set to 0 as starting values
ShuffleCards();
ForEach (card in Cards) {
StatMatrix[card.Position][card.Number]++;
}
This gives us a matrix 52x52 indicating how many times a card has ended up at a certain position. Repeat this a large number of times (I would start with 1000, but people better at statistics than me may give a better number).
Analyze the matrix
If we have perfect randomness and perform the shuffle an infinite number of times then for each card and for each position the number of times the card ended up in that position is the same as for any other card. Saying the same thing in a different way:
StatMatrix[position][card] / numberOfShuffles = 1/52.
So I would calculate how far from that number we are.
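One concrete way to measure how far from that number we are is a chi-square statistic over the matrix; a C++ sketch assuming the StatMatrix counts described above:
#include <cstddef>

// Chi-square statistic for the 52x52 position/card count matrix. Under a
// perfectly uniform shuffle every cell has expected count shuffles / 52;
// large deviations inflate the statistic.
double chiSquareStatistic(const int statMatrix[52][52], long shuffles)
{
    double expected = shuffles / 52.0;
    double chi2 = 0.0;
    for (std::size_t pos = 0; pos < 52; ++pos) {
        for (std::size_t card = 0; card < 52; ++card) {
            double diff = statMatrix[pos][card] - expected;
            chi2 += diff * diff / expected;
        }
    }
    // Compare against a chi-square table; the exact degrees of freedom
    // depend on the row/column constraints of the matrix.
    return chi2;
}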