I'm making a poker hand analysis program just for fun, and I've already written the hand evaluator that determines what kind of hand it is (pair, straight, etc.).
Now I'm stuck on calculating outs from the 5 known cards on the flop or the 6 on the turn. That is, for each combo (pair, straight, etc.), I need to count the cards that aren't in the hand yet but could still be dealt to complete it.
What would be a possible approach for this? Is it even doable without some sort of Monte Carlo simulation?
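One possible approach needs no Monte Carlo at all: enumerate the unseen cards (only 47 remain after the flop, 46 after the turn) and re-run the evaluator on each. A minimal sketch, assuming a hypothetical category(cards) helper adapted from the existing evaluator that returns the hand category a set of cards makes:

    # Count, for each hand category, how many unseen cards would make it.
    # category(cards) is a hypothetical wrapper around the hand evaluator.
    RANKS = '23456789TJQKA'
    SUITS = 'cdhs'

    def outs_per_combo(hole, board, category):
        seen = set(hole + board)
        deck = [r + s for r in RANKS for s in SUITS]
        counts = {}
        for card in deck:
            if card in seen:
                continue
            combo = category(hole + board + [card])
            counts[combo] = counts.get(combo, 0) + 1
        return counts

Even enumerating two cards to come after the flop is only C(47, 2) = 1081 pairs, so exhaustive enumeration stays trivially cheap.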
I am making a roguelike set in an open world on a procedurally generated planet, and I want the distribution of the biomes to feel organic. There are 5 different biomes. Is there a way to distribute them organically without a huge, complicated algorithm? I want the amount of space each biome takes up to be nearly equal.
I have worked with cellular automata before, when I was making the terrain generators for each biome, but there each tile had only 2 possible states. Is there an efficient way to handle 5?
I'm using Python 2.5, although specific code isn't necessary; theory is fine.
If the question is too open-ended, are there any resources out there that I could look at for this kind of problem?
You can define a cellular automaton on any cell state space. Just formulate the cell update function as F:Q^n->Q where Q is your state space (here Q={0,1,2,3,4,5}) and n is the size of your neighborhood.
As a start, just write F as a majority rule: with 0 as the neutral state, F should return whichever value in 1-5 has the highest count in the neighborhood, or 0 if none is present. In case of a tie, you may pick one of the maxima at random.
As an initial state, start with a configuration containing 5 roughly equidistant seed cells holding the states 1-5 (you can place them deterministically from fixed positions, possibly shifted or mirrored, or generate the points randomly).
When every cell has a value other than 0, you have your map.
Feel free to improve on the update function, for example by applying the rule with a given probability.
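A minimal sketch of that majority-rule update, kept Python 2.5 compatible (so no collections.Counter); the grid layout and names are my own choices:

    import random

    def step(grid):
        # One synchronous update of the whole grid.
        # States: 0 = unassigned, 1-5 = the five biomes.
        h, w = len(grid), len(grid[0])
        new = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                counts = {}
                # 3x3 neighborhood including the cell itself, clipped at edges.
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] != 0:
                            s = grid[ny][nx]
                            counts[s] = counts.get(s, 0) + 1
                if counts:
                    best = max(counts.values())
                    # Break ties at random, as suggested above.
                    new[y][x] = random.choice(
                        [s for s in counts if counts[s] == best])
        return new

You'd then iterate until the map is full, e.g. while any(0 in row for row in grid): grid = step(grid).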
I have a linear regression equation from school, which gives a value between -1 and 1 that indicates whether a set of data points is close enough to a linear function,
and the equation given here
http://people.hofstra.edu/stefan_waner/realworld/calctopic1/regression.html
under "best fit of a line". I would like to use these to do simple gesture detection based on a point in 3-space (x, y, z): forward, back, left, right, up, down. First I would check whether the points fall on a line in 2 of the 3 dimensions; then I would check whether that line's slope approaches zero or infinity.
Is this fast enough for functional gesture recognition? If not, could someone propose an alternative algorithm?
If I've understood your question correctly then (1) the calculation you describe here would probably be plenty fast enough, (2) it may not actually do what you want, and (3) the stuff that'll be slow in an actual implementation would lie elsewhere.
So, I think you're proposing to do this:
1) Identify the positions of ... something ... (the user's hand, perhaps) in three-dimensional space, at several successive times.
2) For (say) each of {x,y} and {x,z}, look at those two coordinates of each point, compute the correlation coefficient (which is what your formula describes) and see whether it's close to ±1.
3) If both correlation coefficients are close to ±1, then the points lie approximately on a straight line; calculate the gradient of that line (using a formula similar to that of the correlation coefficient).
4) If the gradients are both very close to 0 or ±infinity, then your line is approximately parallel to one axis, which is the case you're trying to recognize.
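For reference, the correlation in step 2 is just a handful of sums over the samples; a minimal sketch (the function and variable names are mine):

    import math

    def correlation(xs, ys):
        # Pearson correlation coefficient of two equal-length sequences.
        n = float(len(xs))
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        syy = sum(y * y for y in ys)
        sxy = sum(x * y for x, y in zip(xs, ys))
        num = n * sxy - sx * sy
        den = math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
        return num / den if den else 0.0

At 50 samples that's a few hundred multiply-adds and one square root, which is the basis for the timing estimate below.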
1: Is it fast enough? You might perhaps be sampling at 50 frames per second or thereabouts, and your gestures might take a second to execute. So you'll have somewhere on the order of 50 positions. So, the total number of arithmetic operations you'll need is maybe a few hundred (including a modest number of square roots). In the worst case, you might be doing this in emulated floating-point on a slow ARM processor or something; in that case, each arithmetic operation might take a couple of hundred cycles, so the whole thing might be 100k cycles, which for a really slow processor running at 100MHz would be about a millisecond. You're not going to have any problem with the time taken to do this calculation.
2: Is it the right thing? It's not clear that it's the right calculation. For instance, suppose your user's hand moves back and forth rapidly several times along the x-axis; that will give you a positive result; is that what you want? Suppose the user attempts the gesture you want but moves at slightly the wrong angle; you may get a negative result. Suppose they move exactly along the x-axis for a bit and then along the y-axis for a bit; then the projections onto the {x,y}, {x,z} and {y,z} planes will all pass your test. These all seem like results you might not want.
3: Is it where the real cost will lie? This all assumes you've already got (x,y,z) coordinates. Getting those is probably going to be more expensive than processing them. For instance, if you have a camera-based system of some kind then there'll be some nontrivial image processing for every frame. Or perhaps you're integrating up data from accelerometers (which, by the way, is likely to give nasty inaccurate position results); the chances are that you're doing some filtering and other calculations to get position data. I bet that the cost of performing a calculation like this one will be substantially less than the cost of getting the coordinates in the first place.
First off, I'll say I don't know a whole lot about theory and such, but I was wondering if this is an NP or NP-complete problem. It specifically sounds like a special case of the subset sum problem.
Anyway, there's this game I've been playing recently called Alchemy which prompted this thought. Basically you start off with 4 basic elements and combine them to make other elements.
So, for instance, here is a short "recipe", if you will, for making elements:
fire=basic element
water=basic element
air=basic element
earth=basic element
sand=earth+earth
glass=sand+fire
energy=fire+air
lightbulb=energy+glass
So let's say a computer could create only the 4 basic elements, but it could create multiple copies of them. You write a program to make any element by combining other elements. How would this program process the list to create a lightbulb?
It's clearly fire+air=energy, earth+earth=sand, sand+fire=glass, energy+glass=lightbulb.
But I can't think of any way to write a program that processes the list and figures that out, short of a brute-force method that goes over every element and checks its recipe.
Is this an NP problem? Or am I just not able to figure this out?
How would this program process the list to create a lightbulb?
Surely you just run the definitions backwards; e.g.
Creating a lightbulb requires 1 energy + 1 glass
Creating an energy requires 1 fire + 1 air
and so on. This is effectively a simple tree walk.
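A minimal sketch of that backwards walk in Python; the recipes are the ones from the question, and the dictionary layout is my own choice:

    # Expand a target element into the basic elements it requires.
    RECIPES = {
        'sand': ('earth', 'earth'),
        'glass': ('sand', 'fire'),
        'energy': ('fire', 'air'),
        'lightbulb': ('energy', 'glass'),
    }

    def expand(element, needed=None):
        # Recursively walk the recipe tree down to the basics.
        if needed is None:
            needed = {}
        if element in RECIPES:
            for part in RECIPES[element]:
                expand(part, needed)
        else:  # a basic element: fire, water, air, earth
            needed[element] = needed.get(element, 0) + 1
        return needed

    print(expand('lightbulb'))  # -> {'earth': 2, 'fire': 2, 'air': 1} (order may vary)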
OTOH, if you want the computer to figure out that energy + glass means lightbulb (rather than "blob of molten glass"), you've got no chance of solving the problem. You probably couldn't get 2 gamers to agree that energy + glass = lightbulb!
You can easily model your problem as a graph and look for a solution with any complete search algorithm. If you don't have any experience with that, it might also help to look into automated planning; I'm linking to that text because it also features an introduction to complexity and search algorithms.
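To make the search view concrete, here is a minimal sketch that computes the closure of craftable elements from the basics (the table layout and names are my own):

    # Forward closure: keep combining known elements until nothing new
    # appears. Each pass is O(k^2) for k known elements and there are at
    # most as many passes as elements, so this runs in polynomial time.
    COMBINES = {('earth', 'earth'): 'sand',
                ('fire', 'sand'): 'glass',
                ('air', 'fire'): 'energy',
                ('energy', 'glass'): 'lightbulb'}

    def craftable(target, basics):
        known = set(basics)
        changed = True
        while changed and target not in known:
            changed = False
            for a in list(known):
                for b in list(known):
                    made = COMBINES.get(tuple(sorted((a, b))))
                    if made and made not in known:
                        known.add(made)
                        changed = True
        return target in known

    print(craftable('lightbulb', ['fire', 'water', 'air', 'earth']))  # True

The polynomial running time is also a hint that this reachability question, unlike subset sum, isn't NP-hard.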
We have n coins. One of them is fake, and it is either heavier or lighter than the rest (we don't know which). We have a balance scale with 2 pans. How can we find the fake coin in p weighings?
Can you give me a hand with writing such a program? No need for a whole program, just ideas.
Thank you.
This is known as the balance puzzle. See Marcel Kołodziejczyk's "Two-pan balance and generalized counterfeit coin problem" for a generalization of this problem.
I remember solving this for n=12 and 13, partly by hand and then with a program at the end. I don't know how I would solve it for a general n... but I know how I'd start - by considering small values of n and doing it by hand.
I suspect there are essentially patterns that can be used recursively for this... but you'll find them much easier to discover with pen and paper for small values (n=4 to 7, for example) than by coding.
Put a coin on each side; the real ones will balance each other out, while the fake will tip the scale one way or the other. When the scale isn't balanced, one of the 2 coins you just put on is fake; try each against a known-real coin.
If the coins are objects you're handed, then you should be able to do that in a program quite easily.
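A minimal sketch of that pairwise strategy (note it uses about n/2 weighings, not the optimal ternary scheme from the generalization mentioned above); the weigh(left, right) callback returning -1, 0, or 1 is an assumed interface:

    def find_fake(coins, weigh):
        # coins: list of coin ids, n >= 3; weigh(a, b) -> -1, 0, or 1.
        for i in range(0, len(coins) - 1, 2):
            a, b = coins[i], coins[i + 1]
            if weigh([a], [b]) != 0:
                # The fake is a or b; any coin outside this pair is real.
                real = coins[i + 2] if i + 2 < len(coins) else coins[0]
                return a if weigh([a], [real]) != 0 else b
        # Every pair balanced: the unpaired last coin must be the fake.
        return coins[-1]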
I want to write an app to transpose the key a WAV file plays in (for fun; I know there are apps that already do this). My main understanding of how this might be accomplished is to:
1) chop the audio file into very small blocks (say 1/10 a second)
2) run an FFT on each block
3) phase shift the frequency space up or down depending on what key I want
4) use an inverse FFT to return each block to the time domain
5) glue all the blocks together
But now I'm wondering whether the transformed blocks would no longer be continuous when I try to glue them back together. Are there any ideas on how I should do this to guarantee continuity, or am I just worrying about nothing?
Overlap the time samples for each block by half so that each block after the first consists of the last N/2 samples from the previous block and N/2 new samples. Be sure to apply some window to the samples before the transform.
After shifting the frequency, perform an inverse FFT and use the middle N/2 samples from each block. You'll need to adjust the final gain after the IFFT.
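A minimal sketch of one common variant of that scheme in Python with numpy: Hann-windowed blocks at 50% overlap, the spectrum slid by a fixed number of bins, then overlap-added. The block size and shift_bins values are arbitrary placeholders, and a real pitch shifter would also need the phase correction of a phase vocoder:

    import numpy as np

    def shift_block(block, shift_bins):
        # Window, transform, slide the spectrum, transform back.
        windowed = block * np.hanning(len(block))
        spectrum = np.fft.rfft(windowed)
        shifted = np.zeros_like(spectrum)
        if shift_bins >= 0:
            shifted[shift_bins:] = spectrum[:len(spectrum) - shift_bins]
        else:
            shifted[:shift_bins] = spectrum[-shift_bins:]
        return np.fft.irfft(shifted)

    def process(samples, n=1024, shift_bins=8):
        hop = n // 2  # 50% overlap, as described above
        out = np.zeros(len(samples))
        for start in range(0, len(samples) - n, hop):
            out[start:start + n] += shift_block(samples[start:start + n],
                                                shift_bins)
        return out  # adjust the final gain, as noted above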
Of course, mixing the time samples with a sine wave and then low-pass filtering will produce the same shift entirely in the time domain. The mixer frequency would be the desired frequency difference.
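That mixer idea as a minimal sketch with scipy; the cutoff and filter order are arbitrary assumptions (mixing produces both sum and difference frequencies, and the low-pass keeps the difference, i.e. a downward shift):

    import numpy as np
    from scipy.signal import butter, lfilter

    def heterodyne(samples, rate, offset_hz, cutoff_hz=4000.0):
        # Multiply by a cosine at the desired offset, then low-pass.
        t = np.arange(len(samples)) / float(rate)
        mixed = samples * np.cos(2 * np.pi * offset_hz * t)
        b, a = butter(4, cutoff_hz / (rate / 2.0))  # 4th-order low-pass
        return lfilter(b, a, mixed)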
For speech you might want to look at PSOLA - this is a popular algorithm for pitch-shifting and/or time stretching/compression which is a little more sophisticated than the basic overlap-add method, but not much more complex.
If you need to process non-speech samples, e.g. music, then there are several possibilities, however the overlap-add FFT/modify/IFFT approach mentioned in other answers is probably the best bet.
Found this great article on the subject, for anyone trying it in the future!
You may have to find a zero-crossing between the blocks to glue the individual wavs back together. Otherwise you may find that you are getting clicks or pops between the blocks.
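For instance, a minimal helper (names are mine) to find the zero-crossing nearest a seam index, so the blocks can be joined there instead of at an arbitrary sample:

    def nearest_zero_crossing(samples, seam):
        # Search outward from the seam for a sign change.
        for d in range(len(samples)):
            for i in (seam - d, seam + d):
                if 0 < i < len(samples) and samples[i - 1] * samples[i] <= 0:
                    return i
        return seam  # no crossing found; fall back to the seam itself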