So basically I'm making a JRPG, and I'm trying to determine the level of a Fighter by how much XP it has.
So if my Fighter has 0 XP, it's going to be a level 1, but if it has say 1000 XP, it'll be a level 2.
I want the amount of XP required to gain a level to go up as the Fighter levels up, so something like 1000 XP to get to lvl 2, and another 1400 to get to lvl 3 (maybe not so drastic, but I think you get the picture).
I need a formula that determines the level using only the given XP value.
You're going to need to use some variation of the sum of the first n natural numbers (http://www.9math.com/book/sum-first-n-natural-numbers) in order to calculate how much XP you need to reach a level. Once you've done that, use a loop that counts n up from 0 or 1 (depending on your starting level) and checks whether f(n) > current XP; when it does, return n-1.
I'd make a function to calculate the XP needed for a given level n so that if you need to tweak it you only need to change it in one place.
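As a rough sketch in Python (the base cost of 1000 and the extra 400 per level-up are placeholder numbers; tune them to your curve):

def xp_for_level(level):
    """Total XP needed to reach `level`, using the sum of the first n natural
    numbers: each level-up costs `growth` more XP than the previous one."""
    base, growth = 1000, 400       # placeholder values: 1000 XP to level 2, 1400 more to level 3, ...
    n = level - 1                  # number of level-ups taken so far
    return base * n + growth * n * (n - 1) // 2

def level_for_xp(xp):
    """Walk up from level 1 until the next level would cost more XP than we have."""
    level = 1
    while xp >= xp_for_level(level + 1):
        level += 1
    return level

print(level_for_xp(0), level_for_xp(1000), level_for_xp(2400))   # 1 2 3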
I am making a roguelike where the setting is open world on a procedurally generated planet. I want the distribution of each biome to be organic. There are 5 different biomes. Is there a way to organically distribute them without a huge complicated algorithm? I want the amount of space each biome takes up to be nearly equal.
I have worked with cellular automata before when I was making the terrain generators for each biome. There were 2 different states for each tile there. Is there an efficient way to do 5?
I'm using python 2.5, although specific code isn't necessary. Programming theory on it is fine.
If the question is too open ended, are there any resources out there that I could look at for this kind of problem?
You can define a cellular automaton on any cell state space. Just formulate the cell update function as F:Q^n->Q where Q is your state space (here Q={0,1,2,3,4,5}) and n is the size of your neighborhood.
As a start, just write F as a majority rule: with 0 being the neutral state, F(c) should return the value in 1-5 with the highest count in the neighborhood, and 0 if none is present. In case of a tie, you may pick one of the most frequent values at random.
As an initial state, start with a configuration with 5 relatively equidistant cells with the states 1-5 (you may build them deterministically through a fixed position that can be shifted/mirrored, or generate these points randomly).
When all cells have a value different than 0, you have your map.
Feel free to improve on the update function, for example by applying the rule with a given probability.
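A rough Python sketch of that majority rule (the grid size, neighborhood, and seed placement are arbitrary choices, and to keep it short a cell keeps its biome once assigned):

import random

SIZE = 40                  # placeholder map size
EMPTY = 0
BIOMES = [1, 2, 3, 4, 5]

def neighbors(grid, x, y):
    """Values of the 8 surrounding cells (Moore neighborhood), clipped at the edges."""
    vals = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0) and 0 <= x + dx < SIZE and 0 <= y + dy < SIZE:
                vals.append(grid[y + dy][x + dx])
    return vals

def step(grid):
    """Majority rule: an empty cell takes the most common non-empty neighbor value."""
    new = [row[:] for row in grid]
    for y in range(SIZE):
        for x in range(SIZE):
            if grid[y][x] == EMPTY:
                counts = {}
                for v in neighbors(grid, x, y):
                    if v != EMPTY:
                        counts[v] = counts.get(v, 0) + 1
                if counts:
                    best = max(counts.values())
                    new[y][x] = random.choice([v for v, c in counts.items() if c == best])
    return new

# start with 5 random seeds (ignoring the unlikely case of two landing on the same cell)
grid = [[EMPTY] * SIZE for _ in range(SIZE)]
for biome in BIOMES:
    grid[random.randrange(SIZE)][random.randrange(SIZE)] = biome
while any(EMPTY in row for row in grid):   # grow until the map is full
    grid = step(grid)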
I'm looking to create a ranking system for users on a gaming site.
The system should be based on a weighted win percentage, with the weighting based on the number of games played.
For instance:
55 wins and 2 losses = 96% win percentage
1 win and 0 losses = 100% win percentage
The first record should rank higher because they have a higher number of wins.
I'm sure the math is super simple, I just can't wrap my head around it. Can anyone help?
Elo is more thorough because it considers opponent strength when scoring a win or loss, but if opponents are randomly matched, a simple and very effective approach is:
(Wins + constant * Average Win % of all players) / (Wins + Losses + constant)
With 0 games the formula gives the average for all players; as the number of games played increases, it converges on the player's actual record. The constant determines how quickly this happens; you can probably get away with choosing something between 5 and 20.
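For example, in Python (the constant of 10 and the 50% site-wide average are just placeholder values):

def weighted_win_pct(wins, losses, site_avg=0.50, c=10):
    """Blend a player's record with the site-wide average; with few games the
    score stays near the average, and it converges to the true win % as games pile up."""
    return (wins + c * site_avg) / (wins + losses + c)

print(weighted_win_pct(55, 2))   # ~0.90 -> ranks first
print(weighted_win_pct(1, 0))    # ~0.55 -> ranks well below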
Yes, it is "super simple":
Percentage = Wins * 100.0 / (Wins + Losses)
To round to an integer you usually use round or Math.round (but you didn't specify a programming language).
The value could be weighted on the number of wins, using the given ratio:
Rank = Wins * Wins / (Wins + Losses)
But there are other systems that understand the problem better, like Elo (see my comment).
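Going back to the two formulas above, a quick Python check on your example records:

def win_pct(wins, losses):
    return wins * 100.0 / (wins + losses)

def rank(wins, losses):
    # win percentage scaled by the number of wins
    return wins * wins / float(wins + losses)

print(round(win_pct(55, 2)), round(win_pct(1, 0)))   # 96 vs 100: the raw percentage ranks 1-0 higher
print(rank(55, 2), rank(1, 0))                       # ~53.1 vs 1.0: the weighted rank prefers 55-2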
Another possibility would be my answer to How should I order these “helpful” scores?. Basically, use the number of wins to determine the range of likely possibilities for the probability that the player wins a game, then take the lower end. This makes 55-2 beat 1-0 for any reasonable choice of the confidence level. (Lacking a reason to do otherwise, I'd suggest setting that to 50% -- see the post for the details, which are actually very simple.)
As a small technical aside: I've seen some suggestions to use the Wald interval rather than Agresti-Coull. Practically, they give the same results for large inputs. But there are good reasons to prefer Agresti-Coull if the number of games might be small. (By the way, I came up with this idea on my own—though surely I was not the first—and only later found that it was somewhat standard.)
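A minimal sketch of that lower-bound idea in Python, using the Agresti-Coull interval with z ≈ 0.674 (a 50% two-sided confidence level); the function name is mine:

import math

def score(wins, losses, z=0.674):
    """Lower end of the Agresti-Coull interval for the player's true win probability."""
    n = wins + losses + z * z          # adjusted number of games
    p = (wins + z * z / 2.0) / n       # adjusted win rate
    return p - z * math.sqrt(p * (1.0 - p) / n)

print(score(55, 2))   # ~0.94
print(score(1, 0))    # ~0.64, so 55-2 ranks above 1-0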
How about score = (points per win) * (number of wins) + (points per loss) * (number of losses), where points per win is some positive number and points per loss is some negative number, chosen to work well for your application.
Hello good people of Stack Overflow, this is a conceptual question that could possibly belong on math.stackexchange.com; however, since it relates to the processing speed of a CPU, I put it here.
Anyways, my question is pretty simple. I have to calculate the sum of the cubes of 3 numbers in a range of numbers. That sounds confusing to me, so let me give an example.
I have a range of numbers, (0, 100), and a list of each number's cube. I have to calculate the sum for each and every combination of 3 numbers in this set. For example, 0^3 + 0^3 + 0^3, 1^3 + 0^3 + 0^3, ... 98^3 + 99^3 + 100^3. That may make sense; I'm not sure if I explained it well enough.
So anyways, after all the sets are computed and checked against a list of numbers to see if the sum matches any of those, the program moves on to the next set, (100, 200). This set needs to compute everything from 100-200 + 0-200 + 0-200. Then (200, 300) will need to do 200-300 + 0-300 + 0-300, and so on.
So, my question is: depending on the size of the numbers given to a CPU to add, will the time taken increase? And will the time it takes for each set increase exponentially at a predictable rate, or will it be exponential but not constant?
The time to add two numbers is logarithmic with the magnitude of the numbers, or linear with the size (length) of the numbers.
For a 32-bit computer, numbers up to 2^32 will take 1 unit of time to add, numbers up to 2^64 will take 2 units, etc.
As I understand the question, you have roughly 100*100*100 combinations for the first set (let's ignore that addition is commutative). For the next set you have 100*200*200, and for the third you have 100*300*300. So the cost per set looks like an O(n^2) process: a set whose range reaches twice as far takes roughly four times as long, and one that reaches three times as far takes nine times as long. This is not exponential growth (such as 2^n); it is usually referred to as quadratic.
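You can see that growth without timing anything by just counting the triples per set; a quick Python sketch (the helper name is mine, and it follows the counts above):

def triples(lo):
    # triples examined for the set starting at `lo`: 100 choices for the first
    # number, (lo + 100) for each of the other two
    hi = lo + 100
    return 100 * hi * hi

print(triples(0))   # 100*100*100 for the first set
for lo in (100, 200, 400, 800):
    print(lo, triples(lo), triples(2 * lo) / float(triples(lo)))   # the ratio approaches 4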
It depends on how long "and so on" lasts. As long as your maximum number, cubed, fits in your longest integer type, no. Addition always takes just one instruction, so it's constant time.
Now, if you assume an arbitrary-precision machine, like, say, writing these numbers on the tape of a Turing machine in decimal symbols, then adding will take longer. In that case, how long would it take? Think about how the length of the string of decimal symbols needed to represent a number n grows; addition will take time at least proportional to that length.
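If you want to see that effect directly, Python's integers are unbounded, so a rough timing sketch like this (the digit counts and repeat count are arbitrary) shows addition time growing with the length of the numbers:

import time

for digits in (10**4, 10**5, 10**6):
    a = 10 ** digits            # an integer with digits+1 decimal digits
    b = a - 1
    start = time.time()
    for _ in range(1000):
        a + b                   # repeated so the time is measurable
    print(digits, time.time() - start)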
I am looking at whether or not certain 'systems' for betting really do work as claimed, namely, that they have a positive expectation. One such system is based on the rebate on loss. You basically have a large master pot, say $1 million. Your bankroll for each game is $50k.
The way it works is as follows:
Start with $50k, always bet on banker
If you win, add the money to the master pot. Then play again with $50k.
If you lose (say you're now at $30k), play until you either:
(a) hit 0, you get a rebate of 10%. Begin playing again with $50k+5k=$55k.
(b) If you win more than the initial bankroll, add the money to the master pot. Then play again with $50k.
Continue until you double the master pot.
I just can't find an easy way of programming out the possible cases in R, since you can eventually go down an improbable path.
For example, you start at 50k, lose 20, win 19 (now you're at 49), then lose 20 twice (now you're at 9). You either lose the 9 and get back 5k, or you win, and this cycle continues until you either end up with more than 50k, or hit 0, get the rebate on the 50k, and start again with $50k + 5k.
Here's some code I started, but I haven't figured out a good way of handling the cases where you get stuck and keeping track of the number of games played. Thanks again for your help. Obviously, I understand you may be busy and not have time.
p.loss <- .4462466              # P(banker loses the hand)
p.win  <- .4585974              # P(banker wins the hand)
p.tie  <- 1 - (p.win + p.loss)
prob   <- c(p.win, p.tie, p.loss)

bet   <- 20                     # amount risked per hand
x     <- c(19, 0, -20)          # net result of a hand: win (after commission), tie, loss
r     <- 10                     # rebate = 1/r = 10%
br.i  <- 50                     # bankroll at the start of each game
br.0  <- 200                    # starting master pot
br    <- br.0
cbr.i <- br.i                   # current bankroll within this game
n.hands <- 0                    # number of hands played

for (i in 1:100000) {           # cap on hands so the simulation always stops
  y <- sample(x, 1, replace = TRUE, prob)
  cbr.i <- cbr.i + y
  n.hands <- n.hands + 1
  if (cbr.i > br.i) {                   # ahead of the initial bankroll:
    br <- br + (cbr.i - br.i)           # bank the profit in the master pot
    cbr.i <- br.i                       # and start a new game
  } else if (cbr.i < bet) {             # simplification: take the rebate as soon as a
    br <- br - br.i                     # full bet can't be covered; the master pot
    cbr.i <- br.i + (1/r) * br.i        # funds a fresh bankroll plus the 10% rebate
  }
  if (br <= 0 || br >= 2 * br.0) break  # master pot is gone or has doubled
}
The way you've phrased the rules doesn't make the game entirely clear to me, but here's some general advice on how you might solve your problem.
First of all, sit down with pen and paper, and see if you can make some progress towards an analytic solution. If the game is sufficiently complicated, this might not be possible, but you may get some more insight into how the game works.
The next step, if that fails, is to run a simulation. This means writing a function that accepts a starting level of player cash, a starting level of house cash (optionally this could be infinite), and a maximum number of bets to place. It then simulates playing the game, placing bets according to your betting system until either:
i. The player goes broke
ii. The house goes broke
iii. You reach the maximum number of bets. (You need this maximum so you don't get stuck simulating forever.)
The function should return the amount of cash that the player has after all the bets have been placed.
Run this function lots of times, and compare the end cash with the starting cash. The average of end cash / start cash is a measure of your expectation.
Try the simulation with different inputs. (For instance, with many gambling games, even if you could theoretically make an infinite amount of money in the long run, stochastic variation means that you go broke before you get there.)
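A minimal sketch of such a function, assuming flat bets on the banker with the win/loss probabilities and 19:20 payout from your question (written in Python for brevity; the same structure carries straight over to R, and you would swap in your own betting system where the bet is placed):

import random

def simulate(player_cash, house_cash, max_bets, bet=20,
             p_win=0.4585974, p_loss=0.4462466):
    """Place flat bets until the player or the house is broke, or max_bets is hit.
    Returns the player's final cash."""
    for _ in range(max_bets):
        if player_cash < bet or house_cash <= 0:
            break
        r = random.random()
        if r < p_win:                    # banker wins: net +19 on a 20 bet
            player_cash += 0.95 * bet
            house_cash -= 0.95 * bet
        elif r < p_win + p_loss:         # banker loses
            player_cash -= bet
            house_cash += bet
        # otherwise a tie: nothing changes
    return player_cash

# run it many times and compare end cash with start cash
results = [simulate(50, 10**6, 1000) for _ in range(1000)]
print(sum(results) / len(results) / 50.0)   # average of end cash / start cash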
I decided to learn concurrency and wanted to find out in how many ways instructions from two different processes could overlap. The code for both processes is just a 10-iteration loop with 3 instructions performed in each iteration. I figured out that the problem consists of leaving X instructions fixed in place and then fitting the other X instructions from the other process into the spaces between them, taking into account that they must stay ordered (instruction 4 of process B must always come before instruction 20 of process B).
I wrote a program to count this number, and looking at the results I found out that the solution is n Combination k, where k is the number of instructions executed throughout the whole loop of one process (so for 10 iterations it would be 30) and n is k*2 (2 processes). In other words, n objects with n/2 of them fixed, and the other n/2 having to be fitted into the spaces without losing their order.
Ok, problem solved. No, not really. I have no idea why this is. I understand that the definition of a combination is: in how many ways can you take k elements from a group of n, such that all the groups are different but the order in which you take the elements doesn't matter? In this case we have n elements and we are actually taking them all, because all the instructions are executed (n C n).
If one explains it by saying that there are 2k blue (A) and red (B) objects in a bag and you take k objects from the bag, you are still only taking k instructions when 2k instructions are actually executed. Can you please shed some light into this?
Thanks in advance.
FWIW it can be viewed like this: you have a bag with k blue and k red balls. Balls of the same color are indistinguishable (in analogy with the restriction that the order of instructions within the same process/thread is fixed - which is not true in modern processors btw, but let's keep it simple for now). How many different ways can you pull all the balls from the bag?
My combinatorial skills are quite rusty, but my first guess is

(2k)! / (k! * k!)

which, according to Wikipedia, indeed equals the binomial coefficient (2k choose k).
For n processes, it can be generalized by having balls of n different colors in the bag.
Update: Note that in the strict sense, this models only the situation when different processes are executed on a single processor, so all instructions from all processes must be ordered linearly on the processor level. In a multiprocessor environment, several instructions can be executed literally at the same time.
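You can sanity-check that count by brute force for a small k, e.g. in Python:

from itertools import permutations
from math import factorial

k = 3
balls = "A" * k + "B" * k                      # k instructions from each of two processes
interleavings = set(permutations(balls))       # keep only the distinct orderings
print(len(interleavings))                                 # 20
print(factorial(2 * k) // (factorial(k) * factorial(k)))  # (2k)! / (k! * k!) = 20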
Generally, I agree with Péter's answer, but since it does not seem to have fully clicked for the OP, here's my shot at it (purely from a mathematical/combinatorial standpoint).
You have 2 sets of 30 (k) instructions that you're putting together, for a total of 60 (n) instructions. Since each set of 30 must be kept in order, we don't need to track which instruction within each set, just which set an instruction is from. So, we have 60 "slots" in which to place 30 instructions from one set (say, red) and 30 instructions from the other set (say, blue).
Let's start by placing the 30 red instructions into the 60 slots. There are (60 choose 30) = 60!/(30!30!) ways to do this (we're choosing which 30 slots of the 60 are filled by red instructions). Now, we still have the 30 blue instructions, but we only have 30 open slots left. There is (30 choose 30) = 30!/(30!0!) = 1 way to place the blue instructions in the remaining slots. So, in total, there are (60 choose 30) * (30 choose 30) = (60 choose 30) * 1 = (60 choose 30) ways to do it.
Now, let's suppose that instead of 2 sets of 30, you have 3 sets (red, green, blue) of k instructions. You have a total of 3k slots to fill. First, place the red ones: (3k choose k) = (3k)!/(k!(3k-k)!) = (3k)!/(k!(2k)!). Now, place the green ones into the remaining 2k slots: (2k choose k) = (2k)!/(k!k!). Finally, place the blue ones into the last k slots: (k choose k) = k!/(k!0!) = 1. In total: (3k choose k) * (2k choose k) * (k choose k) = ( (3k)! * (2k)! * k! ) / ( k!(2k)! * k!k! * k!0! ) = (3k)!/(k!k!k!).
As further extensions (though I'm not going to provide a full explanation):
if you have 3 sets of instructions with length a, b, and c, the number of possibilities is (a+b+c)!/(a!b!c!).
if you have n sets of instructions where the ith set has ki instructions, the number of possibilities is (k1+k2+...+kn)!/(k1!k2!...kn!).
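In code, that general formula is straightforward; a small Python helper (the name is mine):

from math import factorial

def interleavings(*lengths):
    """Number of ways to interleave sets of instructions of the given lengths,
    keeping each set's internal order: (k1 + ... + kn)! / (k1! * ... * kn!)."""
    total = factorial(sum(lengths))
    for k in lengths:
        total //= factorial(k)
    return total

print(interleavings(30, 30))        # the original 2-process case, (60 choose 30)
print(interleavings(30, 30, 30))    # 3 processes of 30 instructions each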
Péter's answer is fine enough, but that doesn't explain just why concurrency is difficult. That's because more and more often nowadays you've got multiple execution units available (be they cores, CPUs, nodes, computers, whatever). That in turn means that the possibilities for overlapping between instructions are increased still further; there's no guarantee that what happens can be modeled correctly with any conventional interleaving.
This is why it is important to think in terms of using semaphores/mutexes correctly, and why memory barriers matter: all of these things end up turning the truly nasty picture into something that is far easier to understand. But because mutexes reduce the number of possible executions, they also reduce the overall performance and potential efficiency. It's definitely tricky, which in turn is why it is far better if you can work in terms of message passing between threads of activity that do not otherwise interact; that's easier to understand, and having fewer synchronizations is better.