Simple Steering Behaviour: Explain This Line - math

I am reading the book Programming Game AI by Example, and the author gives code for
a steering behaviour which causes an entity to decelerate so that it arrives
gracefully at a target. After calculating dist, the distance from the target to
the source, he then (essentially) does this:
double speed = dist/deceleration;
I just cannot understand where this comes from. Am I missing something really obvious?
It is not listed as a known error in the book, so I am guessing it is correct.

If there were some physical truth to this, the units would have to match up on either side.
From what I understand, this is akin to Zeno's paradoxes: you are trying to reach something, but you never get there because each step only covers one nth of the remaining distance.
Suppose the simulation proceeds at intervals of one second at a time, with
deceleration = 5
distance = 1000 meters
With these initial conditions, speed is set to 200 meters per second. Because the simulation proceeds in one-second intervals, we travel exactly 200 meters (one fifth of the remaining distance) and end up 800 meters from the target. The new speed works out to 160 meters per second, and so on: after t seconds the remaining distance is 1000 * (4/5)^t meters.
[Plots by WolframAlpha of the remaining distance over the first 30 seconds, the last 30 seconds, and the last 10 seconds appeared here.]
Observations
Within the first 30 seconds, we travel roughly 998 meters
Within the first 50 seconds, we cover 999.985 meters
Within the last 10 seconds (from second 50 to second 60), we cover only ~1.2 cm
As you can see, you get almost there very quickly, but it takes a long time to get close.
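Here is a minimal Python sketch of that simulation (assuming one-second timesteps, deceleration = 5 and a starting distance of 1000 meters); it reproduces the numbers in the observations above:
# One-second simulation: each step the speed is recomputed as dist / deceleration,
# so exactly one fifth of the remaining distance is covered per second and the
# target is only approached asymptotically.
deceleration = 5
dist = 1000.0
for t in range(1, 61):
    speed = dist / deceleration      # the line from the book
    dist -= speed * 1.0              # advance one second at that speed
    if t in (1, 30, 50, 60):
        print(f"t = {t:2d} s, remaining distance = {dist:.4f} m")
After 30 seconds roughly 998.8 meters have been covered, after 50 seconds about 999.986 meters, and the remaining distance never quite reaches zero.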

Maybe there is something missing in your calculation. For a constant acceleration (or deceleration), and ignoring initial conditions, the speed is
v = a * t
and the distance is
d = a * t^2 / 2
If you eliminate t from both equations you get
v = a * sqrt(2 * d / a), i.e. v = sqrt(2 * a * d)
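As a rough illustration of the difference (assuming the deceleration value is read as a true constant deceleration of 5 m/s^2 and the distance as 1000 m):
import math

a = 5.0       # constant deceleration, m/s^2 (assumed)
d = 1000.0    # distance to the target, m
v_physical = math.sqrt(2 * a * d)   # v = a * sqrt(2 * d / a) = sqrt(2 * a * d)
v_book = d / a                      # the book's rule: speed = dist / deceleration
print(v_physical)   # 100.0 m/s: the speed from which a constant 5 m/s^2 stops you in exactly 1000 m
print(v_book)       # 200.0: the proportional-to-distance rule; note the units do not match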

Related

How to calculate steps needed for arrow to fill all given sectors of the clock?

The problem's data are:
An analog clock is divided into 512 equal sections. The arrow/hand starts its movement at 0° and each tick/step moves it by 4.01°. The arrow/hand can move only clockwise. What is the minimum number of ticks/steps needed for the arrow/hand to visit all sections of the clock?
I'm trying to write a formula to calculate the count but can't quite wrap my head around it.
Is it possible to do it? If yes, how can I do it?
This site is for programmers, isn't it?
So we can hire our silicon friend to work for us ;)
A full circle is 360*60*60*4 = 5184000 units (one unit is a quarter of an arcsecond).
One step is 4*(4*3600+36) = 57744 units.
One section is 4*360*3600/512 = 10125 units (we use quarter arcseconds so that all three values are integers).
# Angles are kept in quarter arcseconds so everything stays integer.
cntr = set()       # sections visited so far
an = 0             # current angle
step = 57744       # 4.01 degrees per tick
div = 10125        # width of one section
mod = 5184000      # full circle
c = 0              # number of ticks
while len(cntr) < 512:
    sec = (an % mod) // div   # index of the section the arrow is currently in
    cntr.add(sec)
    an += step
    c += 1
print(c)
# output: 804
Unfortunately I can't fully answer your question, but the following may help:
Dividing 360° by the 512 sections gives you about 0.7031° per section (roughly 1.4222 sections per degree).
Each round you cover 90 different sections when starting between 0° and 3.11°, and 89 when starting between 3.12° and 4.00°.
For the start of each round this gives you a change in starting degree of 0.9° per round, except after the fourth, where it is only 0.89° (all kept within the possible range of 0°-4°, so calculated mod 4).
So you have 0.9° -> 1.8° -> 2.7° -> 3.6° -> 0.49° -> 1.39° ... 0.08° ...
I hope this helps you in developing an algorithm
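As a follow-up, here is a small Python sketch (reusing the integer units from the answer above; a simulation, not a closed-form answer) that prints, for each full revolution, how many new sections were visited and where the next revolution starts, so the drift described above can be checked:
STEP = 57744       # 4.01 degrees per tick, in quarter arcseconds
FULL = 5184000     # 360 degrees
SECTION = 10125    # one of the 512 sections

visited = set()
angle = 0
ticks = 0
new_in_rev = 0
rev = 0
while len(visited) < 512:
    sec = (angle % FULL) // SECTION
    if sec not in visited:
        visited.add(sec)
        new_in_rev += 1
    angle += STEP
    ticks += 1
    if angle // FULL != rev:              # the hand just passed 0 degrees again
        rev = angle // FULL
        offset = (angle % FULL) / 14400   # next revolution's starting offset, in degrees
        print(f"rev {rev:2d}: {new_in_rev:2d} new sections, next start {offset:.2f} deg, visited {len(visited)}")
        new_in_rev = 0
print(ticks)   # 804, matching the output above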

Determination of threshold values to group a variable into ranges

I have, let's say, 60 empirical realizations of PPR. My goal is to create a PPR vector containing average values of the empirical PPR. These averages depend on which upper and lower limits of TTM I take: I can take TTM from 60 to 1, calculate a single average, and put that one number into the PPR vector for rows 1 to 60; or I can calculate the average PPR for TTM <= 60 and TTM > 30, and for TTM <= 30 and TTM >= 1, and put those two numbers into my vector according to the TTM values. Finally I want to obtain something like this on a chart (the x-axis is TTM, the green line is my empirical PPR, and the black line is the average based on significant changes over TTM). I want to write an algorithm which will help me find the TTM thresholds that best fit the black line to the green line.
TTM    PPR
60     0.20%
59     0.16%
58     0.33%
57     0.58%
56     0.41%
...
10     1.15%
9      0.96%
8      0.88%
7      0.32%
6      0.16%
Can you please help me if you know any statistical method which might be applicable in this case, or the basic idea of an algorithm which I could implement in VBA/R?
I have used Solver with GRG Nonlinear** to deal with it, but I believe there is something more appropriate.
** With Solver I had the problem that it found an optimal solution, but when I re-ran Solver it found a new solution (with slightly different TTM values) whose target-function value was lower than the first time (so was the first solution really optimal?).
I think this is what you want. The next step would be adding a method that can recognize the break points. You will probably need to define two new parameters: one for the sensitivity, and one for the minimum number of points a sample must contain to be accepted as a section (between two break points, including the start and end points).
Please hit the checkmark next to this answer if you are happy with it.
You can download the Excel file from here:
http://www.filedropper.com/statisticspatternchange
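Since the linked file may not stay available, here is a rough Python sketch of one way to search for the TTM thresholds: brute force over breakpoint positions, minimizing the squared error of a piecewise-constant fit. The data below is a placeholder; min_len plays the role of the "minimum number of points per section" parameter mentioned above, and n_breaks fixes how many break points to place.
import itertools
import numpy as np

def piecewise_sse(ppr, breaks):
    # Sum of squared errors when each segment is replaced by its mean.
    edges = [0] + list(breaks) + [len(ppr)]
    return sum(((ppr[a:b] - ppr[a:b].mean()) ** 2).sum()
               for a, b in zip(edges[:-1], edges[1:]))

def best_breaks(ppr, n_breaks, min_len=5):
    # Brute-force search over breakpoint positions; fine for ~60 points.
    best_sse, best_combo = float("inf"), None
    for combo in itertools.combinations(range(1, len(ppr)), n_breaks):
        edges = [0] + list(combo) + [len(ppr)]
        if any(b - a < min_len for a, b in zip(edges[:-1], edges[1:])):
            continue   # enforce a minimum number of points per section
        sse = piecewise_sse(ppr, combo)
        if sse < best_sse:
            best_sse, best_combo = sse, combo
    return best_combo, best_sse

# Placeholder example: TTM runs from 60 down to 1, ppr is the empirical series.
ttm = np.arange(60, 0, -1)
ppr = np.random.default_rng(0).normal(0.5, 0.1, size=60)
breaks, sse = best_breaks(ppr, n_breaks=2, min_len=5)
print("TTM thresholds:", [ttm[i] for i in breaks], "SSE:", sse)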

Understanding the probability of a double six if I roll two dice

The probability of a double six in one throw of two dice is 1/36, or about 0.028.
If I threw a pair of dice a hundred times, would 3 (0.028 * 100, rounded) be
1. the number of times (3) I would get a double six,
OR
2. the probability (3%) of getting a double six on every throw?
I have a feeling the correct answer is number 1, because intuitively the chance of getting a double six every time in a hundred throws seems to be a lot lower than 3%.
Please explain, as simply as you can, which is the correct understanding and why.
The probability of not throwing a double six in one throw (all but one outcome, divided by all outcomes):
35/36
The probability of not throwing a double six in N throws:
(35/36)**N   (where ** means raising to the N-th power)
The probability of having at least one double six in N throws:
P(N) = 1 - (35/36)**N
For N == 100 we have
P(100) == 0.94022021...
It is nearly interpretation 1, but with a twist: 2.8 is the average number of double sixes you would see if you performed a series of experiments with 100 throws each. The correct answer for interpretation 2 was given by Dmitry.
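As a quick check of the numbers above, here is a small Python sketch (the Monte Carlo part is only a sanity check):
import random

p_at_least_one = 1 - (35 / 36) ** 100   # probability of at least one double six in 100 throws
expected_count = 100 / 36               # average number of double sixes in 100 throws
print(p_at_least_one)                   # 0.9402...
print(expected_count)                   # 2.777...

# Monte Carlo sanity check of the "at least one double six" probability.
trials = 100_000
hits = sum(
    1 for _ in range(trials)
    if any(random.randint(1, 6) == 6 and random.randint(1, 6) == 6 for _ in range(100))
)
print(hits / trials)                    # close to 0.9402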
Please ask math-oriented questions in the math forum math.stackexchange.

Calculate the max samples with ramp up

I have this math problem. I am trying to calculate the maximum number of samples when the response time is zero. My test has 3 samples (HTTP Request). The total test wait time is 11 seconds. The test runs for 15 minutes and 25 seconds. The ramp-up is 25 seconds, which means that every second 2 users are created until we reach 50.
Normally you have to wait for the server to respond, but I am trying to calculate the maximum number of samples (i.e. the response time is zero). How do I do this? I can't simply do ((15 * 60 + 25) / 11) * 50, because of the ramp-up.
Any ideas?
EDIT:
Maybe I should translate this problem into something generic and not specific to JMeter, so consider this (maybe it will make sense to me as well ;)):
50 people are walking laps around a park. Each lap takes exactly 11 seconds. We have 15 minutes and 25 seconds to walk as many laps as possible. We cannot all start at the same time, but we can start 2 people every second (so it takes 25 seconds until everyone is walking). How many laps can we complete?
What I ended up doing was manually adding it all up: since it takes 25 s to get up to full speed, only 2 people can walk for 900 s, 2 people for 901 s, 2 people for 902 s, and so on up to the total of 50 people. Adding those together should give me my number, I think (see the worked sum below).
If I am doing something wrong or working from a wrong assumption, I'd like to hear your opinion ;). Or maybe somebody can see a formula.
Thanks in advance
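For what it's worth, the asker's own accounting (walking times of 900 s up to 924 s for the 25 pairs) is just an arithmetic series, summed here including partial laps:
person_seconds = sum(2 * t for t in range(900, 925))
print(person_seconds, person_seconds / 11)   # 45600 person-seconds, ~4145.45 laps including partial laps
The answers below arrive at slightly different totals (4036.36 and 4150) because they make different assumptions about the total time and about when the first pair starts.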
I have no idea about JMeter, but I do understand your question about people walking round the park :-).
If you want an exact answer that ignores partial laps round the park, you'll need (in C/Java terminology) a for loop to work it out. This is because ignoring partial laps means rounding down the number of possible laps, and there isn't a simple formula that takes the rounding down into account. Doing that in Excel, I calculate that 4012 complete laps are possible by the 50 people.
However, if you're happy to include partial laps, you just need to work out the total number of seconds available (taking account of the ramp-up), then divide by the number of people starting each second, and finally divide by how many seconds it takes to walk a lap. The total number of seconds available is an arithmetic progression.
To write down the formula that includes partial laps, some notation is needed:
T = Total number of seconds (i.e. 900, given that there are 15 minutes)
P = number of People (i.e. 50)
S = number of people who can start at the Same time (i.e. 2)
L = time in seconds for a Lap (i.e. 11)
Then the formula for the total number of laps, including partial laps is
Number of Laps = P * (2 * T - (P/S - 1)) / (2*L)
which in this case equals 4036.36.
Assume we're given:
T = total seconds = 925
W = walkers = 50
N = number of walkers that can start together = 2
S = stagger (seconds between starting groups) = 1
L = lap time = 11
G = number of starting groups = ceiling(W/N) = 25
Where all are positive, W and N are integers, and T >= S*(G-1) (i.e. all walkers have a chance to start). I am assuming the first group start walking at time 0, not S seconds later.
We can break up the time into the ramp period:
Ramp laps = summation(integer i, 0 <= i < G, N*S*(G-i-1)/L)
= N*S*G*(G-1)/(2*L)
and the steady state period (once all the walkers have started):
Steady state laps = W * (T - S*(G-1))/L
Adding these two together and simplifying a little, we get:
Laps = ( N*S*G*(G-1)/2 + W*(T-S*(G-1)) ) / L
This works out to be 4150 laps.
There is a closed form solution if you're only interested in full laps. If that's the case, just let me know.
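Here is a short Python sketch that checks this closed form against a direct per-walker sum, under the same assumptions as this answer (925 seconds total, groups of 2 starting every second from t = 0):
T, W, N, S, L = 925, 50, 2, 1, 11
G = -(-W // N)                        # ceiling(W / N) = 25 starting groups

# Per-walker sum: a walker in group i walks from time S*i until T.
seconds = [T - S * i for i in range(G) for _ in range(N)]
print(sum(seconds) / L)               # 4150.0 laps including partial laps
print(sum(s // L for s in seconds))   # 4128 complete laps under these assumptions

# Closed form from the answer above.
print((N * S * G * (G - 1) / 2 + W * (T - S * (G - 1))) / L)   # 4150.0
(The 4012 complete laps mentioned in the earlier answer uses only the 900 seconds of the 15 minutes, hence the different figure.)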

Popularity formula (using votes and age)

I need to create a simple formula for determining the popularity of an item based on votes and age.
Here is my current formula, which needs some work:
30 / (days between post date and now) * (vote count) = weighted vote
Whenever a vote is cast for an item, the site checks whether its weighted vote is > 300. If an item has a weighted vote of more than 300, it is promoted to the front page.
The problem is that this formula makes it very hard for older items to be promoted.
30 / 1 day * 10 votes = 300 (promoted)
30 / 5 days * 15 votes = 90 (not promoted)
30 / 30 days * 30 votes = 30 (not promoted)
30 / 80 days * 40 votes = 15 (not promoted)
How can I alter the formula to make it relatively easier for older items to be promoted (i.e. make the above four weighted values fairly close together)?
Just get a graph-drawing program (maybe Excel, maybe MATLAB, maybe gnuplot) and experiment with the formula until you feel it looks right.
There's no right or wrong with these things.
Toss a logarithm on the amount of time it's been since the item was posted. Tweak the base and the constants involved. That'll take you most of the way there.
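For example, one possible log-damped variant (a sketch only; the 1 in the denominator avoids dividing by zero on day one, and the constants would need tuning):
import math

def weighted_vote(votes, days):
    # Damp by the logarithm of the age instead of dividing by the raw age in days.
    return 30 * votes / (1 + math.log(days))

for votes, days in [(10, 1), (15, 5), (30, 30), (40, 80)]:
    print(votes, days, round(weighted_vote(votes, days), 1))
# Gives roughly 300, 172, 204, 223 -- far closer together than 300, 90, 30, 15.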
