I would like to write an operation that takes a number that can be any value larger than -1 and only outputs a number between -1 and 0.5.
Currently, I can take such a number and ensure it always outputs a value between 0 and 1 by doing the following:
SupressedNumber = (Number)%1
And the following for values between -1 and 0:
SupressedNumber = (Number)%-1
And the following for values between -0.5 and 0:
SupressedNumber = (Number)%-0.5
However, I would like to make it between -1 and something less than -0.5 (-0.49ish max). It doesn't have to use modulus, but I feel like it's part of the solution. It just has to be doable in Lua.
As far as I understand, you want to clamp a number between two values, but one of them is a supremum, not a maximum. Let's solve the first problem first:
To clamp a number between two values inclusively (e.g. -1 <= number <= -0.5), you can use the standard Lua functions math.min() and math.max(), or even code it yourself if you need pure Lua:
local function clamp(min, value, max)
return math.max(min, math.min(max, value))
end
clamp() will return value, but if value is smaller than min, it returns min. If it is greater than max, it returns max, so the result never leaves [-1, -0.5].
Since your goal is [-1, -0.5), you have to make a compromise. Computers store decimals with a finite amount of precision, so you can't get a number that is infinitely close to -0.5, but you can get one that is close enough. Let's create a variable epsilon which says how close is close enough:
local epsilon = 0.00001
And now we can put both ideas together:
supressed_number = clamp(-1, number, -0.5 - epsilon)
This will clamp your number between -1 and a little bit less than -0.5:
-1 <= value <= -0.50001 < -0.5
I'm looking for an explanation of how 1 decimal place rounding works for a sequence like this in R:
seq(1.05, 2.95, by = .1)
At high school, I'd round this up, i.e. 2.05 becomes 2.1. But R rounds it to 2 for 1 decimal place rounding.
Round up from .5
The following rounding function from the above stackoverflow answer consistently achieves the high school rounding:
round2 = function(x, n) {
  posneg = sign(x)
  z = abs(x)*10^n
  z = z + 0.5
  z = trunc(z)
  z = z/10^n
  z*posneg
}
This code compares R's rounding with the rounding function above.
data.frame(cbind(
  Number = seq(1.05, 2.95, by = .1),
  Popular.Round = round2(seq(1.05, 2.95, by = .1), 1),
  R.Round = round(seq(1.05, 2.95, by = .1), 1)))
With R rounding, 1.05 is rounded up to 1.1 whereas 2.05 is rounded down to 2. Then again 1.95 is rounded up to 2 and 2.95 is rounded up to 3 as well.
If it is "round to even", why is it 3, i.e. odd number.
Is there a better response than "just deal with it" when asked about this behavior?
Too long to read? Scroll down for the short answer.
This was an interesting study for me personally. According to documentation:
Note that for rounding off a 5, the IEC 60559 standard (see also ‘IEEE
754’) is expected to be used, ‘go to the even digit’. Therefore
round(0.5) is 0 and round(-1.5) is -2. However, this is dependent on
OS services and on representation error (since e.g. 0.15 is not
represented exactly, the rounding rule applies to the represented
number and not to the printed number, and so round(0.15, 1) could be
either 0.1 or 0.2).
Rounding to a negative number of digits means rounding to a power of
ten, so for example round(x, digits = -2) rounds to the nearest
hundred.
For signif the recognized values of digits are 1...22, and non-missing
values are rounded to the nearest integer in that range. Complex
numbers are rounded to retain the specified number of digits in the
larger of the components. Each element of the vector is rounded
individually, unlike printing.
Firstly, you asked: "If it is 'round to even', why does 2.95 become 3, i.e. an odd number?" To be clear, the round-to-even rule applies when rounding off a 5. If you run round(2.5) or round(3.5), then R returns 2 and 4, respectively.
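For what it's worth, the same IEC 60559 "round half to even" rule is what Python 3's built-in round() uses as well, so the effect is easy to reproduce outside R (a quick Python illustration, not R code):

# Python 3's round() also rounds exact halves to the nearest even integer.
for value in (0.5, 1.5, 2.5, 3.5, 4.5):
    print(value, "->", round(value))
# prints 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4, 4.5 -> 4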
If you go here, https://stat.ethz.ch/pipermail/r-help/2008-June/164927.html, then you see this response:
The logic behind the round to even rule is that we are trying to represent an underlying continuous value and if x comes from a truly continuous distribution, then the probability that x==2.5 is 0 and the 2.5 was probably already rounded once from any values between 2.45 and 2.54999999999999... If we use the round up on 0.5 rule that we learned in grade school, then the double rounding means that values between 2.45 and 2.50 will all round to 3 (having been rounded first to 2.5). This will tend to bias estimates upwards. To remove the bias we need to either go back to before the rounding to 2.5 (which is often impossible or impractical), or just round up half the time and round down half the time (or better would be to round proportional to how likely we are to see values below or above 2.5 rounded to 2.5, but that will be close to 50/50 for most underlying distributions). The stochastic approach would be to have the round function randomly choose which way to round, but deterministic types are not comfortable with that, so "round to even" was chosen (round to odd should work about the same) as a consistent rule that rounds up and down about 50/50.
If you are dealing with data where 2.5 is likely to represent an exact
value (money for example), then you may do better by multiplying all
values by 10 or 100 and working in integers, then converting back only
for the final printing. Note that 2.50000001 rounds to 3, so if you
keep more digits of accuracy until the final printing, then rounding
will go in the expected direction, or you can add 0.000000001 (or
other small number) to your values just before rounding, but that can
bias your estimates upwards.
Short answer: if you always round 5s upward, your data is biased upward. But if you round halves to the even digit, your rounded data stays balanced overall.
Let's test this using your data:
round2 = function(x, n) {
  posneg = sign(x)
  z = abs(x)*10^n
  z = z + 0.5
  z = trunc(z)
  z = z/10^n
  z*posneg
}
x <- data.frame(cbind(
  Number = seq(1.05, 2.95, by = .1),
  Popular.Round = round2(seq(1.05, 2.95, by = .1), 1),
  R.Round = round(seq(1.05, 2.95, by = .1), 1)))
> mean(x$Popular.Round)
[1] 2.05
> mean(x$R.Round)
[1] 2.02
Using a bigger sample:
x <- data.frame(cbind(
  Number = seq(1.05, 6000, by = .1),
  Popular.Round = round2(seq(1.05, 6000, by = .1), 1),
  R.Round = round(seq(1.05, 6000, by = .1), 1)))
> mean(x$Popular.Round)
[1] 3000.55
> mean(x$R.Round)
[1] 3000.537
While reading the OpenCL 1.1 spec about CLK_FILTER_LINEAR (section 8.2, p. 258), I noticed that 0.5 is subtracted from the coordinates when calculating the weights of the bilinear filter, as shown below:
i0 = address_mode((int)floor(u - 0.5))
j0 = address_mode((int)floor(v - 0.5))
i1 = address_mode((int)floor(u - 0.5) + 1)
j1 = address_mode((int)floor(v - 0.5) + 1)
For CLK_FILTER_NEAREST, on the other hand, u and v are floored directly:
i = address_mode((int)floor(u))
j = address_mode((int)floor(v))
So there seems to be a discrepancy. When I provide the unnormalized coordinate (5, 4), the NEAREST filter will read pixel (5, 4), while the LINEAR filter will produce the average of pixels (4, 3), (5, 3), (4, 4) and (5, 4). But even with the LINEAR filter I would expect to read (5, 4), because I expected the weights to be 1, 0, 0, 0.
Can anyone please clarify the spec intention?
It's true. If you want to read a non-interpolated pixel, you'll need to add (0.5, 0.5) to the coordinate. "Round" coordinates (ending in .0) sit on the boundary between pixels and blend them equally.
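To make the off-by-half behaviour concrete, here is a small Python sketch of the linear-filter formulas quoted above, together with the spec's weights a = frac(u - 0.5) and b = frac(v - 0.5); address_mode() is left out, so this assumes in-range coordinates. It shows that (5, 4) blends four texels equally, while (5.5, 4.5) puts all the weight on texel (5, 4):

import math

def bilinear_taps(u, v):
    # i0 = floor(u - 0.5), i1 = i0 + 1, a = frac(u - 0.5), and likewise for v;
    # address_mode() clamping is omitted (in-range coordinates assumed).
    i0, j0 = math.floor(u - 0.5), math.floor(v - 0.5)
    i1, j1 = i0 + 1, j0 + 1
    a, b = (u - 0.5) - i0, (v - 0.5) - j0
    return {(i0, j0): (1 - a) * (1 - b),
            (i1, j0): a * (1 - b),
            (i0, j1): (1 - a) * b,
            (i1, j1): a * b}

print(bilinear_taps(5.0, 4.0))   # (4,3), (5,3), (4,4), (5,4) each weighted 0.25
print(bilinear_taps(5.5, 4.5))   # all of the weight lands on (5,4)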
I'm writing a Python script to generate problems for mental arithmetic drills. The addition and multiplication ones were easy, but I'm running into trouble trying to generate unbiased problems for the subtraction ones.
I want to be able to specify a minimum and maximum value that the minuend (first number) will be -- e.g., for two-digit subtraction it should be between 20 and 99. The subtrahend should also have a range option (11-99, say). The answer needs to be positive and preferably also bounded by a minimum of, say, 10 for this situation.
So:
20 < Minuend < 99
11 < Subtrahend < 99
Answer = Minuend - Subtrahend
Answer >= 10
All the numeric values should be used as variables, of course.
I have these conditions met as follows:
ansMin, ansMax = 10, 99
subtrahendMin, minuendMax = 11,99
# the other max and min did not seem to be necessary here,
# and two ranges was the way I had the program set up
answer = randint(ansMin, ansMax)
subtrahend = randint(subtrahendMin, minuendMax - answer)
minuend = answer + subtrahend # rearranged subtraction equation
The problem here is that the minuend values wind up being nearly all over 50 because the answer and subtrahend were generated first and added together, and only the section of them that were both in the bottom 25% of the range will get the result below 50%. (Edit: that's not strictly true -- for instance, bottom 1% plus bottom 49% would work, and percentages are a bad way of describing it anyway, but I think the idea is clear.)
I also considered generating both the minuend and subtrahend values entirely at random, then throwing out the pair if it didn't match the criteria (namely, that the minuend be greater than the subtrahend by at least answerMin and that both be within the ranges listed above), but I figured that would result in a similar bias.
I don't care about it being perfectly even, but this is too far off. I'd like the minuend values to be fully random across the allowable range, and the subtrahend values random across the range allowed by the minuends (if I'm thinking about it right, this will be biased in favor of lower ones). I don't think I really care about the distribution of the answers (as long as it's not ridiculously biased). Is there a better way to calculate this?
There are several ways of defining what "not biased" means in this case. I assume that what you are looking for is that every possible subtraction problem from the allowed problem space is chosen with equal probability. Quick and dirty approach:
Pick random x in [x_min, x_max]
Pick random y in [y_min, y_max]
If x - y < answer_min, discard both x and y and start over.
Note the "discard both x and y" part. If you discard only y and keep x, your problems will have a uniform distribution in x, not over the entire problem space. You need to ensure that for every valid x there is at least one valid y; this is not the case for your original choice of ranges, as we'll see later.
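A minimal Python sketch of that quick and dirty approach, using the ranges this answer works with below (x in [21, 99], y in [11, 99], answer at least 10); it simply retries until a valid pair comes up, so every valid (x, y) pair is equally likely:

import random

def random_subtraction(x_min=21, x_max=99, y_min=11, y_max=99, answer_min=10):
    # Rejection sampling: draw both numbers uniformly and start over
    # whenever the answer would be too small.
    while True:
        x = random.randint(x_min, x_max)
        y = random.randint(y_min, y_max)
        if x - y >= answer_min:
            return x, y

x, y = random_subtraction()
print("%d - %d = ?" % (x, y))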
Now the long, proper approach. First we need to find out the actual size of the problem space.
The allowed set of subtrahends is determined by the minuend:
x in [21, 99]
y in [11, x-10]
or using symbolic constants:
x in [x_min, x_max]
y in [y_min, x - answer_min]
We can rewrite that as
x in [21, 99]
y = 11 + a
a in [0, x-21]
or again using symbolic constants
x in [x_min, x_max]
y = y_min + a
a in [0, x - (answer_min + y_min)].
From this, we see that valid problems exist only for x >= (answer_min + y_min), and for a given x there are x - (answer_min + y_min) + 1 possible subtrahends.
Now we assume that y_max imposes no further constraint (i.e. y_max >= x_max - answer_min) and that x_min = answer_min + y_min, so the smallest x has exactly one valid subtrahend:
x in [21, 99], number of problems:
(99 - 21 + 1) * (1 + 78 + 1) / 2 = 3160
x in [x_min, x_max], number of problems:
(x_max - x_min + 1) * (1 + x_max - (answer_min + y_min) + 1) / 2
The above is obtained using the formula for the sum of an arithmetic sequence. Therefore, you need to pick a random number in the range [1, 3160]. To transform this number into a subtraction problem, we need to define a mapping between the problem space and the integers. An example mapping is as follows:
1 <=> x = 21, y = 11
2 <=> x = 22, y = 12
3 <=> x = 22, y = 11
4 <=> x = 23, y = 13
5 <=> x = 23, y = 12
6 <=> x = 23, y = 11
and so on. Notice that x jumps by 1 when a triangular number is exceeded. To compute x and y from the random number r, find the lowest triangular number t greater than or equal to r, preferably by searching in a precomputed table; write this number as q*(q+1)/2. Then x = x_min + q-1 and y = y_min + t - r.
Complete program:
import random

x_min, x_max = (21, 99)
y_min = 11
answer_min = 10

triangles = [(q * (q + 1) // 2, q) for q in range(1, x_max - x_min + 2)]
upper = (x_max - x_min + 1) * (1 + x_max - (answer_min + y_min) + 1) // 2

for i in range(0, 20):
    r = 1 + random.randrange(0, upper)
    (t, q) = next(a for a in triangles if a[0] >= r)
    x = x_min + q - 1
    y = y_min + t - r
    print("%d - %d = ?" % (x, y))
Note that for a majority of problems (around 75%), x will be above 60. This is correct, because for low values of the minuend there are fewer allowed values of the subtrahend.
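If you want to sanity-check those numbers, a brute-force enumeration of the whole problem space (just a verification sketch, not part of the generator) gives 3160 problems, roughly 74% of which have x > 60:

# Every valid pair: x in [21, 99], y from 11 up to x - 10.
problems = [(x, y) for x in range(21, 100) for y in range(11, x - 10 + 1)]
print(len(problems))                                           # 3160
print(sum(1 for x, _ in problems if x > 60) / len(problems))   # about 0.74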
I can see a couple of issues with your starting values. If you want the answer to always be greater than 10, then you need to either increase minuendMin or decrease subtrahendMin, because 20 - 11 is less than 10. Also, you have defined the answer min and max as 3 and 9, which means the answer will never be more than 10.
Apart from that I managed to get a nice even distribution of values by selecting the minuend value first, then selecting the subtrahend value based on it and the answerMin:
from random import randint

ansMin = 10
minuendMin, minuendMax = 20, 99
subtrahendMin = 9

minuend = randint(minuendMin, minuendMax)
subtrahend = randint(subtrahendMin, minuend - ansMin)
answer = minuend - subtrahend
You say you've already got addition working properly. Assuming you have similar restrictions for the addends/sum, you could reuse that by rearranging the terms so that:
minuend <-> sum
subtrahend <-> first addend
answer <-> second addend
A similar mapping can be made for multiplication/division, if required.
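A sketch of that idea in Python, assuming you already have some addition generator to reuse; generate_addition() below, along with its ranges, is just a hypothetical stand-in for whatever you built for the addition drills:

from random import randint

def generate_addition(addend_min=10, addend_max=89, total_max=99):
    # Hypothetical stand-in for your existing addition generator:
    # returns (addend1, addend2, total) within the given limits.
    addend1 = randint(addend_min, addend_max)
    addend2 = randint(addend_min, min(addend_max, total_max - addend1))
    return addend1, addend2, addend1 + addend2

def generate_subtraction():
    # Present the same triple as a subtraction problem:
    # minuend <-> sum, subtrahend <-> first addend, answer <-> second addend.
    addend1, addend2, total = generate_addition()
    return total, addend1, addend2

minuend, subtrahend, answer = generate_subtraction()
print("%d - %d = %d" % (minuend, subtrahend, answer))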
How can I convert a length into a value in the range -1.0 to 1.0?
Example: my stage is 440px in length and accepts mouse events. I would like to click in the middle of the stage, and rather than an output of X = 220, I'd like it to be X = 0. Similarly, I'd like the real X = 0 to become X = -1.0 and the real X = 440 to become X = 1.0.
I don't have access to the stage, so I can't simply center-register it, which would make this process a lot easier. It's also not possible to dynamically change the actual size of my stage, so I'm looking for a formula that will translate the mouse's real X coordinate on the stage to fit evenly within a range from -1 to 1.
-1 + (2/440)*x
where x is the distance
So, to generalize it: if the minimum normalized value is a, the maximum normalized value is b (in your example a = -1.0, b = 1.0) and the maximum possible value is k (in your example k = 440), then use
a + x*(b-a)/k
where x is >= 0 and <= k
This is essentially two steps:
Center the range on 0, so for example a range from 400 to 800 moves so it's from -200 to 200. Do this by subtracting the center (average) of the min and max of the range
Divide by the absolute value of the range extremes to convert from a -n to n range to a -1 to 1 range. In the -200 to 200 example, you'd divide by 200
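A small Python sketch of those two steps, using the 400 to 800 example from above (the function name is just for illustration):

def normalize(value, lo, hi):
    center = (lo + hi) / 2        # step 1: shift the midpoint of the range to 0
    half_range = (hi - lo) / 2    # step 2: scale so the extremes become -1 and 1
    return (value - center) / half_range

print(normalize(400, 400, 800))   # -1.0
print(normalize(600, 400, 800))   #  0.0
print(normalize(800, 400, 800))   #  1.0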
Doesn't answer your question, but for future googlers looking for a continuous monotone function that maps all real numbers to (-1, 1), any sigmoid curve will do, such as atan or a logistic curve:
f(x) = atan(x) / (pi/2)
f(x) = 2/(1 + e^(-x)) - 1
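Both are easy to try out; here is a quick Python check (the function names are mine) showing they stay inside (-1, 1) even for large inputs:

import math

def squash_atan(x):
    return math.atan(x) / (math.pi / 2)      # maps all reals into (-1, 1)

def squash_logistic(x):
    return 2 / (1 + math.exp(-x)) - 1        # same range, via the logistic curve

for x in (-20, -1, 0, 1, 20):
    print(x, round(squash_atan(x), 4), round(squash_logistic(x), 4))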
(x - 220) / 220 = new X
Is that what you're looking for?
You need to shift the origin and normalize the range. So the expression becomes
(XCoordinate - 220) / 220.0
Handling arbitrary stage widths (I have no idea if you've got threads to consider, which might require mutexes or similar depending on your language):
stageWidth = GetStageWidth(); // which may return 440 in your case
clickedX = MouseInput(); // should be 0 to 440
x = -1.0 + 2.0 * (clickedX / stageWidth); // scale to -1.0 to +1.0
you may also want to limit x to the range [-1,1] here?
if ( x < -1 ) x = -1.0;
if ( x > 1 ) x = 1.0;
or provide some kind of feedback/warning/error if it's out of bounds (only if it really matters and simply clipping it to the range [-1,1] isn't good enough).
You have an interval [a,b] that you'd like to map to a new interval [c,d], and a value x in the original coordinates that you'd like to map to y in the new coordinates. Then:
y = c + (x-a)*(d-c)/(b-a)
And for your example with [a,b] = [0,440] and [c,d] = [-1,1], with x=220:
y = -1 + (220-0)*(1 - -1)/(440-0)
= 0
and so forth.
By the way, this works even if x is outside of [a,b]. So as long as you know any two values in both systems, you can convert any value in either direction.
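Wrapped up as a small Python function (just a sketch of the same formula), with the example values from above:

def remap(x, a, b, c, d):
    # Map x from the interval [a, b] onto [c, d]; x may also lie outside [a, b].
    return c + (x - a) * (d - c) / (b - a)

print(remap(220, 0, 440, -1, 1))   # 0.0
print(remap(0, 0, 440, -1, 1))     # -1.0
print(remap(440, 0, 440, -1, 1))   # 1.0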