I thought I understood the division concept but it seems I don't. The example explains T and F division of 42/-10 and -42/10. And my task is as follows:
You F-divided some positive number by −5 and obtained a remainder of 1. What remainder will you obtain T-dividing this number by −5?
Normally I wouldn't hesitate with division but I failed to understand this concept.
trunc(n) will always round towards zero.
floor(n) will always round down.
If n is positive, they will behave identically. (trunc(4.7) = 4 and floor(4.7) = 4)
But if n is negative, they will round in opposite directions. (trunc(-4.7) = -4 and floor(-4.7) = -5)
Then we just expand this so that n is the result of division. For T division, we divide and then round towards zero. For F division, we divide and then round down.
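For a concrete check, here is a small Python sketch (Python's own // operator is floor division; the truncating variant is built by hand here) applied to the 42/-10 and -42/10 example from the question:

import math

def t_divmod(a, b):
    # T-division: quotient rounded toward zero, remainder a - q*b
    q = math.trunc(a / b)
    return q, a - q * b

def f_divmod(a, b):
    # F-division: quotient rounded down (floor), remainder a - q*b
    q = math.floor(a / b)
    return q, a - q * b

print(t_divmod(42, -10))   # (-4, 2)
print(f_divmod(42, -10))   # (-5, -8)
print(t_divmod(-42, 10))   # (-4, -2)
print(f_divmod(-42, 10))   # (-5, 8)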
Probably this is elementary for the people here. I am just a computer user.
I fooled around near the extreme values (0 and 1) of the Standard Normal cumulative distribution function (CDF). I noticed that we can get very small probability values for large negative values of the variable, but we do not get the same reach towards the other end: for large positive values, the value "1" appears already for much smaller (in absolute terms) values of the variable.
From a theoretical point of view, the tail probabilities of the Standard Normal distribution are symmetric around zero, so the probability mass to the left of, say, X=-10 is the same as the probability mass to the right of X=10. So at X=-10 the distance of the CDF from zero is the same as its distance from unity at X=10.
But the computer/software complex doesn't give me this.
Is there something in the way our computers and software (usually) compute, that creates this asymmetric phenomenon, while the actual relation is symmetric?
Computations were done in R, on an ordinary laptop.
This post is related: Getting high precision values from qnorm in the tail
Floating-point formats represent numbers as a sign s (+1 or −1), a significand f, and an exponent e. Each format has some fixed base b, so the number represented is s•f•b^e, and f is restricted to be in [1, b) and to be expressible as a base-b numeral of some fixed number p of digits. These formats can represent numbers very close to zero by making e very small. But the closest they can get to 1 (aside from 1 itself) is where either f is as near 1 as it can get (aside from 1 itself) and e is 0, or f is as near b as it can get and e is −1.
For example, in the IEEE-754 binary64 format, commonly used for double in many languages and implementations, b is two, p is 53, and e can be as low as −1022 for normal numbers (there are subnormal numbers that can be smaller). This means the smallest representable normal number is 2^−1022. But near 1, either e is 0 and f is 1+2^−52, or e is −1 and f is 2−2^−52. The latter number is closer to 1; it is s•f•b^e = +1•(2−2^−52)•2^−1 = 1−2^−53.
So, in this format, we can get to a distance of 2^−1022 from zero (closer with subnormal numbers), but only to a distance of 2^−53 from 1.
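To see those two limits (and their effect on the CDF) directly, here is a small Python sketch; it is not the original R session, and the CDF is rebuilt from math.erfc purely for illustration (math.ulp and math.nextafter need Python 3.9+):

import math
import sys

# How close a double can get to 0 versus to 1:
print(sys.float_info.min)               # 2.225e-308, i.e. 2**-1022, smallest normal double
print(math.ulp(0.0))                    # 5e-324, smallest subnormal double
print(1.0 - math.nextafter(1.0, 0.0))   # 1.11e-16, i.e. 2**-53, the gap just below 1

def phi(x):
    # standard normal CDF, rebuilt from the complementary error function
    return 0.5 * math.erfc(-x / math.sqrt(2))

print(phi(-10.0))   # about 7.6e-24, representable near 0
print(phi(10.0))    # prints 1.0: the true value 1 - 7.6e-24 rounds to 1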
I have two vectors a=(a_1, a_2,...,a_n) and b=(b_1, b_2,...,b_n). I want to find a scalar "s" such that s = max{s : a + sb >= 0}. Here the inequality is elementwise, i.e. a_i + sb_i >= 0 for all i = 1,...,n. How can I compute such a scalar? Also, if s = infinity is the solution, we will bound s by s = 1.
Also vector "a" is nonnegative (i.e. each element is >=0).
Okay so with a_i >= 0, we can see that s=0 is always a solution.
One possible way is to solve the inequality on each component and then take the intersection of the resulting domains. Their upper bound (which, if finite, is part of the intersection) is then the number you want.
That means:
s >= -a_i/b_i for every i with b_i > 0,
s <= -a_i/b_i for every i with b_i < 0
is what you're trying to solve (for b_i = 0 the inequality is just a_i >= 0, which always holds). Note that, because in the second case b_i is negative, the number on the right-hand side is nonnegative. That means the intersection of all the s_i you get like this is non-empty. The maximum has a lower bound of 0, so technically you can ignore the inequalities where b_i is positive; they are true anyway. But, for completeness and illustration purposes:
Example:
a = (1, 1), b = (-0.5, 1)
1 - s*0.5 >= 0, that means s <= 2, or s in (-inf, 2]
1 + s*1 >= 0, that means s>= -1, or s in [-1,inf)
intersection: [-1,2]
That means the maximum value such that both inequalities hold is 2.
That is the most straightforward way; of course there are probably more elegant ways.
Edit: As an algorithm: check whether b_i is positive or negative. If positive, save s_i = 1 (if you want your value to be bounded by one). If negative, save s_i = -a_i/b_i. In the end you actually want to take the minimum!
But more efficiently: you don't actually need to care when b_i is positive. Your maximum will be greater than or equal to zero anyway. So just check the cases where b_i is smaller than 0 and keep the minimum of the corresponding values -a_i/b_i, as that is the upper bound of the region.
Pseudocode:
s = 1
for i in range(len(b)):
    if b[i] < 0:
        s = min(s, -a[i] / b[i])
Why the minimum? Because that -a[i]/b[i] is the upper bound of the region.
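Wrapped into a small Python function and applied to the example vectors from above (keeping the initial cap of 1 from the pseudocode):

def max_scale(a, b):
    # Largest s with a[i] + s*b[i] >= 0 for all i, assuming a[i] >= 0;
    # s starts at 1 so the result is bounded by 1 when no b[i] is negative.
    s = 1.0
    for ai, bi in zip(a, b):
        if bi < 0:
            s = min(s, -ai / bi)
    return s

print(max_scale([1, 1], [-0.5, 1]))   # 1.0 -- the cap of 1 wins here; without it the upper bound is 2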
I have a program where I represent lengths (in cm) and angles (in radian) as floats. My lengths usually have values between 10 and 100, while my angles usually have values between 0 and 1.
I'm aware that precision will be lost in all floating point operations, but my question is:
Do I lose extra precision because of the magnitude gap between my two numerical realms? Would it be better if I changed my length unit to meters, so that my usual length values lie between 0.1 and 1, which matches my usual angle values pretty evenly?
The point of floating point is that the point floats. Changing the magnitudes of numbers does not change the relative errors, except for quantization effects.
A floating point system represents a number x with some value f and an exponent e with some fixed base b (e.g., 2 for binary floating point), so that x = f•b^e. (Often the sign is separated from f, but I am omitting that for simplicity.) If you multiply the numbers being worked with by any power of b, addition and subtraction will operate exactly the same (and so will multiplication and division if you correct for the additional factor), up to the bounds of the format.
If you multiply by other numbers, there can be small effects in rounding. When an operation is performed, the result has to be rounded to a fixed number of digits for the f portion. This rounding error is a fraction of the least significant digit of f. If f is near 1, it is larger relative to f than if f is near 2.
So, if you multiply your numbers by 256 (a power of 2), add, and divide by 256, the results will be the same as if you did the addition directly. If you multiply by 100, add, and divide by 100, there will likely be small changes. After multiplying by 100, some of your numbers will have their f parts moved closer to 1, and some will have their f parts moved closer to 2.
Generally, these changes are effectively random, and you cannot use such scaling to improve the results. Only in special circumstances can you control these errors.
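A minimal Python check of that difference, using the well-known 0.1 and 0.2 pair rather than the question's actual lengths and angles:

x, y = 0.1, 0.2

print((x * 256 + y * 256) / 256 == x + y)   # True: scaling by a power of 2 preserves the sum exactly
print((x * 100 + y * 100) / 100 == x + y)   # False: scaling by 100 changes the last bit for this pair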
I came across this equation in a research paper, and can't seem to make sense of it. Let me give an argument for why it doesn't make sense, and perhaps someone can tell me where my flaw is?
P_L(d_0) is the RSSI value for distance 0
P_L(d_i) is the RSSI value for distance 1
d_0 is distance 0
d_i is distance 1
So if you put the two RSSI values on the left side of the equation, you have:
(RSSI value 1) - (RSSI value 0) = 10n * log ((Dist 1) / (Dist 0))
Let's consider the case when distance 1 is larger than distance 0:
(Dist 1) > (Dist 0)
Greater distance means less RSSI, so (RSSI 1) < (RSSI 0). So the left side of the equation is negative. The research paper states that n is normally between 2 and 4, so the "10n" part of the right side of the equation is positive, which means the log value must be negative, right?
But that leads to a contradiction. We said Dist 1 is greater, so the number inside the log is greater than 1, therefore the log value itself is positive. So intuitively, we have found an equation with the left side negative and the right side positive. What's going on??
(The opposite leads to a contradiction too: if dist 1 is less than dist 0, we get the left side positive and the right side negative)
According to the paper, P_L is the mean path loss for a given distance. The loss increases with the distance.
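Assuming the usual log-distance model from such papers, P_L(d_i) = P_L(d_0) + 10·n·log10(d_i/d_0), and the usual relation RSSI = transmit power − path loss, a quick numeric check (all values hypothetical) shows both sides positive when d_i > d_0, so there is no contradiction:

import math

n = 2.0               # path-loss exponent, assumed within the 2..4 range from the paper
d0, di = 1.0, 10.0    # reference distance and a larger distance (illustrative values)

extra_loss = 10 * n * math.log10(di / d0)
print(extra_loss)     # 20.0 dB: P_L(d_i) - P_L(d_0) is positive, and so is the log term

# The received strength then drops by that extra loss:
p_tx, pl_d0 = 0.0, 40.0                  # hypothetical transmit power (dBm) and reference loss (dB)
rssi_d0 = p_tx - pl_d0                   # -40 dBm
rssi_di = p_tx - (pl_d0 + extra_loss)    # -60 dBm: larger distance, larger loss, smaller RSSI
print(rssi_d0, rssi_di)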
Recently I solved a problem where I had to compute (a-b) % n. The results were self-explanatory when a-b is a positive number, but for negative numbers the results I got seem confusing. I just wanted to know how we can calculate this result for negative numbers.
Any links dealing with modulo operator properties are most welcome.
http://en.m.wikipedia.org/wiki/Modulo_operation
In many programming languages (C, Java) the modulo operator is defined so that the result has the same sign as the first operand (the dividend). This means that the following equation holds:
(-a) % n = -(a % n)
For example, -8%3 would be -2, since 8%3 is 2.
Others, such as Python, compute a % n instead as the nonnegative remainder when dividing by a positive n, which means
(-a) % n = n - (a % n)
For example, -8%3 is 1 because 3-(8%3) = 3-2 = 1.
Note that in modular arithmetic adding or subtracting any multiple of n does not change the result because "equality" (or congruence if you prefer that term) is defined with respect to divisibility: X is equal to 0 if it is a multiple of n, and A is equal to B if A-B is a multiple of n. For example -2 is equal to 1 modulo 3 because -2-1 = -3 is divisible by 3.
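A quick Python check of both conventions (math.fmod gives the C/Java-style truncated remainder):

import math

print(-8 % 3)             # 1    -- Python: remainder has the sign of the divisor
print(math.fmod(-8, 3))   # -2.0 -- C/Java style: remainder has the sign of the dividend
print((-2 - 1) % 3 == 0)  # True -- -2 and 1 are congruent mod 3: they differ by a multiple of 3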