Why does IDL give different values for the same expression?

I'm doing some image processing with IDL, and it requires high precision.
But while debugging my colleague's program, I found something strange:
IDL> print, lat, y_res
45.749001
0.00026999999
IDL> print, lat - findgen(10)*y_res + y_res * 0.5 + findgen(10)*y_res + y_res * 0.5
45.749268 45.749268 45.749268 45.749268 ... 45.749268
IDL> print, lat - (findgen(10)*y_res + y_res * 0.5) + (findgen(10)*y_res + y_res * 0.5)
45.749001 45.749001 45.749001 45.749001 ...
As the code above shows, why do the two expressions give different values?
My IDL version is 8.3, with the ENVI package.

TriskalJM is correct. If you look at the parentheses in the second expression, you are grouping your terms differently: the minus sign now applies to the whole parenthesized group. Algebraically, the first expression reduces to lat + y_res (the findgen terms cancel and the two half-steps add), while the second reduces to lat, which is exactly the difference you see. Beyond that, regrouping can change floating-point results slightly in any computer language, just due to roundoff errors. If you want more information, you could consult:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
In the meantime, I would recommend that you switch to double-precision:
lat - dindgen(10)*y_res + y_res * 0.5 + dindgen(10)*y_res + y_res * 0.5
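To see the two reductions numerically, here is a minimal Python sketch (my own check, not from the thread), using NumPy float32 to mimic IDL's default single precision; the printed digits may differ slightly from the IDL session:

import numpy as np

lat   = np.float32(45.749001)
y_res = np.float32(0.00026999999)
f     = np.arange(10, dtype=np.float32)   # like findgen(10)
half  = np.float32(0.5)

# First expression, evaluated left to right: the f*y_res terms cancel
# and the two half-steps add, so it reduces to lat + y_res.
expr1 = lat - f*y_res + y_res*half + f*y_res + y_res*half

# Second expression: the minus sign applies to the whole group,
# so it reduces to lat - T + T = lat.
expr2 = lat - (f*y_res + y_res*half) + (f*y_res + y_res*half)

print(expr1[0])   # ~ lat + y_res  (45.74927...)
print(expr2[0])   # ~ lat          (45.749001)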

Related

Why is my approximation of the Gamma function not exact?

So I set out to recreate some functions from Python's math library.
One of those functions is math.gamma. Since I know my way around JavaScript, I thought I'd try to translate the JavaScript implementation of the Lanczos approximation on Rosetta Code into AppleScript:
on gamma(x)
    set p to {1.0, 676.520368121885, -1259.139216722403, 771.323428777653, -176.615029162141, 12.507343278687, -0.138571095266, 9.98436957801957E-6, 1.50563273514931E-7}
    set E to 2.718281828459045
    set g to 7
    if x < 0.5 then
        return pi / (sin(pi * x) * (gamma(1 - x)))
    end if
    set x to x - 1
    set a to item 1 of p
    set t to x + g + 0.5
    repeat with i from 2 to count of p
        set a to a + ((item i of p) / (x + i))
    end repeat
    return ((2 * pi) ^ 0.5) * (t ^ x + 0.5) * (E ^ -t) * a
end gamma
The required function for this to run is:
on sin(x)
    return (do shell script "python3 -c 'import math; print(math.sin(" & x & "))'") as number
end sin
All the other helper functions from the JavaScript implementation have been removed so as not to require too many handlers; the inline operations I substituted produce the same results.
The JavaScript code works fine when run in the browser console, but my AppleScript implementation doesn't produce answers anywhere near the actual result. Is it because...
...I implemented something wrong?
...AppleScript doesn't have enough precision?
...something else?
You made two mistakes in your code:
First, the i in your repeat loop starts at 2 rather than 1. That is fine for (item i of p), since AppleScript lists are 1-based, but it means the denominator must be (x + i - 1) rather than (x + i).
Second, in (t ^ x + 0.5) in the return statement, exponentiation binds tighter than addition, so t ^ x is computed first and 0.5 is added afterwards; the JavaScript implementation raises t to the power (x + 0.5), so it should read (t ^ (x + 0.5)).
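For cross-checking, here is the same Lanczos approximation in Python with both fixes applied (my own sketch, not part of the original answer; Python lists are 0-based, so item i of p with denominator x + i - 1 becomes P[i] with denominator x + i), compared against math.gamma:

import math

# Lanczos coefficients (g = 7), copied from the question.
P = [1.0, 676.520368121885, -1259.139216722403, 771.323428777653,
     -176.615029162141, 12.507343278687, -0.138571095266,
     9.98436957801957e-6, 1.50563273514931e-7]

def gamma(x):
    if x < 0.5:
        # Reflection formula, as in the AppleScript version.
        return math.pi / (math.sin(math.pi * x) * gamma(1 - x))
    x -= 1
    a = P[0]
    t = x + 7.5                      # x + g + 0.5
    for i in range(1, len(P)):
        a += P[i] / (x + i)          # fix 1: denominator tracks the series index
    # fix 2: the exponent is (x + 0.5), not (t ^ x) + 0.5
    return math.sqrt(2 * math.pi) * t ** (x + 0.5) * math.exp(-t) * a

print(gamma(5.0), math.gamma(5.0))   # both approximately 24.0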

Question about the Division operator in R not returning the correct value

I am trying to calculate Bayes' theorem for a cancer test and plugged the values into my formula like this:
cancer <- (1 * (1/100000)) / (1*(1/100000)) + ((10/99999) * (99999/100000))
In this case, cancer = 1.0001.
However, the correct answer should be 0.09090909, as shown by running the numerator and denominator separately, like this:
num = (1 * (1/100000))
den = (1*(1/100000)) + ((10/99999) * (99999/100000))
num / den
[1] 0.09090909
Can you please let me know why this is the case and how I should run the combined equation in the future to get the proper result?
Parentheses are needed around the denominator. Division binds tighter than addition, so your original expression is parsed as ((1 * (1/100000)) / (1*(1/100000))) + ..., i.e. 1 plus the second term, which is exactly the 1.0001 you saw:
cancer <- (1 * (1/100000)) / ((1*(1/100000)) + ((10/99999) * (99999/100000)))
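As a quick sanity check (sketched in Python, where the arithmetic and operator precedence are the same):

num  = 1 * (1 / 100000)
rest = (10 / 99999) * (99999 / 100000)
print(num / num + rest)    # 1.0001      -- what the original expression computed
print(num / (num + rest))  # 0.09090909  -- the intended posterior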

Estimate maximum relative error of concatenated floating-point operations

According to this very elaborate answer, I would estimate the maximum relative error δ_res,max of the following computation like this:
// Pseudo code
double a, b, c; // pre-filled IEEE 754 double-precision values
res = a / b * c;
res = a*(1 + δ_a) / (b*(1 + δ_b)) * (1 + δ_{a/b}) * c*(1 + δ_c) * (1 + δ_{a/b*c})
    = a/b*c * (1 + δ_a) / (1 + δ_b) * (1 + δ_{a/b}) * (1 + δ_c) * (1 + δ_{a/b*c})
    = a/b*c * (1 + δ_res)
=> δ_res = (1 + δ_a) / (1 + δ_b) * (1 + δ_{a/b}) * (1 + δ_c) * (1 + δ_{a/b*c}) - 1
All δ terms are bounded by ±ε/2, where ε = 2^-52.
=> δ_res,max = (1 + ε/2)^4 / (1 - ε/2) - 1 ≈ 2.5ε
Is this a valid approach for error estimation that can be used for every combination of basic floating-point operations?
PS:
Yes, I read "What Every Computer Scientist Should Know About Floating-Point Arithmetic". ;)
Well, it's probably a valid approach. I'm not sure how you've arrived at that last line, but your conclusion is basically correct (though note that, since the theoretical error can exceed 2.5ε, in practice the error bound is usually quoted as 3ε).
And yes, this is a valid approach which will work for any floating-point expression of this form. However, the results won't always be as clean. Once you have addition/subtraction in the mix, rather than just multiplication and division, you won't usually be able to cleanly separate the exact expression from an error multiplier. Instead, you'll see input terms and error terms multiplied directly together, rather than the pleasantly constant relative bound you get here.
As a useful example, try deriving the maximum relative error for (a+b)-a (assuming a and b are exact).
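In case it's useful, here is a sketch of that exercise (my own derivation, not part of the original answer), assuming round-to-nearest with each |δ_i| ≤ ε/2:

fl(a + b)         = (a + b)(1 + δ_1)
fl(fl(a + b) - a) = ((a + b)(1 + δ_1) - a)(1 + δ_2)
                  = (b + (a + b)·δ_1)(1 + δ_2)

So the relative error with respect to the exact result b is roughly δ_2 + ((a + b)/b)·δ_1. Since |a + b|/|b| can be made arbitrarily large, no small constant multiple of ε bounds it: the input magnitudes enter the error term directly, which is exactly the coupling described above.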

How to Incorporate a numerical prefactor into a radical in Maxima?

(%i2) x : expand(cosh(1)*sqrt(3+5*t));
(%o2) cosh(1) sqrt(5 t + 3)
(%i3) expand(float(x));
(%o3) 1.543080634815244 (5.0 t + 3.0)^0.5
How can I get Maxima to incorporate the prefactor into the radical? I'm looking for something that, in this case, yields something like
(%o3) (11.90548922 t + 7.143293537)^0.5
For numbers as small as these this is not a big deal, but in numerical evaluations Maxima tends to substitute rational approximations that may involve very large denominators. I then end up with expressions where the prefactor is a very small number (like 6.35324353 × 10^-23) and the numbers inside the square root are very large (like 5212548545863256475196584785455844385452665612552468), so it isn't obvious even what the order of magnitude of the result is.
Here's a solution which uses pattern matching.
(%i1) matchdeclare (cc, numberp, [bb, aa], all) $
(%i2) defrule (r1f, cc*bb^0.5, foof(cc,bb));
(%o2) r1f : bb^0.5 cc -> foof(cc, bb)
(%i3) defrule (r2f, aa*cc*bb^0.5, aa*foof(cc,bb));
(%o3) r2f : aa bb^0.5 cc -> aa foof(cc, bb)
(%i4) foof(a,b):= (expand(a^2*b))^0.5 $
(%i5) apply1 (1.543080634815244*(5.0*t + 3.0)^0.5, r1f, r2f);
(%o5) (11.90548922770908 t + 7.143293536625449)^0.5
(%i6) apply1 (1.543080634815244*x*y*(5.0*t + 3.0)^0.5, r1f, r2f);
(%o6) (11.90548922770908 t + 7.143293536625449)^0.5 x y
(%i7) apply1 (1/(1 + 345.43*(2.23e-2*u + 8.3e-4)^0.5), r1f, r2f);
(%o7) 1/((2660.87803327 u + 99.03716446700001)^0.5 + 1)
It took some experimentation to figure out suitable rules r1f and r2f. Note that these rules match ...^0.5 but not sqrt(...) (whose exponent is the exact rational 1/2 rather than the float 0.5). Of course, if you want to match sqrt(...) as well, you can create additional rules for that.
Not guaranteed to work for you -- a rule might match too much or too little. It's worth a try, anyway.

YCbCr to RGB from matrix table

Below is a matrix to convert RGB to YCbCr. Can you tell me how I can derive the formula to convert YCbCr back to RGB?
That is, I have YCbCr values available and want to get RGB from them.
If you are asking how the formula is derived, you may want to search for "color coordinate systems". This page has a good discussion on the YCbCr space, in particular.
We know that almost any color can be represented as a linear combination of red, green, and blue. But you can transform (or "rotate") that coordinate system such that the three basis elements are no longer RGB, but something else. In the case of YCbCr, the Y layer is the luminance layer, and Cb and Cr are the two chrominance layers. Cb correlates more closely to blue, and Cr correlates more closely to red.
YCbCr is often preferred because the human visual system is more sensitive to changes in luminance than quantitatively equivalent changes in chrominance. Therefore, an image coder such as JPEG can compress the two chrominance layers more than the luminance layer, resulting in a higher compression ratio.
EDIT: I misunderstood the question. (You should edit it to clarify.) Here is the formula to get RGB from YCbCr, taken from the above link:
r = 1.0 * y' + 0 * cB + 1.402 * cR
g = 1.0 * y' - 0.344136 * cB - 0.714136 * cR
b = 1.0 * y' + 1.772 * cB + 0 * cR
I'm not going to account for the rounding step, but since M looks invertible you can invert it and round the resulting vector. The forward (BT.709, studio-swing) equations are:
Y  =  0.2126*(219/255)*R + 0.7152*(219/255)*G + 0.0722*(219/255)*B + 16
CB = -0.2126/1.8556*(224/255)*R - 0.7152/1.8556*(224/255)*G + 0.5*(224/255)*B + 128
CR =  0.5*(224/255)*R - 0.7152/1.5748*(224/255)*G - 0.0722/1.5748*(224/255)*B + 128
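To make the "invert M" step concrete, here is a small NumPy sketch (my own illustration, not part of the original answer; the matrix entries are just the coefficients above):

import numpy as np

# Forward BT.709 studio-swing matrix and offsets from the equations above.
M = np.array([
    [ 0.2126*219/255,         0.7152*219/255,         0.0722*219/255       ],
    [-0.2126/1.8556*224/255, -0.7152/1.8556*224/255,  0.5*224/255          ],
    [ 0.5*224/255,           -0.7152/1.5748*224/255, -0.0722/1.5748*224/255],
])
offset = np.array([16.0, 128.0, 128.0])
M_inv = np.linalg.inv(M)   # M is well conditioned, so plain inversion is fine

def ycbcr_to_rgb(ycbcr):
    # Undo the offsets, then apply the inverted matrix.
    return M_inv @ (np.asarray(ycbcr, dtype=float) - offset)

print(ycbcr_to_rgb([126, 128, 128]))   # roughly [128, 128, 128] (mid grey)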
http://www.fourcc.org/fccyvrgb.php has YUV to RGB conversions.
Convert to float and apply the inverse matrix (the coefficients are BT.709-2):
https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.709_conversion
[R]   [1    0        1.5748] [Y ]
[G] = [1   -0.1873  -0.4681] [Cb]
[B]   [1    1.8556   0     ] [Cr]
These are the equations that work for full-range 0-255 colors. Here they are in C:
RGB to YCBCR
y  = 0.299 * (r) + 0.587 * (g) + 0.114 * (b);
cb = 128 - 0.168736 * (r) - 0.331264 * (g) + 0.5 * (b);
cr = 128 + 0.5 * (r) - 0.418688 * (g) - 0.081312 * (b);
YCBCR to RGB
r = (y) + 1.402 * (cr - 128);
g = (y) - 0.34414 * (cb - 128) - 0.71414 * (cr - 128);
b = (y) + 1.772 * (cb - 128);
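For convenience, here is the same full-range (JPEG/BT.601) pair as a runnable Python sketch; the coefficients are exactly those above:

def rgb_to_ycbcr(r, g, b):
    # Full-range BT.601 (JPEG) forward transform.
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    # Inverse transform; clamp and round if you need valid 0-255 pixels.
    r = y                         + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return r, g, b

# Round trip should recover the input up to floating-point error.
print(ycbcr_to_rgb(*rgb_to_ycbcr(200, 120, 50)))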
