How to use multiple nested parentheses with exponents? (R)

I need to create a new variable in my data frame that is the output of an equation with many nested parentheses. Part of this equation is in the form of the last line below
temp=36
Tc = 647.097
( ( 1-273.15+temp )/Tc )^1.5
where temp will be a variable, and Tc will be a constant. However, when I run the code the result is always NA.
But, if I break the code down into the number that I know results from
( 1-273.15+temp )/Tc
and then add the exponent like so
-0.3649376^1.5
then the code works as it should.
Why is R not able to correctly output the calculation ( ( 1-273.15+temp )/Tc )^1.5 ?
And more importantly, how can I get R to give me the result of ( ( 1-273.15+temp )/Tc )^1.5 while retaining my use of objects for the constant and the variable?
I need to address this, because the full equation is even worse, where the problem I describe above is nested within itself:
e_sat_test <- Pc^( (Tc/(273.15+temp)) *
    ( a1*( (1-273.15+temp)/Tc )   + a2*( (1-273.15+temp)/Tc )^1.5 +
      a3*( (1-273.15+temp)/Tc )^3 + a4*( (1-273.15+temp)/Tc )^3.5 +
      a5*( (1-273.15+temp)/Tc )^4 + a6*( (1-273.15+temp)/Tc )^7.5 ) )

The problem is that
-0.3649376^1.5
is interpreted as
-(0.3649376^1.5)
not
(-0.3649376)^1.5
because the exponentiation operator has higher precedence than unary minus. And when you raise a negative number to a .5 exponent, that's like taking a square root, which isn't defined for plain numeric vectors in R (unless you want to work with complex numbers). Your calculation is simply NaN for your data values because the result is not real. You might want to check your formula again.
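A minimal sketch in R of the two readings, using the poster's number (results noted approximately in the comments):

# Unary minus is applied after ^, so this is -(0.3649376^1.5): a negative real number, about -0.22
-0.3649376^1.5

# Forcing the negative base first asks for a fractional power of a negative number, which is NaN
(-0.3649376)^1.5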

From the documentation:
Users are sometimes surprised by the value returned, for example why (-8)^(1/3) is NaN. For double inputs, R makes use of IEC 60559 arithmetic on all platforms, together with the C system function pow for the ^ operator. The relevant standards define the result in many corner cases. In particular, the result in the example above is mandated by the C99 standard. On many Unix-alike systems the command man pow gives details of the values in a large number of corner cases.
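To see the quoted behaviour, and one common workaround for real odd roots (a sketch, not taken from the documentation):

(-8)^(1/3)                 # NaN, as mandated by the C99 rules described above
sign(-8) * abs(-8)^(1/3)   # -2: take the real cube root of the magnitude, then restore the sign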

Related

Correct way to generate Poisson-distributed random numbers in Julia GPU code?

For a stochastic solver that will run on a GPU, I'm currently trying to draw Poisson-distributed random numbers. I will need one number for each entry of a large array. The array lives in device memory and will also be deterministically updated afterwards. The problem I'm facing is that the mean of the distribution depends on the old value of the entry. Therefore, I would naively have to do something like:
CUDA.rand_poisson!(lambda=array*constant)
or:
array = CUDA.rand_poisson(lambda=array*constant)
Neither of these works, which does not really surprise me, but maybe I just need a better understanding of broadcasting?
Then I tried writing a kernel which looks like this:
function cu_draw_rho!(rho::CuDeviceVector{FloatType}, λ::FloatType)
    idx = (blockIdx().x - 1i32) * blockDim().x + threadIdx().x
    stride = gridDim().x * blockDim().x
    @inbounds for i = idx:stride:length(rho)
        l = rho[i] * λ
        # 1. variant
        rho[i] > 0.0f0 && (rho[i] = FloatType(CUDA.rand_poisson(UInt32, 1; lambda = l)))
        # 2. variant
        rho[i] > 0.0f0 && (rho[i] = FloatType(rand(Poisson(lambda = l))))
    end
    return
end
And many slight variations of the above. I get tons of errors about dynamic function calls, which I connect to the fact that I'm calling functions that are meant for arrays from my kernels. The second variant, using rand(), works only without the Poisson argument (which comes from the Distributions package, I guess?).
What is the correct way to do this?
You may want CURAND.jl, which provides curand_poisson.
using CURAND
n = 10
lambda = .5
curand_poisson(n, lambda)

How to add all arrays from all .fits files in a directory?

I have a directory on my computer with several .fits files that I am trying to work with in IDL. For some reason I am unable to add all of the arrays together correctly, as I'm getting negative numbers in the totaled array when all of the individual elements in all files are positive.
In IDL, I tried the following code:
flatsfiles = file_search(Vflats, filter)
addedflats = make_array(1530,1020,value=0)
FOREACH element, flatsfiles DO BEGIN
  flatsarray = readfits(element)
  addedflats = [addedflats + flatsarray]
ENDFOREACH
Pretty simple code, yet the values in addedflats are negative, while all the arrays have ONLY positive elements on the order of 10^4. Can anyone see where I'm going wrong, or have a different way of doing this?
I've also tried adding the arrays one by one to see where it goes wrong:
array = readfits(flatsfiles[0])
addedflats = [addedflats + array]
array2 = readfits(flatsfiles[1])
addedflats = [addedflats + array2]
Here, the first addedflats array shows the same thing as array, which is expected since it's just being added to an array of 0's. The second addedflats, however, gives negative numbers again. For reference, the first element of array is 25189, the first element of array2 is 24030, but the first element of addedflats is -16317 rather than the expected 49219. TIA!!
Be careful of integer types that might not be able to hold the sum. Your arrays are probably 16-bit signed integers (the default integer type in IDL):
IDL> print, 25189 + 24030
-16317
To fix, declare at least one of the arrays that you are adding to be an integer type that is big enough to hold the result. Here, 32-bit longs are sufficient (just doing scalars for simplicity, but the same works for arrays):
IDL> print, 25189L + 24030L
49219

Function doesn't change value (R)

I have written a function that takes two arguments: a number between 0 and 16, and a vector which contains four parameter values.
The output of the function changes if I change the parameters in the vector, but it does not change if I change the number between 0 and 16.
I should add that the function I'm having trouble with includes another function (called 'pi') which takes the same arguments.
I have checked that the 'pi' function does actually change value if I change the number between 0 and 16 (and it also changes if I change the values of the parameters).
Firstly, here is my code:
pterm_ny <- function(x, theta){
  (1-sum(theta[1:2]))*(theta[4]^(x))*exp((-1)*theta[4])/pi(x, theta)
}
pi <- function(x, theta){
  theta[1]*1*(x==0) + theta[2]*(theta[3]^(x))*exp((-1)*(theta[3])) +
    (1-sum(theta[1:2]))*(theta[4]^(x))*exp((-1)*(theta[4]))
}
This returns 0.75 for pterm_ny(i, c(0.2,0.2,2,2)), where i = 1,...,16, and 0.2634 for i = 0, which tells me that the indicator function part in 'pi' does work.
With respect to raising a number to a certain power, I have been told that one should wrap the desired number in I(); as an example, it would look like
x^I(2)
I have tried to do that in my code, but that didn't help either.
I can't remember the argument for doing it, but I expect that it's to ensure that the number in parentheses is interpreted as an integer.
My end goal is to get 17 different values of 'pterm', and to accomplish that I was thinking of using the sapply function like this:
sapply(c(0:16),pterm_ny,theta = c(0.2,0.2,2,2))
I really hope that someone can point out what I'm missing here.
In advance, thank you!
You have a theta[4]^x term both in your main expression and in your pi() function, and with your test vector theta[3] and theta[4] are both 2, so every x-dependent term in the denominator is proportional to 2^x as well; the powers cancel, leaving the result invariant to changes in x (for x > 0) ...
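A quick sketch to see this, reusing pterm_ny() and pi() exactly as defined in the question (the second theta vector is just an illustrative choice with unequal rates):

# With theta[3] == theta[4] == 2, every x-dependent term is a multiple of 2^x and cancels
sapply(0:5, pterm_ny, theta = c(0.2, 0.2, 2, 2))  # 0.2634 for x = 0, then 0.75 for every x > 0

# With theta[3] != theta[4] the powers no longer cancel, and the result varies with x
sapply(0:5, pterm_ny, theta = c(0.2, 0.2, 1, 2))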
Also:
you might want to avoid using pi as your function name, as it's also a built-in constant (3.14159...); this can sometimes cause confusion
the advice about using the "as is" function I() to protect powers is only relevant within formulas, e.g. as used in lm() (linear regression). (It would be used as I(x^2), not x^I(2).)

How to write a formula in C#?

How do I write a formula like
v_r(t) = \sum_{n=0}^{N-1} A_r (L_2 - L_1)\, e^{\,j\left(\omega_c t - \frac{4\pi}{\lambda}\left(R + \upsilon t + \frac{L_1 + L_2}{2}\cos(\theta)\sin\!\left(\omega_r t + \frac{2\pi n}{N}\right)\right)\right)} \operatorname{sinc}\!\left(\frac{4\pi}{\lambda}\,\frac{L_2 - L_1}{2}\cos(\theta)\sin\!\left(\omega_r t + \frac{2\pi n}{N}\right)\right)
in C#?
You have to convert the formula to something the compiler recognizes: its equivalent using a combination of basic algebra and the Math class, like so:
p = rho*R*T + (B_0*R*T - A_0 - ((C_0) / (T*T)) + ((E_0) / (Math.Pow(T, 4))))*rho*rho +
    (b*R*T - a - ((d) / (T)))*Math.Pow(rho, 3) +
    alpha*(a + ((d) / (T)))*Math.Pow(rho, 6) +
    ((c*Math.Pow(rho, 3)) / (T*T))*(1 + gamma*rho*rho)*Math.Exp(-gamma*rho*rho);
Example taken from: Converting Math Equations in C#
Well, first you have to figure out what all those symbols mean. I see the sigma, which usually indicates a sum, with ∑_(n=0)^(N-1) probably translating to:
N-1
∑
n=0
This generally means the sum of the following expression where n varies from 0 to N-1. So I gather you'd need a loop there.
The expression to be calculated within that loop consists of a lot of trigonometric-type functions involving π, θ, sin and cos, and the little-known sinc, which I assume is a typo :-)
The bottom line is that you need to understand the current expression before you can think about converting it to another form like a C# program. Short of knowing where it came from, or a little bit of context, we probably can't help you that much though there's always a possibility that we have a savant/genius here that will recognise that formula off the top of their head.

Replacing functions with Table Lookups

I've been watching this MSDN video with Brian Beckman and I'd like to better understand something he says:
Every imperative programmer goes through this phase of learning that
functions can be replaced with table lookups
Now, I'm a C# programmer who never went to university, so perhaps somewhere along the line I missed out on something everyone else learned to understand.
What does Brian mean by:
functions can be replaced with table lookups
Are there practical examples of this being done and does it apply to all functions? He gives the example of the sin function, which I can make sense of, but how do I make sense of this in more general terms?
Brian just showed that functions are data too. Functions in general are just a mapping of one set to another: y = f(x) is a mapping of the set {x} to the set {y}, i.e. f: X -> Y. Tables are mappings as well: [x1, x2, ..., xn] -> [y1, y2, ..., yn].
If a function operates on a finite set (which is the case in programming), then it can be replaced with a table that represents that mapping. As Brian mentioned, every imperative programmer goes through this phase of understanding that functions can be replaced with table lookups, purely for performance reasons.
But that doesn't mean that all functions can easily, or should, be replaced with tables. It only means that you could theoretically do that for every function. So the conclusion would be that functions are data, because tables are (in the context of programming, of course).
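As a minimal sketch of the idea in R (the grid, step size, and names are arbitrary choices for illustration): a function over a finite set of inputs can be computed once into a table and then answered by lookup.

# Tabulate sin() once over a finite grid of inputs: the function becomes data
step      <- 0.01
grid      <- seq(0, 2*pi, by = step)
sin_table <- sin(grid)

# Answer later queries by looking up the nearest precomputed entry instead of calling sin()
sin_lookup <- function(x) {
  i <- round(x / step) + 1   # index of the nearest grid point (assumes 0 <= x <= 2*pi)
  sin_table[i]
}

sin_lookup(1.57)   # essentially sin(1.57)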
There is a lovely trick in Mathematica that creates a table as a side effect of evaluating function-calls-as-rewrite-rules. Consider the classic slow Fibonacci definition:
fib[1] = 1
fib[2] = 1
fib[n_] := fib[n-1] + fib[n-2]
The first two lines create table entries for the inputs 1 and 2. This is exactly the same as saying
fibTable = {};
fibTable[1] = 1;
fibTable[2] = 1;
in JavaScript. The third line of Mathematica says "please install a rewrite rule that will replace any occurrence of fib[n_], after substituting the pattern variable n_ with the actual argument of the occurrence, with fib[n-1] + fib[n-2]." The rewriter will iterate this procedure, and eventually produce the value of fib[n] after an exponential number of rewrites. This is just like the recursive function-call form that we get in JavaScript with
function fib(n) {
  var result = fibTable[n] || ( fib(n-1) + fib(n-2) );
  return result;
}
Notice it checks the table first for the two values we have explicitly stored before making the recursive calls. The Mathematica evaluator does this check automatically, because the order of presentation of the rules is important -- Mathematica checks the more specific rules first and the more general rules later. That's why Mathematica has two assignment forms, = and :=: the former is for specific rules whose right-hand sides can be evaluated at the time the rule is defined; the latter is for general rules whose right-hand sides must be evaluated when the rule is applied.
Now, in Mathematica, if we say
fib[4]
it gets rewritten to
fib[3] + fib[2]
then to
fib[2] + fib[1] + 1
then to
1 + 1 + 1
and finally to 3, which does not change on the next rewrite. You can imagine that if we say fib[35], we will generate enormous expressions, fill up memory, and melt the CPU. But the trick is to replace the final rewrite rule with the following:
fib[n_] := fib[n] = fib[n-1] + fib[n-2]
This says "please replace every occurrence of fib[n_] with an expression that will install a new specific rule for the value of fib[n] and also produce the value." This one runs much faster because it expands the rule-base -- the table of values! -- at run time.
We can do likewise in JavaScript
function fib(n) {
  var result = fibTable[n] || ( fib(n-1) + fib(n-2) );
  fibTable[n] = result;
  return result;
}
This runs MUCH faster than the prior definition of fib.
This is called "automemoization" [sic -- not "memorization" but "memoization" as in creating a memo for yourself].
Of course, in the real world, you must manage the sizes of the tables that get created. To inspect the tables in Mathematica, do
DownValues[fib]
To inspect them in JavaScript, do just
fibTable
in a REPL such as that supported by Node.JS.
In the context of functional programming, there is the concept of referential transparency. A function that is referentially transparent can be replaced with its value for any given argument (or set of arguments), without changing the behaviour of the program.
Referential Transparency
For example, consider a function F that takes 1 argument, n. F is referentially transparent, so F(n) can be replaced with the value of F evaluated at n. It makes no difference to the program.
In C#, this would look like:
using System;

public class Square
{
    public static int apply(int n)
    {
        return n * n;
    }

    public static void Main()
    {
        // Should print 4
        Console.WriteLine(Square.apply(2));
    }
}
(I'm not very familiar with C#, coming from a Java background, so you'll have to forgive me if this example isn't quite syntactically correct).
It's obvious here that the function apply cannot have any other value than 4 when called with an argument of 2, since it's just returning the square of its argument. The value of the function only depends on its argument, n; in other words, referential transparency.
I ask you, then, what the difference is between Console.WriteLine(Square.apply(2)) and Console.WriteLine(4). The answer is, there's no difference at all, for all intents and purposes. We could go through the entire program, replacing all instances of Square.apply(n) with the value returned by Square.apply(n), and the results would be exactly the same.
So what did Brian Beckman mean with his statement about replacing function calls with a table lookup? He was referring to this property of referentially transparent functions. If Square.apply(2) can be replaced with 4 with no impact on program behaviour, then why not just cache the value when the first call is made and put it in a table indexed by the arguments to the function? A lookup table for values of Square.apply(n) would look somewhat like this:
n: 0 1 2 3 4 5 ...
Square.apply(n): 0 1 4 9 16 25 ...
And for any call to Square.apply(n), instead of calling the function, we can simply find the cached value for n in the table, and replace the function call with this value. It's fairly obvious that this will most likely bring about a large speed increase in the program.
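A minimal sketch of that caching strategy in R (the names square_table and square_memo are made up for this example; an environment plays the role of the lookup table, keyed by the argument):

# The lookup table, keyed by the stringified argument
square_table <- new.env()

square_memo <- function(n) {
  key <- as.character(n)
  if (!exists(key, envir = square_table, inherits = FALSE)) {
    assign(key, n * n, envir = square_table)   # first call with this n: compute and cache
  }
  get(key, envir = square_table)               # every later call is just a table lookup
}

square_memo(2)   # computes 4 and stores it
square_memo(2)   # returns the cached 4 without recomputing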
