How to solve a given expression in R and find the value of T? [closed]

I have a question about whether it is possible to solve equations like this in R; knowing the answer would really help me understand R better.
0.10 = 1/(1 + ((1.04e+19*(T/300)^(3/2))/(4e+16)) * exp(-0.045/(0.0259*(T/300))))
How can I solve this expression in R and find the value of T?

Your equation has no solutions. To see this, define the function over the positive reals,
f <- function(T) (1/(1+(((1.04e+19*((T/300)^(3/2)))/(4e+16))*exp((-(-0.045)/(0.0259*(T/300)))))))
and define a function f(x) - a, in order to solve the equation f(x) - a = 0.
g <- function(x, a) f(x) - a
Now plot this second function.
curve(g(x, 0.10), 0, 1e3)
As the graph shows, all values of g(x, 0.10) = f(x) - 0.10 are negative, so f(x) != 0.10 for all x.
Analytically, if the function never changes sign it has no roots. Since the function is continuous, all we need to do is check its value near 0 and at the maximum.
g(.Machine$double.eps, 0.10)
#[1] -0.1
The maximum is determined with optimise.
optimise(g, c(0, 1e3), a = 0.10, maximum = TRUE)
#$maximum
#[1] 347.4904
#
#$objective
#[1] -0.09931205
Both values are negative, which confirms what the graph had shown.
Edit
Everything said above is right, but apparently the function's expression was wrong. With the correct expression the root can be found with uniroot. Note that the solution given in the question's image was found by trial and error; the solution below was found by a numerical method, but they are the same solution.
f <- function(T) {
  numer <- 1.04e19*(T/300)^(3/2) / 4e16
  numer <- numer * exp(-0.045/(0.0259*T/300))
  numer <- 1 + numer
  1/numer
}
g <- function(x, a) f(x) - a
xzero <- uniroot(g, a = 0.10, interval = c(0, 1e3))
xzero
#$root
#[1] 192.9487
#
#$f.root
#[1] -1.149569e-10
#
#$iter
#[1] 13
#
#$init.it
#[1] NA
#
#$estim.prec
#[1] 6.103516e-05
curve(g(x, 0.10), 0, 1e3)
abline(h = 0)
points(xzero$root, 0, col = "blue", pch = 16)
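As a quick sanity check (my addition, not part of the original answer), plugging the root back into f reproduces the target value, consistent with the f.root component shown above:
f(xzero$root)
#[1] 0.1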

Related

Optimization - Limits and simple constraint

I have a rather simple optimization question, and while I'm fairly decent with R, optimization is something I haven't done a lot of.
my.function <- function(parameters){
  x <- parameters[1]
  y <- parameters[2]
  z <- parameters[3]
  ((10*x^2) - ((y/2) * (z/4)))^2
}
result <- optim(c(7,10,18), fn = my.function, method = 'L-BFGS-B',
                lower = c(2,7,7),
                upper = c(15,20,20))
result$par
#[1] 2.205169 19.546621 19.902243
This is a made-up version of the problem I'm working on, so please forgive it if its purpose makes no sense. I have limits in place using the 'L-BFGS-B' method, but I need to add a constraint and I'm unsure how to do it. The rules I'm trying to implement are as follows:
x must be between 2 and 15
y must be between 7 and 20
z must be between 7 and 20
z <= y
It's the last one I don't know how to implement. Any help would be appreciated. Thank you.
Add a large number to the objective function if the constraint is violated, i.e. change the last line of my.function to:
((10*x^2) - ((y/2) * (z/4)))^2 + ifelse(y > z, 10^5, 0)
The result in this case is the following, which does satisfy the constraint. Also, since the objective is non-negative, its value cannot be less than 0, so we have achieved the minimum to within numerical tolerance.
result$par
## [1] 2.223537 19.776462 20.000000
result$value
## [1] 1.256682e-11
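For completeness, here is a minimal runnable sketch assembling the whole penalized objective from the answer (the name my.function.pen is mine; the objective, bounds, and the 1e5 penalty constant are taken from above):
my.function.pen <- function(parameters){
  x <- parameters[1]
  y <- parameters[2]
  z <- parameters[3]
  # penalty term from the answer: add 1e5 whenever y > z
  ((10*x^2) - ((y/2) * (z/4)))^2 + ifelse(y > z, 10^5, 0)
}
result <- optim(c(7,10,18), fn = my.function.pen, method = 'L-BFGS-B',
                lower = c(2,7,7),
                upper = c(15,20,20))
result$par
result$value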

Why is this causing my program to crash? [closed]

I have created the following user-defined function, which generates random variables using the acceptance-rejection method. However, whenever it is called, my program runs on and on and eventually crashes, or I have to force quit. I have gone through it several times. Any ideas as to what could be causing this?
I am aware that this may not be the best way to have written this (yesterday was the first time I used R), so any extra tips are a bonus!
acceptReject <- function(){
  Z <- 0
  Y <- c(0,0)
  while(Y[2] < .5*(Y[1]-1)**2){
    U <- runif(2,0,1)
    Y <- log(U)
  }
  Z <- Y[1]
  U <- runif(1,0,1)
  if(U <= .5){
    Z <- abs(Z)
  }
  else{
    Z <- -abs(Z)
  }
  Z
}
You have an infinite loop.
If you assume that Y ~ log( [0,1] ) (mathematically), that means it always ranges between log(0) and log(1), equating to -Inf and 0, respectively. (Bottom line: it is always less than or equal to zero.)
Now let's look at your conditional: .5*(Y[1]-1)**2. If you know the domain of Y is c(-Inf,0), then the range of this formula is
.5*(c(-Inf,0)-1)**2
# [1] Inf 0.5
(This is always greater-than-or-equal-to 0.5.)
Since Y is always <= 0 and the formula is always >= 0.5, your conditional will mathematically always be true. Infinite loop.
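For reference, a minimal sketch of a fix (my addition, not part of the original answer), assuming the intent was the standard exponential-envelope acceptance-rejection sampler for the standard normal: the draws should be Exp(1), i.e. Y <- -log(U) rather than log(U), which makes the acceptance condition reachable.
acceptReject <- function(){
  # draw pairs of Exp(1) variates until the acceptance condition holds
  repeat {
    U <- runif(2, 0, 1)
    Y <- -log(U)                       # Exp(1) draws; log(U) alone is always <= 0
    if (Y[2] >= .5*(Y[1]-1)^2) break   # acceptance condition
  }
  Z <- Y[1]                            # accepted value has a half-normal distribution
  U <- runif(1, 0, 1)
  if (U <= .5) Z else -Z               # random sign gives a standard normal draw
}
hist(replicate(1e4, acceptReject()), breaks = 50)  # quick visual check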

Problems in Numerical Integration through R [closed]

I have the following function
f(x) ∝ |x| exp(-|x|/2) + 1/(1 + (x - 40)^4),  x ∈ R
I want to find E(X) and E(X^3) through Simpson's method (numerical integration), a standard Monte Carlo approach, acceptance-rejection sampling, importance sampling, the Metropolis-Hastings algorithm, Gibbs sampling, and then a Bayesian model using MCMC (I have not decided yet).
How can I validate my results obtained from different methods?
I have tried to solve E(X) mathematically but cannot find a closed form. The function can be split into parts:
|x| times a double-exponential density, plus another term involving the fourth power of (x - 40) in inverse form.
Because of |x| and the range [-Inf, Inf], the integral always has to be split over [-Inf, 0] and [0, Inf]. Using integration by parts I could express the first part (involving |x| and x^2/2 over the infinite range), but the integral of the second part cannot be found mathematically.
So I used the following code to get a numerical integration result:
library(stats)
integrand <- function(x) {x*(abs(x)* exp(-0.5*abs(x))+(1/(1+(x-40)^4)))}
integrate(integrand, lower = -Inf, upper = Inf)
Thus the result is E(X) = 88.85766 with absolute error < 0.004.
The results I obtain from these methods are not similar. For instance:
(i) Through Simpson's method I got E(X) = 0.3222642 and E(X^3) = 677.0711.
simpson_v2 <- function(fun, a, b, n=100) {
  # numerical integral using Simpson's rule
  # assume a < b and n is an even positive integer
  if (a == -Inf & b == Inf) {
    f <- function(t) (fun((1-t)/t) + fun((t-1)/t))/t^2
    s <- simpson_v2(f, 0, 1, n)
  } else if (a == -Inf & b != Inf) {
    f <- function(t) fun(b-(1-t)/t)/t^2
    s <- simpson_v2(f, 0, 1, n)
  } else if (a != -Inf & b == Inf) {
    f <- function(t) fun(a+(1-t)/t)/t^2
    s <- simpson_v2(f, 0, 1, n)
  } else {
    h <- (b-a)/n
    x <- seq(a, b, by=h)
    y <- fun(x)
    y[is.nan(y)] = 0
    s <- y[1] + y[n+1] + 2*sum(y[seq(2,n,by=2)]) + 4*sum(y[seq(3,n-1, by=2)])
    s <- s*h/3
  }
  return(s)
}
EX <- function(x) x*(abs(x)* exp(-0.5*abs(x))+(1/(1+(x-40)^4)))
simpson_v2(EX, -Inf, Inf, n=100)
EX3 <- function(x) (x^3)*(abs(x)* exp(-0.5*abs(x))+(1/(1+(x-40)^4)))
simpson_v2(EX3, -Inf, Inf, n=100)
(ii) Importance Sampling
My proposal density is normal with mean 0 and standard deviation 4. A summary of the importance sampling process I am applying is as follows.
Suppose I can't sample from f(x), which is true as it has no well-known form and no built-in function is available in R to sample from it. So I propose another distribution with log-concave tails, N(0, sd = 4), to draw samples from, so that instead of estimating E(X) directly I estimate E[X*f(X)/g(X)], where g is the N(0, sd = 4) density. I use the following code, which takes 100000 samples from N(0, sd = 4):
X <- rnorm(1e5, sd=4)
Y <- X*(abs(X)*exp(-0.5*abs(X))+(1/(1+(X-40)^4)))/(dnorm(X, sd=4))
mean(Y)
Since this code relies on random sampling from the normal distribution, I get a slightly different answer each time, but it is around -0.1710694, which is roughly comparable to the 0.3222642 I got from Simpson's method. But these results are very different from the E(X) = 88.85766 from integrate(). Note that integrate() uses an adaptive quadrature method. Is this method different from Simpson's method and importance sampling? What similarity in results should I expect when comparing these methods?
First, the EX and EX3 definitions are wrong: you are missing a minus sign under the exponent.
Well, here are some simplifications:
If you integrate the part x*(abs(x)*exp(-0.5*abs(x))) from -infinity to infinity, the result is 0.
If you integrate the part x^3*(abs(x)*exp(-0.5*abs(x))) from -infinity to infinity, the result is 0.
The integral of x^3/(1+(x-40)^4) from -infinity to infinity would be infinite, I would venture; you get logarithms which are infinite at infinity, see http://integrals.wolfram.com/index.jsp?expr=%28xxx%29%2F%281+%2B+%28x-a%29%5E4%29&random=false
The integral of x/(1+(x-40)^4) looks like something resembling an inverse tangent, though the online integrator produces ugly output: http://integrals.wolfram.com/index.jsp?expr=x%2F%281+%2B+%28x-a%29%5E4%29&random=false
UPDATE
Looks like your EX would be 40*\pi / \sqrt{2}
And EX3 is not infinite; I might be wrong here.
UPDATE 2
Yep, EX3 is finite; it should be a^2*EX + 3*a*\pi/\sqrt{2}, where a is equal to 40.
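For reference, the algebra behind these values (my addition, not part of the original answer): substitute u = x - a, so that
Integral x/(1+(x-a)^4) dx = Integral (u + a)/(1+u^4) du = 0 + a*\pi/\sqrt{2}
Integral x^3/(1+(x-a)^4) dx = Integral (u^3 + 3*a*u^2 + 3*a^2*u + a^3)/(1+u^4) du = 3*a*\pi/\sqrt{2} + a^3*\pi/\sqrt{2}
using Integral u^2/(1+u^4) du = Integral 1/(1+u^4) du = \pi/\sqrt{2} over the whole real line, with the odd terms vanishing by symmetry (as principal values). With a = 40 and EX = a*\pi/\sqrt{2}, the second expression equals a^2*EX + 3*a*\pi/\sqrt{2} as stated.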
UPDATE 3
As noted, a normalization is also required to get the true values of EX and EX3:
N = 8 + \pi/\sqrt{2}
The computed integrals must be divided by N to get the proper moments.
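A quick numerical cross-check of these closed forms (my sketch, not the answerer's code; it reuses integrate() as in the question):
a <- 40
N <- 8 + pi/sqrt(2)                                  # normalizing constant, about 10.22
dens <- function(x) abs(x)*exp(-0.5*abs(x)) + 1/(1 + (x - a)^4)
integrate(dens, -Inf, Inf)$value                     # should agree with N
integrate(function(x) x * dens(x), -Inf, Inf)$value  # unnormalized EX, close to 40*pi/sqrt(2) ~ 88.86
(a*pi/sqrt(2)) / N                                   # normalized E(X), roughly 8.7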

'Non-conformable arguments' in R code [closed]

: ) I previously wrote an R function, "LeastSquaresDegreeN.R", that computes a least-squares polynomial of arbitrary order to fit whatever data I put into it. The code works, in the sense that I can reproduce results I got previously. However, when I try to put new data into it I get a "non-conformable arguments" error.
"Error in Conj(t(Q))%*%t(b) : non-conformable arguments"
An extremely simple example of data that should work:
t <- seq(1,100,1)
fifthDegree <- t^5
LeastSquaresDegreeN(t,fifthDegree,5)
This should output and plot a polynomial f(t) = t^5 (up to rounding errors).
However, I get the "non-conformable arguments" error even if I explicitly make these vectors:
t <- as.vector(t)
fifthDegree <- as.vector(fifthDegree)
LeastSquaresDegreeN(t,fifthDegree,5)
I've tried putting in the transpose of these vectors too - but nothing works.
Surely the solution is really simple. Help!? Thank you!
Here's the function:
LeastSquaresDegreeN <- function(t, b, deg)
{
  # Usage: t is independent variable vector, b is function data
  # i.e., b = f(t)
  # deg is desired polynomial order
  # deg <- deg + 1 is a little adjustment to make the R loops index correctly.
  deg <- deg + 1
  t <- t(t)
  dataSize <- length(b)
  A <- mat.or.vec(dataSize, deg)  # Built-in R function to create zero
                                  # matrix or zero vector of arbitrary size
  # Given basis phi(z) = 1 + z + z^2 + z^3 + ...
  # Define matrix A
  for (i in 0:deg-1) {
    A[1:dataSize, i+1] = t^i
  }
  # Compute QR decomposition of A. Pull Q and R out of QRdecomp
  QRdecomp <- qr(A)
  Q <- qr.Q(QRdecomp, complete=TRUE)
  R <- qr.R(QRdecomp, complete=TRUE)
  # Perform Q^* b^T (Conjugate transpose of Q)
  c <- Conj(t(Q)) %*% t(b)
  # Find x. R isn't square - so we have to use qr.solve
  x <- qr.solve(R, c)
  # Create xPlot (which is general enough to plot any degree
  # polynomial output)
  xPlot = x[1,1]
  for (i in 1:deg-1) {
    xPlot = xPlot + x[i+1,1]*t^i
  }
  # Now plot it. Least squares "l" plot first, then the points in red.
  plot(t, xPlot, type='l', xlab="independent variable t",
       ylab="function values f(t)",
       main="Data Plotted with Nth Degree Least Squares Polynomial", col="blue")
  points(t, b, col="red")
} # End
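As an aside (this is my own sketch, not an answer from the thread): the dimensions in the failing line point at the mismatch. With complete = TRUE, Q is dataSize x dataSize while t(b) is 1 x dataSize, so Conj(t(Q)) %*% t(b) cannot conform; passing b as a column (or as a plain vector) does. A small illustration, assuming dataSize = 100 and deg = 5 as in the example:
b <- seq(1, 100, 1)^5
Q <- qr.Q(qr(matrix(rnorm(100*6), 100, 6)), complete = TRUE)
dim(Conj(t(Q)))          # 100 x 100
dim(t(b))                # 1 x 100  -> non-conformable on the right of %*%
c1 <- Conj(t(Q)) %*% b   # a plain vector is treated as a 100 x 1 column, so this works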

Non-conformable arrays in R

y <- matrix(c(7, 9, -5, 0, 2, 6), ncol = 1)
try <- t(y)
tryy <- try %*% y
i <- solve(tryy)
h <- y %*% i %*% try
uniroot(as.vector(solve(((1-x) * diag(6)) + h)), c(-Inf, Inf))
Error in (1 - x) * diag(6) : non-conformable arrays
The purpose of the command uniroot(as.vector(solve(((1-x) * diag(6)) + h)), c(-Inf, Inf)) is to solve the characteristic equation det[(1-λ)I + h] = 0,
where λ = eigenvalues, I = identity matrix, and h = hat matrix = y(y'y)^(-1)y'.
Here λ is unknown; we have to solve for it.
I do not understand where the problem is. I have tried:
as.vector(solve(6*diag(6)+h))
This is not non-conformable, but why is it not working inside the uniroot function?
Your question is a bit confusing, so I have to make a couple of assumptions. If you want the eigenvalues of h, then the characteristic equation is:
det(h - I*λ) = 0
not
det[(1-λ)I+h] = 0
So I used the former.
Given the above, the short answer is: do it this way.
f <- function(lambda) det(h -lambda*diag(6))
F <- Vectorize(f)
library(rootSolve)
uniroot.all(F,c(-1000,1000),n=2000)
# [1] 0 1
# or, much more simply
eigen(h)$values
# [1] 1.000000e+00 2.220446e-16 0.000000e+00 -2.731318e-18 -6.876381e-18 -7.365903e-17
So h has 2 eigenvalues, 0 and 1. Note that the built-in function eigen(...) finds 6 roots, but 5 of them are within the machine tolerance of 0.
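(An aside, not part of the original answer: this is expected, since the hat matrix is idempotent, and eigenvalues of an idempotent matrix can only be 0 or 1.)
all.equal(h %*% h, h)  # h is a projection: h %*% h equals h, so this should be TRUE up to numerical tolerance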
The question about why your code fails is a bit more involved.
First, your code:
tryy <- try %*% y
is the dot product of y with itself (so, a scalar), returned as a matrix with one element. When you "invert" that using solve(...)
i <- solve(tryy)
you simply take the reciprocal, so i is also a matrix with 1 element. I'm not sure if this is what you had in mind.
Second, uniroot(...) does not work this way. The first argument must be a function; you've passed an expression which depends on x, which in turn is undefined. You could try:
f <- function(x) det(h-x*diag(6))
uniroot(f,c(-Inf,Inf))
but this wouldn't work either because (a) uniroot(...) works on a finite interval, (b) it requires that the function f(...) have different sign at the ends of the interval, and (c) in any event it would return only one root (the smaller one).
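To make point (b) concrete (my sketch, not part of the original answer): here the determinant is positive at both ends of any large interval, so uniroot() rejects it for lack of a sign change.
f <- function(x) det(h - x*diag(6))
f(-1000); f(1000)                # both positive: no sign change over the interval
try(uniroot(f, c(-1000, 1000)))  # fails because the end-point values have the same sign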
So you could use uniroot.all(...) in package rootSolve. uniroot.all(...) also requires a function as its first argument, but there's a twist: the function must be "vectorized". This means that if you pass a vector of lambda values, f(...) should return a vector of the same length. Fortunately in R there is an easy way to "vectorize" a given function, as in:
F <- Vectorize(f)
Even this has its limits. uniroot.all(...) also requires a finite interval, so we have to guess what that is, and it evaluates F on n sub-intervals. So if your interval does not contain all the roots, or if the sub-intervals are not small enough, you will not find all the roots.
Using the built-in eigen(...) function is definitely the best option.
