This is the equation a <- x * t - 2 * x, and I want to solve it for t.
So basically, set a = 0 and solve for t. I am new to the R packages for solving equations, and I need one that handles complex roots: the original equations I am working with have real and imaginary roots. I am looking for an algebraic solution only, not a numerical one.
I tried:
a <- x * t - 2 * x
solve(a,t)
I run into an error:
Error in solve.default(a, t) : 'a' (1000 x 1) must be square
You can use Ryacas to get the solution as an expression of x:
library(Ryacas)
x <- Sym("x")
t <- Sym("t")
Solve(x*t-2*x == 0, t)
# Yacas vector:
# [1] t == 2 * x/x
As you can see, the solution is t=2 (assuming x is not zero).
Let's try a less trivial example:
Solve(x*t-2*x == 1, t)
# Yacas vector:
# [1] t == (2 * x + 1)/x
If you want to get a function which provides the solution as a function of x, you can do:
solution <- Solve(x*t-2*x == 1, t)
f <- function(x){}
body(f) <- yacas(paste0("t Where ", solution))$text
f
# function (x)
# (2 * x + 1)/x
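A quick sanity check of the generated function; by hand, (2*2 + 1)/2 = 2.5:
f(2)
# [1] 2.5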
You might be looking for optimize:
a <- function(x, t) x * t - 2 * x
optimize(a, lower = -100, upper = 100, t = 10)  # minimize over x with t fixed
optimize(a, lower = -100, upper = 100, x = 2)   # minimize over t with x fixed
If you need more help, please provide a reproducible example.
For a class assignment, I need to create a function that calculates n Choose k. I did just that, and it works fine with small numbers (e.g. 6 choose 2), but I'm supposed to get it work with 200 choose 50, where it naturally doesn't.
The answer is too large and R outputs NaN or Inf, saying:
> q5(200, 50)
[1] "NaN"
Warning message:
In factorial(n) : value out of range in 'gammafn'
I tried using logs and exponents, but it doesn't cut it.
q5 <- function (n, k) {
  answer <- log(exp( factorial(n) / ( (factorial(k)) * (factorial(n - k)) )))
  paste0(answer)
}
The answer to the actual question is that R cannot show numbers it cannot represent, and some of the terms in your expression are too big to represent in double precision (which tops out around 1.8e308), so it fails. However, there are approximations to the factorial that work with logarithms, which grow much more slowly.
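You can see this directly: the numerator alone already overflows, while its logarithm is perfectly manageable:
factorial(200)
# [1] Inf  (plus a warning: value out of range in 'gammafn')
lfactorial(200)  # base R's log-factorial
# [1] 863.232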
The most famous one, Stirling's approximation, was not accurate enough, but Ramanujan's approximation came to the rescue :)
# Ramanujan's approximation to log(n!)
ramanujan <- function(n){
  n*log(n) - n + log(n*(1 + 4*n*(1+2*n)))/6 + log(pi)/2
}
# naive version: overflows for large n
nchoosek <- function(n,k){
  factorial(n)/(factorial(k)*factorial(n-k))
}
# log-domain version built on the approximation
bignchoosek <- function(n,k){
  exp(ramanujan(n) - ramanujan(k) - ramanujan(n-k))
}
nchoosek(20,5)
# [1] 15504
bignchoosek(20,5)
# [1] 15504.06
bignchoosek(200,50)
# [1] 4.538584e+47
You can try this too:
q5 <- function (n, k) {
  # nchoosek = (n-k+1)(n-k+2)...n / (1*2*...*k)
  return(prod(sapply(1:k, function(i) (n-k+i)/i)))
}
q5(200, 50)
#[1] 4.538584e+47
Or, in the log domain:
q5 <- function (n, k) {
  # ln(nchoosek) = ln(n-k+1) + ln(n-k+2) + ... + ln(n) - ln(1) - ln(2) - ... - ln(k)
  return(exp(sum(sapply(1:k, function(i) log(n-k+i) - log(i)))))
}
q5(200, 50)
#[1] 4.538584e+47
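As an aside, base R already exposes this log-domain trick through lchoose():
exp(lchoose(200, 50))
#[1] 4.538584e+47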
Some packages for large numbers:
Brobdingnag package for "Very large numbers in R":
https://cran.r-project.org/web/packages/Brobdingnag/index.html
Paper: https://www.researchgate.net/publication/251996764_Very_large_numbers_in_R_Introducing_package_Brobdingnag
library(Brobdingnag)
googol <- as.brob(10)^100 # googol:=10^100
googol
# [1] +exp(230.26) # exponential notation is convenient for very large numbers
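Sticking with the original problem, here is a sketch of n choose k in brob arithmetic (the name brob_choose is mine; brob() constructs a number from its logarithm):
# n choose k via sums of logs, wrapped in a brob
brob_choose <- function(n, k) {
  brob(sum(log((n - k + 1):n)) - sum(log(1:k)))
}
brob_choose(200, 50)
# [1] +exp(109.73)
as.numeric(brob_choose(200, 50))
# [1] 4.538584e+47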
gmp package for Multiple Precision Arithmetic (big integers and rationals, prime-number tests, matrix computation):
https://cran.r-project.org/web/packages/gmp/index.html
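For this particular problem, gmp can even return the binomial coefficient exactly; a minimal sketch, assuming the package is installed:
library(gmp)
chooseZ(200, 50)  # exact big-integer result, a 48-digit number (~4.538584e+47)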
This solution calculates the complete row of Pascal's triangle:
x <- 1
print(x)
for (i in 1:200) { x <- c(0, x) + c(x, 0); print(x) }
x[51] ### 200 choose 50
## > x[51]
## [1] 4.538584e+47
(as I proposed for How would you program Pascal's triangle in R?)
If you want to speed up the code, drop the print(x) (output is a relatively slow operation).
To put the code in a function, we can do:
nchoosek <- function(n,k) {
  x <- 1
  for (i in 1:n) x <- c(0, x) + c(x, 0)
  x[k+1] ### n choose k
}
nchoosek(200, 50) ### testing the function
## [1] 4.538584e+47
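As a quick sanity check against the small case from the question (6 choose 2 is 15 by hand):
nchoosek(6, 2)
## [1] 15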
Here is a more refined version of my function:
nchoosek <- function(n, k) {
  if (k==0) return(1)
  if (k+k > n) k <- n-k   # use the symmetry C(n,k) == C(n,n-k)
  if (k==0) return(1)     # catches the original k == n
  x <- 1
  for (i in 1:k) x <- c(0, x) + c(x, 0)           # build row k of the triangle
  for (i in 1:(n-k)) x <- x + c(0, head(x, -1))   # advance to row n, keeping only k+1 entries
  tail(x, 1)
}
nchoosek(200, 50) ### testing the function
## [1] 4.538584e+47
I was wondering whether there are any packages or other pre-built solutions for R that can translate sets of linear equations into matrix form (e.g. for solution via the Gauss-Seidel algorithm), similar to the equationsToMatrix(eqns,vars) function in Matlab?
An example from Matlab:
[A, b] = equationsToMatrix([x - y == 0, x + 2*y == 3], [x, y])
A =
[ 1, -1]
[ 1, 2]
b =
0
3
Suggestions for building blocks would be very helpful, too.
1) This is not exactly what you are asking for, but maybe it will help anyway:
library(Ryacas)
x <- Sym("x")
y <- Sym("y")
Simplify(Solve(List(x - y == 0, x + 2*y == 3), List(x, y)))
giving:
expression(list(list(x - y == 0, y - 1 == 0)))
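That is, y = 1 and, substituting back into x - y == 0, x = 1.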
2) If we know these are linear equations exactly of the form shown in the question, then try this. The two strapply calls match the regular expression against the components of args, capture the strings matched by the parenthesized portions of the regular expression, and call the function given as the third argument with those captured strings as arguments. We combine the strapply outputs using rbind.fill and replace any NAs it generates with zero.
library(gsubfn) # strapply
library(plyr) # rbind.fill
eqn <- function(...) {
  args <- c(...)
  x2num <- function(x, y) {  # determine coefficient value as a numeric
    z <- gsub(" ", "", x)
    setNames(if (z == "-") -1 else if (z == "") 1 else as.numeric(z), y)
  }
  lhs <- strapply(args, "(-? *\\d*)[ *]*([a-z])", x2num)
  lhs <- do.call(rbind.fill, lapply(lhs, function(x) as.data.frame(t(x))))
  lhs <- as.matrix(lhs)
  lhs[] <- ifelse(is.na(lhs), 0, lhs)
  list(lhs = lhs, rhs = strapply(args, "== *(\\d)", as.numeric, simplify = TRUE))
}
# test it out
eqn("x - y == 0", "2*y == 3")
giving:
$lhs
x y
[1,] 1 -1
[2,] 0 2
$rhs
[1] 0 3
Update: generalized so that not every variable needs to appear in each equation, and variables can appear in different orders in different equations.
Can't tell from your example whether you just want simple linear equation solving or a more general system solver. If the latter, take a look at the BB and nleqslv packages.
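For instance, a minimal nleqslv sketch for the example system above (the function and starting values are my own):
library(nleqslv)
# F(p) = 0 encodes x - y = 0 and x + 2*y = 3
fn <- function(p) c(p[1] - p[2],
                    p[1] + 2 * p[2] - 3)
nleqslv(c(0, 0), fn)$x
# [1] 1 1
For a purely linear system, of course, base R's solve(A, b) does the job once you have the matrix form.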
You might also be interested in a "wrapper" tool written by some seriously twisted mind :-) , in the package ktsolve . This last tool lets you set up an arbitrary set of equations and back-solve for any desired set of variables.
How can I integrate over a PCHIP (Piecewise Cubic Hermite Interpolating Polynomial) function in R? pchip {pracma} returns interpolated point data, and to integrate we of course need a function. The help page for pchip() says, "TODO: A `pchipfun' should be provided." I don't know how hard this would be to write manually. Any other suggestions? You could fit an nth-degree polynomial regression to the interpolated points and integrate off that to get a rough approximation, but that gets messy pretty quickly...
Here's the source code for pchip {pracma}, which returns points and not a function. I suppose returning a function is more of a math question than an R question, but I'm open to any and all suggestions! Please!
function (xi, yi, x)
{
    h <- diff(xi)
    delta <- diff(yi)/h
    d <- .pchipslopes(h, delta)
    n <- length(xi)
    a <- (3 * delta - 2 * d[1:(n - 1)] - d[2:n])/h
    b <- (d[1:(n - 1)] - 2 * delta + d[2:n])/h^2
    k <- rep(1, length(x))
    for (j in 2:(n - 1)) {
        k[xi[j] <= x] <- j
    }
    s <- x - xi[k]
    v <- yi[k] + s * (d[k] + s * (a[k] + s * b[k]))
    return(v)
}
Thanks!
What exactly is not working for you? You can define such a function yourself, like this:
> library(pracma)
> xs <- linspace(0, pi, 10)
> ys <- sin(xs)
> pchipfun <- function(xi, yi) function(x) pchip(xi, yi, x)
> f <- pchipfun(xs, ys)
> integrate(f, 0, pi)
2.000749 with absolute error < 0.00017
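(The exact value of the integral of sin on [0, pi] is 2, so the PCHIP interpolant is accurate to about 7.5e-4 here.)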
I have updated pracma 1.7.2 on R-Forge to include pchipfun()
and added some error checking to pchip().
I'm trying to use R to estimate E[u(X)] where u is a utility function and X is a random variable. More specifically, I want to be able to rank E[u(X)] and E[u(Y)] for two random variables X and Y -- only the ranking matters.
My problem is that u(x) = -exp(-sigma * x) for some sigma > 0, and this converges very rapidly to zero. So I have many cases where I expect, say, E[u(X)] > E[u(Y)], but because they are so close to zero, my simulation cannot distinguish them.
Does anyone have any advice for me?
I am only interested in ranking the two expected utilities, so u(x) can be replaced by any u.tilde(x) = a * u(x) + b, where a > 0 and b can be any number.
Below is an example where X and Y are both normal (in which case I think there is a closed form solution, but pretend X and Y have complicated distributions that I can only simulate from).
get.u <- function(sigma=1) {
  stopifnot(sigma > 0)
  utility <- function(x) {
    return(-exp(-sigma * x))
  }
  return(utility)
}
u <- get.u(sigma=1)
curve(u, from=0, to=10) # Converges very rapidly to zero
n <- 10^4
x <- rnorm(n, 10^4, sd=10)
y <- rnorm(n, 10^4, sd=10^3)
mean(u(x)) == mean(u(y)) # TRUE (both underflow to 0), but I expect E[u(X)] > E[u(Y)]
## An example of replacing u with a*u + b
get.scaled.u <- function(sigma=1) {
  stopifnot(sigma > 0) # Risk averse
  utility <- function(x) {
    return(-exp(-sigma * x + sigma * 10^4))
  }
  return(utility)
}
u <- get.scaled.u(sigma=1)
mean(u(x)) > mean(u(y)) # TRUE, as desired
x <- rnorm(n, 10^4, sd=10^3)
y <- rnorm(n, 10^4, sd=2*10^3)
mean(u(x)) > mean(u(y)) # FALSE again -- both means are -Inf
Is finding a clever way to scale u the correct way to deal with this problem? For example, suppose X and Y both have bounded support -- if I know the bounds, how can I scale u to guarantee that a*u + b will be neither too close to -Inf, nor too close to zero?
Edit: I didn't know about multiple precision packages. Rmpfr is helpful:
library(Rmpfr)
x.precise <- mpfr(x, 100)
y.precise <- mpfr(y, 100)
mean(u(x.precise)) > mean(u(y.precise)) # TRUE
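(mpfr(x, 100) stores each value with a 100-bit mantissa and a far wider exponent range than a double, so exp(-sigma * x) no longer underflows to zero.)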
Not for the first time, I suspect the answer is quite simple, but searching for R solutions is regularly hard work, and after two hours it is probably time to ask someone...
I am working with a non-linear formula (this is only a first version; it will actually become non-linear soon), and to test my initial values I would like to simply calculate its values over a series of x values.
Here is some code:
x <- c(1,2,3,4,5,6,7,8,9,10,11,12) #etc
y <- c(NA,332,248,234,84,56,26,24,27,33,37,25) #etc
# This is my formula I shall soon expand
fEst <- y ~ 1 / (x / a + 1) * b
# Initial value
a <- 800
# Initial value based on inverted formula and second measure
b <- y[2] * (x[2] / a + 1)
# Can i use my formula fEst to do this step?
p <- 1 / (x / a + 1) * b
The point is that I am still working on the formula, and it seems strange to make each change twice...
What I found was the package nls2, where something like this was possible, and a function apply.a.formula, which seems to be an element of another package. But as this is a very basic use of a function, I guess base R already has the appropriate functions. Just... where?
Thanks!
I came across this thread whilst looking up the avenues you'd tried and the solution posted by Gabor. Note that apply.a.formula() is a made-up function name that the OP in that thread was hoping to find a real function for.
Using the example that Gabor provided in the thread this is a solution using the nls2 package:
## your data
x <- c(1,2,3,4,5,6,7,8,9,10,11,12) #etc
y <- c(NA,332,248,234,84,56,26,24,27,33,37,25) #etc
# This is my formula I shall soon expand
fEst <- y ~ 1 / (x / a + 1) * b
# Initial value
a <- 800
# Initial value based on inverted formula and second measure
b <- y[2] * (x[2] / a + 1)
## install.packages("nls2", depend = TRUE) if not installed
require(nls2)
fitted(nls2(fEst, start = c(a = a, b = b), alg = "brute"))
The last line gives:
R> fitted(nls2(fEst, start = c(a = a, b = b), alg = "brute"))
[1] 332.4145 332.0000 331.5866 331.1741 330.7627 330.3524 329.9430 329.5347
[9] 329.1273 328.7210 328.3157 327.9113
attr(,"label")
[1] "Fitted values"
which is essentially the same as 1 / (x / a + 1) * b would give:
R> 1 / (x / a + 1) * b
[1] 332.4145 332.0000 331.5866 331.1741 330.7627 330.3524 329.9430 329.5347
[9] 329.1273 328.7210 328.3157 327.9113
From the comments, Carl Witthoft notes that if you want to generalise equations like 1 / (x / a + 1) * b, then a function is a useful way of encapsulating the operation without typing out 1 / (x / a + 1) * b every time. For example:
myeqn <- function(a, b, x) { 1 / (x / a + 1) * b }
R> myeqn(a, b, x)
[1] 332.4145 332.0000 331.5866 331.1741 330.7627 330.3524 329.9430 329.5347
[9] 329.1273 328.7210 328.3157 327.9113