Calculate Taylor series using rSymPy

I have been trying out the R interface rSymPy to the CAS SymPy and it works quite well. However, I cannot find the correct syntax for using some of the more complex features, such as finding a Taylor series.
For example, I have tried the following:
library(rSymPy)
sympy("var('p')")
#
##### Cannot make this work ???
#
sympy("from sympy.mpmath import *")
xt <- sympy("p=taylor(exp, 0, 10)")
But it throws the error:
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl, :
SyntaxError: ("no viable alternative at input '='", ('<string>', 1, 8, '__Rsympy= from sympy.mpmath import *\n'))
Any help appreciated.

There does not appear to be an explicit taylor function available (the import fails because rSymPy prefixes every command with an assignment, as the __Rsympy= in the error message shows), but SymPy's series function is available. The following code works:
library(rSymPy)
sympy("var('p')")
sympy("var('x')") # or sympy("x = Symbol('x', real=True)")
#
xt <- sympy("p=series(exp(x), x, 0, 10)") # expand about 0 to 10th order
which gives the answer:
[1] "1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120 + x**6/720 + x**7/5040 + x**8/40320 + x**9/362880 + O(x**10)"
We can check this answer by modifying the code to:
library(rSymPy)
sympy("var('p')")
sympy("var('x')") # or sympy("x = Symbol('x', real=True)")
#
xt <- sympy("p=series(exp(x), x, 0, 10)") # expand about 0 to 10th order
# Remove order information
xt0 <- sympy("p.removeO()")
# Test results
x <- 1/3
T1 <- eval(parse(text=xt0)) # Evaluate the result, xt0
T2 <- exp(x) # The correct value
print(T1-T2) # Print the error
Finally, the error from the series expansion is:
[1] -4.811929e-12
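If you want to evaluate the truncated series at several points, one option (a small sketch building on the xt0 string above) is to wrap the parsed expression in an R function:
# xt0 holds the series with the O() term removed, as a string
taylorExp <- function(x) eval(parse(text = xt0))
taylorExp(1/3) - exp(1/3) # reproduces the small error above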
I hope this is helpful to anyone else wishing to use the R package rSymPy.

Related

addTA - Error in naCheck(x, n) : Series contains non-leading NAs

I recently tried to create my own technical indicator, a simple golden-cross indicator: the 50-day EMA minus the 200-day EMA, to be added to my chartSeries chart. This worked fine with the code below at first, but it stopped working after the quantmod package was updated.
Code (stock data is downloaded through the getSymbols function in quantmod)
# 50-day EMA minus 200-day EMA technical indicator, price and volume
newEMA <- function(x){
  removeNA(EMA(p[,6], n=50) - EMA(p[,6], n=200))
}
emaTA <- newTA(newEMA)
emaTA(col='lightgoldenrod3', 'Price')
Then it gives me this error message:
Error in naCheck(x, n) : Series contains non-leading NAs
Does anyone know how to remove these non-leading NAs?
You can use na.omit, and there is no need to convert to an xts object, as that is the default.
library(quantmod)
getSymbols("VELO.CO")
p <- na.omit(VELO.CO)
newEMA <- function(x) {
  # note: this uses the cleaned global object p rather than the x that newTA passes in
  EMA(p[, 6], n = 20) - EMA(p[, 6], n = 50)
}
emaTA <- newTA(newEMA)
barChart(VELO.CO)
emaTA(col = "lightgoldenrod3", "Price")
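If you want to confirm that the series really does contain non-leading NAs before dropping them, a quick check is:
which(!complete.cases(VELO.CO)) # row indices of the offending NAs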
I'm not familiar with the quantmod package, but I played around with your code and I think I found a working solution:
library("quantmod")
getSymbols("VELO.CO")
p <- as.xts(c(VELO.CO))
# remove incomplete cases
vec <- which(!complete.cases(p)) # rows 2305 2398
p2 <- p[-vec, ]
newEMA <- function(x) {
  EMA(p2[, 6], n = 20) - EMA(p2[, 6], n = 50)
}
emaTA <- newTA(newEMA)
barChart(VELO.CO)
emaTA(col = "lightgoldenrod3", "Price")

R - numerical errors with analytical gradient?

I've got the following code:
theta=0.05
n=1000
m=200
r=rnorm(2000)
#ER check function
nu=Vectorize(function(a,tau){return(abs(tau-(a<0))*a^2)})
#Selecting 10 lowest sum values (lowest10 function returns indices)
lowest10=function(x){
  values=sort(x)[1:min(10,length(x))]
  indices=match(values,x)
  return(indices)
}
sym.expectile=function(beta,e,abs.r){return(beta[1]+beta[2]*e+beta[3]*abs.r)}
ERsum=function(beta,tau,start,end){
  y=r[(start+1):end]
  X1=rep(1,n-1)
  X3=abs(r[start:(end-1)])
  X2=c()
  X2[1]=e.sym.optimal[start-m]
  for (i in 2:(n-1)){
    X2[i]=sym.expectile(beta,X2[i-1],X3[i-1])
  }
  X=matrix(c(X1,X2,X3),ncol=3)
  res=y-X%*%beta
  sum.nu=mean(nu(res,tau))
  return(sum.nu)
}
ERsum.gr=function(beta,tau,start,end){
  y=r[(start+1):end]
  X1=rep(1,n-1)
  X3=abs(r[start:(end-1)])
  X2=c()
  X2[1]=e.sym.optimal[start-m]
  for (i in 2:(n-1)){
    X2[i]=sym.expectile(beta,X2[i-1],X3[i-1])
  }
  X=matrix(c(X1,X2,X3),ncol=3)
  partial.beta0=c()
  for (i in 1:(n-1)){partial.beta0[i]=-(1-beta[2]^(i))/(1-beta[2])}
  gr.beta0=2/T*sum(abs(tau-(y<X%*%beta))*(y-X%*%beta)*partial.beta0)/1000
  partial.beta1=c()
  partial.beta1[1]=-X2[1]
  for (i in 2:(n-1)){partial.beta1[i]=partial.beta1[i-1]*beta[2]-X2[i]}
  gr.beta1=2/T*sum(abs(tau-(y<X%*%beta))*(y-X%*%beta)*partial.beta1)/1000
  partial.beta2=c()
  partial.beta2[1]=-X3[1]
  for (i in 2:(n-1)){partial.beta2[i]=partial.beta2[i-1]*beta[2]-X3[i]}
  gr.beta2=2/T*sum(abs(tau-(y<X%*%beta))*(y-X%*%beta)*partial.beta2)/1000
  c(gr.beta0,gr.beta1,gr.beta2)
}
beta=matrix(nrow=1e4,ncol=3)
beta[,1]=runif(1e4,-1,0)#beta0
beta[,2]=runif(1e4,0,1)#beta1
beta[,3]=runif(1e4,-1,0)#beta2
e.sym.optimal=c()
tau.found.sym.optim=0.02234724
library('expectreg')
e.sym.optimal[1]=expectile(r[1:m],tau.found.sym.optim)
ERsums.sym=c()
for (i in 1:nrow(beta)){
  ERsums.sym[i]=ERsum(beta[i,],tau.found.sym.optim,m+1,m+n)
}
initialbeta.esym=beta[lowest10(ERsums.sym),]
intermedietebeta.esym=matrix(ncol=3,nrow=10)
for (i in 1:10){
  intermedietebeta.esym[i,]=optim(initialbeta.esym[i,],ERsum,
                                  gr=ERsum.gr,tau=tau.found.sym.optim,
                                  start=m+1,end=m+n,
                                  method="BFGS")$par
}
I tried to replace the optim function with optimx, but got the following error:
Error: Gradient function might be wrong - check it!
To check whether my gradient is OK, I tried to evaluate it both with the grad function from numDeriv and by calling my ERsum.gr function directly. For the sample vector
beta
[1] -0.8256490 0.7146256 -0.4945032
I obtained the following results:
>grad(function(beta) ERsum(c(beta[1],beta[2],beta[3]),tau.found.sym.optim,m+1,m+n),beta)
[1] -0.6703170 2.8812666 -0.5573101
> ERsum.gr(beta,tau.found.sym.optim,m+1,m+n)
[1] -0.6696467 2.8783853 -0.5567527
So here is my question: is it possible that these differences are just numerical errors from rounding in partial.beta0, partial.beta1 and partial.beta2, the components of the sum representing the gradient? I think so, because if my analytical formula for the gradient missed something, the discrepancies would probably be much larger, but how can I be sure? If this is the case, is there any other way to obtain more accurate values of the gradient?
Even if you settle the question of whether that really is a proper gradient (which I see as too complex to tackle here), you've got further problems down the line. If you take out the gr argument and run optimx instead of optim, you get:
Error in intermedietebeta.esym[i, ] <- optimx(initialbeta.esym[i, ], ERsum, :
number of items to replace is not a multiple of replacement length
This probably relates to the fact that optimx does not return the same structure as is returned by optim:
> optimx(initialbeta.esym[i,],ERsum,
+ tau=tau.found.sym.optim,
+ start=m+1,end=m+n,
+ method="BFGS")$par
NULL
> optimx(initialbeta.esym[i,],ERsum,
+ tau=tau.found.sym.optim,
+ start=m+1,end=m+n,
+ method="BFGS") # leave out `$par`
p1 p2 p3 value fevals gevals niter convcode kkt1 kkt2 xtimes
BFGS -1.0325 0.2978319 0.04921863 0.09326904 102 100 NA 1 TRUE FALSE 3.366
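Since optimx returns a data frame rather than optim's list, you extract results differently; a minimal sketch (assuming a reasonably recent optimx, which ships a coef method):
res <- optimx(initialbeta.esym[i,], ERsum,
              tau=tau.found.sym.optim,
              start=m+1, end=m+n,
              method="BFGS")
coef(res)  # parameter estimates, one row per method
res$value  # objective value at the optimum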
If you disagree with the decision to fall back on a numerical gradient estimate, then you need to narrow your debugging down to the function that throws the error:
Error: Gradient function might be wrong - check it!
> traceback()
3: stop("Gradient function might be wrong - check it! \n", call. = FALSE)
2: optimx.check(par, optcfg$ufn, optcfg$ugr, optcfg$uhess, lower,
upper, hessian, optcfg$ctrl, have.bounds = optcfg$have.bounds,
usenumDeriv = optcfg$usenumDeriv, ...)
1: optimx(initialbeta.esym[i, ], ERsum, gr = ERsum.gr, tau = tau.found.sym.optim,
start = m + 1, end = m + n, method = "BFGS")
Then look at the code for optimx:::optimx.check (there is no help page for it). This is the section of code that does the checking:
if (!is.null(ugr) && !usenumDeriv) {
gname <- deparse(substitute(ugr))
if (ctrl$trace > 0)
cat("Analytic gradient from function ", gname,
"\n\n")
fval <- ufn(par, ...)
gn <- grad(func = ufn, x = par, ...)
ga <- ugr(par, ...)
teps <- (.Machine$double.eps)^(1/3)
if (max(abs(gn - ga))/(1 + abs(fval)) >= teps) {
stop("Gradient function might be wrong - check it! \n",
call. = FALSE)
optchk$grbad <- TRUE
}
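For reference, the same check can be reproduced by hand; a minimal sketch, assuming ERsum, ERsum.gr and the data from the question are in the workspace:
library(numDeriv)
b <- c(-0.8256490, 0.7146256, -0.4945032)
fval <- ERsum(b, tau.found.sym.optim, m+1, m+n)
gn <- grad(function(b) ERsum(b, tau.found.sym.optim, m+1, m+n), b)
ga <- ERsum.gr(b, tau.found.sym.optim, m+1, m+n)
teps <- .Machine$double.eps^(1/3)    # about 6e-6
max(abs(gn - ga))/(1 + abs(fval))    # must be below teps to pass
With the discrepancies quoted in the question (around 3e-3), that ratio is far above teps, which is why optimx rejects the gradient even though the two sets of values look superficially close.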

Convert ashape3d class to mesh3d

Can somebody help me convert an 'ashape3d' class object to class 'mesh3d'?
In ashape3d, the triangle and tetrahedron faces are stored in different fields. As I don't think there is a function that can create a mesh3d object from triangles and tetrahedrons simultaneously, I tried the following (pseudocode):
model <- ashape3d(rtorus(1000, 0.5, 2),alpha=0.25)
vert <- model$x[model$vert[,2]==1,]
vert <- cbind(vert,rep(1,nrow(vert)))
tria <- model$triang[model$triang[,4]==1,1:3]
tetr <- model$tetra[model$tetra[,6]==1,1:4]
m3dTria <- tmesh3d(vertices=vert , indices=tria)
m3dTetr <- qmesh3d(vertices=vert , indices=tetr)
m3d <- mergeMeshes(m3dTria,m3dTetr)
plot.ashape3d(model) # works fine
plot3d(m3d) # Error in x$vb[1, x$it] : subscript out of bounds
Does anybody have a better way?
I needed to do this recently and found this unanswered question. The easiest way to figure out what is going on is to look at plot.ashape3d and read the docs for ashape3d. plot.ashape3d only plots triangles.
The rgl package has a generic as.mesh3d function. The following defines a method for that generic:
as.mesh3d.ashape3d <- function(x, ...) {
if (length(x$alpha) > 1)
stop("I don't know how to handle ashape3d objects with >1 alpha value")
iAlpha = 1
# from help for ashape3d
# for each alpha, a value (0, 1, 2 or 3) indicating, respectively, that the
# triangle is not in the alpha-shape or it is interior, regular or singular
# (columns 9 to last)
# Pick the rows for which the triangle is regular or singular
selrows = x$triang[, 8 + iAlpha] >= 2
tr <- x$triang[selrows, c("tr1", "tr2", "tr3")]
rgl::tmesh3d(
vertices = t(x$x),
indices = t(tr),
homogeneous = FALSE
)
}
You can try it out on the data above
library(alphashape3d)
library(rgl)
model <- ashape3d(rtorus(1000, 0.5, 2), alpha = 0.25)
plot(model, edges = FALSE, vertices = FALSE)
model2 <- as.mesh3d(model)
open3d()
shade3d(model2, col = "red")
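As a quick sanity check (a sketch using the model and model2 objects above), the number of triangles in the mesh should equal the number of regular and singular triangles in the alpha shape:
ncol(model2$it)             # triangle count in the mesh3d object
sum(model$triang[, 9] >= 2) # regular/singular triangles for the first alpha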

Solve simple equation in R

I have a probably really basic question about solving equations in R, but knowing the answer would really help me understand R better.
I have the following equation:
0 = -100/(1+r) + (100-50)/(1+r)^2 + (100-50)/(1+r)^3 + ... + (100-50)/(1+r)^10
How can I solve this equation for r in R?
I tried something like this:
n <- c(2:10)
0 = -100/(r+1)+sum((100-50)/((1+r)^n))
But I got an error message:
Error in 0 = -100/(r + 1) + sum((100 - 50)/((1 + r)^n)) :
invalid (do_set) left-hand side to assignment
What's the problem and how can I find r?
There are plenty of optimization and root-finding libraries for R. But in base R:
fnToFindRoot = function(r) {
  n <- c(2:10)
  return(abs(-100/(r+1)+sum((100-50)/((1+r)^n))))
}
# arbitrary starting values
r0 = 0
# minimise the function to get the parameter estimates
rootSearch = optim(r0, fnToFindRoot,method = 'BFGS', hessian=TRUE)
str(rootSearch)
fnToFindRoot(rootSearch$par)
That function is very volatile. If you are willing to bracket the root, you are probably better off with uniroot:
fnToFindRoot = function(r, a) {
  n <- c(2:10)
  return((-100/(r+1)+sum((100-50)/((1+r)^n)))-a)
}
str(xmin <- uniroot(fnToFindRoot, c(-1E6, 1E6), tol = 0.0001, a = 0))
The a argument is there so you can look for a root to any arbitrary value.
Try bisection. This converges to r = 0.4858343 in 25 iterations:
library(pracma)
bisect(function(r) -100/(1+r) + sum(50/(r+1)^seq(2, 10)), 0, 1)
giving:
$root
[1] 0.4858343
$f.root
[1] 8.377009e-07
$iter
[1] 25
$estim.prec
[1] 1.490116e-08
Let x = 1/(1+r), so your equation becomes:
-100x + 50x^2 + 50x^3 + ... + 50x^10 = 0
then in R:
x <- polyroot(c(0, -100, rep(50, 9)))
(r <- 1/x - 1)
Here is the answer:
[1] Inf+ NaNi 0.4858344-0.0000000i -1.7964189-0.2778635i
[4] -0.3397136+0.6409961i -0.3397136-0.6409961i -1.4553556-0.7216708i
[7] -0.9014291+0.8702213i -0.9014291-0.8702213i -1.7964189+0.2778635i
[10] -1.4553556+0.7216708i
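Only the real, finite entry is meaningful here; a small sketch to extract it from the vector above:
ok <- is.finite(r) & abs(Im(r)) < 1e-6 # numerically real, finite roots
Re(r[ok]) # 0.4858344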

Using Beta.Select function in R (prior estimate)

I am trying to formulate priors using total counts and a beta distribution.
I have written the following:
quantile(df$row, probs=c(0.00001, 0.5, 0.99999))
quantile1 <- list(p=0.5, x=8)
quantile2 <- list(p=0.99999, x=10)
quantile3 <- list(p=0.00001, x=1)
library("LearnBayes")
findBeta <- function(quantile1, quantile2, quantile3) {
  quantile1_p <- quantile1[[1]]; quantile1_q <- quantile1[[2]]
  quantile2_p <- quantile2[[1]]; quantile2_q <- quantile2[[2]]
  quantile3_p <- quantile3[[1]]; quantile3_q <- quantile3[[2]]
  # ... (rest of the function omitted in the question)
}
priorA <- beta.select(list(p=0.5, x=8), list(p=0.99999, x=10))
and when I try to calculate priorA using the beta.select function I get the following error:
Error in if (p0 < p) m.hi = m0 else m.lo = m0 :
missing value where TRUE/FALSE needed
In addition: Warning message:
In pbeta(x, K * m0, K * (1 - m0)) : NaNs produced
I just can't get rid of the error and do not know how to approach it any more. Urgently need help.
I am guessing (completely out of thin air) that you are dealing with percentages, in which case you want to use x/100:
beta.select(list(p=0.5, x=.08), list(p=0.9, x=.10))
# [1] 28.02 318.74
Either way, while it would be nice of beta.select to throw a more appropriate error message (or rather, to have an error check in there), the root of the issue is that your x values are out of bounds. (As @Didzis noted, a beta distribution is supported on the interval [0, 1].)
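You can sanity-check the fitted shape parameters by pushing them back through qbeta; the requested quantiles should come back, at least approximately:
# shape parameters returned by the beta.select call above
qbeta(c(0.5, 0.9), 28.02, 318.74) # roughly 0.08 and 0.10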
