I want to use MATLAB's interp2 function in Julia.
I tried the GR module, but without success.
I'm currently using Julia 0.6.4.
Hope you can help.
Have a look at Interpolations.jl. The following example is equivalent to interp2:
A = rand(8,20)
knots = ([x^2 for x = 1:8], [0.2y for y = 1:20])  # grid coordinates along each dimension
itp = interpolate(knots, A, Gridded(Linear()))
itp(4,1.2)  # x = 4 is the 2nd knot and y = 1.2 the 6th, so approximately A[2,6]
http://juliamath.github.io/Interpolations.jl/latest/control/#Gridded-interpolation-1
You might also find the Dierckx.jl package helpful for 2-D spline fitting; see for instance Spline2D.
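A minimal sketch of the Dierckx route, assuming gridded data (the grid and the package's default cubic-spline settings are assumptions for illustration):
using Dierckx
x = collect(1.0:8.0)        # grid coordinates in the first dimension
y = collect(0.2:0.2:4.0)    # grid coordinates in the second dimension
z = rand(8, 20)             # values on the grid, size (length(x), length(y))
spl = Spline2D(x, y, z)     # fit a 2-D spline to the gridded data
evaluate(spl, 4.0, 1.2)     # interpolated value at (4.0, 1.2)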
I'm trying to call the R function garchFit from Julia using RCall. When I do things directly in R, all is well: the following works
library("fGarch")
library("rugarch")
spxData <- read.csv(file = 'SPXlogreturns.csv')
y = spxData$y
fit.arch <- garchFit(~garch(1,0),data=y,trace=F,include.mean=FALSE)
But when I have the same vector of log returns in Julia and try to do the same thing using RCall:
using RCall
@rput y
R"""
library("fGarch")
library("rugarch")
fit.arch <- garchFit(~garch(1,0),data=y,trace=F,include.mean=FALSE)
"""
I get the error Multivariate data inputs require lhs for the formula. Yet when I @rget y back from R, it's a vector, so I don't understand what garchFit wants. Any help much appreciated.
In case anyone googles this and has a similar problem: the answer is that you need to unlist. For no (at least to me) readily obvious reason, @rput creates a list in R, not a vector. So the answer is
using RCall
@rput y
R"""
library("fGarch")
library("rugarch")
yy <- unlist(y)
fit.arch <- garchFit(~garch(1,0),data=yy,trace=F,include.mean=FALSE)
"""
I want to convert my artificial neural network implementations to the new TensorFlow 2 platform, of which Keras is now an integral part (tf.keras). Are there any recommended sources that explain how to implement ANNs using TensorFlow 2/tf.keras within R?
Furthermore, why is there a separate keras package from F. Chollet available, when Keras is, as mentioned, now part of TensorFlow?
Sorry for such basic questions, but my own searches were unfortunately unsuccessful.
From the original TensorFlow documentation I extracted the following Python code:
input1 = keras.layers.Input(shape=(16,))
x1 = keras.layers.Dense(8, activation='relu')(input1)
input2 = keras.layers.Input(shape=(32,))
x2 = keras.layers.Dense(8, activation='relu')(input2)
added = keras.layers.add([x1, x2])
out = keras.layers.Dense(4)(added)
model = keras.models.Model(inputs=[input1, input2], outputs=out)
My own R conversion so far is
library(tensorflow)
k <- tf$keras
l <- k$layers
input1 <- k$layers$Input(shape = c(16,?))
x1 <- k$layers$Dense(units = 8, activation = "relu") (input1)
input2 <- k$layers$Input(shape = c(32,?))
x2 <- k$layers$Dense(units = 8, activation = "relu") (input2)
added <- k$layers$add(inputs = c(x1,x2))
My question is hopefully not too basic, but I'm having trouble translating a Python tuple (or scalar) into its R equivalent. So my question: how must the shape argument of the input layers be written in R?
I think the following page should answer your question: https://blogs.rstudio.com/ai/posts/2019-10-08-tf2-whatchanges/.
In essence, your code should stay the same if you are using Keras version 2.2.4.1 or above. For more details, refer to the linked post.
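Regarding the shape question itself: a minimal sketch, assuming the tensorflow R package and reticulate's usual conversions. The trailing comma in the Python (16,) just marks a one-element tuple, so list(16L) stands in for it and no placeholder is needed:
library(tensorflow)
k <- tf$keras

# (16,) in Python is a one-element tuple; list(16L) converts to it via reticulate
input1 <- k$layers$Input(shape = list(16L))
x1 <- k$layers$Dense(units = 8L, activation = "relu")(input1)
input2 <- k$layers$Input(shape = list(32L))
x2 <- k$layers$Dense(units = 8L, activation = "relu")(input2)
added <- k$layers$add(list(x1, x2))   # use list(), not c(), so the tensors are not coerced
out <- k$layers$Dense(units = 4L)(added)
model <- k$models$Model(inputs = list(input1, input2), outputs = out)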
I am trying to use activation functions other than the pre-implemented "logistic" and "tanh" in the R package neuralnet. Specifically, I would like to use rectified linear units (ReLU) f(x) = max{x,0}. Please see my code below.
I believe I can use custom functions if defined by (for example)
custom <- function(x) {x*2}
but if I use max(x,0) instead of x*2, then R tells me that 'max is not in the derivatives table', and the same for the '>' operator. So I am looking for a sensible workaround, as I'd think numerical differentiation of max wouldn't be an issue in this case.
nn <- neuralnet(
as.formula(paste("X",paste(names(Z[,2:10]), collapse="+"),sep="~")),
data=Z[,1:10], hidden=5, err.fct="sse",
act.fct="logistic", rep=1,
linear.output=TRUE)
Any ideas? I am a bit confused as I didn't think the neuralnet package would do analytical differentiation.
The internals of the neuralnet package will try to differentiate any function provided to act.fct. You can see the source code here.
At line 211 you will find the following code block:
if (is.function(act.fct)) {
    act.deriv.fct <- differentiate(act.fct)
    attr(act.fct, "type") <- "function"
}
The differentiate function is a more elaborate use of the deriv function, which you can also see in the source code linked above. Therefore, it is currently not possible to pass max(0,x) as act.fct: supporting it would require an exception in the code that recognizes the ReLU and knows its derivative. Getting the source, adding this in, and submitting it to the maintainers would be a great exercise (but that may be a bit much).
However, regarding a sensible workaround, you could use softplus function which is a smooth approximation of the ReLU. Your custom function would look like this:
custom <- function(x) {log(1+exp(x))}
You can view this approximation in R as well:
softplus <- function(x) log(1+exp(x))
relu <- function(x) sapply(x, function(z) max(0,z))
x <- seq(from=-5, to=5, by=0.1)
library(ggplot2)
library(reshape2)
fits <- data.frame(x=x, softplus = softplus(x), relu = relu(x))
long <- melt(fits, id.vars="x")
ggplot(data=long, aes(x=x, y=value, group=variable, colour=variable))+
geom_line(size=1) +
ggtitle("ReLU & Softplus") +
theme(plot.title = element_text(size = 26)) +
theme(legend.title = element_blank()) +
theme(legend.text = element_text(size = 18))
You can approximate the max function with a differentiable function, such as:
custom <- function(x) {x/(1+exp(-2*k*x))}
The parameter k determines the accuracy of the approximation: the larger k, the closer the fit to max(0, x).
Other approximations can be derived from the equations in the section "Analytic approximations" here: https://en.wikipedia.org/wiki/Heaviside_step_function
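A minimal sketch (k = 10 is an arbitrary illustrative choice, not from the answer above):
k <- 10                                        # larger k -> closer to max(0, x)
custom <- function(x) {x / (1 + exp(-2 * k * x))}

# quick check against the exact ReLU at a few points
x <- c(-2, -0.1, 0, 0.1, 2)
rbind(approx = custom(x), relu = pmax(0, x))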
A bit belated, but in case anyone else is still looking for an answer: here's how to use the exact, non-approximated ReLU function, by loading it from a package.
Note that while you could technically define the ReLU yourself (with max() or if (x < 0) etc.), this wouldn't work with the neuralnet package because it needs a differentiable function.
First, load the relu function from the sigmoid package, which is differentiable:
install.packages('sigmoid')
library(sigmoid)
relu(c(-1, 0, 2))  # returns 0 0 2; the function is now available
Second, insert it into your code:
nn <- neuralnet(
as.formula(paste("X",paste(names(Z[,2:10]), collapse="+"),sep="~")),
data=Z[,1:10],
hidden=5, err.fct="sse",
act.fct=relu,
rep=1,
linear.output=TRUE)
I found this solution in another post, but can't for the life of me remember which one, so credits to unknown.
The following program works perfectly under R 2.15.3 with the mgcv package:
foo<-c(0.08901294, 0.04221170, 0.01608613, 0.04389676, 0.04102295, 0.03552413, 0.06571099, 0.11004966, 0.08380553, 0.09181121, 0.07422538,
0.11494897, 0.18523257, 0.13809043, 0.13569868, 0.13433534, 0.16056145, 0.15559133, 0.22381149, 0.13998797, 0.02831030)
infant.gamfit<-gam(foo~s(c(1:21)), family=gaussian(link = "logit"))
But with R 3.1.1 and 3.1.2, it produces the following error:
Error in reformulate(pav) : 'termlabels' must be a character vector
of length at least one
This is an error I don't understand.
Of course the values in foo are just one example; I have the same problem with other values, and fixing k in the spline doesn't change anything.
This wouldn't be a problem if I didn't need to run it at scale on a supercomputer, where all available versions of R produce the same error...
(for the sake of the discussion, the R versions I tested on the supercomputer were:
R/2.15.3-foss-2014a-default;
R/2.15.3-foss-2014a-st;
R/2.15.3-intel-2014a-default;
R/3.0.2-foss-2014a-default)
So it's not a supercomputer problem, but rather a problem related to the use of mgcv in different versions of R.
I didn't find any answer on the internet.
Thank you in advance for your help.
Guillaume
It looks like recent versions of mgcv::gam can be a bit fragile when your predictor is an expression, as opposed to a named variable. This works:
x <- 1:21
gam(foo~s(x), family=gaussian(link = "logit"))
As does this:
x <- 1:21
gam(foo~s(x + 0), ...)
But this doesn't:
x <- rep(0, 21)
gam(foo~s(x + 1:21), ...)
In general, I'd suggest precomputing your predictors when using gam; applied to your example, the fix looks like the sketch below.
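A minimal sketch, keeping your family argument as given:
library(mgcv)
x <- 1:21
infant.gamfit <- gam(foo ~ s(x), family = gaussian(link = "logit"))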
PS. Gaussian family with logit link isn't very sensible, but that's another issue.
I want to do a cross-validation for the ca20 dataset from the geoR package. With, for example, the meuse dataset this works fine, but for this dataset I encounter a strange problem with the dimensions of the SpatialPointsDataFrame. Maybe you can try this yourself and explain why the autoKrige.cv function does not work (I tried several nfold values, but that only changes the locations value of the error message...):
library(geoR)
library(gstat)
library(automap)
data(ca20)
east=ca20$coords[,1]
north=ca20$coords[,2]
concentration=ca20$data
frame=data.frame(east,north)
data=data.frame(concentration)
points<-SpatialPoints(data.frame(east,north),proj4string=CRS(as.character(NA)))
pointsframe<-SpatialPointsDataFrame(points,data, coords.nrs = numeric(0),proj4string = CRS(as.character(NA)), match.ID = TRUE)
krig=autoKrige(pointsframe$concentration~1,pointsframe)
plot(krig)
cv=autoKrige.cv(pointsframe$concentration~1,pointsframe)
I hope someone can reproduce the problem. My R version is 2.15, and all packages are up to date (at least no older than a month or so...).
Thanks for your help!!
First, your SpatialPointsDataFrame can be built more easily:
library(geoR)
library(gstat)
library(automap)
...and build the SPDF:
pointsframe = data.frame(ca20$coords)
pointsframe$concentration = ca20$data
coordinates(pointsframe) = c("east", "north")
The problem is in how you use the formula argument: by writing pointsframe$concentration you put the vector itself into the formula. You should just use the column name, like this:
cv=autoKrige.cv(concentration~1,pointsframe)
and it works:
> summary(cv)
                   [,1]
mean_error     -0.01134
me_mean      -0.0002237
MAE                6.02
MSE               60.87
MSNE              1.076
cor_obspred      0.7081
cor_predres     0.01343
RMSE              7.802
RMSE_sd          0.7041
URMSE             7.802
iqr               9.519