Does R have something similar to TransformedDistribution in Mathematica?

I have a random variable X and a transformation f and I would like to know the probability distribution function of f(X), at least approximately. In Mathematica there is TransformedDistribution, but I could not find something similar in R. As I said, some kind of approximative solution would be fine, too.

You can check the distr package. For instance, say that y = x^2 + 2*x + 1, where x is normally distributed with mean 2 and standard deviation 5. You can do:
require(distr)
x <- Norm(2, 5)
y <- x^2 + 2*x + 1
# y@r gives random samples. We make a histogram.
hist(y@r(10000))
# y@d and y@p are the density and the cumulative distribution functions
y@d(80)
# [1] 0.002452403
y@p(80)
# [1] 0.8891796
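If an approximation is all you need, plain Monte Carlo simulation in base R also works and requires no extra package. A minimal sketch with the same transformation (sample size and seed are arbitrary):
set.seed(42)
x <- rnorm(1e5, mean = 2, sd = 5)  # samples of X
y <- x^2 + 2*x + 1                 # samples of f(X)
plot(density(y))                   # kernel estimate of the density of f(X)
mean(y <= 80)                      # approximates P(f(X) <= 80); compare y@p(80) above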

Related

Computing ECDF of a data for parameter estimation using weighted nonlinear least square in R

I am writing code to estimate the parameters of a GPD using the weighted nonlinear least squares (WNLS) method.
The WNLS method consists of two steps.
Step 1: $(\hat{\xi}_1, \hat{b}_1) = \arg\min_{(\xi,b)} \sum_{i=1}^{n} [\log(1-F_n(x_i)) - \log(1-G_{\xi,b}(x_i))]^2$,
where $F_n$ is the ECDF and $G_{\xi,b}$ is the CDF of the generalized Pareto distribution.
Can anyone let me know how to calculate the ECDF $F_n$ for data "X" in R?
Will ecdf(X)(X) calculate the ECDF? If so, what is the need for ecdf(X) other than plotting? It would also be really helpful if someone could share example code that involves calculating the ECDF for data.
The ecdf call creates a function. That is, you can apply ecdf(X) to other data, as your ecdf(X)(X) call does. However, you might want to apply ecdf(X) to something other than X itself. If you want to know the empirical quantiles to which three numbers a, b, and c_ correspond, an easy way to do that is to call ecdf(X)(c(a, b, c_)).
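A minimal sketch (the data and the three numbers are invented for illustration):
set.seed(1)
X <- rexp(100)             # example data
Fn <- ecdf(X)              # ecdf() returns a (step) function
Fn(X)                      # identical to ecdf(X)(X)
a <- 0.5; b <- 1; c_ <- 2
Fn(c(a, b, c_))            # empirical quantiles of three arbitrary numbers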

Finding the intersection of two curves in a scatterplot (here: pvalues vs test-statistics)

I do:
library(Hmisc)
df <- as.matrix(replicate(20, rnorm(20)))
cor.df <- rcorr(df)
plot(cor.df$r,cor.df$P)
abline(h=0.05)
and I would like to know whether R can compute the meeting point of the horizontal line and the bell-shaped curve. Since I have a scatterplot, do I need to model the x,y-curve first and then set the two functions equal? Or can R do that graphically?
What I actually want to know is the threshold for (uncorrected) p-values that indicates a significant test statistic for a given dataset. I am not a trained statistician, so excuse me if this is a basic question.
Thank you very much!
There is no function to graphically calculate an intersection. There are functions like uniroot that you can use in R to find intersections, but you need proper functions and a good idea of the interval in which the intersection occurs.
It would be best to properly model the curve in question, but a simple way to approximate a function when you have a bunch of points on the curve is to use linear interpolation between the observed points. You can create a function for your points with approxfun:
f1 <- approxfun(cor.df$r,cor.df$P, rule=2)
(Again, a proper model would be better, but just for the sake of example, I'll continue with this function.)
Now we can find the place where this curve crosses 0.05 with
uniroot(function(x) f1(x)-.05, c(-1,-.001))$root
# [1] -0.4437796
uniroot(function(x) f1(x)-.05, c(.001, 1))$root
# [1] 0.4440005

How to show the value of the AUC from geom_density/stat_density

I have produced some density plots using ggplot2 and stat_density. My colleague mentioned he wasn't convinced that the area under each curve would sum to 1. So, I set out to calculate the area under the curve, and I am wondering if there might be a better approach than what I did.
Here is an example of what I did:
library(ggplot2)
data(iris)
p <- ggplot(iris, aes(x = Petal.Length)) +
  stat_density(aes(colour = Species), geom = "line", position = "identity")
q <- ggplot_build(p)$data[[1]]
# calculate the interval between density estimates at successive points;
# assume it is the same interval for all estimates
interval <- q$x[2] - q$x[1]
# calculate the AUC by summing interval * height of the density estimate at each point
tapply(q$density * interval, q$group, sum)
The result:
1 2 3
0.9913514 1.0009785 0.9817040
It seems to work decently, but I wonder if there is a better way of doing this. In particular, my calculation of the interval (i.e. dx, I suppose) seems like it could be a problem, especially if the different density curves use different intervals.
Your way is already good.
Another way to do it is the trapezoidal rule, using trapz from the pracma package:
library(pracma)
data <- cbind(q$x, q$y)
by(data, q$group, FUN = function(x) trapz(x[, 1], x[, 2]))
The results are nearly the same:
INDICES: 1
[1] 0.9903457
INDICES: 2
[1] 1.000978
INDICES: 3
[1] 0.9811152
This is because, with the grid spacing needed to make the graph of the densities look reasonable (interval in your code), the Riemann sum is already very close to the value of the actual integral.

analytical derivative of splinefun()

I'm trying to fit a natural cubic spline to probabilistic data (probabilities that a random variable is smaller than certain values) to obtain a cumulative distribution function, which works well enough using splinefun():
cutoffs <- c(-90,-60,-30,0,30,60,90,120)
probs <- c(0,0,0.05,0.25,0.5,0.75,0.9,1)
CDF.spline <- splinefun(cutoffs,probs, method="natural")
plot(cutoffs,probs)
curve(CDF.spline(x), add=TRUE, col=2, n=1001)
I would then, however, like to use the density function, i.e. the derivative of the spline, to perform various calculations (e.g. to obtain the expected value of the random variable).
Is there any way of obtaining this derivative as a function rather than just evaluated at a discrete number of points via splinefun(x, deriv=1)?
This is pretty close to what I'm looking for, but alas the example doesn't seem to work in R version 2.15.0.
Barring an analytical solution, what's the cleanest numerical way of going about this?
If you change the environment assignment line for g in the code that Berwin Turlach provided on R-help to this:
environment(g) <- environment(f)
... you succeed in R 2.15.1.
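If evaluating the derivative at arbitrary points is all you need, note that the function returned by splinefun() accepts a deriv argument, so a one-line wrapper already acts as a density function. A sketch building on the example above:
pdf.spline <- function(x) CDF.spline(x, deriv = 1)
curve(pdf.spline(x), from = -90, to = 120, n = 1001)
# expected value of the random variable via numerical integration
integrate(function(x) x * pdf.spline(x), lower = -90, upper = 120)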

Is there an R library that estimates a multivariate natural cubic spline (or similar) function?

note: originally posted on Cross Validated (stats SE) on 07-26-2011, with no correct answers to date.
Background
I have a model, f, where Y=f(X)
X is an n x m matrix of samples from m parameters and Y is the n x 1 vector of model outputs.
f is computationally intensive, so I would like to approximate f using a multivariate cubic spline through (X,Y) points, so that I can evaluate Y at a larger number of points.
Question
Is there an R function that will calculate an arbitrary relationship between X and Y?
Specifically, I am looking for a multivariate version of the splinefun function, which generates a spline function for the univariate case.
e.g. this is how splinefun works for the univariate case
x <- 1:100
y <- runif(100)
foo <- splinefun(x,y, method = "monoH.FC")
foo(x) #returns y, as example
The test that the function interpolates exactly through the points is successful:
all(y == foo(1:100))
## TRUE
What I have tried
I have reviewed the mda package, and it seems that the following should work:
library(mda)
x <- data.frame(a = 1:100, b = 1:100/2, c = 1:100*2)
y <- runif(100)
foo <- mars(x,y)
predict(foo, x) #all the same value
however the function does not interpolate exactly through the design points:
all(y == predict(foo,x))
## FALSE
I also could not find a way to implement a cubic spline in either the gam, marss, or earth packages.
Actually, several packages can do it. The one I use is the rms package, which has rcs; the survival package also has pspline, and the splines package has the ns function. "Natural splines" (constructed with ns) are also cubic splines. You will need to form a multivariate fitting function with the '*' operator in the model formula, creating "crossed" spline terms, as in the sketch below.
Note also that the example you offered was not sufficiently rich.
I guess I am confused that you want exact fits. R is a statistical package, and approximate estimation is the goal. Generally, exact fits are more of a problem because they lead to multicollinearity.
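A minimal sketch of such crossed terms with rcs from rms (the data frame, knot counts, and prediction point are invented for illustration):
library(rms)
d <- data.frame(a = runif(100), b = runif(100), y = rnorm(100))
# '*' crosses the two restricted-cubic-spline bases
fit <- ols(y ~ rcs(a, 4) * rcs(b, 4), data = d)
predict(fit, newdata = data.frame(a = 0.5, b = 0.5))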
Have a look at the DiceKriging package, which was developed to undertake tasks like this.
http://cran.r-project.org/web/packages/DiceKriging/index.html
I've provided an example application at
https://stats.stackexchange.com/questions/13510/fitting-multivariate-natural-cubic-spline/65012#65012
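For reference, the basic pattern looks roughly like this (the toy model and design are invented; the linked answer has a fuller example):
library(DiceKriging)
X <- data.frame(matrix(runif(40), ncol = 2))   # n x m design, here m = 2
Y <- apply(X, 1, function(r) sum(r^2))         # stand-in for the expensive model f
fit <- km(design = X, response = Y)
Xnew <- data.frame(matrix(runif(10), ncol = 2))
names(Xnew) <- names(X)                        # predictor names must match
predict(fit, newdata = Xnew, type = "UK")$mean
With no nugget term, the kriging mean interpolates the design points exactly, which matches the exact-fit requirement in the question.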
I'm not sure if this is precisely what you are looking for, but you could try Tps() in the R package fields. It is meant for thin-plate spline interpolation (a 2D analogue of cubic splines) for spatial data, but it will take up to four covariates, although it will expect them to be Euclidean x, y, z + time, so you need to be clear that you are selecting the correct options for your particular case. If you want to interpolate, set the smoothing parameter lambda to zero. You might also try the function polymars() in the R package polspline.
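A minimal sketch of the Tps() suggestion (toy data; as noted, lambda = 0 forces exact interpolation):
library(fields)
X <- matrix(runif(60), ncol = 3)   # design with three covariates
Y <- apply(X, 1, prod)             # stand-in for the model output
fit <- Tps(X, Y, lambda = 0)       # lambda = 0 -> interpolate exactly
yhat <- predict(fit, x = X)        # reproduces Y at the design points (up to numerical error)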
