Sorry if this is basic, but I am a newbie. I will be making lots of curves, so some advice would be useful for me.
I have a function which I want to plot:
f <- function(x) sum(4*sin(x*seq(1,21,2))/(pi*seq(1,21,2)))
using
curve(f, -pi, pi, n=100)
Unfortunately, this does not work for me. Please advise.
Thanks
Your function isn't vectorized: at the moment it takes a single scalar input and returns a single value. curve expects to be able to feed in a vector of the x values it wants to plot and to receive a vector of response values back. The easiest fix is to use Vectorize to convert your function into one that accepts vector inputs.
f2 <- Vectorize(f)
curve(f2, -pi, pi, n = 100)
However, you might just want to write a vectorized version of the function directly.
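For instance, a directly vectorized version might look like this (a sketch; f_vec is just an illustrative name, not from the original post):
# Build a (harmonic x point) grid with outer(), then sum over the harmonics
# for every x value at once.
f_vec <- function(x) {
  k <- seq(1, 21, 2)
  colSums(4 * sin(outer(k, x)) / (pi * k))
}
curve(f_vec, -pi, pi, n = 100)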
Suppose I have a matrix such as:
x <- matrix(rnorm(1000000), nrow = 500)
How can I track or show a progress bar for a single function call such as:
dist(x)
I tried the pbapply package with pbapply::pblapply(x, dist), but it seems to calculate the distance for each value rather than for the whole matrix.
I found that it can be done with:
pbapply::pblapply(list(x), dist)[[1]]
But this might not be the best way.
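For what it's worth, here is a small sketch of why wrapping the matrix in list() works: pblapply() iterates over the elements of its first argument, so a bare matrix gets split into individual values, whereas list(x) makes dist() see the whole matrix once (x_small below is just a small illustrative matrix):
x_small <- matrix(rnorm(10000), nrow = 50)
d <- pbapply::pblapply(list(x_small), dist)[[1]]     # one iteration, one progress bar step
all.equal(as.vector(d), as.vector(dist(x_small)))    # TRUE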
I found some posts and discussions about the above, but I'm not sure... could someone please check if I am doing anything wrong?
I have a set of N points of the form (x,y,z). The x and y coordinates are independent variables that I choose, and z is the output of a rather complicated (and of course non-analytical) function that uses x and y as input.
My aim is to find a set of values of (x,y) where z=z0.
I looked up this kind of problem in R-related forums, and it appears that I need to interpolate the points first, perhaps using a package like akima or fields.
However, it is less clear to me: 1) whether that is necessary, or whether the base R functions that do the same are good enough; 2) how I should use the interpolated surface to generate a correct matrix of the desired (x, y, z=z0) points.
E.g. this post seems somewhat related to the problem I am describing, but it looks extremely complicated to me, so I am wondering whether my simpler approach is correct.
Please see below some example code (not the original one, as I said the generating function for z is very complicated).
I would appreciate it if you could comment on whether this approach is correct, or suggest a better one if applicable.
df <- merge(data.frame(x=seq(0,50,by=5)),data.frame(y=seq(0,12,by=1)),all=TRUE)
df["z"] <- (df$y)*(df$x)^2
ta <- xtabs(z~x+y,df)
contour(ta,nlevels=20)
contour(ta,levels=c(1000))
#why are the x and y axes [0,1] instead of showing the original values?
#and how accurate is the algorithm that draws the contour?
li2 <- as.data.frame(contourLines(ta,levels=c(1000)))
#this extracts the contour data, but all (x,y) values are wrong
require(akima)
s <- interp(df$x,df$y,df$z)
contour(s,levels=c(1000))
li <- as.data.frame(contourLines(s,levels=c(1000)))
#at least now the axis values are in the right range; but are they correct?
require(fields)
image.plot(s)
#fancier, but same problem - are the values correct? better than the akima ones?
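One note on the [0,1] axes (an observation, not from the original post): when contour() receives only a matrix, it assumes the x and y coordinates run over [0, 1]. Passing the original grid values restores the real coordinates; a sketch using the ta table defined above (xs, ys and li3 are illustrative names):
xs <- sort(unique(df$x))
ys <- sort(unique(df$y))
contour(xs, ys, ta, levels = 1000)               # axes now show the original x and y values
li3 <- contourLines(xs, ys, ta, levels = 1000)   # list of contour segments with correct coordinates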
I feel stupid asking, but what is the intent of R's crossprod function with respect to vector inputs? I wanted to calculate the cross-product of two vectors in Euclidean space and mistakenly tried using crossprod.
The magnitude of the vector cross-product is |N| = |A|*|B|*sin(theta), where theta is the angle between the two vectors (the direction of N is perpendicular to the A-B plane). In two dimensions its scalar value can be calculated as N = Ax*By - Ay*Bx.
base::crossprod clearly does not do this calculation, and in fact produces the vector dot-product of the two inputs sum(Ax*Bx, Ay*By).
So, I can easily write my own vectorxprod(A,B) function, but I can't figure out what crossprod is doing in general.
See also R - Compute Cross Product of Vectors (Physics)
According to the R help page, crossprod(X, Y) = t(X) %*% Y, implemented more efficiently than writing out the expression itself. It is a function of two matrices, and if you pass two vectors it corresponds to the dot product. @Hong-Ooi's comment explains why it is called crossprod.
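A quick illustrative check (the variable names here are just for demonstration):
A <- c(1, 2, 3)
B <- c(4, 5, 6)
crossprod(A, B)                       # 1x1 matrix containing 32, the dot product
sum(A * B)                            # 32
X <- matrix(rnorm(12), nrow = 4)
all.equal(crossprod(X), t(X) %*% X)   # TRUE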
Here is a short code snippet which works whenever the cross product makes sense: the 3D version returns a vector and the 2D version returns a scalar. If you just want simple code that gives the right answer without pulling in an external library, this is all you need.
# Compute the vector cross product between x and y, and return the components
# indexed by i.
CrossProduct3D <- function(x, y, i = 1:3) {
  # Project inputs into 3D, since the cross product only makes sense in 3D.
  To3D <- function(x) head(c(x, rep(0, 3)), 3)
  x <- To3D(x)
  y <- To3D(y)
  # Indices should be treated cyclically (i.e., index 4 is "really" index 1, and
  # so on). Index3D() lets us do that using R's convention of 1-based (rather
  # than 0-based) arrays.
  Index3D <- function(i) (i - 1) %% 3 + 1
  # The i'th component of the cross product is:
  # (x[i + 1] * y[i + 2]) - (x[i + 2] * y[i + 1])
  # as long as we treat the indices cyclically.
  return(x[Index3D(i + 1)] * y[Index3D(i + 2)] -
         x[Index3D(i + 2)] * y[Index3D(i + 1)])
}
CrossProduct2D <- function(x, y) CrossProduct3D(x, y, i=3)
Does it work?
Let's check a random example I found online:
> CrossProduct3D(c(3, -3, 1), c(4, 9, 2)) == c(-15, -2, 39)
[1] TRUE TRUE TRUE
Looks pretty good!
Why is this better than previous answers?
It's 3D (Carl's was 2D-only).
It's simple and idiomatic.
Nicely commented and formatted, and hence easy to understand.
The downside is that the number '3' is hardcoded several times. Actually, this isn't such a bad thing, since it highlights the fact that the vector cross product is purely a 3D construct. Personally, I'd recommend ditching cross products entirely and learning Geometric Algebra instead. :)
The help page ?crossprod explains it quite clearly. Take linear regression, for example: for a model y = XB + e you want X'X, the product of the transpose of X with X. A simple call will suffice: crossprod(X) is the same as crossprod(X, X), which is the same as t(X) %*% X. crossprod can also be used to find the dot product of two vectors.
In response to @Bryan Hanson's request, here's some quick-and-dirty code to calculate the vector cross product for two vectors in the plane. It's a bit messier to calculate the general 3-space vector cross product, or to extend to N-space. If you need those, you'll have to go to Wikipedia :-).
crossvec <- function(x, y) {
  if (length(x) != 2 || length(y) != 2) stop('bad vectors')
  cv <- x[1] * y[2] - x[2] * y[1]
  return(invisible(cv))
}
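Note that crossvec() returns its value invisibly, so wrap the call in print() (or assign the result) to see it; a quick example:
print(crossvec(c(1, 2), c(3, 4)))   # 1*4 - 2*3 = -2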
Here is a minimalistic implementation for 3D vectors:
vector.cross <- function(a, b) {
  if (length(a) != 3 || length(b) != 3) {
    stop("Cross product is only defined for 3D vectors.")
  }
  i1 <- c(2, 3, 1)
  i2 <- c(3, 1, 2)
  return(a[i1] * b[i2] - a[i2] * b[i1])
}
If you want to get the scalar "cross product" of 2D vectors u and v, you can do
vector.cross(c(u,0),c(v,0))[3]
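For example, the 2D scalar cross product of u = (2, 5) and v = (1, 3) would be:
u <- c(2, 5)
v <- c(1, 3)
vector.cross(c(u, 0), c(v, 0))[3]   # 2*3 - 5*1 = 1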
There is a useful math operations package named pracma (https://rdrr.io/rforge/pracma/api/ or CRAN https://cran.r-project.org/web/packages/pracma/index.html).
It is easy to use and quick. The cross product is given by pracma::cross(x, y) for any two 3D vectors.
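For example (assuming pracma is installed), this reproduces the check from the earlier answer:
library(pracma)
cross(c(3, -3, 1), c(4, 9, 2))
# [1] -15  -2  39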
I would like to know if there is any function that will give the local maxima of a matrix on a plane.
I found one solution from
Given a 2D numeric "height map" matrix in R, how can I find all local maxima?
but it seems there is a mistake, because this line
localmax <- focal(r, fun = f, pad=TRUE, padValue=NA)
Error in focal(r, fun = f, pad = TRUE, padValue = NA) :
argument "w" is missing
I'm not sure how to contact the person who gave that solution, so I'm just posting it here.
Regards
Aftar
Personally, I'd dump your matrix into ImageJ to do this.
As another option, you might port this Matlab code http://www.mathworks.com/matlabcentral/fileexchange/37388-fast-2d-peak-finder . That module does some smoothing to improve the chance of finding "real" peaks in an image. IMHO local maxima only have meaning if the surface is smooth in the mathematical sense, i.e. everywhere differentiable.
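If you want to stay with the raster-based approach from the linked answer, the error simply means focal() is missing its weights matrix w, which defines the moving window. A rough sketch (assuming the raster package; the 3x3 window and variable names are illustrative, not taken from the original answer):
library(raster)
r <- raster(matrix(rnorm(1000000), nrow = 500))
# With a 3x3 window of ones, fun receives 9 values; the centre cell is index 5.
f <- function(v) as.integer(v[5] == max(v, na.rm = TRUE))
localmax <- focal(r, w = matrix(1, 3, 3), fun = f, pad = TRUE, padValue = NA)
peaks <- xyFromCell(localmax, which(values(localmax) == 1))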
I'm trying to arcsine-square-root transform data lying on [-1, 1]. Using transf.arcsine from the metafor package produces NaNs when trying to take the square root of the negative data points. Conceptually, I want to use arcsin(sgn(x)√|x|), i.e. take the square root of the absolute value, reapply the original sign, then arcsine-transform the result. The trouble is I have no idea how to begin doing this in R. Help would be appreciated.
x <- seq(-1, 1, length = 20)
asin(sign(x) * sqrt(abs(x)))
or as a function
trans.arcsine <- function(x) {
  asin(sign(x) * sqrt(abs(x)))
}
trans.arcsine(x)
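As a quick sanity check, the transform maps [-1, 1] onto [-pi/2, pi/2] and is odd-symmetric:
y <- trans.arcsine(x)
range(y)                           # -1.570796  1.570796
all.equal(y, -trans.arcsine(-x))   # TRUE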
Help in R is just help() or help.search(). So, let's try the obvious,
> help(arcsin)
No documentation for ‘arcsin’ in specified packages and libraries:
OK, that's not good. But R must be able to do trig... let's try something even simpler.
help(sin)
There are all the trig functions. And I note there's a link to Math on the page; clicking that seems to provide all of the functions you need. It turns out that I could have just typed:
help(Math)
also,
help.search('trigonometry')
I had a similar problem. I wanted to arcsine-transform most of the dataset "logmeantd.ascvr" and approached it in this manner:
First, make sure the data range has been transformed to between -1 and 1 (in this case the values were expressed as percentages):
logmeantd.ascvr[1:12] <- logmeantd.ascvr[1:12] * 0.01
Next, apply the square root function, sqrt():
logmeantd.ascvr[1:12] <- sqrt(logmeantd.ascvr[1:12])
Lastly, apply the arcsine function, asin():
logmeantd.ascvr[1:12] <- asin(logmeantd.ascvr[1:12])
Note: in this instance I had excluded the MEAN variable of my dataset, because I wanted to apply the log function, log(), to it:
logmeantd.ascvr$MEAN <- log(logmeantd.ascvr$MEAN)