Rotating spatial coordinates omits other data - r

I have a data.frame of spatial data (X, Y) with a third column of attribute data (Z) assigned to each coordinate. I need to rotate the spatial coordinates around a central point and then analyse them with the attribute still attached. I'm able to rotate the coordinates, but I can't seem to carry the third column through the rotation process; i.e. I have to remove the third column from the data.frame to perform the spatial manipulation.
Here's the sample data
And here's the code I have thus far
data <- read.csv("Sample.csv", header = TRUE)
head(data)
# Omit the attribute data
a <- data[-3]
angle <- pi/8
M <- matrix(c(cos(angle), -sin(angle), sin(angle), cos(angle)), 2, 2)
# FIRST ROTATION
b <- as.matrix(a) %*% M
colnames(b) <- c("X", "Y")
b <- as.data.frame(b)
# SECOND ROTATION
c <- as.matrix(b) %*% M
colnames(c) <- c("X", "Y")
c <- as.data.frame(c)
# THIRD ROTATION
d <- as.matrix(c) %*% M
colnames(d) <- c("X", "Y")
d <- as.data.frame(d)
par(mfrow = c(2, 2))
par(mar = c(2, 2, 2, 2))
plot(a, col = 'red', pch = 19, asp = 1)
plot(b, col = 'blue', pch = 19, asp = 1)
plot(c, col = 'green', pch = 19, asp = 1)
plot(d, col = 'purple', pch = 19, asp = 1)
What I need to be able to do is manipulate the spatial data in the way I have done above, but also carry across the original attribute assigned to each point coordinate (Z).
I've tried
M <- matrix(c(cos(angle), -sin(angle), sin(angle), cos(angle), 1, 1), 2, 3)
b <- as.matrix(data) %*% M
but get the following error
Error in as.matrix(a) %*% M : non-conformable arguments
I'm stuck. Can anyone help?

Oh, I figured it out. It was so obvious: just add the extra column back in after the manipulation.
b$Z <- data$Z
c$Z <- data$Z
d$Z <- data$Z
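If you'd rather not re-attach the column after every rotation, a small helper can rotate the coordinates about their centroid (or any other central point) while leaving Z untouched. This is only a sketch, assuming the columns are named X, Y and Z as above:
rotate_xy <- function(df, angle, center = colMeans(df[c("X", "Y")])) {
  M <- matrix(c(cos(angle), -sin(angle), sin(angle), cos(angle)), 2, 2)
  xy <- sweep(as.matrix(df[c("X", "Y")]), 2, center)      # move center to the origin
  out <- as.data.frame(sweep(xy %*% M, 2, center, "+"))   # rotate, then move back
  names(out) <- c("X", "Y")
  out$Z <- df$Z                                           # the attribute rides along untouched
  out
}
b <- rotate_xy(data, pi/8)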

Related

Subset 3D matrix using polygon coordinates

I'm working on some bioacoustical analysis and got stuck on an issue that I believe can be worked out mathematically. I'll use a sound sample from the seewave package:
library(seewave)
library(tuneR)
data(tico)
By storing a spectrogram (i.e. a graphic representation of the sound wave tico) in an R object, we can now deal with the wave file computationally.
s <- spectro(tico, plot=F)
class(s)
# [1] "list"
length(s)
# [1] 3
The object s consists of two numeric vectors, x = s$time and y = s$freq, representing the X and Y axes respectively, and a matrix z = s$amp of amplitude values whose dimensions match x and y. z is effectively a 3D surface that can be plotted using persp3D (plot3D), plot_ly (plotly) or plot3d (rgl). Alternatively, seewave can plot the wave file in 3D as an interactive rgl plot:
spectro3D(tico)
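A quick structural check clarifies the layout (hedged: in seewave the amplitude matrix is typically frequency-by-time, which is why t(s$amp) is passed to contourLines below):
str(s$time)   # numeric vector: the time axis (X)
str(s$freq)   # numeric vector: the frequency axis (Y)
dim(s$amp)    # amplitude matrix; rows follow s$freq, columns follow s$time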
That being said, the analysis I'm conducting aims to calculate contours of relative amplitude:
con <- contourLines(x=s$time, y=s$freq, z=t(s$amp), levels=seq(-25, -25, 1))
Select the longest contour:
n.con <- numeric(length(con))
for(i in 1:length(con)) n.con[i] <- length(con[[i]]$x)
n.max <- which.max(n.con)
con.max <- con[[n.max]]
And then plot the selected contour against the spectrogram of tico:
spectro(tico, grid=F, osc=F, scale=F)
polygon(x=con.max$x, y=con.max$y, lwd=2)
Now comes the tricky part. I must find a way to "subset" the matrix of amplitude values s$amp using the coordinates of the longest contour con.max. What I aim to achieve is a new matrix containing only the amplitude values inside the polygon. The remaining parts of the spectrogram should then appear as blank spaces.
One approach I thought could work would be a loop that replaces every value outside the polygon with a given amplitude value (e.g. -25 dB). I once used a similar approach to remove the values below -30 dB and it worked out perfectly:
for (i in 1:length(s$amp)) {
  if (s$amp[i] == -Inf | s$amp[i] <= -30) s$amp[i] <- -30
}
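For what it's worth, the same clamping can be written without a loop, using logical indexing on the matrix:
s$amp[s$amp == -Inf | s$amp <= -30] <- -30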
Another thought would be to create a new matrix with the same dimensions as s$amp, subset s$amp using the coordinates of the contour, and then insert the subset into the new matrix. Roughly:
mt <- matrix(-30, nrow=nrow(s$amp), ncol = ncol(s$amp))
sb <- s$amp[con.max$y, con.max$x]
new.mt <- c(mt, sb)
s$amp <- new.mt
I'll appreciate any help.
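One way to realize the masking idea sketched above is a point-in-polygon test over the grid of (time, frequency) pairs. This is only a sketch: it assumes the sp package and the frequency-by-time orientation of s$amp implied by the t(s$amp) call earlier, and it blanks everything outside the contour with NA:
library(sp)   # for point.in.polygon()
# Grid of (time, freq) pairs laid out like t(s$amp):
grd <- expand.grid(time = s$time, freq = s$freq)
inside <- point.in.polygon(grd$time, grd$freq, con.max$x, con.max$y) > 0
mask <- matrix(inside, nrow = length(s$time))
amp.t <- t(s$amp)    # rows = time, cols = freq, as used for contourLines()
amp.t[!mask] <- NA   # blank out everything outside the contour
s$amp <- t(amp.t)    # back to the original orientation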

Find correct 2D translation of a subset of coordinates

I have a problem I wish to solve in R with example data below. I know this must have been solved many times but I have not been able to find a solution that works for me in R.
The core of what I want to do is to find how to translate a set of 2D coordinates to best fit into another, larger set of 2D coordinates. Imagine for example having a Polaroid photo of a small piece of the starry sky with you out at night, and you want to hold it up in a position so it matches the stars' current positions.
Here is how to generate data similar to my real problem:
# create reference points (the "starry sky")
set.seed(99)
ref_coords = data.frame(x = runif(50, 0, 100), y = runif(50, 0, 100))
# take a subset of the coordinates to serve as the points we are
# looking for ("the Polaroid")
my_coords_final = ref_coords[c(5, 12, 15, 24, 31, 34, 48, 49), ]
# add a little bit of variation compared to the reference points
# (the data should be very similar, but have a little bit of noise)
set.seed(100)
my_coords_final$x = my_coords_final$x + rnorm(8, 0, .1)
set.seed(101)
my_coords_final$y = my_coords_final$y + rnorm(8, 0, .1)
# create "start values" by, e.g., translating the points we are
# looking for to start at (0,0)
my_coords_start = apply(my_coords_final, 2, function(x) x - min(x))
# Plot of the example data; the goal is to find the dotted vector that
# corresponds to the translation needed
plot(ref_coords, cex = 1.2)                      # "starry sky"
points(my_coords_start, pch = 20, col = "red")   # start position of "Polaroid"
points(my_coords_final, pch = 20, col = "blue")  # corrected position of "Polaroid"
segments(my_coords_start[1, 1], my_coords_start[1, 2],
         my_coords_final[1, 1], my_coords_final[1, 2], lty = "dotted")
Plotting the data as above shows the reference points, the start position (red), and the corrected position (blue) of the "Polaroid".
The result I want is basically what the dotted line in the plot above represents, i.e. a delta in x and y that I could apply to the start coordinates to move them to their correct position in the reference grid.
Details about the real data
There should be close to no rotational or scaling difference between my points and the reference points.
My real data is around 1000 reference points and up to a few hundred points to search (could use less if more efficient)
I expect to have to search about 10 to 20 sets of reference points to find my match, as many of the reference sets will not contain my points.
Thank you for your time, I'd really appreciate any input!
EDIT: To clarify, the right plot represents the reference data. The left plot represents the points that I want to translate across the reference data in order to find a position where they best match the reference. That position, in this case, is represented by the blue dots in the previous figure.
Finally, any working strategy must not use the data in my_coords_final, but rather reproduce that set of coordinates starting from my_coords_start using ref_coords.
So, the previous approach I posted (see edit history), which used optim() to minimize the sum of distances between points, only works in the limited circumstance where the point distribution used as reference data sits in the middle of the point field. A solution that satisfies the question, and still seems workable for a few thousand points, is a brute-force delta-and-comparison algorithm: calculate the difference between each point in the field and a single point of the reference data, then count how many of the rest of the reference data fall within a minimum distance threshold (needed to account for the noise in the data):
## A brute-force approach where min_dist can be used to
## ameliorate some random noise:
min_dist <- 5
win_thresh <- 0
win_thresh_old <- 0
for (i in 1:nrow(ref_coords)) {
  x2 <- my_coords_start[, 1]
  y2 <- my_coords_start[, 2]
  x1 <- ref_coords[, 1] + (x2[1] - ref_coords[i, 1])
  y1 <- ref_coords[, 2] + (y2[1] - ref_coords[i, 2])
  ## Calculate all pairwise distances between reference and field data:
  dists <- dist(cbind(c(x1, x2), c(y1, y2)), "euclidean")
  ## Only take distances for the sampled data:
  dists <- as.matrix(dists)[-1 * 1:length(x1), ]
  ## Calculate the number of distances within the minimum
  ## distance threshold minus the diagonal portion:
  win_thresh <- sum(rowSums(dists < min_dist) > 1)
  ## If we have more "matches" than our best then calculate a new
  ## dx and dy:
  if (win_thresh > win_thresh_old) {
    win_thresh_old <- win_thresh
    dx <- (x2[1] - ref_coords[i, 1])
    dy <- (y2[1] - ref_coords[i, 2])
  }
}
## Plot the estimated correction (your delta x and delta y) calculated
## from the brute-force search over shifts:
points(
  x = ref_coords[, 1] + dx,
  y = ref_coords[, 2] + dy,
  cex = 1.5, col = "red"
)
I'm very interested to know if there's anyone that solves this in a more efficient manner for the number of points in the test data, possibly using a statistical or optimization algorithm.
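As one possible speed-up, here is a sketch that scores the same candidate translations with fixed-radius nearest-neighbour queries instead of a full distance matrix; it assumes the RANN package (a kd-tree wrapper), and expresses delta in the direction the question asks for, i.e. as a shift applied to the start coordinates:
library(RANN)   # for nn2(); assumed to be installed
min_dist <- 5
ref_m <- as.matrix(ref_coords)
smp_m <- as.matrix(my_coords_start)
best <- -1
dx <- dy <- 0
for (i in seq_len(nrow(ref_m))) {
  ## Candidate translation: align sample point 1 with reference point i
  delta <- ref_m[i, ] - smp_m[1, ]
  shifted <- sweep(smp_m, 2, delta, "+")
  ## Count sample points with a reference neighbour within min_dist:
  nn <- nn2(ref_m, shifted, k = 1, searchtype = "radius", radius = min_dist)
  hits <- sum(nn$nn.idx[, 1] > 0)   # index 0 means no neighbour in radius
  if (hits > best) {
    best <- hits
    dx <- delta[1]
    dy <- delta[2]
  }
}
c(dx = dx, dy = dy)   # translation to apply to my_coords_start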

R: Converting cartesian coordinates to polar coordinates, and then calculating distance from origin

I've been looking for a solution to convert the cartesian coordinates (lat, long) that I have to polar coordinates in order to facilitate a simulation that I want to run, but I haven't found any questions or answers here for doing this in R. There are a number of options, including the built-in cart2pol function in Matlab, but all of my data are in R and I'd like to keep getting comfortable working in this framework.
Question:
I have lat/long coordinates from tagging data, and I want to convert these to polar coordinates (meaning jump size and angle: http://en.wikipedia.org/wiki/Polar_coordinate_system) so that I can then shuffle or bootstrap them (haven't decided which) about 1,000 times, and calculate the straight-line distance of each simulated track from the starting point. I have a true track, and I'm interested in determining if this animal is exhibiting site affinity by simulating 1,000 random tracks with the same jump sizes and turning angles, but in completely different orders and combinations. So I need 1,000 straight-line distances from the origin to create a distribution of distances and then compare this to my true data set's straight-line distance.
I'm comfortable doing the bootstrapping, but I'm stuck at the very first step, which is converting my cartesian lat/long coordinates to polar coordinates (jump size and turning angle). I know there are built-in functions to do this in other programs such as Matlab, but I can't find any way to do it in R. I could do it manually in a for loop, but if there's a package or an easier way to do it, I'd much prefer that.
Ideally I'd like to convert the data to polar coordinates, run the simulation, and then for each random track output an end point as cartesian coordinates, lat/long, so I can then calculate the straight-line distance traveled.
I didn't post any sample data, as it would just be a two-column data frame of lat and long coordinates.
Thanks for any help you can provide! If there's an easy explanation somewhere on this site or others that I missed, please point me in that direction! I couldn't find anything.
Cheers
For x-y coordinates that are in the same units (e.g. meters rather than degrees of latitude and degrees of longitude), you can use this function to get a data.frame of jump sizes and turning angles (in degrees).
getSteps <- function(x, y) {
  d <- diff(complex(real = x, imaginary = y))
  data.frame(size = Mod(d),
             angle = c(NA, diff(Arg(d)) %% (2*pi)) * 360/(2*pi))
}
## Try it out
set.seed(1)
x <- rnorm(10)
y <- rnorm(10)
getSteps(x, y)
# size angle
# 1 1.3838360 NA
# 2 1.4356900 278.93771
# 3 2.9066189 101.98625
# 4 3.5714584 144.00231
# 5 1.6404354 114.73369
# 6 1.3082132 135.76778
# 7 0.9922699 74.09479
# 8 0.2036045 141.67541
# 9 0.9100189 337.43632
## A plot helps check that this works
plot(x, y, type = "n", asp = 1)
text(x, y, labels = 1:10)
You can do a transformation between cartesian and polar this way:
polar2cart <- function(r, theta) {
  data.frame(x = r * cos(theta), y = r * sin(theta))
}
cart2polar <- function(x, y) {
  data.frame(r = sqrt(x^2 + y^2), theta = atan2(y, x))
}
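A quick round-trip check of these helpers:
p <- cart2polar(3, 4)      # r = 5, theta = atan2(4, 3)
polar2cart(p$r, p$theta)   # recovers x = 3, y = 4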
Since it is fairly straightforward, you can write your own function. A Matlab-like cart2pol function in R (using atan2 rather than atan(y/x), so that all four quadrants are handled correctly):
cart2pol <- function(x, y) {
  r <- sqrt(x^2 + y^2)
  t <- atan2(y, x)   # atan(y/x) would lose the quadrant information
  c(r, t)
}
I used Josh O'Brien's code and got what appear to be reasonable jumps and angles—they match up pretty well to eyeballing the rough distance and heading between points. I then used a formula from his suggestions to create a function to turn the polar coordinates back to cartesian coordinates, and a for loop to apply the function to the data frame of all of the polar coordinates. The loops appear to work, and the outputs are in the correct units, but I don't believe the values that it's outputting are corresponding to my data. So either I did a miscalculation with my formula, or there's something else going on. More details below:
Here's the head of my lat long data:
> head(Tag1SSM[,3:4])
lon lat
1 130.7940 -2.647957
2 130.7873 -2.602994
3 130.7697 -2.565903
4 130.7579 -2.520757
5 130.6911 -2.704841
6 130.7301 -2.752182
When I plot the full dataset just as values, the track looks exactly the same as if I had plotted it using any spatial or mapping package in R.
I then used Josh's function to convert my data to polar coordinates:
x <- Tag1SSM$lon
y <- Tag1SSM$lat
getSteps <- function(x, y) {
  d <- diff(complex(real = x, imaginary = y))
  data.frame(size = Mod(d),
             angle = c(NA, diff(Arg(d)) %% (2*pi)) * 360/(2*pi))
}
which produced the following polar coordinates appropriately:
> polcoords<-getSteps(x,y)
> head(polcoords)
size angle
1 0.04545627 NA
2 0.04103718 16.88852
3 0.04667590 349.38153
4 0.19581350 145.35439
5 0.06130271 59.37629
6 0.01619242 31.86359
Again, these look right to me, and correspond well to the actual angles and relative distances between points. So far so good.
Now I want to convert these back to cartesian coordinates and calculate a euclidian distance from the origin. These don't have to be in true lat/long, as I'm just comparing them amongst themselves. So I'm happy for the origin to be set as (0,0) and for distances to be calculated in reference x,y values instead of kilometers or something like that.
So, I used this function with Josh's help and a bit of web searching:
polar2cart <- function(x, y, size, angle) {
  # convert degrees to radians (multiply by pi/180)
  angle <- angle * pi/180
  if (is.na(x)) x <- 0   # for the purpose of the for loop below
  if (is.na(y)) y <- 0
  newx <- x + size * sin(angle)   # convert back to cartesian coordinates
  newy <- y + size * cos(angle)
  return(c("x" = newx, "y" = newy))   # output the new x and y coordinates
}
And then plugged it into this for loop:
u<-polcoords$size
v<-polcoords$angle
n<-162 #I want 162 new coordinates, starting from 0
N<-cbind(rep(NA,163),rep(NA,163)) #need to make 163 rows, though, for i+1 command below— first row will be NA
for(i in 1:n){
jump<-polar2cart(N[i,1],N[i,2],u[i+1],v[i+1]) #use polar2cart function above, jump from previous coordinate in N vector
N[i+1,1]<-jump[1] #N[1,] will be NA's which sets the starting point to 0,0—new coords are then calculated from each previous N entry
N[i+1,2]<-jump[2]
Dist<-sqrt((N[163,1]^2)+(N[163,2]^2))
}
And then I can take a look at N, with my new coordinates based on those jumps:
> N
[,1] [,2]
[1,] NA NA
[2,] 0.011921732 0.03926732
[3,] 0.003320851 0.08514394
[4,] 0.114640605 -0.07594871
[5,] 0.167393509 -0.04472125
[6,] 0.175941466 -0.03096891
This is where the problem is: the x,y coordinates in N get progressively larger. There's a bit of variation in there, but if you scroll down the list, y goes from 0.39 to 11.133, with very few backward steps to lower values. This isn't what my lat/long data do, and if I had done the cart->pol and pol->cart conversions properly, these new values in N should match my lat/long data, just in a different coordinate system. Plotted, the N values are not the same at all: the last point in N is the farthest point from the origin, while in my lat/long data the last point is actually quite close to the first point, and definitely not the farthest away. I think the issue must be in my conversion from polar coordinates back to cartesian coordinates, but I'm not sure how to fix it...
Any help in solving this would be much appreciated!
Cheers
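For what it's worth, a sketch of one possible fix, assuming the angle column from getSteps() holds turning angles (differences of successive headings) rather than absolute headings: accumulate the turns into headings before converting back. The reconstructed track is only defined up to an overall rotation, which does not affect distances from the starting point:
turn <- polcoords$angle * pi/180      # degrees -> radians; first entry is NA
headings <- cumsum(c(0, turn[-1]))    # absolute headings, first step along +x
step_x <- polcoords$size * cos(headings)
step_y <- polcoords$size * sin(headings)
N <- cbind(x = cumsum(c(0, step_x)),  # positions, starting from (0,0)
           y = cumsum(c(0, step_y)))
Dist <- sqrt(N[nrow(N), 1]^2 + N[nrow(N), 2]^2)   # straight-line distance from origin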
I think this code I wrote converts to polar coordinates:
# example data
x <- runif(30)
y <- runif(30)
# center example around 0
x <- x - mean(x)
y <- y - mean(y)
# function to convert to polar coordinates
topolar <- function(x, y) {
  # calculate angles
  alphas <- atan(y/x)
  # correct angles per quadrant
  quad2 <- which(x < 0 & y > 0)
  quad3 <- which(x < 0 & y < 0)
  quad4 <- which(x > 0 & y < 0)
  alphas[quad2] <- alphas[quad2] + pi
  alphas[quad3] <- alphas[quad3] + pi
  alphas[quad4] <- alphas[quad4] + 2*pi
  # calculate distances to 0,0
  r <- sqrt(x^2 + y^2)
  # create output
  data.frame(alphas = alphas, r = r)
}
# call function
polar_out <- topolar(x, y)
# get out the angles
the_angles <- polar_out$alphas
Another option, taking the angle in degrees:
pol2car <- function(angle, dist) {
  angle <- angle * pi/180   # degrees -> radians; sin() and cos() expect radians
  co <- dist * sin(angle)
  ca <- dist * cos(angle)
  return(list(x = ca, y = co))
}
pol2car(angle = 45, dist = sqrt(2))
cart2sph in the pracma package transforms between cartesian, spherical, polar, and cylindrical coordinate systems in two and three dimensions.

Rotate graph by angle

I have multiple matrices filled with the x and y coordinates of multiple points in 2D space that make up a graph. The matrices look something like this
x1 x2 x3 x4 ...
y1 y2 y3 y4 ...
A possible graph looks something like this
What I want to do is rotate the graph around point A so that the line between points A and B is parallel to the X axis.
My idea was to treat the line AB as the hypotenuse of a right triangle, calculate α (the angle at point A) and rotate the matrix for this graph by it using a rotation matrix.
What I did so far is the following
# df is the subset of my data that describes the graph we're handling
# right now; df has 2 or more rows
beginx = df[1, ]$xcord            # get the x coordinate of point A
beginy = df[1, ]$ycord            # get the y coordinate of point A
endx = df[nrow(df) - 1, ]$xcord   # get the x coordinate of point B
endy = df[nrow(df) - 1, ]$ycord   # get the y coordinate of point B
xnow = df$xcord
ynow = df$ycord
xdif = abs(beginx - endx)
ydif = abs(beginy - endy)
if ((xdif != 0) & (ydif != 0)) {
  direct = sqrt(abs((xdif^2) - (ydif^2)))   # calculate the length of the hypotenuse
  sinang = abs(beginy - endy)/direct
  angle = 1/sin(sinang)
  if (beginy > endy) {
    angle = angle
  } else {
    angle = 360 - angle
  }
  rotmat = rot(angle)   # use rot(angle) to get the rotation matrix for the calculated angle
  A = matrix(c(xnow, ynow), nrow = 2, byrow = TRUE)   # matrix containing the graph coords
  admat = rotmat %*% A   # multiply the matrix with the rotation matrix
}
This approach fails because it isn't flexible enough to always calculate the needed angle with the result being that the graph is rotated by the wrong angle and / or in the wrong direction.
Thanks in advance for reading and hopefully some of you can bring some fresh ideas to this
Edit: Data to reproduce this can be found here
X-Coordinates
Y-Coordinates
Not sure how to provide the data you've asked for, I'll gladly provide it in another way if you specify how you'd like it
Like this?
# read in X and Y as vectors
M <- cbind(X, Y)
# plot data
plot(M[, 1], M[, 2], xlim = c(0, 1200), ylim = c(0, 1200))
# calculate rotation angle
alpha <- -atan((M[1, 2] - tail(M, 1)[, 2])/(M[1, 1] - tail(M, 1)[, 1]))
# rotation matrix
rotm <- matrix(c(cos(alpha), sin(alpha), -sin(alpha), cos(alpha)), ncol = 2)
# shift, rotate, shift back
M2 <- t(rotm %*% (t(M) - c(M[1, 1], M[1, 2])) + c(M[1, 1], M[1, 2]))
# plot
plot(M2[, 1], M2[, 2], xlim = c(0, 1200), ylim = c(0, 1200))
Edit:
I'll break down the transformation to make it easier to understand. However, it's just basic linear algebra.
plot(M, xlim = c(-300, 1200), ylim = c(-300, 1200))
# shift points so that the turning point is (0,0)
M2.1 <- t(t(M) - c(M[1, 1], M[1, 2]))
points(M2.1, col = "blue")
# rotate
M2.2 <- t(rotm %*% t(M2.1))
points(M2.2, col = "green")
# shift back
M2.3 <- t(t(M2.2) + c(M[1, 1], M[1, 2]))
points(M2.3, col = "red")
Instead of a data frame, it looks like your data is better served as a matrix (via as.matrix).
This answer is very similar to Roland's, but breaks things down into more steps and has some special-case handling when the angle is a multiple of pi/2.
# sample data
set.seed(1)   # for consistency of random-generated data
d <- matrix(c(sort(runif(50)), sort(runif(50))), ncol = 2)
# rotation about point A
rotA <- function(d) {
  d.offset <- apply(d, 2, function(z) z - z[1])   # offset data
  endpoint <- d.offset[nrow(d.offset), ]          # gets difference
  rot <- function(angle) matrix(
    c(cos(angle), -sin(angle), sin(angle), cos(angle)), nrow = 2)   # CCW rotation matrix
  if (endpoint[2] == 0) {
    return(d)     # if y-diff is 0, then no action required
  } else if (endpoint[1] == 0) {
    rad <- pi/2   # if x-diff is 0, then rotate by a right angle
  } else {
    rad <- atan(endpoint[2]/endpoint[1])
  }
  d.offset.rotate <- d.offset %*% rot(-rad)   # rotation
  d.rotate <- sapply(1:2, function(z) d.offset.rotate[, z] + d[1, z])   # undo offset
  d.rotate
}
# results and plotting to check visually
d.rotate <- rotA(d)
plot(d.rotate)
abline(h = d[1, 2])

selecting points within a geometrical shape in a grid with R

Here is my problem.
I have a hypercube that I built using the following code:
X <- seq(-1/sqrt(2), 1/sqrt(2), length.out = 100)
Y <- seq(-sqrt(2)/(2*sqrt(3)), sqrt(2)/sqrt(3), length.out = 100)
Z <- seq(-1/(2*sqrt(3)), sqrt(3)/2, length.out = 100)
grid <- data.frame(expand.grid(X = X, Y = Y, Z = Z))
Then, I would like to delete from the grid data.frame all the points that are not located within the tetrahedron defined by the following vertices:
w : (0,0,sqrt(3)/2)
x : (0,sqrt(2)/sqrt(3),-1/(2*sqrt(3)))
y : (-1/sqrt(2),-sqrt(2)/(2*sqrt(3)),-1/(2*sqrt(3)))
z : (1/sqrt(2),-sqrt(2)/(2*sqrt(3)),-1/(2*sqrt(3)))
I cannot find a way to do this without awfully long code. Can anyone help me, please?
Thanks!
Package ptinpoly has a function pip3d to find whether a point is in a polyhedron or not.
library(ptinpoly)
X <- seq(-1/sqrt(2), 1/sqrt(2), length.out = 10)   # a smaller grid is used here
Y <- seq(-sqrt(2)/(2*sqrt(3)), sqrt(2)/sqrt(3), length.out = 10)
Z <- seq(-1/(2*sqrt(3)), sqrt(3)/2, length.out = 10)
# The query points have to be input as a matrix.
grid <- as.matrix(expand.grid(X = X, Y = Y, Z = Z))
w <- c(0, 0, sqrt(3)/2)
x <- c(0, sqrt(2)/sqrt(3), -1/(2*sqrt(3)))
y <- c(-1/sqrt(2), -sqrt(2)/(2*sqrt(3)), -1/(2*sqrt(3)))
z <- c(1/sqrt(2), -sqrt(2)/(2*sqrt(3)), -1/(2*sqrt(3)))
# The matrix of vertices
tetra_vert <- matrix(c(w, x, y, z), byrow = TRUE, nrow = 4)
# The matrix of faces (each row gives the vertices joined by one face)
tetra_faces <- matrix(c(1, 2, 3,
                        1, 2, 4,
                        1, 3, 4,
                        2, 3, 4), byrow = TRUE, nrow = 4)
inout <- pip3d(tetra_vert, tetra_faces, grid)
The result is a vector of integers: 0 means the point falls on a face, 1 that it is inside the polyhedron, and -1 that it is outside.
The solution of your problem is therefore:
grid[inout%in%c(0,1),]
Make planes for the faces of the tetrahedron and check whether a point is on the right side of each plane.
Pointers: think of calculating dot products with the plane normals and such. One option is to draw a vector from a point of the tetrahedron to each corner (4 in total) and one vector from that point to the query point, then use dot products and the like to see whether the point-to-point vector lies within the four others.
The point is probably within the tetrahedron if the vector to it can be expressed as a sum of non-negative multiples of the corner vectors and the vector is short enough.
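A minimal sketch of that half-space idea (my own illustration, not code from the answer above): a point is inside the tetrahedron exactly when, for every face, it lies on the same side of the face's plane as the opposite vertex. It reuses the w, x, y, z vertices and the grid matrix from the answer above:
cross3 <- function(a, b) {   # 3D cross product (base R has none built in)
  c(a[2]*b[3] - a[3]*b[2],
    a[3]*b[1] - a[1]*b[3],
    a[1]*b[2] - a[2]*b[1])
}
# TRUE if p and ref lie on the same side of the plane through v1, v2, v3
same_side <- function(v1, v2, v3, ref, p) {
  n <- cross3(v2 - v1, v3 - v1)                  # face normal
  sum(n * (ref - v1)) * sum(n * (p - v1)) >= 0   # >= 0 keeps points on the face
}
# Test each face against its opposite vertex:
inside_tetra <- function(p)
  same_side(x, y, z, w, p) && same_side(w, y, z, x, p) &&
  same_side(w, x, z, y, p) && same_side(w, x, y, z, p)
keep <- apply(grid, 1, inside_tetra)
grid[keep, ]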
