Calculate the angles between lat/long coordinates of a moving animal - r

I want to work out what the green and yellow angles are in the diagrams. The red line is the movement of animal 1, with each numbered point representing its location at each time point (rowid in the data). A3 represents the position of animal 3 when animal 1 is at point 2.
To work out the orange angle, I think I need to work out the angle drawn in black and then do 180° minus the black angle, but I'm not sure how to work out the black angle either.
I want to work this out for each time point in the data, a sample of which I've included below. In some cases I don't have the location of animal 3, which is fine, and the angle can be NA. I've included 2 diagrams to show the different situations that could occur. The CRS is EPSG:4326. Any help is much appreciated!
rowid,id,t_,lon,lat,Animal3.lon,Animal3.lat
1,Animal 1,01/01/2017 06:19,-9.95545,3.777097,#N/A,#N/A
2,Animal 1,01/01/2017 08:45,-9.93917,3.774998,-9.95192,3.789981
3,Animal 1,01/01/2017 16:34,-9.94561,3.779115,-9.94959,3.783688
4,Animal 1,01/02/2017 08:18,-9.94575,3.784986,-9.94617,3.798219
5,Animal 1,01/02/2017 15:57,-9.94198,3.794307,-9.94861,3.802043
6,Animal 1,01/03/2017 07:24,-9.9353,3.783469,-9.9472,3.795541
7,Animal 1,01/03/2017 17:44,-9.93446,3.775781,-9.93526,3.81313
8,Animal 1,01/03/2017 19:33,-9.94091,3.773766,#N/A,#N/A
9,Animal 1,01/04/2017 06:33,-9.93553,3.775065,-9.93203,3.799718
10,Animal 1,01/04/2017 17:01,-9.93588,3.779135,-9.93348,3.796017
11,Animal 1,01/05/2017 08:43,-9.92929,3.774276,-9.93471,3.794776
12,Animal 1,01/05/2017 16:43,-9.92989,3.778653,-9.93755,3.803964
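As a sketch of the "black angle" idea (assuming great-circle bearings on the EPSG:4326 coordinates are what is wanted; this uses the rowid 1-2 positions from the data above), the bearing of the previous-to-current step can be compared with the bearing from the current position to Animal 3 using the geosphere package:
library(geosphere)
p_prev  <- c(-9.95545, 3.777097)   # Animal 1, rowid 1 (lon, lat)
p_curr  <- c(-9.93917, 3.774998)   # Animal 1, rowid 2
p_other <- c(-9.95192, 3.789981)   # Animal 3 when Animal 1 is at rowid 2
b_move  <- bearing(p_prev, p_curr)    # direction of travel
b_other <- bearing(p_curr, p_other)   # direction from Animal 1 to Animal 3
black <- abs(b_move - b_other) %% 360
black <- ifelse(black > 180, 360 - black, black)  # the "black" angle
180 - black                                       # 180 minus the black angle, as described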

Here's my solution, based on a matrix version of the data you provided. Credit to @mdsummer's function (here), which I modified only slightly.
library(tidyverse)
library(maptools)
library(sf)
# Function - get angle from set of three points
trackAngle <- function(xy) {
  if (any(is.na(xy))) return(NA)
  angles <- abs(c(trackAzimuth(xy), 0) -
                  c(0, rev(trackAzimuth(xy[nrow(xy):1, ]))))
  angles <- ifelse(angles > 180, 360 - angles, angles)
  angles[is.na(angles)] <- 180
  angles[-c(1, length(angles))]
}
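# Quick sanity check (illustrative only, not part of the original solution):
# the turning angle at the middle of Animal 1's first three positions
trackAngle(rbind(c(-9.95545, 3.777097),
                 c(-9.93917, 3.774998),
                 c(-9.94561, 3.779115)))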
# Original Matrix
animal_mat <- t(matrix(c(-9.95545, 3.777097,       NA,       NA,
                         -9.93917, 3.774998, -9.95192, 3.789981,
                         -9.94561, 3.779115, -9.94959, 3.783688,
                         -9.94575, 3.784986, -9.94617, 3.798219,
                         -9.94198, 3.794307, -9.94861, 3.802043,
                         -9.9353,  3.783469, -9.9472,  3.795541,
                         -9.93446, 3.775781, -9.93526, 3.81313,
                         -9.94091, 3.773766,       NA,       NA,
                         -9.93553, 3.775065, -9.93203, 3.799718,
                         -9.93588, 3.779135, -9.93348, 3.796017,
                         -9.92929, 3.774276, -9.93471, 3.794776,
                         -9.92989, 3.778653, -9.93755, 3.803964), 4, 12))
## Reformat to get lists of points
# Other animal angles
animal_pts1 <- map(2:nrow(animal_mat), function(idx){
  animal_pt      <- unname(animal_mat[idx, ][c(1, 2)])
  other_pt       <- unname(animal_mat[idx, ][c(3, 4)])
  animal_last_pt <- unname(animal_mat[idx - 1, ][c(1, 2)])
  animal_pts_mat <- rbind(animal_last_pt, animal_pt, other_pt)
  animal_pts_mat
})
# Main animal angles
animal_pts2 <- map(2:(nrow(animal_mat) - 1), function(idx){
  animal_pt      <- unname(animal_mat[idx, ][c(1, 2)])
  animal_last_pt <- unname(animal_mat[idx - 1, ][c(1, 2)])
  animal_next_pt <- unname(animal_mat[idx + 1, ][c(1, 2)])
  animal_pts_mat <- rbind(animal_last_pt, animal_pt, animal_next_pt)
  animal_pts_mat
})
## Angles
# Other animal angles (11)
other_animal_angles <- 180 - map(animal_pts1, ~trackAngle(.x)) %>% unlist
# Main animal angles (10)
main_animal_angles <- 180 - map(animal_pts2, ~trackAngle(.x)) %>% unlist
## Combining angles info
angles_tbl <- tibble(main = main_animal_angles, other = other_animal_angles[1:10])
## Final dataframe
angles_tbl %>% mutate(tot = main+other)
# A tibble: 10 x 3
main other tot
<dbl> <dbl> <dbl>
1 155. 138. 292.
2 56.0 16.4 72.4
3 23.3 0.451 23.8
4 126. 62.5 189.
5 25.4 167. 192.
6 78.8 175. 254.
7 176. NA NA
8 81.3 68.3 150.
9 131. 13.0 144.
10 134. 141. 275.
Note that the summed values (tot) can be greater than 180 with this solution, so something may be wrong here (possibly related to the CRS?). Technically it is possible to get totals greater than 180, although I don't know whether that makes sense in practice, since I don't have the domain knowledge to say which angles are plausible in this setting.

Related

Apply point-in-polygon to multiple polygons with identifier

I have a data frame with thousands of lat/long points along with other attributes
> head(df)
created_on latitude longitude day Value Order Total.Value
1 55:40.8 13.01504 80.19199 7/9/2022 2 74 19
2 08:27.0 12.97431 80.19029 7/9/2022 19 49 14
3 39:59.5 12.95778 80.19588 7/9/2022 26 205 50
4 20:43.0 13.07842 80.18144 7/8/2022 1 178 7
5 34:06.2 12.92485 80.09914 7/10/2022 7 106 12
6 08:12.1 12.88727 80.23384 7/10/2022 10 167 17
I have a few polygons in WKT format
head(Poly)
WKT name
1 POLYGON ((80.156 13.058, 80.156 13.040, 80.182 13.052, 80.156 13.058)) Polygon-1
2 POLYGON ((80.2015739 13.050, 80.223 13.033, 80.223 13.049, 80.201 13.050)) Polygon-2
3 POLYGON ((80.185 13.021, 80.200 13.009, 80.202 13.020, 80.185 13.021)) Polygon-3
I want to apply Point-in-Polygon to the df and identify the lat longs that lie in any of these polygons. The output should contain the original attributes of the lat longs as well as the name of the polygon in which they lie.
I have been able to do it for one polygon at a time, but it becomes tedious when I have 10-20 polygons to run through. I used the rgeos package to translate the WKT into a data frame of coordinates and the sp package to apply point-in-polygon. Below is the code I have used so far:
Poly <- read.csv("PIP0.csv")
str <- Poly[2,1]
test <- readWKT(str)
#convert wkt to data.frame of coordinates
coords <- as.data.frame(coordinates(test@polygons[[1]]@Polygons[[1]]))
Points <- read.csv("PIP2.csv")
#logical vector to determine whether points in the df-Point are present in Poly
A <- as.logical(point.in.polygon(Points$latitude, Points$longitude, coords$y, coords$x, mode.checked = FALSE ))
Points$flag[A] <- "This point is present in Polygon"
Thanks,
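A possible approach (a sketch only, assuming the df and Poly objects shown above and that the polygon coordinates are plain longitude/latitude) is to convert both to sf objects and use st_join(), which handles any number of polygons in one pass and carries the polygon name across:
library(sf)
library(dplyr)
# points to sf (note the order: x = longitude, y = latitude)
pts <- st_as_sf(df, coords = c("longitude", "latitude"), crs = 4326)
# polygons from the WKT column, keeping the name column
polys <- st_as_sf(Poly, wkt = "WKT")
st_crs(polys) <- 4326
# spatial join: each point gets the name of the polygon it falls in (NA otherwise)
result <- st_join(pts, polys, join = st_within)
# keep only the points that fall inside some polygon
inside <- result %>% filter(!is.na(name))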

Calculating measure of spatial segregation?

There are five polygons for five different cities (see the attached file in the link; it's called bound.shp). I also have a point file, points.csv, with longitude and latitude, where for each point I know the proportion of people belonging to group m and group h.
I am trying to calculate the spatial segregation measures proposed by Reardon and O'Sullivan, "Measures of Spatial Segregation".
There is a package called "seg" which should allow us to do this. I have been trying, but so far without success.
Here is the link to the example files: LINK. After downloading and extracting the "example" folder, this is what I do:
setwd("~/example")
library(seg)
library(sf)
bound <- st_read("bound.shp")
points <- st_read("points.csv", options=c("X_POSSIBLE_NAMES=x","Y_POSSIBLE_NAMES=y"))
#I apply the following formula
seg::spseg(bound, points[ ,c(group_m, group_h)] , smoothing = "kernel", sigma = bandwidth)
Error: 'x' must be a numeric matrix with two columns
Can someone help me solve this issue? Or is there an alternate method which I can use?
Thanks a lot.
I don't know exactly what the spseg function does internally, but looking at the spseg documentation in the seg package:
The first argument, x, should be a data frame or an object of class Spatial.
The second argument, data, should be a matrix or data frame.
Working through the examples for spseg, it becomes clear that data should have as many rows as there are IDs in the Spatial object. In your sample, the IDs are the cities, each with its own polygon.
First, let's examine the bound data:
setwd("~/example")
library(seg)
library(sf)
#For the fortify function
library(ggplot2)
bound <- st_read("bound.shp")
bound <- as_Spatial(bound)
class(bound)
"SpatialPolygonsDataFrame"
attr(,"package")
"sp"
tail(fortify(bound))
Regions defined for each Polygons
long lat order hole piece id group
5379 83.99410 27.17326 972 FALSE 1 5 5.1
5380 83.99583 27.17339 973 FALSE 1 5 5.1
5381 83.99705 27.17430 974 FALSE 1 5 5.1
5382 83.99792 27.17552 975 FALSE 1 5 5.1
5383 83.99810 27.17690 976 FALSE 1 5 5.1
5384 83.99812 27.17700 977 FALSE 1 5 5.1
So you have 5 IDs in your SpatialPolygonsDataFrame. Now let's read points.csv with read.csv, since the data need to be passed as a matrix to the spseg function.
points <- read.csv("c://Users/cemozen/Downloads/example/points.csv")
tail(points)
group_m group_h x y
950 4.95 78.49000 84.32887 26.81203
951 5.30 86.22167 84.27448 26.76932
952 8.68 77.85333 84.33353 26.80942
953 7.75 82.34000 84.35270 26.82850
954 7.75 82.34000 84.35270 26.82850
955 7.75 82.34000 84.35270 26.82850
The documentation and its example make clear that the number of rows of data (which has two attributes here, group_m and group_h) should equal the number of IDs, i.e. the cities. You may therefore want to compute a single value per polygon, for instance the mean (or another statistic) of the points in each city, as sketched below.
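For example (a sketch only, assuming each point has already been tagged with the polygon it falls in, say in a city column obtained from a spatial join as in the next answer), the points could be collapsed to one row per polygon:
library(dplyr)
points_by_city <- points %>%
  group_by(city) %>%
  summarise(group_m = mean(group_m),
            group_h = mean(group_h))
# then: spseg(bound, as.matrix(points_by_city[, c("group_m", "group_h")]))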
On the other hand, I would just like to show that the function works properly when fed a matrix with 5 rows and 2 groups.
sample_spseg <- spseg(bound, as.matrix(points[1:5,c("group_m", "group_h")]))
print(sample_spseg)
Reardon and O'Sullivan's spatial segregation measures
Dissimilarity (D) : 0.0209283
Relative diversity (R): -0.008781
Information theory (H): -0.0066197
Exposure/Isolation (P):
group_m group_h
group_m 0.07577679 0.9242232
group_h 0.07516285 0.9248372
--
The exposure/isolation matrix should be read horizontally.
Read 'help(spseg)' for more details.
First: I do not have experience with the seg package and its functions.
What I read from your question is that you want to run the spseg function on the points within each area?
If so, here is a possible approach:
library(sf)
library(tidyverse)
library(seg)
library(mapview) # for quick viewing only
# read polygons, make valid to avoid problems later on
areas <- st_read("./temp/example/bound.shp") %>%
sf::st_make_valid()
# read points and convert to sf object
points <- read.csv("./temp/example/points.csv") %>%
sf::st_as_sf(coords = c("x", "y"), crs = 4326) %>%
# spatial join to add the city of each point (st_join, default st_intersects)
sf::st_join(areas)
# what do we have so far??
mapview::mapview(points, zcol = "city")
# get the coordinates back into a data.frame
mydata <- cbind(points, st_coordinates(points))
# drop the geometry, we do not need it anymore
st_geometry(mydata) <- NULL
# looks like...
head(mydata)
# group_m group_h city X Y
# 1 8.02 84.51 2 84.02780 27.31180
# 2 8.02 84.51 2 84.02780 27.31180
# 3 8.02 84.51 2 84.02780 27.31180
# 4 5.01 84.96 2 84.04308 27.27651
# 5 5.01 84.96 2 84.04622 27.27152
# 6 5.01 84.96 2 84.04622 27.27152
# Split to a list by city
L <- split(mydata, mydata$city)
# loop over the list and run the spseg function
final <- lapply(L, function(i) spseg(x = i[, 4:5], data = i[, 1:2]))
# test for the first city
final[[1]]
# Reardon and O'Sullivan's spatial segregation measures
#
# Dissimilarity (D) : 0.0063
# Relative diversity (R): -0.0088
# Information theory (H): -0.0067
# Exposure/Isolation (P):
# group_m group_h
# group_m 0.1160976 0.8839024
# group_h 0.1157357 0.8842643
# --
# The exposure/isolation matrix should be read horizontally.
# Read 'help(spseg)' for more details.
spplot(final[[1]], main = "Equal")

County area calculated from NLCD (Landcover data) rasters is too large

I'm trying to calculate the land cover breakdown for each US county.
I have obtained NLCD data for Apache County using the FedData package (devtools version) and I'm using county shapefiles from the Census Bureau.
The problem is that I get an area that is much larger than the official one and the one indicated in my shapefile, namely 51,000 km^2 instead of the official 29,000 km^2. There must be something I don't understand about the raster object, but I'm very confused after hours of web searching; any help is appreciated.
The following describes the code used and the method used to calculate. The county data can be downloaded here:
https://www2.census.gov/geo/tiger/TIGER2016/COUNTY/
The following code assumes the county shapefile is saved and unzipped.
Get and read the data
#devtools::install_github("ropensci/FedData")
library(FedData)
library(rgdal)
library(dplyr)
# Get Apache polygon
counties<- readOGR('tl_2016_us_county/tl_2016_us_county.shp')
apache <- subset(counties,counties$GEOID=="04001")
# Get NLCD data
nlcd_data <- get_nlcd(apache,
                      year = 2011,
                      label = "Apache",
                      force.redo = TRUE)
nlcd_data #inspect the object, can see that number of cells is around 57 million
I have then extracted the values of the raster and put them into a frequency table, and from there I calculate the resulting area. Since the NLCD data has a 30 m resolution, I multiply the number of cells in each category by 900 (m^2 per cell) and divide the result by 1 million to obtain the area in square kilometers.
The calculated total area is too large.
# Calculating the landcover repartition in County
landcover <- data.frame(x2011 = values(nlcd_data)) # number of rows corresponds to number of cells
landcover_freq <- table(landcover)
df_landcover <- as.data.frame(landcover_freq)
res <- df_landcover %>%
  mutate(area_type_sqm = Freq * 900,
         area_type_km  = area_type_sqm / 1e6,
         area_sqkm     = sum(area_type_km)) %>%
  group_by(landcover) %>%
  mutate(pc_land = round(100 * area_type_km / area_sqkm, 1))
head(arrange(res, desc(pc_land)))
# A tibble: 6 x 6
# Groups: landcover [6]
landcover Freq area_type_sqm area_type_km area_sqkm pc_land
<fct> <int> <dbl> <dbl> <dbl> <dbl>
1 52 33455938 30110344200 30110. 51107. 58.9
2 42 16073820 14466438000 14466. 51107. 28.3
3 71 5999412 5399470800 5399. 51107. 10.6
4 31 488652 439786800 440. 51107. 0.9
5 21 362722 326449800 326. 51107. 0.6
6 22 95545 85990500 86.0 51107. 0.2
## Total area calculated from raster is 51,107 square km
apache_area <- as.data.frame(apache) %>%
mutate(AREA=(as.numeric(ALAND)+as.numeric(AWATER))/1e6) %>%
select(AREA)
apache_area$AREA
29055.47 #Official area of apache county (in square km)
Visual inspection of the shapefile and the raster:
The mismatch between the two doesn't look large enough to explain the difference in area.
apache <- spTransform(apache,proj4string(nlcd_data))
plot(nlcd_data)
plot(apache,add=TRUE)
The reason is that you get the data returned in the Mercator projection.
crs(nlcd_data)
CRS arguments:
+proj=merc +a=6378137 +b=6378137 +lat_ts=0 +lon_0=0 +x_0=0 +y_0=0 +k=1 +units=m +nadgrids=#null +wktext
+no_defs
This coordinate reference system preserves shape and that is why it is used for web-mapping. It should not be used for most other purposes, as it is also notorious for distortion of area.
The take-home message is to never just trust the nominal resolution of a projected raster and assume that it is correct and/or constant. The reliable way to compute area is to use longitude/latitude coordinates, because these are, by definition, not distorted.
The reported spatial resolution is
res(nlcd_data)
[1] 30 30
So it is not surprising that you expected the cells to have an area of 30 x 30 = 900 m2. But the cell sizes are actually between 573 and 625 m2 for Apache County, as illustrated below.
First get the data
library(FedData)
counties <- raster::getData("GADM", country="USA", level=2)
apache <- subset(counties,counties$NAME_2=="Apache")
nlcd <- get_nlcd(apache, year = 2011, label = "nlcd_apache", force.redo = TRUE)
# move to terra
library(terra)
r <- rast(nlcd)
ap <- vect(apache)
# county boundaries to Mercator
apm <- project(ap, crs(r))
To compute the area of the grid cells I represent them as polygons. I first aggregate to get much larger cells to avoid getting too many small polygons (it would take very long, and perhaps crash R). I then transform the Mercator polygons to longitude/latitude, compute their true area, and transform back (just for consistent display purposes).
f <- 300
a <- aggregate(rast(r), f)
p <- as.polygons(a)
# compute area
g <- project(p, "+proj=longlat")
g$area <- round(expanse(g) / (f * f), 5)
# project back and plot
merc <- project(g, crs(r))
plot(merc, "area", border=NA)
lines(apm)
The map shows the approximate variation in the size of the original "900 m2" cells (between 573 and 625). This is not the case when I use the original data, as illustrated below.
library(terra)
# "filename" is the local file that has the nlcd data
usnlcd <- rast(filename)
crs(usnlcd, proj4=TRUE)
#[1] "+proj=aea +lat_0=23 +lon_0=-96 +lat_1=29.5 +lat_2=45.5 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs"
res(usnlcd)
#[1] 30 30
Note that +proj=aea stands for the Albers Equal Area projection!
ap <- vect(apache)
apm <- project(ap, crs(usnlcd))
x <- crop(usnlcd, apm)
par(mfrow=c(1,2))
plot(x)
lines(apm)
# gg2 computed as above
plot(gg2, "area", border=NA)
As you can see, the cell area is indeed 900 m2, with only very little distortion, so small that it can be ignored.
You could transform the Mercator data back to the original +proj=aea, but then you would have degraded the quality of the data twice. So that is a really bad idea. You could also account for the true cell area of each cell, but that is rather convoluted.
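For completeness, a possible sketch of that correction with terra::cellSize(), assuming the Mercator raster r from above has first been cropped and masked to the county boundary:
r_county <- mask(crop(r, apm), apm)                   # Mercator raster limited to the county
ca <- cellSize(r_county, unit = "m")                  # true area of every cell, in m2
area_by_class <- zonal(ca, r_county, fun = "sum")     # cell areas summed per land cover class
area_by_class$area_km2 <- area_by_class[, 2] / 1e6    # convert to km2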
Finally, to get the area covered by each land cover class
m <- mask(x, apm)
f <- freq(m)
f <- data.frame(f)
f$area <- round(f$count * 900 / 1e6, 1)
# the next step is a bit tricky and will be done by `freq` in future versions
levs <- levels(m)[[1]]
f$class <- levs[f$value + 1]
Voila:
f[, c("class", "area")]
# class area
#1 Open Water 21.5
#2 Developed, Open Space 175.3
#3 Developed, Low Intensity 46.4
#4 Developed, Medium Intensity 9.7
#5 Developed, High Intensity 1.0
#6 Barren Land 232.5
#7 Deciduous Forest 44.6
#8 Evergreen Forest 7604.6
#9 Mixed Forest 2.0
#10 Shrub/Scrub 18262.8
#11 Herbaceuous 2514.4
#12 Hay/Pasture 1.9
#13 Cultivated Crops 0.0
#14 Woody Wetlands 38.8
#15 Emergent Herbaceuous Wetlands 53.6
And the total is as expected
sum(f$area)
#[1] 29009.1
PS. This problem has now been solved at the source --- but I hope this answer will remain useful for others using spatial data with a Mercator CRS.
Y'all, thanks so much for catching this, and thanks Robert for alerting me on Github! I hadn't realized that WCS was serving up transformed data. Luckily, the MRLC has made the data available in its native format as well, so I've gone ahead and pushed an update to FedData that provides those data (in CONUS Albers).
FWIW (no pictures, because this is my first SO post):
#devtools::install_github("ropensci/FedData")
library(FedData)
library(magrittr)
library(dplyr)
library(tigris)
library(raster)
#Get Apache polygon
apache <-
tigris::counties(state = "AZ") %>%
dplyr::filter(NAME == "Apache")
# Get NLCD data
nlcd_data <-
  FedData::get_nlcd(
    template = apache,
    year = 2011,
    label = "Apache",
    force.redo = TRUE
  )
# Transform Apache polygon to NLCD CRS
apache %<>%
sf::st_transform(raster::crs(nlcd_data))
# Plot NLCD raster and transformed Apache polygon
raster::plot(nlcd_data)
apache %>%
sf::st_geometry() %>%
plot(border = "white", lwd = 2, add = TRUE)
https://i.imgur.com/5b09t7P.png
# Mask NLCD data by Apache County (and plot result)
nlcd_data %<>%
raster::mask(apache)
plot(nlcd_data)
https://i.imgur.com/UinQ0v3.png
# Table of areas in km^2
nlcd_data_table <-
  nlcd_data %>%
  values() %>%
  tibble::tibble(landcover = .) %>%
  na.omit() %>%
  dplyr::group_by(landcover) %>%
  count(name = "freq") %>%
  dplyr::mutate(area = (freq * 900) %>%
                  units::set_units("m^2") %>%
                  units::set_units("km^2"))
nlcd_data_table
#> # A tibble: 15 x 3
#> # Groups: landcover [15]
#> landcover freq area
#> <int> <int> [km^2]
#> 1 11 24134 21.7206
#> 2 21 194720 175.2480
#> 3 22 51321 46.1889
#> 4 23 10485 9.4365
#> 5 24 1069 0.9621
#> 6 31 259987 233.9883
#> 7 41 48906 44.0154
#> 8 42 8456196 7610.5764
#> 9 43 2179 1.9611
#> 10 52 19941185 17947.0665
#> 11 71 3190650 2871.5850
#> 12 81 2169 1.9521
#> 13 82 4 0.0036
#> 14 90 42628 38.3652
#> 15 95 59014 53.1126
# Total area NLCD raster:
sum(nlcd_data_table$area)
#> 29056.18 [km^2]
# Area of Apache county
apache %>%
sf::st_area() %>%
units::set_units("km^2")
#> 29056.19 [km^2]
Created on 2021-05-26 by the reprex package (v2.0.0)

Find shortest distance between multiple points

Imagine a small dataset of xy coordinates. These points are grouped by a variable called indexR; there are 3 groups in total. All xy coordinates are in the same units. The data look approximately like this:
# A tibble: 61 x 3
indexR x y
<dbl> <dbl> <dbl>
1 1 837 924
2 1 464 661
3 1 838 132
4 1 245 882
5 1 1161 604
6 1 1185 504
7 1 853 870
8 1 1048 859
9 1 1044 514
10 1 141 938
# ... with 51 more rows
The goal is to determine which 3 points, one from each group, are closest to each other, in the sense of minimizing the sum of the pairwise distances between selected points.
I have attempted this by considering Euclidean distances, as follows. (Credit goes to @Mouad_S, in this thread, and https://gis.stackexchange.com/questions/233373/distance-between-coordinates-in-r)
#dput provided at bottom of this post
> df$dummy = 1
> df %>%
+ full_join(df, c("dummy" = "dummy")) %>%
+ full_join(df, c("dummy" = "dummy")) %>%
+ filter(indexR.x != indexR.y & indexR.x != indexR & indexR.y != indexR) %>%
+ mutate(dist =
+ ((.$x - .$x.x)^2 + (.$y- .$y.x)^2)^.5 +
+ ((.$x - .$x.y)^2 + (.$y- .$y.y)^2)^.5 +
+ ((.$x.x - .$x.y)^2 + (.$y.x- .$y.y)^2)^.5,
+ dist = round(dist, digits = 0)) %>%
+ arrange(dist) %>%
+ filter(dist == min(dist))
# A tibble: 6 x 11
indexR.x x.x y.x dummy indexR.y x.y y.y indexR x y dist
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 638 324 1 2 592 250 3 442 513 664
2 1 638 324 1 3 442 513 2 592 250 664
3 2 592 250 1 1 638 324 3 442 513 664
4 2 592 250 1 3 442 513 1 638 324 664
5 3 442 513 1 1 638 324 2 592 250 664
6 3 442 513 1 2 592 250 1 638 324 664
From this we can identify the three points closest together (minimum distance apart; enlarged on the figure below). However, the challenge comes when extending this such that indexR has 4,5 ... n groups. The problem is in finding a more practical or optimised method for making this calculation.
structure(list(indexR = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3), x = c(836.65, 464.43, 838.12, 244.68, 1160.86,
1184.52, 853.4, 1047.96, 1044.2, 141.06, 561.01, 1110.74, 123.4,
1087.24, 827.83, 100.86, 140.07, 306.5, 267.83, 1118.61, 155.04,
299.52, 543.5, 782.25, 737.1, 1132.14, 659.48, 871.78, 1035.33,
867.81, 192.94, 1167.8, 1099.59, 1097.3, 1089.78, 1166.59, 703.33,
671.64, 346.49, 440.89, 126.38, 638.24, 972.32, 1066.8, 775.68,
591.86, 818.75, 953.63, 1104.98, 1050.47, 722.43, 1022.17, 986.38,
1133.01, 914.27, 725.15, 1151.52, 786.08, 1024.83, 246.52, 441.53
), y = c(923.68, 660.97, 131.61, 882.23, 604.09, 504.05, 870.35,
858.51, 513.5, 937.7, 838.47, 482.69, 473.48, 171.78, 774.99,
792.46, 251.26, 757.95, 317.71, 401.93, 326.32, 725.89, 98.43,
414.01, 510.16, 973.61, 445.33, 504.54, 669.87, 598.75, 225.27,
789.45, 135.31, 935.51, 270.38, 241.19, 595.05, 401.25, 160.98,
778.86, 192.17, 323.76, 361.08, 444.92, 354, 249.57, 301.64,
375.75, 440.03, 428.79, 276.5, 408.84, 381.14, 459.14, 370.26,
304.05, 439.14, 339.91, 435.85, 759.42, 513.37)), class = c("tbl_df",
"tbl", "data.frame"), row.names = c(NA, -61L), .Names = c("indexR",
"x", "y"))
One possibility would be to formulate the problem of identifying the closest elements, one from each group, as a mixed integer program. We could define decision variables y_i for whether each point i is selected, as well as x_{ij} for whether points i and j are both selected (x_{ij} = y_i y_j), and we need to select exactly one element from each group.
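Written out (a sketch of the standard linearization that the implementation below encodes, with d_{ij} the distance between points i and j):
minimize    sum_{i<j} d_{ij} x_{ij}
subject to  x_{ij} <= y_i,   x_{ij} <= y_j,   x_{ij} >= y_i + y_j - 1     for every cross-group pair (i, j)
            sum of y_i over the points i in group g = 1                   for every group g
            sum of x_{ij} over the pairs linking groups g and h = 1       for every pair of groups (g, h)
            x_{ij}, y_i in {0, 1}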
In practice, you could implement this mixed integer program using the lpSolve package (or one of the other R optimization packages).
opt.closest <- function(df) {
  # Compute every pair of indices
  library(dplyr)
  pairs <- as.data.frame(t(combn(nrow(df), 2))) %>%
    mutate(G1 = df$indexR[V1], G2 = df$indexR[V2]) %>%
    filter(G1 != G2) %>%
    mutate(dist = sqrt((df$x[V1] - df$x[V2])^2 + (df$y[V1] - df$y[V2])^2))
  # Compute a few convenience values
  n <- nrow(df)
  nP <- nrow(pairs)
  groups <- sort(unique(df$indexR))
  nG <- length(groups)
  gpairs <- combn(groups, 2)
  nGP <- ncol(gpairs)
  # Solve the optimization problem
  obj <- c(pairs$dist, rep(0, n))
  constr <- rbind(cbind(diag(nP), -outer(pairs$V1, seq_len(n), "==")),
                  cbind(diag(nP), -outer(pairs$V2, seq_len(n), "==")),
                  cbind(diag(nP), -outer(pairs$V1, seq_len(n), "==") - outer(pairs$V2, seq_len(n), "==")),
                  cbind(matrix(0, nG, nP), outer(groups, df$indexR, "==")),
                  cbind((outer(gpairs[1,], pairs$G1, "==") &
                           outer(gpairs[2,], pairs$G2, "==")) |
                          (outer(gpairs[2,], pairs$G1, "==") &
                             outer(gpairs[1,], pairs$G2, "==")), matrix(0, nGP, n)))
  dir <- rep(c("<=", ">=", "="), c(2*nP, nP, nG+nGP))
  rhs <- rep(c(0, -1, 1), c(2*nP, nP, nG+nGP))
  library(lpSolve)
  mod <- lp("min", obj, constr, dir, rhs, all.bin=TRUE)
  which(tail(mod$solution, n) == 1)
}
This can compute the closest 3 points, one from each cluster, in your example dataset:
df[opt.closest(df),]
# A tibble: 3 x 3
# indexR x y
# <dbl> <dbl> <dbl>
# 1 1 638.24 323.76
# 2 2 591.86 249.57
# 3 3 441.53 513.37
It can also compute the best possible solution for datasets with more points and groups. Here are the runtimes for datasets with 7 groups each and 100 and 200 points:
make.dataset <- function(n, nG) {
  set.seed(144)
  data.frame(indexR = sample(seq_len(nG), n, replace = TRUE), x = rnorm(n), y = rnorm(n))
}
df100 <- make.dataset(100, 7)
system.time(opt.closest(df100))
# user system elapsed
# 11.536 2.656 15.407
df200 <- make.dataset(200, 7)
system.time(opt.closest(df200))
# user system elapsed
# 187.363 86.454 323.167
This is far from instantaneous -- it takes 15 seconds for the 100-point, 7-group dataset and 323 seconds for the 200-point, 7-group dataset. Still, it is much quicker than iterating through all 92 million 7-tuples in the 100-point dataset or all 13.8 billion 7-tuples in the 200-point dataset. You could set a runtime limit with a solver like the one from the Rglpk package to get the best solution obtained within that limit.
You cannot afford to enumerate all possible solutions, and I don't see any obvious shortcut.
So I guess you'll have to do a branch and bound optimization approach.
First guess a reasonably good solution. Like the closest two points with different labels. Then add the nearest with a different label until you have all labels covered.
Now do some trivial optimization: for every label, try if there is some point that you can use instead of the current point to improve the result. Stop when you can't find any further improvement.
For this initial guess, compute the distances. This will give you an upper bound, which allows you to stop your search early. You can also compute a lower bound, the sum of all best two-label solutions.
Now you can try to remove points, where the nearest neighbors of each label + the lower bounds for all other labels is already worse than your initial solution. This will hopefully eliminate a lot of points.
Then you can start enumerating solutions (probably begin with the smallest labels first), but stop recursion whenever the current solution + the remaining lower bounds are larger than your best known solution (branch and bound).
You can also try sorting points e.g. by minimum distance to the remaining labels, to hopefully find better bounds fast.
I'd certainly not choose R to implement this...
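That said, a minimal R sketch of just the first two steps (the greedy start and the per-label improvement pass; this only produces an upper bound, not the full branch and bound) could look like the following, assuming df has the indexR, x and y columns from the question:
greedy_start <- function(df) {
  d <- as.matrix(dist(df[, c("x", "y")]))
  d[outer(df$indexR, df$indexR, "==")] <- Inf   # ignore same-label pairs
  # closest pair of points with different labels
  sel <- as.integer(which(d == min(d), arr.ind = TRUE)[1, ])
  # add the nearest point of each still-uncovered label
  while (length(unique(df$indexR[sel])) < length(unique(df$indexR))) {
    cand <- which(!(df$indexR %in% df$indexR[sel]))
    sel <- c(sel, cand[which.min(apply(d[cand, sel, drop = FALSE], 1, min))])
  }
  sel
}
improve <- function(df, sel) {
  total <- function(s) sum(dist(df[s, c("x", "y")]))   # sum of pairwise distances
  best <- total(sel)
  repeat {
    improved <- FALSE
    for (k in seq_along(sel)) {
      # try every point with the same label as the k-th selected point
      for (i in which(df$indexR == df$indexR[sel[k]])) {
        cand <- sel; cand[k] <- i
        if (total(cand) < best) { sel <- cand; best <- total(cand); improved <- TRUE }
      }
    }
    if (!improved) break
  }
  list(rows = sel, total_distance = best)
}
# improve(df, greedy_start(df))   # heuristic upper bound for the branch and bound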
You can use cross joins to get all combinations of points, calculate the total distance between the three points in each combination, and then take the minimum.
df$id <- row.names(df) # to create ID's for the points
df2 <- merge(df, df, by = NULL ) # the first cross join
df3 <- merge(df2, df, by = NULL) # the second cross join
# eliminating rows where the points are of the same indexR
df3 <- df3[df3$indexR.x != df3$indexR.y & df3$indexR.x != df3$indexR
& df3$indexR.y != df3$indexR,]
## calculating the total distance
df3$total_distance <- ((df3$x - df3$x.x)^2 + (df3$y- df3$y.x)^2)^.5 +
((df3$x - df3$x.y)^2 + (df3$y- df3$y.y)^2)^.5 +
((df3$x.x - df3$x.y)^2 + (df3$y.x- df3$y.y)^2)^.5
## minimum distance
df3[which.min(df3$total_distance),]
indexR.x x.x y.x id.x indexR.y x.y y.y id.y indexR x y id total_distance
155367 3 441.53 513.37 61 2 591.86 249.57 46 1 638.24 323.76 42 664.3373
I developed a simple algorithm to quickly solve this problem. First, overlay a grid on the entire area and assign each point from each group to the cell (unit square) it falls in. Then start at the cell one column in and one row up from the lower-left corner of the grid. Define a region of interest consisting of this cell and its 8 neighbors, and test whether at least one point from every group lies within this 9-cell region. If so, compute the distances for all combinations of points in the region, taking one point from each group and never pairing points from the same group; the combination with the minimum total distance is saved as a candidate solution. The whole process is then repeated with the central cell moved one column to the right, stopping one cell short of the right edge. When a row is finished, move up one row and start again one cell in from the left, so that every interior cell has served as the center by the time the top row is done. The final solution is the minimum total distance found over all 9-cell regions.
The reason we consider a 9-cell region and not just go cell-by-cell is that we could miss closely spaced points from different groups that are located in the corners of cells.
It's important to choose the correct cell or grid size. If the cells are too small then no possible solution will be found because none of the regions will encompass at least one point from each group. If the cells are too large then there will be many points from each group and calculation time will be excessive. Fortunately this optimal cell size can be quickly found through trial and error.
I've run this algorithm multiple times with varying number of groups and number of points in a group. For randomly scattered points in all groups I found that a 15 x 15 grid size works well for a 10 group - 400 point (40 points per group) case. That example runs in under one second.
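A sketch of just the binning and coverage test (grid size and the indexR, x, y column names as in the question; illustrative only):
n <- 15                                          # grid is n x n cells over the bounding box
df$cx <- cut(df$x, breaks = n, labels = FALSE)
df$cy <- cut(df$y, breaks = n, labels = FALSE)
# does the 3 x 3 block of cells centered on cell (i, j) contain every group?
all_groups_present <- function(i, j) {
  in_block <- abs(df$cx - i) <= 1 & abs(df$cy - j) <= 1
  length(unique(df$indexR[in_block])) == length(unique(df$indexR))
}
# scan the interior cells and keep the 9-cell regions worth evaluating in full
regions <- expand.grid(i = 2:(n - 1), j = 2:(n - 1))
regions$ok <- mapply(all_groups_present, regions$i, regions$j)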

Peak detection in Manhattan plot

The attached plot (a Manhattan plot) shows chromosome positions from the genome on the x axis and -log(p) on the y axis, where p is the p-value associated with the variant at that position.
I have used the following R code (from the gap package) to generate it:
require(gap)
affy <-c(40220, 41400, 33801, 32334, 32056, 31470, 25835, 27457, 22864, 28501, 26273,
24954, 19188, 15721, 14356, 15309, 11281, 14881, 6399, 12400, 7125, 6207)
CM <- cumsum(affy)
n.markers <- sum(affy)
n.chr <- length(affy)
test <- data.frame(chr=rep(1:n.chr,affy),pos=1:n.markers,p=runif(n.markers))
oldpar <- par()
par(cex=0.6)
colors <- c("red","blue","green","cyan","yellow","gray","magenta","red","blue","green", "cyan","yellow","gray","magenta","red","blue","green","cyan","yellow","gray","magenta","red")
mhtplot(test,control=mht.control(colors=colors),pch=19,bg=colors)
> head(test)
chr pos p
1 1 1 0.79296584
2 1 2 0.96675136
3 1 3 0.43870076
4 1 4 0.79825513
5 1 5 0.87554143
6 1 6 0.01207523
I am interested in getting the coordinates of the peaks of the plot above a certain threshold (-log(p)) .
If you want the indices of the values above the 99th percentile:
# Add new column with log values
test = transform(test, log_p = -log10(test[["p"]]))
# Get the 99th percentile
pct99 = quantile(test[["log_p"]], 0.99)
...and get the values from the original data test:
peaks = test[test[["log_p"]] > pct99,]
> head(peaks)
chr pos p log_p
5 1 5 0.002798126 2.553133
135 1 135 0.003077302 2.511830
211 1 211 0.003174833 2.498279
586 1 586 0.005766859 2.239061
598 1 598 0.008864987 2.052322
790 1 790 0.001284629 2.891222
You can use this with any threshold. Note that I have not calculated the first derivative; see this question for some pointers:
How to calculate first derivative of time series
After calculating the first derivative, you can find the peaks by looking for points in the time series where the first derivative is (almost) zero. After identifying these peaks, you can check which ones are above the threshold.
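For instance, a simple sketch of that idea (reusing the log_p column and pct99 threshold computed above; a point counts as a peak when it exceeds both of its neighbors and the threshold):
lp <- test$log_p
is_peak <- c(FALSE, diff(sign(diff(lp))) == -2, FALSE) & lp > pct99
peaks_local <- test[is_peak, ]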
Based on my experience, after plotting the graph you can use the following R code to find the peak coordinates interactively:
plot(x[,1], x[,2])
identify(x[,1], x[,2], labels=row.names(x))
Note that x[,1] refers to the x coordinate (the genome coordinate) and x[,2] to your -log10(p) value.
Now use your mouse to select a point and hit enter, which will give you the peak location; then run the following code to get its coordinates:
coords <- locator(type="l")
coords
