How to simplify the code using the apply function - R

I have this script:
library(plyr)
library(gstat)
library(sp)
library(dplyr)
library(ggplot2)
library(scales)
a<-c(10,20,30,40,50,60,70,80,90,100)
b<-c(15,25,35,45,55,65,75,85,95,105)
x<-rep(a,3)
y<-rep(b,3)
E<-sample(30)
freq<-rep(c(100,200,300),10)
data<-data.frame(x,y,freq,E)
data<-arrange(data,x,y,freq)
df <- ddply(data, "freq", function(h) {
  dim_h <- length(h$x)
  perc_max <- 0.9
  perc_min <- 0.8
  u <- round(seq(perc_max, perc_min, by = -0.1) * dim_h)
  dim_u <- length(u)
  perc_punti <- percent(seq(perc_max, perc_min, by = -0.1))
  # pre-allocate the result vectors (they were used without initialisation before)
  time <- numeric(dim_u)
  sqmm <- numeric(dim_u)
  for (i in 1:dim_u) {
    t <- u[i]
    sqm <- numeric(2)
    time[i] <- system.time(
      for (j in 1:2) {
        df_tass <- sample_n(h, t)
        df_residuo <- slice(h, -as.numeric(rownames(df_tass)))
        coordinates(df_tass) <- ~x + y
        x.range <- range(h$x)
        y.range <- range(h$y)
        grid <- expand.grid(x = seq(from = x.range[1], to = x.range[2], by = 1),
                            y = seq(from = y.range[1], to = y.range[2], by = 1))
        coordinates(grid) <- ~x + y
        gridded(grid) <- TRUE
        nearest <- krige(E ~ 1, df_tass, grid, nmax = 1)
        nearest_df <- as.data.frame(nearest)
        names(nearest_df) <- c("x", "y", "E")
        # Prediction error on the held-out points
        df_pred <- inner_join(nearest_df[1:3], select(df_residuo, x, y, E), by = c("x", "y"))
        names(df_pred) <- c("x", "y", "E_pred", "E")
        sqm[j] <- mean((df_pred[, 4] - df_pred[, 3])^2)
      }
    )[3]
    sqmm[i] <- mean(sqm)
  }
  df_finale <- data.frame(sqmm, time, perc_punti)
})
df
I measured, at several points with coordinates (x, y), the value of the electromagnetic field (E) at different frequencies (freq). For each frequency value, I use once 90% of the points and once 80% (the for loop over i) to interpolate the value of the electromagnetic field (E) over the grid with nearest-neighbour interpolation (the krige function), and I repeat this 2 times. The remaining points are then used to calculate the prediction error. I hope it's clear.
The script above is a simplified case. Unfortunately, in my real case the script takes too long because of the two for-loops.
I want to ask if it's possible to simplify the code in some way, for instance by using the apply function family. Thanks.
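Since the answer referenced in the reply below is not shown here, a hedged sketch of one possible apply-based rewrite (my own, reusing the body of the original inner loop unchanged; `h` is one frequency subset, exactly as inside the ddply() call above):
one_rep <- function(h, t) {
  df_tass <- sample_n(h, t)
  df_residuo <- slice(h, -as.numeric(rownames(df_tass)))
  coordinates(df_tass) <- ~x + y
  x.range <- range(h$x)
  y.range <- range(h$y)
  grid <- expand.grid(x = seq(x.range[1], x.range[2], by = 1),
                      y = seq(y.range[1], y.range[2], by = 1))
  coordinates(grid) <- ~x + y
  gridded(grid) <- TRUE
  nearest_df <- as.data.frame(krige(E ~ 1, df_tass, grid, nmax = 1))
  names(nearest_df) <- c("x", "y", "E")
  df_pred <- inner_join(nearest_df[1:3], select(df_residuo, x, y, E), by = c("x", "y"))
  names(df_pred) <- c("x", "y", "E_pred", "E")
  mean((df_pred$E - df_pred$E_pred)^2)
}

df <- ddply(data, "freq", function(h) {
  perc <- seq(0.9, 0.8, by = -0.1)
  u <- round(perc * nrow(h))
  res <- sapply(u, function(t) {
    elapsed <- system.time(sqm <- sapply(1:2, function(j) one_rep(h, t)))[3]
    c(sqmm = mean(sqm), time = unname(elapsed))
  })
  data.frame(t(res), perc_punti = percent(perc))
})
Note that the kriging call remains the expensive step, so this mainly tidies the structure rather than guaranteeing a big speed-up.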
Reply to @clemlaflemme: OK, it works! Thanks. Now I have a little problem with the final data frame; it looks like this:
freq 1 2
1 100 121.00 338.00
2 100 0.47 0.85
3 200 81.00 462.50
4 200 0.74 0.73
5 300 36.00 234.00
6 300 0.82 0.76
but I want something like this:
freq sqmm time
1 100 121.0 0.47
2 100 338.0 0.85
3 200 81.0 0.74
4 200 462.5 0.73
5 300 36.0 0.82
6 300 234.0 0.76
How can I do that?
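A hedged sketch of one way to get that shape, assuming the wide result (called `df_wide` here) has one column per repetition and alternates sqmm rows (odd) with time rows (even), as in the printout above:
# reproduce the wide result shown above (stand-in for the real object)
df_wide <- data.frame(
  freq = c(100, 100, 200, 200, 300, 300),
  `1`  = c(121.00, 0.47, 81.00, 0.74, 36.00, 0.82),
  `2`  = c(338.00, 0.85, 462.50, 0.73, 234.00, 0.76),
  check.names = FALSE
)

sqmm_rows <- df_wide[seq(1, nrow(df_wide), by = 2), ]  # odd rows hold sqmm
time_rows <- df_wide[seq(2, nrow(df_wide), by = 2), ]  # even rows hold time

df_long <- data.frame(
  freq = rep(sqmm_rows$freq, each = 2),
  sqmm = as.vector(t(as.matrix(sqmm_rows[, c("1", "2")]))),
  time = as.vector(t(as.matrix(time_rows[, c("1", "2")])))
)
df_long
#   freq  sqmm time
# 1  100 121.0 0.47
# 2  100 338.0 0.85
# 3  200  81.0 0.74
# ...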

Related

How to optimize my function by dropping loops

I have the following function that uses nested loops, and honestly I'm not sure how to make the code run more efficiently. It runs fine for 100 sims in my opinion, but when I ran it for 2000 sims it took almost 12 seconds.
This code will generate any n Brownian motion simulations and works well; the issue is that once the simulation size is increased to, say, 500+ it starts to bog down, and when it hits 2k it's pretty slow, i.e. about 12 seconds.
Here is the function:
ts_brownian_motion <- function(.time = 100, .num_sims = 10, .delta_time = 1,
                               .initial_value = 0) {

  # TidyEval ----
  T <- as.numeric(.time)
  N <- as.numeric(.num_sims)
  delta_t <- as.numeric(.delta_time)
  initial_value <- as.numeric(.initial_value)

  # Checks ----
  if (!is.numeric(T) | !is.numeric(N) | !is.numeric(delta_t) | !is.numeric(initial_value)) {
    rlang::abort(
      message = "All parameters must be numeric values.",
      use_cli_format = TRUE
    )
  }

  # Initialize empty data.frame to store the simulations
  sim_data <- data.frame()

  # Generate N simulations
  for (i in 1:N) {
    # Initialize the current simulation with a starting value of 0
    sim <- c(initial_value)

    # Generate the brownian motion values for each time step
    for (t in 1:(T / delta_t)) {
      sim <- c(sim, sim[t] + rnorm(1, mean = 0, sd = sqrt(delta_t)))
    }

    # Bind the time steps, simulation values, and simulation number together
    # in a data.frame and add it to the result
    sim_data <- rbind(
      sim_data,
      data.frame(
        t = seq(0, T, delta_t),
        y = sim,
        sim_number = i
      )
    )
  }

  # Clean up
  sim_data <- sim_data %>%
    dplyr::as_tibble() %>%
    dplyr::mutate(sim_number = forcats::as_factor(sim_number)) %>%
    dplyr::select(sim_number, t, y)

  # Return ----
  attr(sim_data, ".time") <- .time
  attr(sim_data, ".num_sims") <- .num_sims
  attr(sim_data, ".delta_time") <- .delta_time
  attr(sim_data, ".initial_value") <- .initial_value

  return(sim_data)
}
Here is some output of the function:
> ts_brownian_motion(.time = 10, .num_sims = 25)
# A tibble: 275 × 3
sim_number t y
<fct> <dbl> <dbl>
1 1 0 0
2 1 1 -2.13
3 1 2 -1.08
4 1 3 0.0728
5 1 4 0.562
6 1 5 0.255
7 1 6 -1.28
8 1 7 -1.76
9 1 8 -0.770
10 1 9 -0.536
# … with 265 more rows
# ℹ Use `print(n = ...)` to see more rows
As suggested in the comments, if you want speed, you should use cumsum. You need to be clear about what type of Brownian motion you want (arithmetic or geometric). For geometric Brownian motion, you'll need to correct the approximation error by adjusting the mean. As an example, the NMOF package (which I maintain) contains a function gbm that implements geometric Brownian motion through cumsum. Here is an example call for 2000 paths with 100 timesteps each.
library("NMOF")
library("zoo") ## for plotting
timesteps <- 100
system.time(b <- NMOF::gbm(2000, tau = 1, timesteps = 100, r = 0, v = 1))
## user system elapsed
## 0.013 0.000 0.013
dim(b) ## each column is one path, starting at time zero
## [1] 101 2000
plot(zoo(b[, 1:5], 0:timesteps), plot.type = "single")
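For the arithmetic case that the question's function produces, a minimal sketch of my own (not part of the NMOF answer) that removes both loops by drawing all increments at once and cumulating them column-wise:
ts_brownian_motion_fast <- function(.time = 100, .num_sims = 10, .delta_time = 1,
                                    .initial_value = 0) {
  n_steps <- .time / .delta_time
  # one column per simulation, one row per time step
  increments <- matrix(rnorm(n_steps * .num_sims, mean = 0, sd = sqrt(.delta_time)),
                       nrow = n_steps, ncol = .num_sims)
  # prepend the starting value and accumulate the increments per column
  paths <- rbind(rep(.initial_value, .num_sims),
                 .initial_value + apply(increments, 2, cumsum))
  data.frame(
    sim_number = factor(rep(seq_len(.num_sims), each = n_steps + 1)),
    t = rep(seq(0, .time, by = .delta_time), times = .num_sims),
    y = as.vector(paths)
  )
}

# noticeably faster than the loop-and-rbind version for 2000 sims
system.time(ts_brownian_motion_fast(.time = 100, .num_sims = 2000))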

R: interpolate a value from dataframe based on two inputs

I have a data frame that looks like this:
Teff logg M_div_H U B V R I J H K L Lprime M
1 2000 4.0 -0.1 -13.443 -11.390 -7.895 -4.464 -1.831 1.666 3.511 2.701 4.345 4.765 5.680
2 2000 4.5 -0.1 -13.402 -11.416 -7.896 -4.454 -1.794 1.664 3.503 2.728 4.352 4.772 5.687
3 2000 5.0 -0.1 -13.358 -11.428 -7.888 -4.431 -1.738 1.664 3.488 2.753 4.361 4.779 5.685
4 2000 5.5 -0.1 -13.220 -11.079 -7.377 -4.136 -1.483 1.656 3.418 2.759 4.355 4.753 5.638
5 2200 3.5 -0.1 -11.866 -9.557 -6.378 -3.612 -1.185 1.892 3.294 2.608 3.929 4.289 4.842
6 2200 4.5 -0.1 -11.845 -9.643 -6.348 -3.589 -1.132 1.874 3.310 2.648 3.947 4.305 4.939
...
Let's say I have two values:
input_Teff = 4.8529282904170595E+003
input_log_g = 1.9241934741026787E+000
Notice how every V value has a unique Teff, logg combination. From the input values, I would like to interpolate a value for V. Is there a way to do this in R?
Edit 1: Here is the link to the full data frame: https://www.dropbox.com/s/prbceabxmd25etx/lcb98cor.dat?dl=0
Building on Ian Campbell's observation that you can consider your data as points on a two-dimensional plane, you can use spatial interpolation methods. The simplest approach is inverse-distance weighting, which you can implement like this:
library(data.table)
d <- fread("https://www.dropbox.com/s/prbceabxmd25etx/lcb98cor.dat?dl=1")
setnames(d,"#Teff","Teff")
First rescale the data as appropriate (not shown here, see Ian's answer)
library(gstat)
# fit model
idw <- gstat(id="V", formula = V~1, locations = ~Teff+logg, data=d, nmax=7, set=list(idp = .5))
# new "points" to predict to
newd <- data.frame(Teff=c(4100, 4852.928), logg=c(1.5, 1.9241934741026787))
p <- predict(idw, newd)
#[inverse distance weighted interpolation]
p$V.pred
#[1] -0.9818571 -0.3602857
For higher dimensions you could use fields::Tps (I think you can force that to be an exact method, that is, exactly honor the observations, by making each observation a node).
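A hedged sketch (my own, not part of this answer) of that fields::Tps idea; note that, as Ian's answer below discusses, Teff and logg are on very different scales, so rescaling the coordinates first may be advisable:
library(fields)

# thin-plate spline of V over the two coordinates (d as read above)
fit <- Tps(x = d[, c("Teff", "logg")], Y = d$V)

# predict at the input point
predict(fit, x = data.frame(Teff = 4852.928, logg = 1.9241934741026787))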
We can imagine that Teff and logg exist in a 2-dimensional plane. We can see that your input point exists in that same space:
library(tidyverse)
ggplot(data,aes(x = Teff, y = logg)) +
geom_point() +
geom_point(data = data.frame(Teff = 4.8529282904170595e3, logg = 1.9241934741026787),
color = "orange")
However, we can see the scale of Teff and logg are not the same. Simply taking log(Teff) gets us pretty close, but not quite. So we can rescale between 0 and 1 instead. We can create a custom rescale function. It will become clear why we can't use scales::rescale in a moment.
rescale = function(x,y){(x - min(y))/(max(y)-min(y))}
We can now rescale the data:
data %>%
mutate(Teff.scale = rescale(Teff,Teff),
logg.scale = rescale(logg,logg)) -> data
From here, we might use raster::pointDistance to calculate the distance from the input point to all of the scaled values:
raster::pointDistance(cbind(rescale(input_Teff,data$Teff),rescale(input_log_g,data$logg)),
data[,c("Teff.scale","logg.scale")],
lonlat = FALSE)
We can use which.min to find the row with the minimum distance:
data[which.min(raster::pointDistance(cbind(rescale(input_Teff,data$Teff),rescale(input_log_g,data$logg)),
data[,c("Teff.scale","logg.scale")],
lonlat = FALSE)),]
Teff logg M_div_H U B V R I J H K L Lprime M Teff.scale logg.scale
1: 4750 2 -0.1 -2.447 -1.438 -0.355 0.159 0.589 1.384 1.976 1.881 2.079 2.083 2.489 0.05729167 0.4631902
Here we can visualize the result:
ggplot(data,aes(x = Teff.scale, y = logg.scale)) +
geom_point() +
geom_point(data = data[which.min(raster::pointDistance(cbind(rescale(input_Teff,data$Teff),rescale(input_log_g,data$logg)),data[,c("Teff.scale","logg.scale")], FALSE)),],
color = "blue") +
geom_point(data = data.frame(Teff.scale = rescale(input_Teff,data$Teff),logg.scale = rescale(input_log_g,data$logg)),
color = "orange")
And access the appropriate value for V:
data[which.min(raster::pointDistance(cbind(rescale(input_Teff,data$Teff),rescale(input_log_g,data$logg)),data[,c("Teff.scale","logg.scale")], FALSE)),"V"]
V
1: -0.355
Data:
library(data.table)
data <- fread("https://www.dropbox.com/s/prbceabxmd25etx/lcb98cor.dat?dl=1")
setnames(data,"#Teff","Teff")
input_Teff = 4.8529282904170595E+003
input_log_g = 1.9241934741026787E+000

Use apply() on a 1-dim vector to find the best threshold

My current mission: pick some "good" columns from an incomplete matrix, trying to remove NAs while keeping as much real data as possible.
My idea: I can calculate every column's percentage of missing data (NA%). For a given threshold t, all columns with NA% > t will be removed. The removed columns also contain some real data, so the ratio present/missing shows the "price" of deleting these columns. My idea is to search for the lowest "price" that deletes as many NAs as possible, for each dataset.
I already wrote my function till the last 2 steps:
myfunc1 <- function(x){
return(sum(is.na(x)))
}
myfunc2 <- function(x){
return (round(myfunc1(x) / length(x),4))
}
myfunc3 <- function(t, set){
m <- which(apply(set, MARGIN = 2, myfunc2) > t)
missed <- sum(is.na(set[m]))
present <- sum(!is.na(set[m]))
return(present/ missed)
}
myfunc3(0.5, setA) # worked
threshold <- seq(from = 0, to = 0.95, by = 0.05)
apply(X = threshold, MARGIN = 1, FUN = myfunc3, set = setA) # did not work; stuck here.
I have 10 datasets, from setA to setJ, and I want to test all thresholds from 0 to 0.95. I want a matrix returned, with the 10 datasets as columns and 20 threshold rows at 0.05 intervals.
Did I do this correctly? Are there better ideas, or already existing libraries that I could use?
----------edit: example-----------
setA <- data.frame(cbind(c(1,2,3,4,NA,6,7,NA), c(1,2,NA,4,5,NA,NA,8),c(1,2,3,4,5,6,NA,8), c(1,2,3,4,5,6,7,8),c(NA,NA,NA,4,NA,6,NA,NA)))
colnames(setA) <- sprintf("col%s", seq(1:5))
rownames(setA) <- sprintf("sample%s", seq(1:8))
View(setA)
myfunc1 <- function(x){
return(sum(is.na(x)))
}
myfunc2 <- function(x){
return (round(myfunc1(x) / length(x),4))
}
myfunc3 <- function(t, set){
m <- which(apply(set, MARGIN = 2, myfunc2) > t)
missed <- sum(is.na(set[m]))
present <- sum(!is.na(set[m]))
return(present/ missed)
}
In setA, there are 8 samples. Each sample has 5 attributes describing it. Unfortunately, some data are missing. I need to delete the columns with too many NAs. First, let me calculate each column's NA%.
> apply(setA, MARGIN = 2, myfunc2)
col1 col2 col3 col4 col5
0.250 0.375 0.125 0.000 0.750
If I set the threshold t = 0.3, that means col2 and col5 are considered to have too many NAs and need to be deleted; the others are acceptable. If I delete those 2 columns, I also delete some real data. (I deleted 7 real values and 9 NAs; 7/9 = 0.78. This means I sacrifice 0.78 real values for every NA I delete.)
> myfunc3(0.3, setA)
[1] 0.7777778
I want to try every threshold's result and then decide.
threshold <- seq(from = 0, to = 0.9, by = 0.1)
apply(X = threshold, MARGIN = 1, FUN = myfunc3, set = setA) # does not work
I manually calculated the setA part:
threshold: 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
price: 1.667 1.667 1.118 0.778 0.334 0.334 0.334 0.334 NaN NaN
In the end I want a table like:
threshold: 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
setA: 1.667 1.667 1.118 0.778 0.334 0.334 0.334 0.334 NaN NaN
setB:
setC:
...
setJ:
Am I approaching the problem in the right way?
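For what it's worth, a hedged sketch (not from the thread) of the last two steps: apply() needs an object with a dim attribute, so for a plain vector of thresholds sapply() is the usual replacement, and wrapping that in a loop over a named list of datasets gives the threshold-by-dataset matrix directly:
thresholds <- seq(from = 0, to = 0.95, by = 0.05)
sets <- list(setA = setA)   # extend with setB, ..., setJ as they become available

# rows = thresholds, columns = datasets
price_matrix <- sapply(sets, function(s) sapply(thresholds, myfunc3, set = s))
rownames(price_matrix) <- thresholds
price_matrix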
-----------Edit---------------
I have already solved the problem; please close the thread.

How to calculate the sum of unique ways found in a distance matrix

I have a data frame of 3 points in space represented by their longitude and latitude:
myData <- structure(list(lng = c(-37.06852042, -37.07473406, -37.07683313
), lat = c(-11.01471746, -11.02468103, -11.02806217)), .Names = c("lng",
"lat"), row.names = c(NA, 3L), class = "data.frame")
Next, I use the geosphere package to get a distance matrix (in meters, which I convert to km) for the points:
> m <- round(distm(myData)/1000,2)
> rownames(m) <- c("A", "B", "C")
> colnames(m) <- c("A", "B", "C")
> m
A B C
A 0.00 1.30 1.74
B 1.30 0.00 0.44
C 1.74 0.44 0.00
Given that this is a distance matrix and I have 6 ways of visiting A, B and C (like A -> B -> C, C -> A -> B, and so on), I would like to extract some information from it, like the minimum, the median, and the maximum distance.
To illustrate, I calculated all the possible ways for my example manually:
ways <- c(abc <- 1.3 + 0.44,
acb <- 1.74 + 0.44,
bac <- 1.3 + 1.74,
bca <- 0.44 + 1.74,
cab <- 1.74 + 1.3,
cba <- 0.44 + 1.3)
> min(ways)
[1] 1.74
> median(ways)
[1] 2.18
> max(ways)
[1] 3.04
How do I automate this task, given that I'll be working with more than 10 locations and this problem has factorial complexity?
I wrote a package called trotter that maps integers to different arrangement types (permutations, combinations and others). For this problem, it seems that you are interested in the permutations of locations. One of the objects in the package is the permutation pseudo-vector that is created using the function ppv.
First install "trotter":
install.packages("trotter")
Then an automated version of your task might look something like:
library(geosphere)
myData <- data.frame(
lng = c(-37.06852042, -37.07473406, -37.07683313),
lat = c(-11.01471746, -11.02468103, -11.02806217)
)
m <- round(distm(myData) / 1000, 2)
locations <- c("A", "B", "C")
rownames(m) <- colnames(m) <- locations
library(trotter)
perms <- ppv(k = length(locations), items = locations)
ways <- c()
for (i in 1:length(perms)) {
perm <- perms[i]
route <- paste(perm, collapse = "")
ways[[route]] <- sum(
sapply(
1:(length(perm) - 1),
function(i) m[perm[i], perm[i + 1]]
)
)
}
Back in the R console:
> ways
ABC ACB CAB CBA BCA BAC
1.74 2.18 3.04 1.74 2.18 3.04
> # What is the minimum route length?
> min(ways)
[1] 1.74
> # Which route (index) is this?
> which.min((ways))
ABC
1
Just remember, like you said, you're dealing with factorial complexity, and you might end up waiting a while running this brute-force search with more than a few locations...
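For reference, a hedged base-R-style sketch (my own, not using trotter) of the same brute-force enumeration, with combinat::permn() and matrix indexing on m:
library(combinat)

routes <- permn(rownames(m))                       # all permutations of A, B, C
# sum the consecutive legs of each route from the distance matrix
ways <- sapply(routes, function(p) sum(m[cbind(p[-length(p)], p[-1])]))
names(ways) <- sapply(routes, paste, collapse = "")

min(ways); median(ways); max(ways)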

How to find value for/match to coordinates of closest proximity in a second df

I have a series of geographical positions at sea for which I am trying to get geological sediment-type information. I am using an export of the national British geological sediment database (df1), which is a large data set of coordinates and sediment information.
Currently I have been rounding the coordinates in the BGS export file (df1) and averaging/recalculating the sediment type for these coordinate squares; then I round my coordinates in (df2) and match them to these squares to get a sediment classification.
The BGS export looks like this (df1);
NUM X Y GRAV SAND MUD
1 228 1.93656 52.31307 1.07 98.83 0.10
2 142 1.84667 52.45333 0.00 52.60 47.40
3 182 1.91950 52.17750 9.48 90.38 0.14
4 124 1.88333 52.70833 0.00 98.80 1.20
5 2807 1.91050 51.45000 2.05 97.91 0.05
6 2787 1.74683 51.99382 41.32 52.08 6.60
7 2776 1.66117 51.63550 9.83 87.36 2.81
8 2763 1.82467 51.71767 43.92 47.25 8.83
9 2753 1.76867 51.96349 57.66 39.18 3.15
10 68 2.86967 52.96333 0.30 98.90 0.80
11 2912 1.70083 51.77783 26.90 64.87 8.22
12 2914 1.59750 51.88882 32.00 65.02 2.97
13 2886 1.98833 51.34267 1.05 98.91 0.04
14 2891 1.87817 51.31549 68.57 31.34 0.08
15 2898 1.37433 51.41249 35.93 61.48 2.59
16 45 2.06667 51.82500 9.70 88.10 2.20
17 2904 1.63617 51.45999 16.28 66.67 17.05
My positions at sea look like this (df2);
haul DecStartLat DecStartLong
1993H_2 55.23983 -5.512830
2794H_1 55.26670 -5.516700
1993H_1 55.27183 -5.521330
0709A_71 55.26569 -5.519730
0396H_2 55.44120 -5.917800
0299H_2 55.44015 -5.917310
0514A_26 55.46897 -5.912167
0411A_64 55.47289 -5.911820
0410A_65 55.46869 -5.911930
0514A_24 55.63585 -5.783500
0295H_4 55.57250 -5.754300
0410A_62 55.63656 -6.041870
0413A_53 55.73280 -6.020600
0396H_13 55.66470 -6.002300
2794H_8 55.83330 -5.883300
0612A_15 55.84025 -5.912130
0410A_74 55.84311 -5.910180
0299H_16 55.90568 -5.732490
0200H_18 55.88600 -5.742900
0612A_18 55.90450 -5.835880
This is my script...
get.Sed.type <- function(x, y) {
  x$Y2 <- round(x$Y, digits = 1)
  x$X2 <- round(x$X, digits = 1)
  x$BGSQ <- paste(x$Y2, x$X2, sep = "_")
  x$RATIO <- x$SAND / x$MUD
  x <- aggregate(cbind(GRAV, RATIO) ~ BGSQ, data = x, FUN = mean)
  FOLK <- (x$GRAV)
  FOLK[(FOLK) < 1] <- 0
  FOLK[(FOLK) >= 1 & (FOLK) < 5] <- 1
  FOLK[(FOLK) >= 5 & (FOLK) < 30] <- 5
  FOLK[(FOLK) >= 30 & (FOLK) < 80] <- 30
  FOLK[(FOLK) >= 80] <- 80
  R_CLASS <- (x$RATIO)
  R_CLASS[(R_CLASS) < 1/9] <- 0
  R_CLASS[(R_CLASS) >= 1/9 & (R_CLASS) < 1] <- 0.1
  R_CLASS[(R_CLASS) >= 1 & (R_CLASS) < 9] <- 1
  R_CLASS[(R_CLASS) >= 9] <- 9
  x$FOLK_CLASS <- NULL
  x$FOLK_CLASS[(R_CLASS) == 0 & (FOLK) == 0] <- "M"
  x$FOLK_CLASS[(R_CLASS) %in% c(0, 0.1) & (FOLK) == 5] <- "gM"
  x$FOLK_CLASS[(R_CLASS) == 0.1 & (FOLK) == 0] <- "sM"
  x$FOLK_CLASS[(R_CLASS) == 0 & (FOLK) == 1] <- "(g)M"
  x$FOLK_CLASS[(R_CLASS) == 0.1 & (FOLK) == 1] <- "(g)sM"
  x$FOLK_CLASS[(R_CLASS) == 9 & (FOLK) == 0] <- "S"
  x$FOLK_CLASS[(R_CLASS) == 1 & (FOLK) == 0] <- "mS"
  x$FOLK_CLASS[(R_CLASS) == 9 & (FOLK) == 1] <- "(g)S"
  x$FOLK_CLASS[(R_CLASS) == 1 & (FOLK) == 1] <- "(g)sM"
  x$FOLK_CLASS[(R_CLASS) == 1 & (FOLK) == 5] <- "gmS"
  x$FOLK_CLASS[(R_CLASS) == 9 & (FOLK) == 5] <- "gS"
  x$FOLK_CLASS[(FOLK) == 80] <- "G"
  x$FOLK_CLASS[(R_CLASS) %in% c(0, 0.1) & (FOLK) == 30] <- "mG"
  x$FOLK_CLASS[(R_CLASS) == 1 & (FOLK) == 30] <- "msG"
  x$FOLK_CLASS[(R_CLASS) == 9 & (FOLK) == 30] <- "sG"
  y$Lat <- round(y$DecStartLat, digits = 1)
  y$Long <- round(y$DecStartLong, digits = 1)
  y$LATLONG100_sq <- paste(y$Lat, y$Long, sep = "_")
  y <- merge(y, x[, c(1, 4)], all.x = TRUE, by.x = "LATLONG100_sq", by.y = "BGSQ")
  # Delete unwanted columns
  y <- y[, !(colnames(y) %in% c("Lat", "Long", "LATLONG100_sq"))]
  # Name column something logical
  colnames(y)[colnames(y) == 'FOLK_CLASS'] <- 'BGS_class'
  return(y)
}
However, I have a dozen or so positions in df2 for which there are no corresponding values in the BGS export (df1). I want to know how I can either (a) do another average over the squares surrounding that respective square (i.e. round to a larger square and repeat the process), or (b) find the coordinate in the BGS export file that is closest in proximity and take its existing value.
Going for the second option stated in the question, I suggest to frame the question as follows:
Say that you have a set of m coordinates from db1 and n coordinates from db2, m <=n, and that currently the intersection of these sets is empty.
You'd like to match each point from db1 with a point from db2 such that the "error" of the matching, e.g. sum of distances, will be minimized.
A simple greedy approach for solving this might be to generate an m x n matrix with the distances between each pair of coordinates, and sequentially select the closest match for each point.
Of course, If there are many points to match, or if you're after an optimal solution, you may want to consider more elaborate matching algorithms (e.g. the Hungarian algorithm).
Code:
# generate some data (this data will produce a sub-optimal result with greedy matching)
db1 <- data.frame(id = c("a1", "a2", "a3", "a4"), x = c(1, 5, 10, 20), y = c(1, 5, 10, 20))
db2 <- data.frame(id = c("b1", "b2", "b3", "b4"), x = c(1.1, 2.1, 8.1, 14.1), y = c(1.1, 1.1, 8.1, 14.1))

# create cartesian product
product <- merge(db1, db2, by = NULL)

# calculate Euclidean distances for each possible matching
product$d <- sqrt((product$x.x - product$x.y)^2 + (product$y.x - product$y.y)^2)

# (naively & greedily) find the best match for each point
sorted <- product[order(product[, "d"]), ]
found <- vector()
res <- vector() # this vector will hold the result
for (i in 1:nrow(db1)) {
  for (j in 1:nrow(sorted)) {
    db2_val <- as.character(sorted[j, "id.y"])
    if (sorted[j, "id.x"] == db1[i, "id"] && length(grep(db2_val, found)) == 0) {
      # print(paste("matching ", db1[i, "id"], " with ", db2_val))
      res[i] <- db2_val
      found <- c(found, db2_val)
      break
    }
  }
}
Note that I'm sure the code can be improved and made more elegant by using methods other than loops.
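For the optimal matching mentioned above, a hedged sketch (not part of the original answer) using the assignment-problem (Hungarian-style) solver clue::solve_LSAP on the same toy data:
library(clue)

# distance matrix: rows are db1 points, columns are db2 points
dmat <- outer(seq_len(nrow(db1)), seq_len(nrow(db2)),
              function(i, j) sqrt((db1$x[i] - db2$x[j])^2 + (db1$y[i] - db2$y[j])^2))

# one-to-one assignment minimising the total distance
assignment <- solve_LSAP(dmat)
data.frame(db1 = db1$id, db2 = db2$id[as.vector(assignment)])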
Hopefully I do not misunderstand, but as far as I can tell from the title, you need to match based on minimum distance. If this distance is allowed to be Euclidean distance, then one can use the fast RANN package; if not, then one needs to compute the great-circle distance.
Some of the provided data
BGS_df <-
read.table(text =
" NUM X Y GRAV SAND MUD
1 228 1.93656 52.31307 1.07 98.83 0.10
2 142 1.84667 52.45333 0.00 52.60 47.40
3 182 1.91950 52.17750 9.48 90.38 0.14
4 124 1.88333 52.70833 0.00 98.80 1.20
5 2807 1.91050 51.45000 2.05 97.91 0.05",
header = TRUE)
my_positions <-
read.table(text =
"haul DecStartLat DecStartLong
1993H_2 55.23983 -5.512830
2794H_1 55.26670 -5.516700
1993H_1 55.27183 -5.521330",
header = TRUE)
Euclidean distance (using RANN package)
library(RANN)
# For each point in my_positions, find the nearest neighbor from BGS_df:
# Give X and then Y (longtitude and then latitude)
# Note that argument k sets the number of nearest neighbours, here 1 (the closest)
closest_RANN <- RANN::nn2(data = BGS_df[, c("X", "Y")],
query = my_positions[, c("DecStartLong", "DecStartLat")],
k = 1)
results_RANN <- cbind(my_positions[, c("haul", "DecStartLong", "DecStartLat")],
BGS_df[closest_RANN$nn.idx, ])
results_RANN
# haul DecStartLong DecStartLat NUM X Y GRAV SAND MUD
# 4 1993H_2 -5.51283 55.23983 124 1.88333 52.70833 0 98.8 1.2
# 4.1 2794H_1 -5.51670 55.26670 124 1.88333 52.70833 0 98.8 1.2
# 4.2 1993H_1 -5.52133 55.27183 124 1.88333 52.70833 0 98.8 1.2
Great circle distance (using geosphere package)
library(geosphere)
# Compute matrix of great circle distances
dist_mat <- geosphere::distm(x = BGS_df[, c("X", "Y")],
y = my_positions[, c("DecStartLong", "DecStartLat")],
fun = distHaversine) # can try other distances
# For each column (point in my_positions) get the index of row of min dist
# (corresponds to row index in BGS_df)
BGS_idx <- apply(dist_mat, 2, which.min)
results_geo <- cbind(my_positions[, c("haul", "DecStartLong", "DecStartLat")],
BGS_df[BGS_idx, ])
identical(results_geo, results_RANN) # here TRUE, but not always expected
