I have a tracking dataset with 3 time moments (36, 76, 96) for a match.
My requirement is to calculate the distances between a given player and the opposing players.
The dataframe contains the following 5 columns:
- time_id (second or instant)
- player (identifier for the player)
- x (x position)
- y (y position)
- team (home or away)
As an example, for home player 26 I need to calculate the distances to all away players ("12", "17", "24", "37", "69", "77") at the 3 distinct time_id values (36, 76, 96).
Here you can see the df data:
https://pasteboard.co/ICiyyFB.png
Here is the link to download a sample rds file with the df:
https://1drv.ms/u/s!Am7buNMZi-gwgeBpEyU0Fl9ucem-bw?e=oSTMhx
library(tidyverse)
dat <- readRDS(file = "dat.rds")
# Given the home player with id 26, I need to calculate, for each time_id,
# the Euclidean distance to all away players
p36_home <- dat %>% filter(player == 26)
# all away players
all_away <- dat %>% filter(team == 'away')
# I know I can calculate it by spreading the positions into columns, but that
# is not elegant and requires grouping by time_id, e.g.
# mutate(dist = round(sqrt((x1 - x2)^2 + (y1 - y2)^2), 2))
# below distances row by row should be calculated
# time_id , homePlayer, awayPlayer , distance
#
# 36 , 26 , 12 , x
# 36 , 26 , 17 , x
# 36 , 26 , 24 , x
# 36 , 26 , 37 , x
# 36 , 26 , 69 , x
# 36 , 26 , 77 , x
#
# 76 , 26 , 12 , x
# 76 , 26 , 17 , x
# 76 , 26 , 24 , x
# 76 , 26 , 37 , x
# 76 , 26 , 69 , x
# 76 , 26 , 77 , x
#
# 96 , 26 , 12 , x
# 96 , 26 , 17 , x
# 96 , 26 , 24 , x
# 96 , 26 , 37 , x
# 96 , 26 , 69 , x
# 96 , 26 , 77 , x
This solution should work for you. I simply joined the two dataframes you created and applied your distance calculation, then selected and renamed the relevant columns to get the desired result.
# join the home player's rows with every away player's row at the same time_id
test <- left_join(p36_home, all_away, by = "time_id")
# Euclidean distance between the home and away positions
test$dist <- round(sqrt((test$x.x - test$x.y)^2 + (test$y.x - test$y.y)^2), 2)
# keep time_id, the home player id, the away player id and the distance
test <- test[, c(1, 2, 6, 10)]
names(test) <- c("time_id", "homePlayer", "awayPlayer", "distance")
test
The result looks something like this:
time_id homePlayer awayPlayer distance
36 26 37 26.43
36 26 17 28.55
36 26 24 20.44
36 26 69 24.92
36 26 77 11.22
36 26 12 22.65
.
.
.
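For reference, the same result can be produced in a single dplyr pipeline. This is only a sketch: the suffixed column names (player_home, x_away, etc.) come from the suffix argument of the join rather than from the original data.
library(dplyr)
distances <- dat %>%
  filter(player == 26) %>%                        # the home player of interest
  inner_join(filter(dat, team == "away"),         # pair him with every away player at the same time_id
             by = "time_id", suffix = c("_home", "_away")) %>%
  mutate(distance = round(sqrt((x_home - x_away)^2 + (y_home - y_away)^2), 2)) %>%
  select(time_id, homePlayer = player_home, awayPlayer = player_away, distance)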
Related
Suppose, I have a dataframe, df, and I want to create a new column called "c" based on the addition of two existing columns, "a" and "b". I would simply run the following code:
df$c <- df$a + df$b
But I also want to do this for many other columns. So why won't my code below work?
# Reproducible data:
martial_arts <- data.frame(gym_branch=c("downtown_a", "downtown_b", "uptown", "island"),
day_boxing=c(5,30,25,10),day_muaythai=c(34,18,20,30),
day_bjj=c(0,0,0,0),day_judo=c(10,0,5,0),
evening_boxing=c(50,45,32,40), evening_muaythai=c(50,50,45,50),
evening_bjj=c(60,60,55,40), evening_judo=c(25,15,30,0))
# Creating a list of the new column names of the columns that need to be added to the martial_arts dataframe:
pattern<-c("_boxing","_muaythai","_bjj","_judo")
d<- expand.grid(paste0("martial_arts$total",pattern))
# Creating lists of the columns that will be added to each other:
e<- names(martial_arts %>% select(day_boxing:day_judo))
f<- names(martial_arts %>% select(evening_boxing:evening_judo))
# Writing a function and using mapply:
kick_him <- function(d,e,f){d <- rowSums(martial_arts[ , c(e, f)], na.rm=T)}
mapply(kick_him,d,e,f)
Now, mapply produces the correct results in terms of the addition:
> mapply(kick_him, d, e, f)
Var1 <NA> <NA> <NA>
[1,] 55 84 60 35
[2,] 75 68 60 15
[3,] 57 65 55 35
[4,] 50 80 40 0
But it doesn't add the new columns to the martial_arts dataframe. In theory, the function should do the following:
martial_arts$total_boxing <- martial_arts$day_boxing + martial_arts$evening_boxing
...
...
martial_arts$total_judo <- martial_arts$day_judo + martial_arts$evening_judo
and add four new total columns to martial_arts.
So what am I doing wrong?
The assignment is wrong here: instead of having martial_arts$total_boxing as a string, it should be "total_boxing" alone, and it should be on the left-hand side of the Map/mapply call. As the OP already stored the 'martial_arts$' prefix in the 'd' dataset's column, we remove that prefix and then do the assignment:
kick_him <- function(e, f){rowSums(martial_arts[, c(e, f)], na.rm = TRUE)}
# strip the "martial_arts$" prefix to get the bare column names, then assign
martial_arts[sub(".*\\$", "", d$Var1)] <- Map(kick_him, e, f)
Check the dataset now:
> martial_arts
gym_branch day_boxing day_muaythai day_bjj day_judo evening_boxing evening_muaythai evening_bjj evening_judo total_boxing total_muaythai total_bjj total_judo
1 downtown_a 5 34 0 10 50 50 60 25 55 84 60 35
2 downtown_b 30 18 0 0 45 50 60 15 75 68 60 15
3 uptown 25 20 0 5 32 45 55 30 57 65 55 35
4 island 10 30 0 0 40 50 40 0 50 80 40 0
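Just to make the prefix removal explicit, this is what sub() returns on the names stored in 'd' (shown only as an illustrative check):
sub(".*\\$", "", d$Var1)
[1] "total_boxing"   "total_muaythai" "total_bjj"      "total_judo"
The regex strips everything up to and including the "$", leaving the bare column names used for the assignment.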
The following tree has been obtained from the following matrix
> mat
7 23 47 41 31
7 23 53 41 31
7 23 53 41 37
7 29 47 41 31
7 29 47 41 37
7 29 53 41 31
7 29 53 41 37
11 29 53 41 31
11 29 53 41 37
taking each column of 'mat' as a level of the tree. If 'data' is the dataframe where the matrix 'mat' is stored,
V1 V2 V3 V4 V5
7 23 47 41 31
7 23 53 41 31
7 23 53 41 37
7 29 47 41 31
7 29 47 41 37
7 29 53 41 31
7 29 53 41 37
11 29 53 41 31
11 29 53 41 37
then the code that produces the above tree is the following (using the data.tree and DiagrammeR packages):
> data$pathString<-paste("0", data$V1,data$V2,data$V3,data$V4,data$V5,sep = "/")
> p_tree <- as.Node(data)
> export_graph(ToDiagrammeRGraph(p_tree), "tree.png")
I would like to modify the tree as follows:
1. If a node at level n, labelled by the number x, has only one child node at level n+1, labelled by the number y, then the program merges these two nodes into a single node labelled by the product x*y.
2. If the node at level n+1 has no child nodes, the program does nothing and starts again from another branch.
3. If the node at level n+1 has more than one child node, the program applies point (1) and starts again from each of the child nodes.
For example, for this tree the code should:
- replace the nodes circled in red with a node labelled 31*41*47 = 59737
- replace the nodes circled in orange with a node labelled 53*41 = 2173
- replace the nodes circled in green with a node labelled 47*41 = 1927
- replace the nodes circled in blue with a node labelled 11*29*53*41 = 693187
Try this:
library(data.tree)
# for each prefix length x, count how many rows share the same first x values
# (i.e. how many leaves sit under the corresponding node of the tree);
# this assumes 'data' still contains only the original columns V1..V5
freq <- sapply(1:ncol(data), function(x) {
  df <- data[, 1:x, drop = FALSE]
  cc <- aggregate(df[, 1], as.list(df), FUN = length)
  merge(df, cc, by = colnames(df), sort = FALSE)[, "x"]
})
# build the collapsed path for each row: consecutive levels with the same
# count form a chain of single-child nodes, so they fall into one group
# and are replaced by the product of their labels
data$pathString <- sapply(1:nrow(data), function(x) {
  g <- 1
  for(i in 2:ncol(freq)) g <- c(g,
    if(freq[x, i] == freq[x, i - 1]) g[i - 1] else g[i - 1] + 1)
  paste0(c("0", tapply(unlist(data[x, , drop = TRUE]), g, prod)), collapse = "/")
})
p_tree <- as.Node(data)
plot(p_tree)
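As a quick illustrative check, the first two rows end up with the collapsed paths implied by the rules in the question (59737 = 47*41*31 and 2173 = 53*41):
head(data$pathString, 2)
[1] "0/7/23/59737"   "0/7/23/2173/31"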
Say I have a matrix with 1000 columns. I want to create a new matrix that takes alternating blocks of n columns from the original matrix, starting at column i.
So let's say that n = 3 and i = 5; then the columns I need from the old matrix are 5, 6, 7, 11, 12, 13, 17, 18, 19 and so on.
Use two seq() calls to create the start and stop bounds, then mapply() over those to build the full vector of column indices. Then use normal bracket notation to extract the columns from your matrix.
set.seed(1)
# using 67342343's test case
M <- matrix(runif(100^2), ncol = 100)
n <- 3
i <- 5
starts <- seq(i, ncol(M), n*2)
stops <- seq(i+(n-1), ncol(M), n*2)
col_index <- c(mapply(seq, starts, stops)) # thanks Jaap and Sotos
col_index
[1] 5 6 7 11 12 13 17 18 19 23 24 25 29 30 31 35 36 37 41 42 43 47 48 49 53 54 55 59 60 61 65 66 67 71 72 73 77 78
[39] 79 83 84 85 89 90 91 95 96 97
M[, col_index]
Another solution is based on the fact that R recycles logical indices:
i <- 5; n <- 3
M <- matrix(runif(100^2), ncol = 100)
id <- seq(i, ncol(M), by = 1)[rep(c(TRUE, FALSE), each = n)]
M_sub <- M[, id]
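This selects the same column positions as the mapply() approach above; a quick look at the first few indices (just as a check):
head(id, 9)
[1]  5  6  7 11 12 13 17 18 19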
I would write a function that determines the indices of the columns you want, and then call that function as needed.
col_indexes <- function(mat, start = 1, by = 1){
  n <- ncol(mat)
  # first column of each block that is kept
  inx <- seq(start, n, by = 2*by)
  # expand each block start into a run of 'by' consecutive columns
  inx <- c(sapply(inx, function(i) i:(i + by - 1)))
  # drop any indices that run past the last column
  inx[inx <= n]
}
m <- matrix(0, nrow = 1, ncol = 20)
icol <- col_indexes(m, 5, 3)
icol
[1] 5 6 7 11 12 13 17 18 19
Here is a method using outer.
c(outer(5:7, seq(0L, 95L, 6L), "+"))
[1] 5 6 7 11 12 13 17 18 19 23 24 25 29 30 31 35 36 37 41 42 43 47 48 49 53
[26] 54 55 59 60 61 65 66 67 71 72 73 77 78 79 83 84 85 89 90 91 95 96 97
To generalize this, you could do
idx <- c(outer(seq(i, i + n - 1), seq(0L, ncol(M) - i, 2 * n), "+"))
The idea is to construct the initial set of columns (5:7, or seq(i, i + n - 1) in general), calculate the starting offset of every subsequent set (seq(0L, 95L, 6L), or seq(0L, ncol(M) - i, 2 * n)), and then use outer to calculate the sum of every combination of these two vectors.
You can then subset the matrix using [, as in M[, idx].
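As a quick sanity check (using the M, i and n defined earlier), the generalised index reproduces the hard-coded version:
all(idx == c(outer(5:7, seq(0L, 95L, 6L), "+")))
[1] TRUE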
I am having difficulty applying my function over a data frame in R. I have a data.frame with three columns: the ID of a point, its location on the x axis and its location on the y axis. All I need to do is to find, for a given point, the IDs of the points that lie in its neighborhood. I've written a function that checks whether a point lies within a circle centred on the observed point and returns its ID if true.
Here is my code:
point_id <- locationdata$point_id
x_loc <- locationdata$x_loc
y_loc <- locationdata$y_loc
locdata <- data.frame(point_id, x_loc, y_loc)
# radius set to 1 km
incircle3 <- function(x_loc, y_loc, center_x, center_y, pointid, r = 1000000){
  dx = (x_loc - center_x)
  dy = (y_loc - center_y)
  if (dx^2 + dy^2 <= r^2){
    print(pointid)} ## else {print('')}
}
Unfortunately I don't know how to apply this function to the whole data frame. Once I enter the location of an observed point, it should return the IDs of all points that lie in its neighborhood. Ideally I would need to find this relation for all points automatically, so that it returns the neighboring points for every point in the dataset. Previously I have been entering center_x and center_y manually.
Thank you very much in advance for your advice!
You can tackle this with R's dist function:
# set the random seed and create some dummy data
set.seed(101)
dummy <- data.frame(id=1:100, x=runif(100), y=runif(100))
> head(dummy)
id x y
1 1 0.37219838 0.12501937
2 2 0.04382482 0.02332669
3 3 0.70968402 0.39186128
4 4 0.65769040 0.85959857
5 5 0.24985572 0.71833452
6 6 0.30005483 0.33939503
Call the dist function, which returns a dist object. The default distance metric is Euclidean, which is what you have coded in your question.
dists <- dist(dummy[,2:3])
Apply over the rows of the distance matrix and, for each point, return the indices that are within some constant distance:
neighbors <- apply(as.matrix(dists), 1, function(x) which(x < 0.33))
> neighbors[[1]]
1 6 7 8 19 23 30 32 33 34 42 44 46 51 55 87 88 91 94 99
Here's a modification to handle ids that don't correspond to row positions:
set.seed(101)
dummy <- data.frame(id=sample(1:100, 100), x=runif(100), y=runif(100))
> head(dummy)
id x y
1 38 0.12501937 0.60567568
2 5 0.02332669 0.56259740
3 70 0.39186128 0.27685556
4 64 0.85959857 0.22614243
5 24 0.71833452 0.98355758
6 29 0.33939503 0.09838715
dists <- dist(dummy[,2:3])
neighbors <- apply(as.matrix(dists), 1, function(x) {
dummy$id[which(x < 0.33)]
})
names(neighbors) <- dummy$id
> neighbors[['38']]
[1] 38 5 55 80 63 76 17 71 47 11 88 13 41 21 36 31 73 61 99 59 39 89 94 12 18 3
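For completeness, the same idea applied to the original column names might look like this (a sketch; it assumes x_loc and y_loc are in the same units as the radius, taken here as 1000):
d <- as.matrix(dist(locdata[, c("x_loc", "y_loc")]))
neighbors <- apply(d, 1, function(row) locdata$point_id[row < 1000])
names(neighbors) <- locdata$point_id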
I am trying to apply a function over all rows and columns of two dataframes but I don't know how to solve it with apply.
I think the following script explains what I intend to do and the way I tried to solve it. Any advice would be warmly appreciated! Please note that simplefunction is only intended as an example function, to keep things simple.
# some data and a function
df1<-data.frame(name=c("aa","bb","cc","dd","ee"),a=sample(1:50,5),b=sample(1:50,5),c=sample(1:50,5))
df2<-data.frame(name=c("aa","bb","cc","dd","ee"),a=sample(1:50,5),b=sample(1:50,5),c=sample(1:50,5))
simplefunction<-function(a,b){a+b}
# apply on a single row
simplefunction(df1[1,2],df2[1,2])
# apply over all columns
apply(?)
## apply over all columns and rows
# create df to receive results
df3<-df2
# loop it
for (i in 2:5)df3[i]<-apply(?)
My first mapply answer!! For your simple example you have...
mapply( FUN = `+` , df1[,-1] , df2[,-1] )
# a b c
# [1,] 60 35 75
# [2,] 57 39 92
# [3,] 72 71 48
# [4,] 31 19 85
# [5,] 47 66 58
You can extend it like so...
mapply( FUN = function(x,y,z,etc){ simplefunctioncodehere} , df1[,-1] , df2[,-1] , ... other dataframes here )
The data frames will be passed to the function in order, so in this example df1 would be x, df2 would be y, and z, etc. would be whatever other data frames you specify, in that order. Hopefully that makes sense. mapply will take the first column of every data frame and apply the function to those columns together, then the second column of every data frame, and so on (with +, the addition is vectorised over the rows of each column).
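For example, with a hypothetical third data frame df_extra built the same way as df1 and df2, a three-way sum could be written as:
df_extra <- data.frame(name = c("aa","bb","cc","dd","ee"),
                       a = sample(1:50, 5), b = sample(1:50, 5), c = sample(1:50, 5))
mapply(FUN = function(x, y, z) x + y + z, df1[,-1], df2[,-1], df_extra[,-1])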
You can also use Reduce:
set.seed(45) # for reproducibility
Reduce(function(x,y) { x + y}, list(df1[, -1], df2[,-1]))
# a b c
# 1 53 22 23
# 2 64 28 91
# 3 19 56 51
# 4 38 41 53
# 5 28 42 30
You can just do:
df1[,-1] + df2[,-1]
Which gives:
a b c
1 52 24 37
2 65 63 62
3 31 90 89
4 90 35 33
5 51 33 45
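If you also want to keep the name column in the result, a minimal sketch is:
df3 <- cbind(df1[1], df1[,-1] + df2[,-1])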