Problem
I am looking for a way to efficiently order randomly selected sampling transects that occur around a fixed object. Once generated, these transects need to be ordered in a way that makes sense spatially so that the total distance traveled is minimized. This is done by ensuring that the end point of the current transect is as close as possible to the start point of the next transect. Also, none of the transects can be repeated.
Because there are thousands of transects to order, and this is a very tedious task to do manually, I am trying to use R to automate the process. I have already generated the transects, each having a start and end point whose location is indicated using a 360-degree system (e.g., 0 is North, 90 is East, 180 is South, and 270 is West). I have also written some code that seems to identify the start point and the ID of the next transect, but there are a few problems with this code: (1) it can generate errors depending on the start and end points being considered, (2) it doesn't achieve what I ultimately need it to achieve, and (3) as is, the code itself seems overly complicated and I can't help but wonder if there is a more straightforward way to do this.
Ideally, the code would reorder the transects so that they match the order in which they should be flown rather than the order in which they were initially input.
The Data
For simplicity, let's pretend there are just 10 transects to order.
# Transect ID for the start point
StID <- c(seq(1, 10, 1))
# Location of transect start point, based on a 360-degree circle
StPt <- c(342.1, 189.3, 116.5, 67.9, 72, 208.4, 173.2, 97.8, 168.7, 138.2)
# Transect ID for the end point
EndID <- c(seq(1, 10, 1))
# Location of transect end point, based on a 360-degree circle
EndPt <- c(122.3, 313.9, 198.7, 160.4, 166, 26.7, 312.7, 273.7, 288.8, 287.5)
# Dataframe
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
What I Have Tried
Please feel free to ignore this code; there has to be a better way, and it does not really achieve the intended outcome. Right now I am using a nested for-loop that is very difficult to follow intuitively, but it represents my best attempt thus far.
# Create two new columns that will be populated using a loop
df$StPt_Next <- NA
df$ID_Next <- NA
# Also create a vector to be populated as end and start points are matched;
# the start point of transect #1 goes in first since we start with 1 and do
# not want it used again
used <- c(df$StPt[1])
# Then, for every row in the dataframe except the last one (the last transect
# should have no "next" transect)...
for (i in seq(1, length(df$EndPt) - 1, 1)) {
  # print statements to indicate that the script is indeed running while you wait...
  print(paste("######## ENDPOINT", i, ":", df$EndPt[i], " ########"))
  print(paste("searching for a start point that fits criteria to follow this endpoint", sep = ""))
  # sequentially select each end point
  valueEndPt <- df[i, 1]
  # order the index by taking the absolute difference of end and start points
  # and, if this value is greater than 180, also subtract from 360 so all
  # differences are less than 180, then order differences from smallest to largest
  orderx <- order(ifelse(360 - abs(df$StPt - valueEndPt) > 180,
                         abs(df$StPt - valueEndPt),
                         360 - abs(df$StPt - valueEndPt)))
  tmp <- as.data.frame(orderx)
  # specify index value
  index <- 1
  # for as long as there is an NA present in the StPt_Next column created
  # before the for loop...
  while (is.na(df$StPt_Next[i])) {
    # select the value of the ordered index in sequential order
    j <- orderx[index]
    # if the start point associated with a given index is present in the
    # vector of used values...
    if (df$StPt[j] %in% used) {
      # then have R print a statement indicating this is the case...
      print(paste("passing ", df$StPt[j], " as it has already been used", sep = ""))
      # and move onto the next index; the next statement skips the remainder
      # of the code for values that have already been used
      index <- index + 1
      next
    # if the start point associated with a given index is not present in the
    # vector of used values...
    } else {
      # then identify the start point value associated with that index ID...
      valueStPt <- df$StPt[j]
      # and have R print a statement indicating an attempt is being made to use it
      print(paste("trying ", df$StPt[j], sep = ""))
      # if the end point's transect ID differs from the start point's transect ID...
      if (df$EndID[i] != df$StID[j]) {
        # then put the start point in the new column...
        df$StPt_Next[i] <- df$StPt[j]
        # note which record this start point came from for ease of reference/troubleshooting...
        df$ID_Next[i] <- j
        # have R print a statement indicating a value for the new column has been selected...
        print(paste("using ", df$StPt[j], sep = ""))
        # and add that start point to the vector of used ones
        used <- c(used, df$StPt[j])
      # otherwise, if the end point's transect ID matches the start point's transect ID...
      } else {
        # keep NA in this column and try again
        df$StPt_Next[i] <- NA
        # and indicate that this particular matched pair cannot be used
        print(paste("cant use ", valueStPt, " as the column EndID (related to index in EndPt) and StID (related to index in StPt) values are matching", sep = ""))
      } # end if-else ensuring that start and end points come from different transects
      # and move onto the next index
      index <- index + 1
    } # end if-else determining whether a given start point still needs to be used
  } # end while loop checking for NA in the new column
} # end for loop
The Output
When the code does not produce an explicit error, such as for the example data provided, the output is as follows:
StPt StID EndPt EndID StPt_Next ID_Next
1 342.1 1 122.3 1 67.9 4
2 189.3 2 313.9 2 173.2 7
3 116.5 3 198.7 3 97.8 8
4 67.9 4 160.4 4 72.0 5
5 72.0 5 166.0 5 116.5 3
6 208.4 6 26.7 6 189.3 2
7 173.2 7 312.7 7 168.7 9
8 97.8 8 273.7 8 138.2 10
9 168.7 9 288.8 9 208.4 6
10 138.2 10 287.5 10 NA NA
The last two columns were generated by the code and added to the original dataframe. StPt_Next holds the location of the next closest start point, and ID_Next indicates the transect ID associated with that next start point. The ID_Next column indicates that the order in which transects should be flown is 1, 4, 5, 3, 8, 10, NA (i.e., the end), while 2, 7, 9, 6 form their own loop that goes back to 2.
There are two specific problems that I can't solve:
(1) The code fails to form one continuous chain. I think this is related to 1 always being the first transect and 10 always being the last, but I do not know how to indicate in the code that the second-to-last transect must match up with 10 so that the sequence includes all 10 transects before terminating at an NA representing the final end point.
(2) To really automate this process, after fixing the early termination of the sequence caused by the premature introduction of the NA as ID_Next, a new column would be needed so that the transects can be reordered based on the most efficient progression rather than the original order of their EndID/StID.
Intended Outcome
If we pretend that we only had 6 transects to order and ignore the 4 that could not be ordered due to the premature introduction of the NA, this would be the intended outcome:
StPt StID EndPt EndID StPt_Next ID_Next TransNum
1 342.1 1 122.3 1 67.9 4 1
4 67.9 4 160.4 4 72.0 5 2
5 72.0 5 166.0 5 116.5 3 3
3 116.5 3 198.7 3 97.8 8 4
8 97.8 8 273.7 8 138.2 10 5
10 138.2 10 287.5 10 NA NA 6
EDIT: A Note About the Error Message Explicitly Produced by the Code
As indicated earlier, the code has a few flaws. Another flaw is that it will often produce an error when ordering a larger number of transects. I am not entirely sure at which step the error is generated, but I am guessing it is related to the inability to match up the last few transects, possibly due to not meeting the criteria set forth by orderx. The print statements say "trying NA" instead of a start point in the database, which results in this error: "Error in if (df$EndID[i] != df$StID[j]) { : missing value where TRUE/FALSE needed". I am guessing another if-else statement is needed that somehow says "if the remaining points do not meet the orderx criteria, then just force them to match up with whatever transect remains so that everything is assigned a StPt_Next and ID_Next".
Here is a larger dataset that will generate the error:
EndPt <- c(158.7,245.1,187.1,298.2,346.8,317.2,74.5,274.2,153.4,246.7,193.6,302.3,6.8,359.1,235.4,134.5,111.2,240.5,359.2,121.3,224.5,212.6,155.1,353.1,181.7,334,249.3,43.9,38.5,75.7,344.3,45.1,285.7,155.5,183.8,60.6,301,132.1,75.9,112,342.1,302.1,288.1,47.4,331.3,3.4,185.3,62,323.7,188,313.1,171.6,187.6,291.4,19.2,210.3,93.3,24.8,83.1,193.8,112.7,204.3,223.3,210.7,201.2,41.3,79.7,175.4,260.7,279.5,82.4,200.2,254.2,228.9,1.4,299.9,102.7,123.7,172.9,23.2,207.3,320.1,344.6,39.9,223.8,106.6,156.6,45.7,236.3,98.1,337.2,296.1,194,307.1,86.6,65.5,86.6,296.4,94.7,279.9)
StPt <- c(56.3,158.1,82.4,185.5,243.9,195.6,335,167,39.4,151.7,99.8,177.2,246.8,266.1,118.2,358.6,357.9,99.6,209.9,342.8,106.5,86.4,35.7,200.6,65.6,212.5,159.1,297,285.9,300.9,177,245.2,153.1,8.1,76.5,322.4,190.8,35.2,342.6,8.8,244.6,202,176.2,308.3,184.2,267.2,26.6,293.8,167.3,30.5,176,74.3,96.9,186.7,288.2,62.6,331.4,254.7,324.1,73.4,16.4,64,110.9,74.4,69.8,298.8,336.6,58.8,170.1,173.2,330.8,92.6,129.2,124.7,262.3,140.4,321.2,34,79.5,263,66.4,172.8,205.5,288,98.5,335.2,38.7,289.7,112.7,350.7,243.2,185.4,63.9,170.3,326.3,322.9,320.6,199.2,287.1,158.1)
EndID <- c(seq(1, 100, 1))
StID <- c(seq(1, 100, 1))
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
Any advice would be greatly appreciated!
As @chinsoon12 points out, hidden in your problem you have an (Asymmetric) Traveling Salesman Problem. The asymmetry arises because the start and end points of your transects are different.
ATSP is a well-known NP-hard problem, so exact solutions are very difficult even for medium-sized problems (see Wikipedia for more info). Hence the best we can do in most cases is approximations or heuristics. Since you mention there are thousands of transects, this is at least a medium-sized problem.
Rather than coding an ATSP approximation algorithm from scratch, there is an existing TSP library for R. It includes several approximation algorithms. Reference documentation is here.
The following is my use of the TSP package applied to your problem, beginning with setup (assume I have defined StPt, StID, EndPt, and EndID as in your question).
install.packages("TSP")
library(TSP)
library(dplyr)
# Dataframe
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
# filter to 6 example nodes for requested comparison
df = df %>% filter(StID %in% c(1,3,4,5,8,10))
We shall use ATSP from a distance matrix. Position [row,col] in the matrix is the cost/distance of going from (the end of) transect row to (the start of) transect col. This code creates the entire distance matrix.
# distance calculation
transect_distance = function(end, start){
  abs_dist = abs(start - end)
  ifelse(360 - abs_dist > 180, abs_dist, 360 - abs_dist)
}
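As a quick sanity check (my addition, not part of the original answer), the function measures the shorter way around the circle:
transect_distance(350, 10)   # returns 20, not 340: the gap wraps past North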
# distance matrix
matrix_distance = matrix(data = NA, nrow = nrow(df), ncol = nrow(df))
for(start_id in 1:nrow(df)){
  start_point = df[start_id, 'StPt']
  for(end_id in 1:nrow(df)){
    end_point = df[end_id, 'EndPt']
    matrix_distance[end_id, start_id] = transect_distance(end_point, start_point)
  }
}
Note that there are more effective ways to construct a distance matrix (one vectorized sketch follows below). However, I have chosen this approach for its clarity. Depending on your computer and the exact number of transects, this code may run very slowly.
Also, note that the size of this matrix is quadratic in the number of transects, so for a large number of transects you will discover there is not enough memory.
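For reference, here is one such vectorized construction (my sketch, not part of the original answer; it assumes the transect_distance function defined above):
# outer() evaluates transect_distance for every EndPt/StPt pair at once,
# replacing the double loop; rows index the end ("from") transect and
# columns the start ("to") transect, matching the loop above
matrix_distance_fast = outer(df$EndPt, df$StPt, FUN = transect_distance)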
The solving is very unexciting. The distance matrix gets turned into an ATSP object, and the ATSP object gets passed to the solver. We then add the ordering/traveling information to the original df.
answer = solve_TSP(as.ATSP(matrix_distance))
# get length of cycle
print(answer)
# sort df to same order as solution
df_w_answer = df[as.numeric(answer),]
# add info about next transect to each transect
df_w_answer = df_w_answer %>%
  mutate(visit_order = 1:nrow(df_w_answer)) %>%
  mutate(next_StID = lead(StID, order_by = visit_order),
         next_StPt = lead(StPt, order_by = visit_order))
# add info about next transect to each transect (for final transect)
df_w_answer[df_w_answer$visit_order == nrow(df_w_answer), 'next_StID'] =
  df_w_answer[df_w_answer$visit_order == 1, 'StID']
df_w_answer[df_w_answer$visit_order == nrow(df_w_answer), 'next_StPt'] =
  df_w_answer[df_w_answer$visit_order == 1, 'StPt']
# compute distance between end of each transect and start of next
df_w_answer = df_w_answer %>% mutate(dist_between = transect_distance(EndPt, next_StPt))
At this point we have a cycle. You can pick any node as the starting point, follow the order given in the df (from EndID to next_StID), and you will cover every transect in (a good approximation to) the minimum distance.
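For example (my addition, just to make that ordering explicit), you can print the tour in visit order:
# each row's transect is followed by the transect in next_StID
df_w_answer[, c("visit_order", "StID", "EndID", "next_StID", "dist_between")]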
However in your 'intended outcome' you have a path solution (e.g. start at transect 1 and finish at transect 10). We can turn the cycle into a path by excluding the single most expensive transition:
# as path (without returning to start)
min_distance = sum(df_w_answer$dist_between) - max(df_w_answer$dist_between)
path_start = df_w_answer[df_w_answer$dist_between == max(df_w_answer$dist_between), 'next_StID']
path_end = df_w_answer[df_w_answer$dist_between == max(df_w_answer$dist_between), 'EndID']
print(sprintf("minimum cost path = %.2f, starting at node %d, ending at node %d",
min_distance, path_start, path_end))
Running all the above gives me a different, but superior, answer to your intended outcome. I get the following order: 1 --> 5 --> 8 --> 4 --> 3 --> 10 --> 1.
Your path from transect 1 to transect 10 has a total distance of 428; if we also returned from transect 10 to transect 1, making this a cycle, the total distance would be 483.
Using the TSP package in R we get a path from 1 to 10 with total distance 377, and as a cycle 431.
If we instead start at node 4 and end at node 8, we get a total distance of 277.
Some additional notes:
Not all TSP solvers are deterministic, hence you may get some variation in your answer if you run again, or run with the input rows in a different order.
TSP is a much more general problem than the transect problem you described. It is possible that your problem has enough additional/special features that it can be solved perfectly in a reasonable length of time, but that moves your problem into the realm of mathematics.
If you are running out of memory to create the distance matrix, take a look at the documentation for the TSP package. It contains several examples that use geo-coordinates as inputs rather than a distance matrix. This is a much smaller input size (presumably the package calculates the distances on the fly) so if you convert the start and end points to coordinates and specify euclidean (or some other common distance function) you could get around (some) computer memory limits.
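For what it's worth, here is a minimal sketch of that coordinate-based route (my addition, not taken verbatim from the package docs). Note that ETSP covers the symmetric Euclidean case, so it would apply if each transect could be collapsed to a single representative point:
library(TSP)
# hypothetical data: 10 points in the plane; distances are computed on the fly
pts = cbind(x = runif(10), y = runif(10))
etsp = ETSP(pts)
tour = solve_TSP(etsp)
tour_length(tour)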
Another version of using the TSP package...
Here is the setup.
library(TSP)
planeDim = 15
nTransects = 26
# generate some random transect beginning points in a plane, the size of which
# is defined by planeDim
b = cbind(runif(nTransects)*planeDim, runif(nTransects)*planeDim)
# generate some random transect ending points that are a distance of 1 from each
# beginning point
e = t(
  apply(
    b,
    1,
    function(x) {
      bearing = runif(1)*2*pi
      x + c(cos(bearing), sin(bearing))
    }
  )
)
For fun, we can visualize the transects:
# make an empty graph space
plot(1,1, xlim = c(-1, planeDim + 1), ylim = c(-1, planeDim + 1), ty = "n")
# plot the beginning of each transect as a green point, the end as a red point,
# with a thick grey line representing the transect
for(i in 1:nrow(e)) {
  xs = c(b[i,1], e[i,1])
  ys = c(b[i,2], e[i,2])
  lines(xs, ys, col = "light grey", lwd = 4)
  points(xs, ys, col = c("green", "red"), pch = 20, cex = 1.5)
  text(mean(xs), mean(ys), letters[i])
}
So given a matrix of x,y pairs ("b") for beginning points and a matrix of x,y pairs ("e") for end points of each transect, the solution is...
# a function that calculates the distance from all endpoints in the ePts matrix
# to the single beginning point in bPt
# a function that calculates the distance from all endpoints in the ePts matrix
# to the single beginning point in bPt
dist = function(ePts, bPt) {
  # apply the Pythagorean theorem to calculate the distance between every end
  # point in the matrix ePts and the point bPt
  apply(ePts, 1, function(p) sum((p - bPt)^2)^0.5)
}
# apply the "dist" function to all begining points to create the distance
# matrix. since the distance from the end of transect "foo" to the beginning of
# "bar" is not the same as from the end of "bar" to the beginning of "foo," we
# have an asymmetric travelling sales person problem. Therefore, distance
# matrix is directional. The distances at any position in the matrix must be
# the distance from the transect shown in the row label and to the transect
# shown in the column label.
distMatrix = apply(b, 1, FUN = dist, ePts = e)
# for convenience, we'll label the transects a to z
dimnames(distMatrix) = list(letters, letters)
# set the distance between the beginning and end of each transect to zero so
# that there is no "cost" to walking the transect
diag(distMatrix) = 0
Here is the upper left corner of the distance matrix:
> distMatrix[1:6, 1:6]
a b c d e f
a 0.00000 15.4287270 12.637979 12.269356 15.666710 12.3919715
b 13.58821 0.0000000 5.356411 13.840444 1.238677 12.6512352
c 12.48161 6.3086852 0.000000 8.427033 6.382304 7.1387840
d 10.69748 13.5936114 7.708183 0.000000 13.718517 0.9836146
e 14.00920 0.7736654 5.980220 14.470826 0.000000 13.2809601
f 12.24503 12.8987043 6.984763 2.182829 12.993283 0.0000000
Now three lines of code from the TSP package solve the problem.
atsp = as.ATSP(distMatrix)
tour = solve_TSP(atsp)
# assume we want to start our circuit at transect "a".
path = cut_tour(tour, "a", exclude_cut = F)
The variable path shows the order in which you should visit the transects:
> path
a w x q i o l d f s h y g v t k c m e b p u z j r n
1 23 24 17 9 15 12 4 6 19 8 25 7 22 20 11 3 13 5 2 16 21 26 10 18 14
We can add the path to the visualization:
for(i in 1:(length(path)-1)) {
  lines(c(e[path[i],1], b[path[i+1],1]), c(e[path[i],2], b[path[i+1],2]), lty = "dotted")
}
Thanks everyone for the suggestions. @Simon's solution was most tailored to my OP, and @Geoffrey's approach of using x,y coordinates was great as it allows for plotting of the transects and the travel order. Thus, I am posting a hybrid solution that was generated using code from both of them, as well as additional comments and code to detail the process and get to the actual end result I was aiming for. I am not sure if this will help anyone in the future, but since no single answer solved my problem 100% of the way, I thought I'd share what I came up with.
As others have noted, this is a type of traveling salesperson problem. It is asymmetric because the distance from the end of transect "t" to the beginning of transect "t+1" is not the same as the distance from the end of transect "t+1" to the start of transect "t". Also, it is a "path" solution rather than a "cycle" solution.
#=========================================
# Packages
#=========================================
library(TSP)
library(useful)
library(dplyr)
#=========================================
# Full dataset for testing
#=========================================
EndPt <- c(158.7,245.1,187.1,298.2,346.8,317.2,74.5,274.2,153.4,246.7,193.6,302.3,6.8,359.1,235.4,134.5,111.2,240.5,359.2,121.3,224.5,212.6,155.1,353.1,181.7,334,249.3,43.9,38.5,75.7,344.3,45.1,285.7,155.5,183.8,60.6,301,132.1,75.9,112,342.1,302.1,288.1,47.4,331.3,3.4,185.3,62,323.7,188,313.1,171.6,187.6,291.4,19.2,210.3,93.3,24.8,83.1,193.8,112.7,204.3,223.3,210.7,201.2,41.3,79.7,175.4,260.7,279.5,82.4,200.2,254.2,228.9,1.4,299.9,102.7,123.7,172.9,23.2,207.3,320.1,344.6,39.9,223.8,106.6,156.6,45.7,236.3,98.1,337.2,296.1,194,307.1,86.6,65.5,86.6,296.4,94.7,279.9)
StPt <- c(56.3,158.1,82.4,185.5,243.9,195.6,335,167,39.4,151.7,99.8,177.2,246.8,266.1,118.2,358.6,357.9,99.6,209.9,342.8,106.5,86.4,35.7,200.6,65.6,212.5,159.1,297,285.9,300.9,177,245.2,153.1,8.1,76.5,322.4,190.8,35.2,342.6,8.8,244.6,202,176.2,308.3,184.2,267.2,26.6,293.8,167.3,30.5,176,74.3,96.9,186.7,288.2,62.6,331.4,254.7,324.1,73.4,16.4,64,110.9,74.4,69.8,298.8,336.6,58.8,170.1,173.2,330.8,92.6,129.2,124.7,262.3,140.4,321.2,34,79.5,263,66.4,172.8,205.5,288,98.5,335.2,38.7,289.7,112.7,350.7,243.2,185.4,63.9,170.3,326.3,322.9,320.6,199.2,287.1,158.1)
EndID <- c(seq(1, 100, 1))
StID <- c(seq(1, 100, 1))
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
#=========================================
# Convert polar coordinates to cartesian x,y data
#=========================================
# Radius used for the polar-to-cartesian conversion; also sets the area the transects occupy for graphing
planeDim <- 1
# Number of transects
nTransects <- 100
# Convert 360-degree polar coordinates to x,y cartesian coordinates to facilitate calculating a distance matrix based on the Pythagorean theorem
EndX <- as.matrix(pol2cart(planeDim, EndPt, degrees = TRUE)["x"])
EndY <- as.matrix(pol2cart(planeDim, EndPt, degrees = TRUE)["y"])
StX <- as.matrix(pol2cart(planeDim, StPt, degrees = TRUE)["x"])
StY <- as.matrix(pol2cart(planeDim, StPt, degrees = TRUE)["y"])
# Matrix of x,y pairs for the beginning ("b") and end ("e") points of each transect
b <- cbind(c(StX), c(StY))
e <- cbind(c(EndX), c(EndY))
#=========================================
# Function to calculate the distance from all endpoints in the ePts matrix to a single beginning point in bPt
#=========================================
dist <- function(ePts, bPt) {
# Use the Pythagorean theorem to calculate the hypotenuse (i.e., distance) between every end point in the matrix ePts to the point bPt
apply(ePts, 1, function(p) sum((p - bPt)^2)^0.5)
}
#=========================================
# Distance matrix
#=========================================
# Apply the "dist" function to all beginning points to create a matrix that has the distance between every start and endpoint
## Note: because this is an asymmetric traveling salesperson problem, the distance matrix is directional, thus, the distances at any position in the matrix must be the distance from the transect shown in the row label and to the transect shown in the column label
distMatrix <- apply(b, 1, FUN = dist, ePts = e)
## Set the distance between the beginning and end of each transect to zero so that there is no "cost" to walking the transect
diag(distMatrix) <- 0
#=========================================
# Solve asymmetric TSP
#=========================================
# This creates an instance of the asymmetric traveling salesperson problem (ATSP)
atsp <- as.ATSP(distMatrix)
# This creates an object of class TOUR that travels to all of the points
## In this case, the repetitive_nn method produces the smallest overall and transect-to-transect distances
tour <- solve_TSP(atsp, method = "repetitive_nn")
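# (My addition, a suggested check rather than part of the original workflow:
# solve_TSP offers several heuristics, and tour_length() lets you compare a
# few before settling on repetitive_nn; some methods are randomized, so
# results may vary between runs)
for (m in c("nearest_insertion", "arbitrary_insertion", "two_opt", "repetitive_nn")) {
  print(sprintf("%s: %.2f", m, tour_length(solve_TSP(atsp, method = m))))
}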
#=========================================
# Create a path by cutting the tour at the most "expensive" transition
#=========================================
# Sort the original data frame to match the order of the solution
dfTour = df[as.numeric(tour),]
# Add the following columns to the original dataframe:
dfTour = dfTour %>%
  # Assign visit order (1 to 100, ascending)
  mutate(visit_order = 1:nrow(dfTour)) %>%
  # The ID of the next transect to move to
  mutate(next_StID = lead(StID, order_by = visit_order),
         # The angle of the start point for the next transect
         next_StPt = lead(StPt, order_by = visit_order))
# lead() generates NA's in the last record for next_StID and next_StPt; replace these by adding that information
dfTour[dfTour$visit_order == nrow(dfTour), 'next_StID'] <-
  dfTour[dfTour$visit_order == 1, 'StID']
dfTour[dfTour$visit_order == nrow(dfTour), 'next_StPt'] <-
  dfTour[dfTour$visit_order == 1, 'StPt']
# Function to calculate distance for 360 degrees rather than x,y coordinates
transect_distance <- function(end, start){
  abs_dist = abs(start - end)
  ifelse(360 - abs_dist > 180, abs_dist, 360 - abs_dist)
}
# Compute distance between end of each transect and start of next using polar coordinates
dfTour = dfTour %>% mutate(dist_between = transect_distance(EndPt, next_StPt))
# Identify the longest transition point for breaking the cycle
min_distance <- sum(dfTour$dist_between) - max(dfTour$dist_between)
path_start <- dfTour[dfTour$dist_between == max(dfTour$dist_between), 'next_StID']
path_end <- dfTour[dfTour$dist_between == max(dfTour$dist_between), 'EndID']
# Make a statement about the least cost path
print(sprintf("minimum cost path = %.2f, starting at node %d, ending at node %d",
min_distance, path_start, path_end))
# The variable path shows the order in which you should visit the transects
path <- cut_tour(tour, path_start, exclude_cut = F)
# Arrange df from smallest to largest travel distance
tmp1 <- dfTour %>%
  arrange(dist_between)
# Change dist_between and visit_order to NA for the transect with the largest distance to break the cycle
# (i.e., we will not travel this distance; this represents the path endpoint)
tmp1[nrow(tmp1), 'dist_between'] <- NA
tmp1[nrow(tmp1), 'visit_order'] <- NA
# Set df order back to ascending by visit order
tmp2 <- tmp1 %>%
  arrange(visit_order)
# Detect the break in a sequence of visit_order introduced by the NA (e.g., 1,2,3....5,6) and mark groups before the break with 0 and after the break with 1 in the "cont_per" column
tmp2$cont_per <- cumsum(!c(TRUE, diff(tmp2$visit_order)==1))
# Sort "cont_per" such that the records following the break become the beginning of the path and the ones following the break represent the middle orders and the point with the NA being assigned the last visit order, and assign a new visit order
tmp3 <- tmp2 %>%
  arrange(desc(cont_per)) %>%
  mutate(visit_order_FINAL = seq(1, length(tmp2$visit_order), 1))
# Dataframe ordered by progression of transects
trans_order <- cbind.data.frame(tmp3[2], tmp3[1], tmp3[4], tmp3[3], tmp3[6], tmp3[7], tmp3[8], tmp3[10])
# Insert NAs for "next" info for final transect
trans_order[nrow(trans_order),'next_StPt'] <- NA
trans_order[nrow(trans_order), 'next_StID'] <- NA
#=========================================
# View data
#=========================================
head(trans_order)
#=========================================
# Plot
#=========================================
#For fun, we can visualize the transects:
# make an empty graph space
plot(1,1, xlim = c(-planeDim-0.1, planeDim+0.1), ylim = c(-planeDim-0.1, planeDim+0.1), ty = "n")
# plot the beginning of each transect as a green point, the end as a red point,
# and a grey line representing the transect
for(i in 1:nrow(e)) {
  xs = c(b[i,1], e[i,1])
  ys = c(b[i,2], e[i,2])
  lines(xs, ys, col = "light grey", lwd = 1, lty = 1)
  points(xs, ys, col = c("green", "red"), pch = 1, cex = 1)
  #text((xs), (ys), i)
}
# Add the path to the visualization
for(i in 1:(length(path)-1)) {
  # This draws a line between the end point of path i and the beginning point of path i+1
  lines(c(e[path[i],1], b[path[i+1],1]), c(e[path[i],2], b[path[i+1],2]), lty = 1, lwd = 1)
}
This is what the end result looks like
I experimented a little with SOM. First I used MiniSOM in Python, but I was not impressed, so I changed to the kohonen package in R, which offers more features. Basically, I applied SOM to three use cases: (1) clustering in 2D with generated data, (2) clustering with higher-dimensional data (the built-in wine data set), and (3) outlier detection. I solved all three use cases, but I would like to raise a question in connection with the outlier detection I applied. For this purpose I used the vector som$distances, which contains a distance for each row of the input data set. Values with exceptionally large distances can be outliers. However, I do not know how this distance is computed. The package description (https://cran.r-project.org/web/packages/kohonen/kohonen.pdf) states for this metric: "distance to the closest unit".
Could you please tell how this distance is computed?
Could you please comment on the outlier detection I used? How would you have done it? (In the generated data set it really finds the outliers. In the real wine data set there are four relatively large values among the 177 wine sorts; see the charts below. I really like the idea of using bar charts to depict this.)
Charts:
Generated data, 100 points in 2D in 5 distinct clusters plus 2 outliers (Category 6 shows the outliers):
Distances shown for all 102 data points; the last two are the outliers, which were correctly identified. I repeated the test with 500 and 1000 data points, again adding only 2 outliers, and the outliers were found in those cases as well.
Distances for the real wine data set with potential outliers:
The row id of the potential outliers:
# print the row id of the outliers
# the threshold 10 can be taken from the bar chart,
# below which the vast majority of the values fall
df_wine[df_wine$value > 10, ]
It produces the following output:
index value
59 59 12.22916
110 110 13.41211
121 121 15.86576
158 158 11.50079
My annotated code snippet:
library(kohonen)
library(ggplot2)
data(wines)
scaled_wines <- scale(wines)
# creating and training SOM
som.wines <- som(scaled_wines, grid = somgrid(5, 5, "hexagonal"))
summary(som.wines)
#looking for outliers, dist = distance to the closest unit
som.wines$distances
len <- length(som.wines$distances)
index_in_vector <- c(1:len)
df_wine<-data.frame(cbind(index_in_vector, som.wines$distances))
colnames(df_wine) <-c("index", "value")
po <-ggplot(df_wine, aes(index, value)) + geom_bar(stat = "identity")
po <- po + ggtitle("Outliers?") + theme(plot.title = element_text(hjust = 0.5)) + ylab("Distances in som.wines$distances") + xlab("Number of Rows in the Data Set")
plot(po)
# print the row id of the outliers
# the threshold 10 can be taken from the bar chart,
# below which the vast majority of the values fall
df_wine[df_wine$value > 10, ]
Further Code Samples
With regard to the discussion in the comments, I am also posting the code snippets asked for. As far as I remember, I constructed the code lines responsible for clustering based on samples found in the description of the kohonen package (https://cran.r-project.org/web/packages/kohonen/kohonen.pdf). However, I am not completely sure; it was more than a year ago. The code is provided as is, without any warranty :-). Please bear in mind that a particular clustering approach may perform with different accuracy on different data. I would also recommend comparing it with t-SNE on the wine data set (data(wines), available in R). Moreover, implement the heat-maps to demonstrate how the data are located with regard to individual variables. (In the case of the above example with 2 variables it is not important, but it would be nice for the wine data set; a sketch follows below.)
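Here is a minimal sketch of such a heat-map (my addition; it assumes the som.wines map trained in the snippet above, and that "alcohol" is one of the wine variables):
# colour each map unit by its codebook value for a single variable
plot(som.wines, type = "property",
     property = getCodes(som.wines)[, "alcohol"],
     main = "Codebook values: alcohol")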
Data Generation with Five Clusters and 2 Outliers and Plotting
library(stats)
library(ggplot2)
library(kohonen)
generate_data <- function(num_of_points, num_of_clusters, outliers=TRUE){
  num_of_points_per_cluster <- num_of_points/num_of_clusters
  cat(sprintf("#### num_of_points_per_cluster = %s, num_of_clusters = %s \n", num_of_points_per_cluster, num_of_clusters))
  arr <- array()
  standard_dev_y <- 6000
  standard_dev_x <- 2
  # for reproducibility, set the random generator seed
  set.seed(10)
  for (i in 1:num_of_clusters){
    centroid_y <- runif(1, min=10000, max=200000)
    centroid_x <- runif(1, min=20, max=70)
    cat(sprintf("centroid_x = %s \n, centroid_y = %s", centroid_x, centroid_y))
    vector_y <- rnorm(num_of_points_per_cluster, mean=centroid_y, sd=standard_dev_y)
    vector_x <- rnorm(num_of_points_per_cluster, mean=centroid_x, sd=standard_dev_x)
    cluster <- array(c(vector_y, vector_x), dim=c(num_of_points_per_cluster, 2))
    cluster <- cbind(cluster, i)
    arr <- rbind(arr, cluster)
  }
  if(outliers){
    # adding two outliers
    arr <- rbind(arr, c(10000, 30, 6))
    arr <- rbind(arr, c(150000, 70, 6))
  }
  colnames(arr) <- c("y", "x", "Cluster")
  # workaround to remove the first NA row
  arr <- na.omit(arr)
  return(arr)
}
scatter_plot_data <- function(data_in, couloring_base_indx, main_label){
  df <- data.frame(data_in)
  colnames(df) <- c("y", "x", "Cluster")
  pl <- ggplot(data=df, aes(x=x, y=y)) + geom_point(aes(color=factor(df[, couloring_base_indx])))
  pl <- pl + ggtitle(main_label) + theme(plot.title = element_text(hjust = 0.5))
  print(pl)
}
##################
# generating data
data <- generate_data(100, 5, TRUE)
print(data)
scatter_plot_data(data, couloring_base_indx = 3, "Original Clusters without Outliers \n 102 Points")
Preparation, Clustering and Plotting
I used the hierarchical clustering approach with the Kohonen Map (SOM).
normalising_data <- function(data){
  # normalizing data points, not the cluster identifiers
  mtrx <- data.matrix(data)
  umtrx <- scale(mtrx[,1:2])
  umtrx <- cbind(umtrx, factor(mtrx[,3]))
  colnames(umtrx) <- c("y", "x", "Cluster")
  return(umtrx)
}
train_som <- function(umtrx){
  # unsupervised learning
  set.seed(7)
  g <- somgrid(xdim=5, ydim=5, topo="hexagonal")
  #map <- som(umtrx[, 1:2], grid=g, alpha=c(0.005, 0.01), radius=1, rlen=1000)
  map <- som(umtrx[, 1:2], grid=g)
  summary(map)
  return(map)
}
plot_som_data <- function(map){
  par(mfrow=c(3,2))
  # plot some characteristics of the SOM map
  plot(map, type='changes')
  plot(map, type='codes', main="Mapping Data")
  plot(map, type='count')
  plot(map, type='mapping') # how many data points are held by each neuron
  plot(map, type='dist.neighbours') # the darker the colours, the closer the points; the lighter the colours, the more distant the points
  # switch the plot config back to normal
  par(mfrow=c(1,1))
}
plot_distances_to_the_closest_point <- function(map){
  # to see which neuron is assigned to which value
  map$unit.classif
  # looking for outliers, dist = distance to the closest unit
  map$distances
  max(map$distances)
  len <- length(map$distances)
  index_in_vector <- c(1:len)
  df <- data.frame(cbind(index_in_vector, map$distances))
  colnames(df) <- c("index", "value")
  po <- ggplot(df, aes(index, value)) + geom_bar(stat = "identity")
  po <- po + ggtitle("Outliers?") + theme(plot.title = element_text(hjust = 0.5)) + ylab("Distances in som$distances") + xlab("Number of Rows in the Data Set")
  plot(po)
  return(df)
}
###################
# unsupervised learning
umtrx <- normalising_data(data)
map<-train_som(umtrx)
plot_som_data(map)
#####################
# creating the dendrogram and then the clusters for the neurons
dendrogram <- hclust(object.distances(map, "codes"), method = 'ward.D')
plot(dendrogram)
clusters <- cutree(dendrogram, 7)
clusters
length(clusters)
#visualising the clusters on the map
par(mfrow = c(1,1))
plot(map, type='dist.neighbours', main="Mapping Data")
add.cluster.boundaries(map, clusters)
Plots with the Clusters
You can also create nice heat-maps for selected variables, but I have not implemented them here; for clustering with 2 variables it does not really make sense. If you implement it for the wine data set, please add the code and the charts to this post.
#see the predicted clusters with the data set
# 1. add the vector of the neuron ids to the data
mapped_neurons <- map$unit.classif
umtrx <- cbind(umtrx, mapped_neurons)
# 2. taking the predicted clusters and adding them to the original matrix
# very good description of the apply functions:
# https://www.guru99.com/r-apply-sapply-tapply.html
get_cluster_for_the_row <- function(x, cltrs){
  return(cltrs[x])
}
predicted_clusters <- sapply(umtrx[,4], get_cluster_for_the_row, cltrs = clusters)
# (mtrx was not defined at this point in the original snippet; rebuild the
# unscaled data matrix from the generated data before attaching the clusters)
mtrx <- data.matrix(data)
mtrx <- cbind(mtrx, predicted_clusters)
scatter_plot_data(mtrx, couloring_base_indx = 4, "Predicted Clusters with Outliers \n 100 points")
See the predicted clusters below in case there were outliers and in case there were not.
I am not quite sure, but I often find that the distance measurement between two objects residing in a particular dimensional space mostly uses Euclidean distance. For example, two points A and B in a two-dimensional space at A(x=3, y=4) and B(x=6, y=8) are 5 distance units apart, the result of calculating sqrt((3-6)^2 + (4-8)^2). This also applies to data of higher dimension, by adding the squared difference of the two points' values in each additional dimension: if A(x=3, y=4, z=5) and B(x=6, y=8, z=7), then the distance is sqrt((3-6)^2 + (4-8)^2 + (5-7)^2), and so on.
In kohonen, I think that after the model has finished the training phase, the algorithm calculates the distances of each datum to all nodes and then assigns it to the nearest node (the node with the smallest distance to it). Eventually, the values inside the variable 'distances' returned by the model are the distances of every datum to its nearest node. One thing to note from your script is that the algorithm does not measure the distance directly from the original property values of the data, because they were scaled prior to feeding the data to the model; the distance measurement is applied to the scaled version of the data. The scaling is a standard procedure to eliminate the dominance of one variable over another.
I believe that your method is acceptable, because the values inside the 'distances' variable are the distances of each datum to its nearest node. So if the distance between a datum and its nearest node is high, this also means the distances from that datum to the other nodes are obviously much, much higher.
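As a rough way to check this interpretation, here is a sketch of mine under the assumptions above (whether the package reports plain Euclidean or sum-of-squares distances depends on its configured distance function, so compare the values rather than assume exact equality):
library(kohonen)
data(wines)
scaled_wines <- scale(wines)
set.seed(7)
som.wines <- som(scaled_wines, grid = somgrid(5, 5, "hexagonal"))
# squared distance of each row to its winning unit's codebook vector
codes   <- getCodes(som.wines)     # codebook vectors, one row per unit
winners <- som.wines$unit.classif  # winning unit for each input row
manual  <- rowSums((scaled_wines - codes[winners, ])^2)
# compare against what the package reports
head(cbind(manual, reported = som.wines$distances))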
I am trying to write my first own k-means algorithm in R. I am new to this field, so please don't judge me for not seeing the obvious.
In its current state, the algorithm takes two vectors x, y, calculates the distance of each data point to the cluster centers, and assigns to each data point the cluster whose center is closest. The algorithm stops when there is no change in the assignments and thus no change in the cluster centers.
library(ggplot2)
# Sample data
set.seed(100)
xval <- rnorm(12, mean = rep(1:3, each = 4), sd = 0.2)
yval <- rnorm(12, mean = rep(c(1,2,1), each = 4), sd = 0.2)
# Kmeans function
kclus <- function(x, y, nclus) {
  # start with random cluster centers
  xcen <- runif(n = nclus, min = min(x), max = max(x))
  ycen <- runif(n = nclus, min = min(y), max = max(y))
  # data points and cluster assignment in "data"
  # cluster coordinates in "clus"
  data <- data.frame(xval = x, yval = y, clus = NA)
  clus <- data.frame(name = 1:nclus, xcen = xcen, ycen = ycen)
  finish <- FALSE
  while(finish == FALSE) {
    # assign cluster with minimum distance to each data point
    for(i in 1:length(x)) {
      dist <- sqrt((x[i]-clus$xcen)^2 + (y[i]-clus$ycen)^2)
      data$clus[i] <- which.min(dist)
    }
    xcen_old <- clus$xcen
    ycen_old <- clus$ycen
    # calculate new cluster centers
    for(i in 1:nclus) {
      clus[i,2] <- mean(subset(data$xval, data$clus == i))
      clus[i,3] <- mean(subset(data$yval, data$clus == i))
    }
    # stop the loop if there is no change in cluster coordinates
    if(identical(xcen_old, clus$xcen) & identical(ycen_old, clus$ycen)) finish <- TRUE
  }
  data
}
# apply kmeans function to sample data
cluster <- kclus(xval, yval, 4)
# plot the result
ggplot(cluster, aes(xval, yval, color = as.factor(clus))) + geom_point()
This is working relatively well so far. But I have no clue how I can force the algorithm to produce a specific number of clusters. The parameter nclus is already implemented in my kclus() function, but I don't know how to make use of it.
For the given sample data, the algorithm just gives me three clusters. I want to force it to give me four clusters back.
Anybody here who can give me an advice on that?
Thank you so much,
Marcus
It is not true that the algorithm you implemented always gives you 3 clusters; probably you have not run it a sufficiently large number of times. Here is a slight modification of your code where we can see that the number of clusters output depends on the initialization of the cluster centroids (which are randomly chosen and can be controlled with random.seed):
# Sample data
set.seed(100)
xval <- rnorm(12, mean = rep(1:3, each = 4), sd = 0.2)
yval <- rnorm(12, mean = rep(c(1,2,1), each = 4), sd = 0.2)
# Kmeans function with random.seed for initialization
kclus <- function(x, y, nclus, random.seed=123) {
  set.seed(random.seed)
  # start with random cluster centers
  xcen <- runif(n = nclus, min = min(x), max = max(x))
  ycen <- runif(n = nclus, min = min(y), max = max(y))
  # data points and cluster assignment in "data"
  # cluster coordinates in "clus"
  data <- data.frame(xval = x, yval = y, clus = NA)
  clus <- data.frame(name = 1:nclus, xcen = xcen, ycen = ycen)
  finish <- FALSE
  while(finish == FALSE) {
    # assign cluster with minimum distance to each data point
    for(i in 1:length(x)) {
      dist <- sqrt((x[i]-clus$xcen)^2 + (y[i]-clus$ycen)^2)
      data$clus[i] <- which.min(dist)
    }
    xcen_old <- clus$xcen
    ycen_old <- clus$ycen
    # calculate new cluster centers
    for(i in 1:nclus) {
      clus[i,2] <- mean(subset(data$xval, data$clus == i))
      clus[i,3] <- mean(subset(data$yval, data$clus == i))
    }
    # stop the loop if there is no change in cluster coordinates
    if(identical(xcen_old, clus$xcen) & identical(ycen_old, clus$ycen)) finish <- TRUE
  }
  data
}
# with default random seed 123, you should be able to reproduce the result
# as you can see, in this case, no data points were assigned to the 4th cluster
cluster <- kclus(xval, yval, 4)
cluster.centers <- aggregate(.~clus, cluster, mean)
ggplot(cluster, aes(xval, yval, color = as.factor(clus))) +
geom_point(size=5) +
geom_point(data=cluster.centers, aes(xval, yval, col=as.factor(clus)), pch=8, size=5)
# run with a different random seed = 12
# as you can see, in this case the algorithm outputs 4 clusters, with the 2nd cluster having a single datapoint assigned to it
cluster <- kclus(xval, yval, 4, 12)
cluster.centers <- aggregate(.~clus, cluster, mean)
ggplot(cluster, aes(xval, yval, color = as.factor(clus))) +
geom_point(size=5) +
geom_point(data=cluster.centers, aes(xval, yval, col=as.factor(clus)), pch=8, size=5)
# run with a different random seed = 12345
# as you can see, in this case the algorithm outputs 2 clusters, with all the datapoints assigned to the 1st and 2nd clusters
cluster <- kclus(xval, yval, 4, 12345)
cluster.centers <- aggregate(.~clus, cluster, mean)
ggplot(cluster, aes(xval, yval, color = as.factor(clus))) +
geom_point(size=5) +
geom_point(data=cluster.centers, aes(xval, yval, col=as.factor(clus)), pch=8, size=5)
As we can see from the above examples, whether or not a cluster ends up with no points assigned to it at convergence depends upon the initial center positions and the data distribution. In general, if kmeans converges with an empty cluster, forcefully assigning a point to that cluster will likely result in poorer-quality clusters, something you don't want to do.
There are several things that you can try at this point.
First, you can run your algorithm multiple times, each time with different randomly initialized centers, and then choose the result with the highest cluster quality (measured by SSE etc.).
Second, you can try smarter initialization with k-means++ (a sketch follows after this list).
A not-so-good choice could be to modify your algorithm to ensure that, while reassigning clusters, each of the k (=4) clusters has at least one point assigned to it (and if not, don't reassign).
Finally, you could try some other algorithm, such as hierarchical clustering, which gives you more flexibility via dendrograms to choose as many clusters as you want.
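For reference, here is a minimal sketch of that k-means++ seeding (my addition; kpp_init is a hypothetical helper, not part of the original answer):
# k-means++ seeding: the first center is a uniformly random data point; each
# subsequent center is sampled with probability proportional to the squared
# distance to its nearest already-chosen center
kpp_init <- function(x, y, k) {
  pts <- cbind(x, y)
  centers <- pts[sample(nrow(pts), 1), , drop = FALSE]
  while (nrow(centers) < k) {
    # squared distance of every point to its nearest chosen center
    d2 <- apply(pts, 1, function(p) min(colSums((t(centers) - p)^2)))
    centers <- rbind(centers, pts[sample(nrow(pts), 1, prob = d2), ])
  }
  data.frame(name = 1:k, xcen = centers[, 1], ycen = centers[, 2])
}
Inside kclus(), the two runif() lines and the construction of clus could then be swapped for clus <- kpp_init(x, y, nclus).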
That is just the way that k-means works. You have two main choices: either live with getting fewer clusters, or, whenever the number of clusters falls below the requested number, start a new one. To start a new one, one might find the point that is farthest from its cluster center and make it a new cluster (a sketch of this re-seeding idea follows). However, there are problems with this. Suppose you have 20 points and the user asks for 25 clusters; you just can't satisfy some people.
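Here is a minimal sketch of that re-seeding idea (my addition; reseed_empty is a hypothetical helper operating on the data and clus frames from the question's kclus()):
# if cluster i ended up empty, move its center to the point currently farthest
# from its own assigned center, and give that point to cluster i
reseed_empty <- function(data, clus, nclus) {
  for (i in 1:nclus) {
    if (!any(data$clus == i, na.rm = TRUE)) {
      d <- sqrt((data$xval - clus$xcen[data$clus])^2 +
                (data$yval - clus$ycen[data$clus])^2)
      far <- which.max(d)
      clus$xcen[i] <- data$xval[far]
      clus$ycen[i] <- data$yval[far]
      data$clus[far] <- i
    }
  }
  list(data = data, clus = clus)
}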
The problem is your initialization.
Initializing with random numbers is the worst possible choice unless your data is uniformly randomly distributed (and then you don't have clusters).
Now if you generate a center in the top left corner, it may have 0 points assigned to it, and your code will then likely generate a NaN mean next.
Instead, try choosing k points from your data as centers. This is much less likely to go bad (although it can).
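A minimal sketch of that fix, using the data structures from the question's kclus() (my addition; init_from_data is a hypothetical helper):
# Forgy-style initialization: draw nclus distinct observed points and use them
# as the starting centers instead of uniform random coordinates
init_from_data <- function(x, y, nclus) {
  idx <- sample(length(x), nclus)
  data.frame(name = 1:nclus, xcen = x[idx], ycen = y[idx])
}
Inside kclus(), the two runif() lines and the construction of clus would then be replaced by clus <- init_from_data(x, y, nclus).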