R radarchart: free axis to enhance records display?

I am trying to display my data using radarchart {fmsb}. The values of my records are highly variable, so low values are not visible on the final plot.
Is there a way to "free" the axis for each record, to visualize the data independently of their scale?
Dummy example:
df <- data.frame(n = c(100, 0, 0.3, 60, 0.3),
                 j = c(100, 0, 0.001, 70, 7),
                 v = c(100, 0, 0.001, 79, 3),
                 z = c(100, 0, 0.001, 80, 99))
n j v z
1 100.0 100.0 100.000 100.000 # max
2 0.0 0.0 0.000 0.000 # min
3 0.3 0.001 0.001 0.001 # small values -> not visible on final chart!!
4 60.0 0.001 79.000 80.000
5 0.3 0.0 3.000 99.000
Create radarchart
require(fmsb)
radarchart(df, axistype=0, pty=32, axislabcol="grey",# na.itp=FALSE,
seg = 5, centerzero = T)
Result: only the second and third data records (rows #4 and #5) are visible; the first record (row #3, with low values) is not!
How can I make all records (rows) visible, i.e. how can I "free" the axis for each of my records? Thank you a lot!

If you want to be sure to see all 4 dimensions whatever the differences, you'll need a logarithmic scale.
Since by design the radar chart cannot show negative values, our choice of base is restricted by the range of the values and by the number of segments (axis ticks).
If we want an integer base, the minimum we can choose is:
seg0 <- 5 # your initial choice, could be changed
base <- ceiling(
  max(apply(df[-c(1, 2), ], MARGIN = 1, max) /
      apply(df[-c(1, 2), ], MARGIN = 1, min))^(1 / (seg0 - 1))
)
Here we have a base 5.
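A quick numeric sanity check of that computation, spelling out the per-row max/min ratios of the three data rows:

```r
# per-row max/min ratios of the data rows of df (rows 3, 4 and 5)
ratios <- c(0.3 / 0.001,  # row 3
            80 / 60,      # row 4
            99 / 0.3)     # row 5
seg0 <- 5
base <- ceiling(max(ratios)^(1 / (seg0 - 1)))  # 330^(1/4) is about 4.26, so base is 5
```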
Let's normalize and transform our data.
First we normalize the data by setting the maximum to 1 for all series, then we apply our logarithmic transformation, which sets the maximum of each series to seg0 (n for the black series, z for the others) and the minimum among all series between 1 and 2 (here the v value of the black series).
df_normalized <- as.data.frame(df[-c(1, 2), ] / apply(df[-c(1, 2), ], MARGIN = 1, max))
df_transformed <- rbind(rep(seg0, 4), rep(0, 4), log(df_normalized, base) + seg0)
radarchart(df_transformed, axistype = 0, pty = 32, axislabcol = "grey", # na.itp=FALSE,
           seg = seg0, centerzero = T, maxmin = T)
If we look at the green series we see:
j and v have same order of magnitude
n is about 5^2 = 25 times smaller than j (5 is the value of the base, ^2 because they are 2 segments apart)
v is about 5^2 = 25 times (again) smaller than z
If we look at the black series we see that n is about 5^3.5 times bigger than the other dimensions.
If we look at the red series we see that the order of magnitude is the same among all dimensions.
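Since the chart is now on a log scale, a raw value can be read back off it by inverting the transform: a plotted value p on a series whose raw maximum is m corresponds to m * base^(p - seg0). A minimal round-trip check (reusing seg0 = 5 and base = 5 from above; m and raw are taken from the last data row of df):

```r
seg0 <- 5
base <- 5
m <- 99    # raw maximum of the last data row of df (its z value)
raw <- 3   # that row's v value
p <- log(raw / m, base) + seg0    # position at which `raw` plots on the chart
recovered <- m * base^(p - seg0)  # inverting the transform recovers the raw value
```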

Maybe a workaround for your problem:
If you transform your data before running radarchart
(e.g. logarithm, square root, ...), then you can also visualise small values.

Here is an example using a cubic root transformation:
library(specmine)
df.c<-data.frame(cubic_root_transform(df)) # transform dataset
radarchart(df.c, axistype = 0, pty = 32, axislabcol = "grey", # na.itp=FALSE,
           seg = 5, centerzero = T)
and the result will look like this:
EDIT:
If you want to zoom the small values even more you can do that with a higher order of the root.
e.g.
t <- 5 # for fifth-order root
df.t <- data.frame(apply(df, 2, function(x) x^(1/t))) # transform dataset
radarchart(df.t, axistype = 0, pty = 32, axislabcol = "grey", # na.itp=FALSE,
           seg = 5, centerzero = T)
You can adjust the "zoom" as you want by changing the value of t, so you should be able to find a visualization that is suitable for you.

Here is an example using a 10th-root transformation:
df.c <- data.frame(df^(1/10)) # transform dataset; element-wise ^ needs no extra package
radarchart(df.c, axistype = 0, pty = 32, axislabcol = "grey", # na.itp=FALSE,
           seg = 5, centerzero = T)
and the result will look like this:
You can try the n-th root to find the one that is best for you. As n grows, the root of a number near zero grows faster toward 1.
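That last point is easy to verify numerically: the higher the order of the root, the further a near-zero value is pulled up toward 1.

```r
x <- 0.001
# the same small value under roots of increasing order (2nd, 3rd, 5th, 10th)
roots <- sapply(c(2, 3, 5, 10), function(t) x^(1 / t))
# each higher-order root lifts x further toward 1
```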


Computing the Tukey median

I am trying to compute the data depth of two variables with the following function:
library(depth)
x <- data.frame(data$`math score`, data$`reading score`)
depth(1000, x, method = "Tukey", approx = FALSE, eps = 1e-8, ndir = 1000)
The first argument of depth is u, which stands for "Numerical vector whose depth is to be calculated. Dimension has to be the same as that of the observations."
I have 1000 observations however I get the following error message:
Error in depth(1000, x, method = "Tukey", approx = FALSE, eps = 1e-08, :
Dimension mismatch between the data and the point u.
Does someone know how to solve this issue?
Thank you in advance!
If you look at the documentation for the function depth, it says:
u    Numerical vector whose depth is to be calculated. Dimension has to be the same as that of the observations.
So u has to be a point in multidimensional space represented by a vector with n components, whereas x has to be a matrix or data frame of m by n components, (m rows for m points). You are comparing u to all the other multidimensional points in the set x to find the minimum number of points that could share a half-space with u.
Let's create a very simple example in two-dimensional space:
library(depth)
set.seed(100)
x <- data.frame(x = c(rnorm(10, -5, 2), rnorm(10, 5, 2)), y = rnorm(20, 0, 2))
plot(x)
The depth function calculates the depth of a particular point relative to the data. So let's use the origin:
u <- data.frame(x = 0, y = 0)
points(u, col = "red", pch = 16)
Naively we might think that the origin here has a depth of 10/20 points (i.e. the most obvious way to partition this dataset is a vertical line through the origin with 10 points on each side), but instead we find:
depth(u, x)
#> [1] 0.35
This indicates that there is a half-space including the origin that only contains 0.35 of the points, i.e. 7 points out of 20:
depth(u, x) * nrow(x)
#> [1] 7
And we can see that visually like this:
abline(0, -0.07)
points(x[x$y < (-0.07 * x$x),], col = "blue", pch = 16)
Where we have coloured these 7 points blue.
So it's not clear what result you expect from the depth function, but you will need to give it a value of c(math_score, reading_score) where math_score and reading_score are test values for which you want to know the depth.
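So, concretely, the first argument must be a point with one value per column of x, not the number of observations. A minimal sketch (the score values below are made up for illustration):

```r
# x has one column per dimension; u must be a point of the same dimension
x <- data.frame(math = c(55, 60, 72, 80), reading = c(50, 66, 70, 78))
u <- c(65, 64)                   # a candidate point, NOT the sample size
stopifnot(length(u) == ncol(x))  # the dimension match that depth() requires
# depth(u, x, method = "Tukey")  # needs the 'depth' package installed
```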

Use R to Efficiently Order Randomly Generated Transects

Problem
I am looking for a way to efficiently order randomly selected sampling transects that occur around a fixed object. These transects, once generated, need to be ordered in a way that makes sense spatially, such that the total distance traveled is minimized. This would be done by ensuring that the end point of the current transect is as close as possible to the start point of the next transect. Also, none of the transects can be repeated.
Because there are thousands of transects to order and this is a very tedious task to do manually, I am trying to use R to automate this process. I have already generated the transects, each having a start and end point whose location is indicated using a 360-degree system (e.g., 0 is North, 90 is East, 180 is South, and 270 is West). I have also generated some code that seems to indicate the start point and the ID of the next transect, but there are a few problems with this code: (1) it can generate errors depending on the start and end points being considered, (2) it doesn't achieve what I ultimately need it to achieve, and (3) as is, the code itself seems overly complicated and I can't help but wonder if there is a more straightforward way to do this.
Ideally, the code would result in the transects being reordered such that they match the order in which they should be flown, rather than the order in which they were initially input.
The Data
For simplicity, let's pretend there are just 10 transects to order.
# Transect ID for the start point
StID <- c(seq(1, 10, 1))
# Location of transect start point, based on a 360-degree circle
StPt <- c(342.1, 189.3, 116.5, 67.9, 72, 208.4, 173.2, 97.8, 168.7, 138.2)
# Transect ID for the end point
EndID <- c(seq(1, 10, 1))
# Location of transect end point, based on a 360-degree circle
EndPt <- c(122.3, 313.9, 198.7, 160.4, 166, 26.7, 312.7, 273.7, 288.8, 287.5)
# Dataframe
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
What I Have Tried
Please feel free to ignore this code, there has to be a better way and it does not really achieve the intended outcome. Right now I am using a nested for-loop that is very difficult to intuitively follow but represents my best attempt thus far.
# Create two new columns that will be populated using a loop
df$StPt_Next <- NA
df$ID_Next <- NA
# Also create a list to be populated as end and start points are matched
used <- c(df$StPt[1]) #puts the start point of transect #1 into the used vector since we will start with 1 and do not want to have it used again
# Then, for every row in the dataframe...
for (i in seq(1, length(df$EndPt) - 1, 1)) { # selects all rows except the last one, as the last transect should have no "next" transect
  # generate some print statements to indicate that the script is indeed running while you wait....
  print(paste("######## ENDPOINT", i, ":", df$EndPt[i], " ########"))
  print(paste("searching for a start point that fits criteria to follow this endpoint", sep = ""))
  # sequentially select each end point
  valueEndPt <- df[i, 1]
  # order the index by taking the absolute difference of end and start points; if this
  # value is greater than 180, also subtract it from 360 so all differences are less
  # than 180, then order the differences from smallest to largest
  orderx <- order(ifelse(360 - abs(df$StPt - valueEndPt) > 180,
                         abs(df$StPt - valueEndPt),
                         360 - abs(df$StPt - valueEndPt)))
  tmp <- as.data.frame(orderx)
  # specify index value
  index <- 1
  # for as long as there is an NA present in the StPt_Next column created before the for loop...
  while (is.na(df$StPt_Next[i])) {
    # select the value of the ordered index in sequential order
    j <- orderx[index]
    # if the start point associated with a given index is present in the list of used values...
    if (df$StPt[j] %in% used) {
      # ...then print a statement indicating this is the case...
      print(paste("passing ", df$StPt[j], " as it has already been used", sep = ""))
      # ...move on to the next index...
      index <- index + 1
      # ...and skip the remainder of the loop body for values that have already been used
      next
    } else {
      # if the start point associated with a given index is not present in the list of
      # used values, identify the start point value associated with that index ID...
      valueStPt <- df$StPt[j]
      # ...and print a statement indicating an attempt is being made to use the next value
      print(paste("trying ", df$StPt[j], sep = ""))
      # if the end transect number is different from the start transect number...
      if (df$EndID[i] != df$StID[j]) {
        # ...then put the start point in the new column...
        df$StPt_Next[i] <- df$StPt[j]
        # ...note which record this start point came from for ease of reference/troubleshooting...
        df$ID_Next[i] <- j
        # ...print a statement indicating that a value for the new column has been selected...
        print(paste("using ", df$StPt[j], sep = ""))
        # ...and add that start point to the list of used ones
        used <- c(used, df$StPt[j])
      } else {
        # otherwise, if the end transect number matches the start transect number,
        # keep NA in this column and try again...
        df$StPt_Next[i] <- NA
        # ...and indicate that this particular matched pair cannot be used
        print(paste("cant use ", valueStPt, " as the column EndID (related to index in EndPt) and StID (related to index in StPt) values are matching", sep = ""))
      } # end if/else ensuring that start and end points come from different transects
      # move on to the next index
      index <- index + 1
    } # end if/else determining if a given start point still needs to be used
  } # end while loop identifying if there are still NAs in the new column
} # end for loop
The Output
When the code does not produce an explicit error, such as for the example data provided, the output is as follows:
StPt StID EndPt EndID StPt_Next ID_Next
1 342.1 1 122.3 1 67.9 4
2 189.3 2 313.9 2 173.2 7
3 116.5 3 198.7 3 97.8 8
4 67.9 4 160.4 4 72.0 5
5 72.0 5 166.0 5 116.5 3
6 208.4 6 26.7 6 189.3 2
7 173.2 7 312.7 7 168.7 9
8 97.8 8 273.7 8 138.2 10
9 168.7 9 288.8 9 208.4 6
10 138.2 10 287.5 10 NA NA
The last two columns were generated by the code and added to the original dataframe. StPt_Next has the location of the next closest start point and ID_Next indicates the transect ID associated with that next start point location. The ID_Next column indicates that the order in which transects should be flown is 1, 4, 5, 3, 8, 10, NA (i.e., the end), while 2, 7, 9, 6 form their own loop that goes back to 2.
There are two specific problems that I can't solve:
(1) There is the problem of forming one continuous chain. I think this is related to transect 1 always being the starting transect and 10 always being the last, but not knowing how to indicate in the code that the second-to-last transect must match up with 10, so that the sequence includes all 10 transects before terminating at an "NA" representing the final end point.
(2) To really automate this process, after fixing the early termination of the sequence due to the premature introduction of the "NA" as the ID_next, a new column would be made that would allow the transects to be reordered based on the most efficient progression rather than the original order of their EndID/StartID.
Intended Outcome
If we pretend that we only had 6 transects to order and ignore the 4 that were not able to be ordered due to the premature introduction of the "NA", this would be the intended outcome:
StPt StID EndPt EndID StPt_Next ID_Next TransNum
1 342.1 1 122.3 1 67.9 4 1
4 67.9 4 160.4 4 72.0 5 2
5 72.0 5 166.0 5 116.5 3 3
3 116.5 3 198.7 3 97.8 8 4
8 97.8 8 273.7 8 138.2 10 5
10 138.2 10 287.5 10 NA NA 6
EDIT: A Note About the Error Message Explicitly Produced by the Code
As indicated earlier, the code has a few flaws. Another flaw is that it will often produce an error when trying to order a larger number of transects. I am not entirely sure at what step in the process the error is generated, but I am guessing that it is related to the inability to match up the last few transects, possibly due to not meeting the criteria set forth by "orderx". The print statements say "trying NA" instead of a start point in the database, which results in this error: "Error in if (df$EndID[i] != df$StID[j]) { : missing value where TRUE/FALSE needed". I am guessing that there would need to be another if-else statement that somehow indicates "if the remaining points do not meet the orderx criteria, then just force them to match up with whatever transect remains so that everything is assigned a StPt_Next and ID_Next".
Here is a larger dataset that will generate the error:
EndPt <- c(158.7,245.1,187.1,298.2,346.8,317.2,74.5,274.2,153.4,246.7,193.6,302.3,6.8,359.1,235.4,134.5,111.2,240.5,359.2,121.3,224.5,212.6,155.1,353.1,181.7,334,249.3,43.9,38.5,75.7,344.3,45.1,285.7,155.5,183.8,60.6,301,132.1,75.9,112,342.1,302.1,288.1,47.4,331.3,3.4,185.3,62,323.7,188,313.1,171.6,187.6,291.4,19.2,210.3,93.3,24.8,83.1,193.8,112.7,204.3,223.3,210.7,201.2,41.3,79.7,175.4,260.7,279.5,82.4,200.2,254.2,228.9,1.4,299.9,102.7,123.7,172.9,23.2,207.3,320.1,344.6,39.9,223.8,106.6,156.6,45.7,236.3,98.1,337.2,296.1,194,307.1,86.6,65.5,86.6,296.4,94.7,279.9)
StPt <- c(56.3,158.1,82.4,185.5,243.9,195.6,335,167,39.4,151.7,99.8,177.2,246.8,266.1,118.2,358.6,357.9,99.6,209.9,342.8,106.5,86.4,35.7,200.6,65.6,212.5,159.1,297,285.9,300.9,177,245.2,153.1,8.1,76.5,322.4,190.8,35.2,342.6,8.8,244.6,202,176.2,308.3,184.2,267.2,26.6,293.8,167.3,30.5,176,74.3,96.9,186.7,288.2,62.6,331.4,254.7,324.1,73.4,16.4,64,110.9,74.4,69.8,298.8,336.6,58.8,170.1,173.2,330.8,92.6,129.2,124.7,262.3,140.4,321.2,34,79.5,263,66.4,172.8,205.5,288,98.5,335.2,38.7,289.7,112.7,350.7,243.2,185.4,63.9,170.3,326.3,322.9,320.6,199.2,287.1,158.1)
EndID <- c(seq(1, 100, 1))
StID <- c(seq(1, 100, 1))
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
Any advice would be greatly appreciated!
As @chinsoon12 points out, hidden in your problem is an (Asymmetric) Traveling Salesman Problem. The asymmetry arises because the start and end points of your transects are different.
ATSP is a renowned NP-complete problem, so exact solutions are very difficult even for medium-sized problems (see Wikipedia for more info). Hence the best we can do in most cases is an approximation or heuristic. As you mention there are thousands of transects, this is at least a medium-sized problem.
Rather than code an ATSP approximation algorithm from scratch, there is an existing TSP library for R that includes several approximation algorithms. Reference documentation is here.
The following is my use of the TSP package applied to your problem, beginning with setup (assume StPt, StID, EndPt, and EndID have been run as in your question):
install.packages("TSP")
library(TSP)
library(dplyr)
# Dataframe
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
# filter to 6 example nodes for requested comparison
df = df %>% filter(StID %in% c(1,3,4,5,8,10))
We shall use ATSP from a distance matrix. Position [row,col] in the matrix is the cost/distance of going from (the end of) transect row to (the start of) transect col. This code creates the entire distance matrix.
# distance calculation
transec_distance = function(end, start) {
  abs_dist = abs(start - end)
  ifelse(360 - abs_dist > 180, abs_dist, 360 - abs_dist)
}
# distance matrix
matrix_distance = matrix(data = NA, nrow = nrow(df), ncol = nrow(df))
for (start_id in 1:nrow(df)) {
  start_point = df[start_id, 'StPt']
  for (end_id in 1:nrow(df)) {
    end_point = df[end_id, 'EndPt']
    matrix_distance[end_id, start_id] = transec_distance(end_point, start_point)
  }
}
Note that there are more effective ways to construct a distance matrix. However, I have chosen this approach for its clarity. Depending on your computer and the exact number of transects this code may run very slowly.
Also, note that the size of this matrix is quadratic in the number of transects, so for a very large number of transects you will discover there is not enough memory.
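For reference, one of those more effective constructions is `outer()`, which applies the vectorized distance function to every end/start pair in a single call. This sketch uses a 3-transect subset of the question's data to keep the output small:

```r
transec_distance = function(end, start) {
  abs_dist = abs(start - end)
  ifelse(360 - abs_dist > 180, abs_dist, 360 - abs_dist)
}
# small subset of the question's data, for illustration
df3 = data.frame(StPt = c(342.1, 189.3, 116.5), EndPt = c(122.3, 313.9, 198.7))
# [i, j] = angular distance from the end of transect i to the start of transect j
matrix_distance = outer(df3$EndPt, df3$StPt, transec_distance)
```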
The solving is very unexciting. The distance matrix gets turned into an ATSP object, and the ATSP object gets passed to the solver. We then proceed to add the ordering/traveling information to the original df.
answer = solve_TSP(as.ATSP(matrix_distance))
# get length of cycle
print(answer)
# sort df to same order as solution
df_w_answer = df[as.numeric(answer),]
# add info about next transect to each transect
df_w_answer = df_w_answer %>%
  mutate(visit_order = 1:nrow(df_w_answer)) %>%
  mutate(next_StID = lead(StID, order_by = visit_order),
         next_StPt = lead(StPt, order_by = visit_order))
# add info about next transect to each transect (for final transect)
df_w_answer[df_w_answer$visit_order == nrow(df_w_answer), 'next_StID'] =
  df_w_answer[df_w_answer$visit_order == 1, 'StID']
df_w_answer[df_w_answer$visit_order == nrow(df_w_answer), 'next_StPt'] =
  df_w_answer[df_w_answer$visit_order == 1, 'StPt']
# compute distance between end of each transect and start of next
df_w_answer = df_w_answer %>% mutate(dist_between = transec_distance(EndPt, next_StPt))
At this point we have a cycle. You can pick any node as the starting point, follow the order given in the df: from EndID to next_StID, and you will cover every transect in (a good approximation to) the minimum distance.
However in your 'intended outcome' you have a path solution (e.g. start at transect 1 and finish at transect 10). We can turn the cycle into a path by excluding the single most expensive transition:
# as path (without returning to start)
min_distance = sum(df_w_answer$dist_between) - max(df_w_answer$dist_between)
path_start = df_w_answer[df_w_answer$dist_between == max(df_w_answer$dist_between), 'next_StID']
path_end = df_w_answer[df_w_answer$dist_between == max(df_w_answer$dist_between), 'EndID']
print(sprintf("minimum cost path = %.2f, starting at node %d, ending at node %d",
min_distance, path_start, path_end))
Running all the above gives me a different, but superior, answer to your intended outcome. I get the following order: 1 --> 5 --> 8 --> 4 --> 3 --> 10 --> 1.
Your path from transect 1 to transect 10 has a total distance of 428; if we also returned from transect 10 to transect 1, making this a cycle, the total distance would be 483.
Using the TSP package in R we get a path from 1 to 10 with total distance 377, and as a cycle 431.
If we instead start at node 4 and end at node 8, we get a total distance of 277.
Some additional notes:
Not all TSP solvers are deterministic, hence you may get some variation in your answer if you run again, or run with the input rows in a different order.
TSP is a much more general problem than the transect problem you described. It is possible that your problem has enough additional/special features that it can be solved exactly in a reasonable length of time, but that moves your problem into the realm of mathematics.
If you are running out of memory to create the distance matrix, take a look at the documentation for the TSP package. It contains several examples that use geo-coordinates as inputs rather than a distance matrix. This is a much smaller input size (presumably the package calculates the distances on the fly) so if you convert the start and end points to coordinates and specify euclidean (or some other common distance function) you could get around (some) computer memory limits.
Another version of using the TSP package...
Here is the setup.
library(TSP)
planeDim = 15
nTransects = 26
# generate some random transect beginning points in a plane, the size of which
# is defined by planeDim
b = cbind(runif(nTransects)*planeDim, runif(nTransects)*planeDim)
# generate some random transect ending points that are a distance of 1 from each
# beginning point
e = t(
  apply(
    b,
    1,
    function(x) {
      bearing = runif(1) * 2 * pi
      x + c(cos(bearing), sin(bearing))
    }
  )
)
For fun, we can visualize the transects:
# make an empty graph space
plot(1,1, xlim = c(-1, planeDim + 1), ylim = c(-1, planeDim + 1), ty = "n")
# plot the beginning of each transect as a green point, the end as a red point,
# with a thick grey line representing the transect
for (i in 1:nrow(e)) {
  xs = c(b[i,1], e[i,1])
  ys = c(b[i,2], e[i,2])
  lines(xs, ys, col = "light grey", lwd = 4)
  points(xs, ys, col = c("green", "red"), pch = 20, cex = 1.5)
  text(mean(xs), mean(ys), letters[i])
}
So given a matrix of x,y pairs ("b") for the beginning points and a matrix of x,y pairs ("e") for the end points of each transect, the solution is...
# a function that calculates the distance from all endpoints in the ePts matrix
# to the single beginning point in bPt
dist = function(ePts, bPt) {
  # apply the Pythagorean theorem to calculate the distance between every end
  # point in the matrix ePts and the point bPt
  apply(ePts, 1, function(p) sum((p - bPt)^2)^0.5)
}
# apply the "dist" function to all begining points to create the distance
# matrix. since the distance from the end of transect "foo" to the beginning of
# "bar" is not the same as from the end of "bar" to the beginning of "foo," we
# have an asymmetric travelling sales person problem. Therefore, distance
# matrix is directional. The distances at any position in the matrix must be
# the distance from the transect shown in the row label and to the transect
# shown in the column label.
distMatrix = apply(b, 1, FUN = dist, ePts = e)
# for convenience, we'll label the transects a to z
dimnames(distMatrix) = list(letters, letters)
# set the distance between the beginning and end of each transect to zero so
# that there is no "cost" to walking the transect
diag(distMatrix) = 0
Here is the upper left corner of the distance matrix:
> distMatrix[1:6, 1:6]
a b c d e f
a 0.00000 15.4287270 12.637979 12.269356 15.666710 12.3919715
b 13.58821 0.0000000 5.356411 13.840444 1.238677 12.6512352
c 12.48161 6.3086852 0.000000 8.427033 6.382304 7.1387840
d 10.69748 13.5936114 7.708183 0.000000 13.718517 0.9836146
e 14.00920 0.7736654 5.980220 14.470826 0.000000 13.2809601
f 12.24503 12.8987043 6.984763 2.182829 12.993283 0.0000000
Now three lines of code from the TSP package solve the problem.
atsp = as.ATSP(distMatrix)
tour = solve_TSP(atsp)
# assume we want to start our circuit at transect "a".
path = cut_tour(tour, "a", exclude_cut = F)
The variable path shows the order in which you should visit the transects:
> path
a w x q i o l d f s h y g v t k c m e b p u z j r n
1 23 24 17 9 15 12 4 6 19 8 25 7 22 20 11 3 13 5 2 16 21 26 10 18 14
We can add the path to the visualization:
for (i in 1:(length(path) - 1)) {
  lines(c(e[path[i],1], b[path[i+1],1]), c(e[path[i],2], b[path[i+1],2]), lty = "dotted")
}
Thanks everyone for the suggestions. @Simon's solution was most tailored to my OP, and @Geoffrey's approach of using x,y coordinates was great as it allows for plotting the transects and the travel order. Thus, I am posting a hybrid solution that was generated using code from both of them, as well as additional comments and code to detail the process and get to the actual end result I was aiming for. I am not sure if this will help anyone in the future but, since no single answer solved my problem 100% of the way, I thought I'd share what I came up with.
As others have noted, this is a type of traveling salesperson problem. It is asymmetric because the distance from the end of transect "t" to the beginning of transect "t+1" is not the same as the distance from the end of transect "t+1" to the start of transect "t". Also, it is a "path" solution rather than a "cycle" solution.
#=========================================
# Packages
#=========================================
library(TSP)
library(useful)
library(dplyr)
#=========================================
# Full dataset for testing
#=========================================
EndPt <- c(158.7,245.1,187.1,298.2,346.8,317.2,74.5,274.2,153.4,246.7,193.6,302.3,6.8,359.1,235.4,134.5,111.2,240.5,359.2,121.3,224.5,212.6,155.1,353.1,181.7,334,249.3,43.9,38.5,75.7,344.3,45.1,285.7,155.5,183.8,60.6,301,132.1,75.9,112,342.1,302.1,288.1,47.4,331.3,3.4,185.3,62,323.7,188,313.1,171.6,187.6,291.4,19.2,210.3,93.3,24.8,83.1,193.8,112.7,204.3,223.3,210.7,201.2,41.3,79.7,175.4,260.7,279.5,82.4,200.2,254.2,228.9,1.4,299.9,102.7,123.7,172.9,23.2,207.3,320.1,344.6,39.9,223.8,106.6,156.6,45.7,236.3,98.1,337.2,296.1,194,307.1,86.6,65.5,86.6,296.4,94.7,279.9)
StPt <- c(56.3,158.1,82.4,185.5,243.9,195.6,335,167,39.4,151.7,99.8,177.2,246.8,266.1,118.2,358.6,357.9,99.6,209.9,342.8,106.5,86.4,35.7,200.6,65.6,212.5,159.1,297,285.9,300.9,177,245.2,153.1,8.1,76.5,322.4,190.8,35.2,342.6,8.8,244.6,202,176.2,308.3,184.2,267.2,26.6,293.8,167.3,30.5,176,74.3,96.9,186.7,288.2,62.6,331.4,254.7,324.1,73.4,16.4,64,110.9,74.4,69.8,298.8,336.6,58.8,170.1,173.2,330.8,92.6,129.2,124.7,262.3,140.4,321.2,34,79.5,263,66.4,172.8,205.5,288,98.5,335.2,38.7,289.7,112.7,350.7,243.2,185.4,63.9,170.3,326.3,322.9,320.6,199.2,287.1,158.1)
EndID <- c(seq(1, 100, 1))
StID <- c(seq(1, 100, 1))
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
#=========================================
# Convert polar coordinates to cartesian x,y data
#=========================================
# Area that the transects occupy in space; only used for graphing
planeDim <- 1
# Number of transects
nTransects <- 100
# Convert 360-degree polar coordinates to x,y cartesian coordinates to facilitate calculating a distance matrix based on the Pythagorean theorem
EndX <- as.matrix(pol2cart(planeDim, EndPt, degrees = TRUE)["x"])
EndY <- as.matrix(pol2cart(planeDim, EndPt, degrees = TRUE)["y"])
StX <- as.matrix(pol2cart(planeDim, StPt, degrees = TRUE)["x"])
StY <- as.matrix(pol2cart(planeDim, StPt, degrees = TRUE)["y"])
# Matrix of x,y pairs for the beginning ("b") and end ("e") points of each transect
b <- cbind(c(StX), c(StY))
e <- cbind(c(EndX), c(EndY))
#=========================================
# Function to calculate the distance from all endpoints in the ePts matrix to a single beginning point in bPt
#=========================================
dist <- function(ePts, bPt) {
  # Use the Pythagorean theorem to calculate the hypotenuse (i.e., distance)
  # between every end point in the matrix ePts and the point bPt
  apply(ePts, 1, function(p) sum((p - bPt)^2)^0.5)
}
#=========================================
# Distance matrix
#=========================================
# Apply the "dist" function to all beginning points to create a matrix that has the distance between every start and endpoint
## Note: because this is an asymmetric traveling salesperson problem, the distance matrix is directional, thus, the distances at any position in the matrix must be the distance from the transect shown in the row label and to the transect shown in the column label
distMatrix <- apply(b, 1, FUN = dist, ePts = e)
## Set the distance between the beginning and end of each transect to zero so that there is no "cost" to walking the transect
diag(distMatrix) <- 0
#=========================================
# Solve asymmetric TSP
#=========================================
# This creates an instance of the asymmetric traveling salesperson problem (ATSP)
atsp <- as.ATSP(distMatrix)
# This creates an object of Class Tour that travels to all of the points
## In this case, the repetitive_nn method produces the smallest overall and transect-to-transect distances
tour <- solve_TSP(atsp, method = "repetitive_nn")
#=========================================
# Create a path by cutting the tour at the most "expensive" transition
#=========================================
# Sort the original data frame to match the order of the solution
dfTour = df[as.numeric(tour),]
# Add the following columns to the original dataframe:
dfTour = dfTour %>%
  # Assign visit order (1 to 100, ascending)
  mutate(visit_order = 1:nrow(dfTour)) %>%
  # The ID of the next transect to move to
  mutate(next_StID = lead(StID, order_by = visit_order),
         # The angle of the start point for the next transect
         next_StPt = lead(StPt, order_by = visit_order))
# lead() generates the NA's in the last record for next_StID, next_StPt, replace these by adding that information
dfTour[dfTour$visit_order == nrow(dfTour), 'next_StID'] <-
  dfTour[dfTour$visit_order == 1, 'StID']
dfTour[dfTour$visit_order == nrow(dfTour), 'next_StPt'] <-
  dfTour[dfTour$visit_order == 1, 'StPt']
# Function to calculate distance for 360 degrees rather than x,y coordinates
transect_distance <- function(end, start) {
  abs_dist = abs(start - end)
  ifelse(360 - abs_dist > 180, abs_dist, 360 - abs_dist)
}
# Compute distance between end of each transect and start of next using polar coordinates
dfTour = dfTour %>% mutate(dist_between = transect_distance(EndPt, next_StPt))
# Identify the longest transition point for breaking the cycle
min_distance <- sum(dfTour$dist_between) - max(dfTour$dist_between)
path_start <- dfTour[dfTour$dist_between == max(dfTour$dist_between), 'next_StID']
path_end <- dfTour[dfTour$dist_between == max(dfTour$dist_between), 'EndID']
# Make a statement about the least cost path
print(sprintf("minimum cost path = %.2f, starting at node %d, ending at node %d",
              min_distance, path_start, path_end))
# The variable path shows the order in which you should visit the transects
path <- cut_tour(tour, path_start, exclude_cut = F)
# Arrange df from smallest to largest travel distance
tmp1 <- dfTour %>%
  arrange(dist_between)
# Change dist_between (column 8) and visit_order (column 5) to NA for the
# transect with the largest distance, to break the cycle (we will not travel
# this distance; it represents the path endpoint)
tmp1[nrow(tmp1), 8] <- NA
tmp1[nrow(tmp1), 5] <- NA
# Set df order back to ascending by visit order
tmp2 <- tmp1 %>%
arrange(visit_order)
# Detect the break in a sequence of visit_order introduced by the NA (e.g., 1,2,3....5,6) and mark groups before the break with 0 and after the break with 1 in the "cont_per" column
tmp2$cont_per <- cumsum(!c(TRUE, diff(tmp2$visit_order)==1))
# Arrange by "cont_per" (descending) so that the records after the break become the beginning of the path and the records before the break follow them, with the NA row receiving the last visit order; then assign a new visit order
tmp3 <- tmp2%>%
arrange(desc(cont_per))%>%
mutate(visit_order_FINAL=seq(1, length(tmp2$visit_order), 1))
# Dataframe ordered by progression of transects
trans_order <- cbind.data.frame(tmp3[2], tmp3[1], tmp3[4], tmp3[3], tmp3[6], tmp3[7], tmp3[8], tmp3[10])
# Insert NAs for "next" info for final transect
trans_order[nrow(trans_order),'next_StPt'] <- NA
trans_order[nrow(trans_order), 'next_StID'] <- NA
#=========================================
# View data
#=========================================
head(trans_order)
#=========================================
# Plot
#=========================================
#For fun, we can visualize the transects:
# make an empty graph space
plot(1,1, xlim = c(-planeDim-0.1, planeDim+0.1), ylim = c(-planeDim-0.1, planeDim+0.1), ty = "n")
# plot the beginning of each transect as a green point, the end as a red point,
# and a grey line representing the transect
for(i in 1:nrow(e)) {
xs = c(b[i,1], e[i,1])
ys = c(b[i,2], e[i,2])
lines(xs, ys, col = "light grey", lwd = 1, lty = 1)
points(xs, ys, col = c("green", "red"), pch = 1, cex = 1)
#text((xs), (ys), i)
}
# Add the path to the visualization
for(i in 1:(length(path)-1)) {
# This makes a line between the x coordinates for the end point of path i and beginning point of path i+1
lines(c(e[path[i],1], b[path[i+1],1]), c(e[path[i],2], b[path[i+1], 2]), lty = 1, lwd=1)
}
This is what the end result looks like:

R: Sample a matrix for cells close to a specified position

I'm trying to find sites to collect snails by using a semi-random selection method. I have set up a 10km2 grid around the region I want to collect snails from, which is broken into 10,000 10m2 cells. I want to randomly sample this grid in R to select 200 field sites.
Randomly sampling a matrix in R is easy enough:
dat <- matrix(1:10000, nrow = 100)
sample(dat, size = 200)
However, I want to bias the sampling to pick cells closer to a single position (representing sites closer to the research station). It's easier to explain this with an image;
The yellow cell with a cross represents the position I want to sample around. The grey shading is the probability of picking a cell in the sample function, with darker cells being more likely to be sampled.
I know I can specify sampling probabilities using the prob argument in sample, but I don't know how to create a 2D probability matrix. Any help would be appreciated, I don't want to do this by hand.
I'm going to do this for a 9 x 6 grid (54 cells), just so it's easier to see what's going on, and sample only 5 of these 54 cells. You can modify this to a 100 x 100 grid where you sample 200 from 10,000 cells.
# Number of rows and columns of the grid (modify these as required)
nx <- 9 # rows
ny <- 6 # columns
# Create coordinate matrix
x <- rep(1:nx, each=ny);x
y <- rep(1:ny, nx);y
xy <- cbind(x, y); xy
# Where is the station? (edit: not snails nest)
Station <- rbind(c(x=3, y=2)) # Change as required
# Determine distance from each grid location to the station
library(SpatialTools)
D <- dist2(xy, Station)
From the help page of dist2
dist2 takes the matrices of coordinates coords1 and coords2 and
returns the inter-Euclidean distances between coordinates.
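If SpatialTools is unavailable, the same inter-point Euclidean distances can be computed in base R. A minimal sketch, using the same 9 x 6 grid and station position as above:

```r
# Build the coordinate grid and compute distance to a single reference point, base R only
nx <- 9; ny <- 6
xy <- cbind(x = rep(1:nx, each = ny), y = rep(1:ny, nx))
Station <- c(3, 2)
D <- sqrt((xy[, 1] - Station[1])^2 + (xy[, 2] - Station[2])^2)
```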
We can visualize this using the image function.
XY <- matrix(D, nrow=nx, byrow=TRUE)
image(XY) # axes are scaled to 0-1
# Create a scaling function - scales x to lie in [0, 1] (or strictly below 1 when m > 0)
scale_prop <- function(x, m=0)
(x - min(x)) / (m + max(x) - min(x))
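A quick numeric check of `scale_prop` on toy values:

```r
scale_prop <- function(x, m=0)
  (x - min(x)) / (m + max(x) - min(x))
# m = 0: range maps exactly onto [0, 1]
scale_prop(c(0, 5, 10))         # 0.0 0.5 1.0
# m = 1: denominator is stretched, so the maximum maps below 1
scale_prop(c(0, 5, 10), m = 1)  # 0 0.4545... 0.9090...
```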
# Add the coordinates to the grid
text(x=scale_prop(xy[,1]), y=scale_prop(xy[,2]), labels=paste(xy[,1],xy[,2],sep=","))
Lighter tones indicate grids closer to the station at (3,2).
# Sampling probabilities decrease with distance from the station. Distances are
# scaled to lie in [0, 1); m=1 keeps the farthest cell's probability above zero.
prob <- 1 - scale_prop(D, m=1); range (prob)
# Sample from the grid using given probabilities
sam <- sample(1:nrow(xy), size = 5, prob=prob) # Change size as required.
xy[sam,] # These are your 5 samples
x y
[1,] 4 4
[2,] 7 1
[3,] 3 2
[4,] 5 1
[5,] 5 3
To confirm the sample probabilities are correct, you can simulate many samples and see which coordinates were sampled the most.
snail.sam <- function(nsamples) {
sam <- sample(1:nrow(xy), size = nsamples, prob=prob)
apply(xy[sam,], 1, function(x) paste(x[1], x[2], sep=","))
}
SAMPLES <- replicate(10000, snail.sam(5))
tab <- table(SAMPLES)
cols <- colorRampPalette(c("lightblue", "darkblue"))(max(tab))
barplot(table(SAMPLES), horiz=TRUE, las=1, cex.names=0.5,
col=cols[tab])
If using a 100 x 100 grid and the station is located at coordinates (60,70), then the image would look like this, with the sampled grids shown as black dots:
There is a tendency for the points to be located close to the station, although sampling variability can make this difficult to see. If you want to give even more weight to grids near the station, you can rescale the probabilities. I think this is fine to do to save travel costs, but these weights need to be incorporated into the analysis when estimating the number of snails in the whole region. Here I've cubed the probabilities just so you can see what happens.
sam <- sample(1:nrow(xy), size = 200, prob=prob^3)
The tendency for the points to be located near the station is now more obvious.
There may be a better way than this, but a quick approach is to randomly sample the x and y coordinates from a distribution (I used the normal, bell-shaped distribution, but you can use almost any). The trick is to centre the distribution's mean on the position of the research station. You can change the bias towards the station by changing the standard deviation of the distribution.
Then use the randomly selected positions as your x and y coordinates to select the positions.
dat <- matrix(1:10000, nrow = 100)
#randomly selected a position for the research station
rs <- c(80,30)
# you can change the sd to change the bias
x <- round(rnorm(400,mean = rs[1], sd = 10))
y <- round(rnorm(400, mean = rs[2], sd = 10))
position <- rep(NA, 200)
j = 1
i = 1
# Some of the sampled numbers can fall outside the area of interest, so I oversampled
# and then kept only the first 200 that were inside it.
while (j <= 200) {
if(x[i] >= 1 & x[i] <= 100 & y[i] >= 1 & y[i] <= 100){
position[j] <- dat[x[i],y[i]]
j = j +1
}
i = i +1
}
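The oversample-then-filter loop above can also be written without an explicit while loop. A vectorized sketch under the same assumptions (station at (80, 30), sd = 10, and at least 200 of the 400 draws landing inside the grid):

```r
set.seed(1)
dat <- matrix(1:10000, nrow = 100)
rs <- c(80, 30)  # research station position
x <- round(rnorm(400, mean = rs[1], sd = 10))
y <- round(rnorm(400, mean = rs[2], sd = 10))
# Keep only draws that index valid rows/columns of dat, then take the first 200
ok <- x >= 1 & x <= 100 & y >= 1 & y <= 100
position <- dat[cbind(x[ok], y[ok])][1:200]
```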
Plot the results:
plot(x,y, pch = 19)
points(x =80,y = 30, col = "red", pch = 19) # position of the station

Drawing a sample that changes the shape of the mother sample

Background:
I'm trying to modify the shape of a histogram resulting from an "Initial" large sample obtained using Initial = rbeta(1e5, 2, 3). Specifically, I want the modified version of the Initial large sample to have 2 additional smaller (in height) "humps" (i.e., 2 more smaller-height peaks in addition to the one that existed in the Initial large sample).
Coding Question:
I'm wondering how to manipulate sample() (maybe using its prob argument) in R base so that this command samples in a manner that the two additional humps be around ".5" and ".6" on the X-Axis?
Here is my current R code:
Initial = rbeta(1e5, 2, 3) ## My initial Large Sample
hist(Initial) ## As seen, here there is only one "hump" say near
# less than ".4" on the X-Axis
Modified.Initial = sample(Initial, 1e4) ## This is meant to be the modified version
# of the Initial with two additional "humps"
hist(Modified.Initial) ## Here, I need to see two additional "humps" near
# ".5" and ".6" on the X-Axis
You can adjust the density distribution by combining it with beta distributions with the desired modes for a smoothed adjustment.
set.seed(47)
Initial = rbeta(1e5, 2, 3)
d <- density(Initial)
# Generate densities of beta distribution. Parameters determine center (0.5) and spread.
b.5 <- dbeta(seq(0, 1, length.out = length(d$y)), 50, 50)
b.5 <- b.5 / (max(b.5) / max(d$y)) # Scale down to max of original density
# Repeat centered at 0.6
b.6 <- dbeta(seq(0, 1, length.out = length(d$y)), 60, 40)
b.6 <- b.6 / (max(b.6) / max(d$y))
# Collect maximum densities at each x to use as sample probability weights
p <- pmax(d$y, b.5, b.6)
plot(p, type = 'l')
# Sample from density breakpoints with new probability weights
Final <- sample(d$x, 1e4, replace = TRUE, prob = p)
Effects on histogram are subtle...
hist(Final)
...but are more obvious in the density plot.
plot(density(Final))
Obviously all adjustments are arbitrary. Please don't do terrible things with your power.

Hexbin: apply function for every bin

I would like to build a hexbin plot where, for every bin, the ratio between class 1 and class 2 points falling into that bin is plotted (either on a log scale or not).
x <- rnorm(10000)
y <- rnorm(10000)
h <- hexbin(x,y)
plot(h)
l <- as.factor(c( rep(1,2000), rep(2,8000) ))
Any suggestions on how to implement this? Is there a way to introduce function to every bin based on bin statistics?
@cryo111's answer has the most important ingredient - IDs = TRUE. After that it's just a matter of figuring out what you want to do with Inf's and how much you need to scale the ratios by to get integers that will produce a pretty plot.
library(hexbin)
library(data.table)
set.seed(1)
x = rnorm(10000)
y = rnorm(10000)
h = hexbin(x, y, IDs = TRUE)
# put all the relevant data in a data.table
dt = data.table(x, y, l = c(1,1,1,2), cID = h@cID)
# group by cID and calculate whatever statistic you like
# in this case, ratio of 1's to 2's,
# and then Inf's are set to be equal to the largest ratio
dt[, list(ratio = sum(l == 1)/sum(l == 2)), keyby = cID][,
ratio := ifelse(ratio == Inf, max(ratio[is.finite(ratio)]), ratio)][,
# scale up (I chose a scaling manually to get a prettier graph)
# and convert to integer and change h
as.integer(ratio*10)] -> h@count
plot(h)
You can determine the number of class 1 and class 2 points in each bin by
library(hexbin)
library(plyr)
x=rnorm(10000)
y=rnorm(10000)
#generate hexbin object with IDs=TRUE
#the object then includes a slot with the vector cID
#cID maps point (x[i],y[i]) to cell number cID[i]
HexObj=hexbin(x,y,IDs = TRUE)
#find count statistics for first 2000 points (class 1) and the rest (class 2)
CountDF=merge(count(HexObj@cID[1:2000]),
count(HexObj@cID[2001:length(x)]),
by="x",
all=TRUE
)
#replace NAs by 0
CountDF[is.na(CountDF)]=0
#check if all points are included
sum(CountDF$freq.x)+sum(CountDF$freq.y)
But printing them is another story. For instance, what if there are no class 2 points in one bin? The fraction is not defined then.
In addition, as far as I understand, hexbin is just a two-dimensional histogram. As such, it counts the number of points that fall into a given bin. I do not think it can handle non-integer data as in your case.
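As a lighter-weight alternative to data.table or plyr, the per-bin statistic can also be computed with base R's tapply over the cID slot. A sketch, assuming the question's 2000/8000 class split; bins with no class 2 points yield Inf and still need handling before plotting:

```r
library(hexbin)
set.seed(1)
x <- rnorm(10000)
y <- rnorm(10000)
l <- c(rep(1, 2000), rep(2, 8000))
# IDs = TRUE stores, for each point, the bin it falls into (the cID slot)
h <- hexbin(x, y, IDs = TRUE)
# Group the class labels by bin and take the class-1 : class-2 count ratio
ratio <- tapply(l, h@cID, function(v) sum(v == 1) / sum(v == 2))
head(ratio)
```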
