R, creating a knight's tour plot with a matrix indicating the path

I need to create a knight's tour plot from an example matrix like this one:
Mat = matrix(c( 1, 38, 55, 34,  3, 36, 19, 22,
               54, 47,  2, 37, 20, 23,  4, 17,
               39, 56, 33, 46, 35, 18, 21, 10,
               48, 53, 40, 57, 24, 11, 16,  5,
               59, 32, 45, 52, 41, 26,  9, 12,
               44, 49, 58, 25, 62, 15,  6, 27,
               31, 60, 51, 42, 29,  8, 13, 64,
               50, 43, 30, 61, 14, 63, 28,  7), nrow=8, ncol=8, byrow=TRUE)
The numbers indicate the order in which the knight moves, tracing out the path.
I have a lot of results like this, for chessboards up to 75x75 in size, but I have no readable way of presenting them. I found out that R, given the matrix, is capable of creating a plot like this:
link (this one is 50x50 in size)
So for the matrix above the lines connect consecutive numbers, 1 - 2 - 3 - 4 - 5 - ... - 64, in the end creating a path like the one in the link, but for an 8x8 chessboard instead of 50x50.
However, I have very limited time to learn R well enough to accomplish this, so I am desperate for any kind of direction. How hard would it be to write R code that transforms any such matrix into a plot like that, or is it something trivial? Any code samples would be a blessing.

You can use geom_path as described here: ggplot2 line plot order
In order to do so you need to convert the matrix into a tibble.
library(tidyverse)  # for tibble, dplyr and ggplot2

coords <- tibble(col = rep(1:8, 8),
                 row = rep(1:8, each = 8))

coords %>%
  mutate(order = Mat[8 * (col - 1) + row]) %>%
  arrange(order) %>%
  ggplot(aes(x = col, y = row)) +
  geom_path() +
  geom_text(aes(y = row + 0.25, label = order)) +
  coord_equal() # Ensures a square board.
You can subtract .5 from the col and row positions to give a more natural chess board feel.
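Since you mention boards up to 75x75, here is a minimal sketch wrapping the same idea in a function for any n x n tour matrix (plot_knights_tour is just an illustrative name, not an existing function; it assumes the matrix contains the move order 1..n^2, as above):
library(tidyverse)

plot_knights_tour <- function(Mat) {
  n <- nrow(Mat)
  tibble(col = rep(1:n, n),
         row = rep(1:n, each = n),
         order = Mat[n * (col - 1) + row]) %>%  # column-major linear indexing
    arrange(order) %>%
    ggplot(aes(x = col, y = row)) +
    geom_path() +
    geom_text(aes(y = row + 0.25, label = order), size = 2) +
    coord_equal()
}

plot_knights_tour(Mat)  # the 8x8 example above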

Related

Choose a subsample of random numbers

I will play in the Brazilian Lottery with my friends. I asked each of them to choose seven numbers and created a variable for each of them.
pestana = c(04, 15, 29, 36, 54, 25, 07)
carol = c(7, 22, 30, 35, 44, 51, 57)
davi = c(8, 13, 21, 29, 37, 42, 55)
valerio = c(30, 20, 33, 14, 7, 41, 54)
victor = c(09, 11, 26, 33, 38, 52, 57)
Then, I created a list with all of the numbers, and a list of unique numbers (to avoid repetitions):
list = c(carol, davi, pestana, valerio, victor, diuli, cynara)
list2 = unique(list)
Finally, I called sample() on list2:
sample(list2, 7)
After that, I started wondering: is it possible to skip unique() and still not get repeated numbers in the draw? Because with unique(), every number has the same probability of appearing, when in fact some should be more likely (for instance, 7 was chosen three times).
How about this:
pestana = c(04, 15, 29, 36, 54, 25, 07)
carol = c(7, 22, 30, 35, 44, 51, 57)
davi = c(8, 13, 21, 29, 37, 42, 55)
valerio = c(30, 20, 33, 14, 7, 41, 54)
victor = c(09, 11, 26, 33, 38, 52, 57)
list = c(carol, davi, pestana, valerio, victor)
l <- c(unlist(list))
nums <- table(l)
probs <- nums/sum(nums)
sample(names(probs), 7, prob = probs, replace=FALSE)
#> [1] "4" "33" "44" "11" "29" "52" "8"
Created on 2022-12-14 by the reprex package (v2.0.1)
Using the prob argument, you can make some values more likely to show up than others.
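As a quick sanity check (a sketch reusing the probs vector from above, not part of the original answer), you can simulate many draws and confirm that numbers chosen by several friends, such as 7 (picked three times), come out on top:
draws <- replicate(10000, sample(names(probs), 7, prob = probs, replace = FALSE))
head(sort(table(draws), decreasing = TRUE))  # the repeated numbers should lead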

Adding colors to network according to the status of respondent

I am a complete beginner when it comes to R, but I am trying to analyze a friendship network and I want to color the nodes according to the person's status, so either 0 = high status (maybe in yellow) or 1 = low status (maybe in red).
I created the status variable with the data from wave 1, removed the NAs and those who did not participate in both waves, and created the network with the data from the second wave:
library(dplyr)
library(igraph)

w1$Status <- NA
w1$Status[!is.na(w1$diplomamother) | !is.na(w1$diplomafather)] <- 1
w1$Status[w1$diplomamother != 2 & w1$diplomafather != 2] <- 0

# Is this necessary?
statusw1 <- w1 %>%
  select(Status, NW_ID)
statusw1[-c(1, 10, 12, 13, 17, 44, 50, 51, 52, 53, 54, 15, 16, 20, 22, 25, 33, 38, 57, 58, 59, 60, 62), ]
statusw1 <- statusw1[-c(1, 10, 12, 13, 17, 44, 50, 51, 52, 53, 54, 15, 16, 20, 22, 25, 33, 38, 57, 58, 59, 60, 62), ]

friendsw2 <- friendsw2 %>%
  select(!grep(c("_15_|_16_|_20_|_22_|_25_|_33_|_38_|_57_|_58_|_59_|_60_|_62_"), colnames(friendsw2)))

friends_ma2 <- as.matrix(data.frame(friendsw2, row.names = "NW_ID"))
colnames(friends_ma2) <- rownames(friends_ma2)
friends_ma2[is.na(friends_ma2)] <- 0
friends_ma2[friends_ma2 == ""] <- 0
storage.mode(friends_ma2) <- "numeric"

friends_gr2 <- graph.adjacency(friends_ma2, mode = "directed", diag = FALSE)
plot(friends_gr2)
What do I have to do next to visualize the status?
I hope what I explained is understandable! Thank you for your help!
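A minimal sketch of the usual igraph approach for this (it assumes the rows of statusw1 are in the same order as the vertices of friends_gr2, i.e. matched by NW_ID, which you would need to check):
library(igraph)

# Attach status as a vertex attribute, then map it to a colour.
V(friends_gr2)$Status <- statusw1$Status
V(friends_gr2)$color  <- ifelse(V(friends_gr2)$Status == 0, "yellow", "red")
plot(friends_gr2)  # plot.igraph uses the 'color' vertex attribute automatically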

Multidimensional random draw without replacement with 'predrawn' samples in pytorch

I have an (N, I) tensor of N rows with I indices between 0 and Z, e.g.,
N=5, I=3, Z=100:
foo = tensor([[83,  5, 85],
              [ 7, 60, 66],
              [89, 25, 63],
              [58, 67, 47],
              [12, 46, 40]], device='cuda:0')
Now I want to efficiently add X random additional new indices (i.e., not yet included in the tensor!) between 0 and Z to the tensor, e.g.:
foo_new = tensor([[83,  5, 85,  9, 43, 53, 42],
                  [ 7, 60, 66, 85, 64, 22,  1],
                  [89, 25, 63, 38, 24,  4, 75],
                  [58, 67, 47, 83, 43, 29, 55],
                  [12, 46, 40, 74, 21, 11, 52]], device='cuda:0')
The tensor would in the end have I+X unique indices between 0 and Z in each row, where the I indices are the ones from the initial tensor and the X indices are drawn uniformly at random without replacement from the remaining indices {0...Z}\{I(n)}, where {I(n)} are the indices of the nth row.
So it's like a multidimensional random draw without replacement from indices 0 to Z, where the first I draws (in each row) are enforced to result in the indices given by the initial tensor.
How would I do this efficiently, especially with potentially large Z?
What I tried so far (which was quite slow):
import torch

device = torch.cuda.current_device()
notinfoo = torch.ones((N, I), device=device).byte()
N_row = torch.arange(N, device=device).unsqueeze(dim=-1)
notinfoo[N_row, foo] = 0
foo_new = torch.stack([torch.cat((f, torch.arange(Z, device=device)[nf][torch.randperm(Z - I, device=device)][:X]))
                       for f, nf in zip(foo, notinfoo)])
First use numpy.random.choice with replace=False to draw samples without replacement, and then concatenate both tensors using torch.cat:
import numpy as np
import torch

foo_new = torch.tensor(np.random.choice(100, (5, 4), replace=False)) # Z = 100
foo_new = torch.cat((foo, foo_new), 1)
foo_new
tensor([[83,  5, 85, 56, 83, 16, 20],
        [ 7, 60, 66, 43, 31, 75, 67],
        [89, 25, 63, 96,  3, 13, 11],
        [58, 67, 47, 55, 92, 70, 35],
        [12, 46, 40, 79, 61, 58, 76]])

R rewriting a for loop

I've got a loop in my code that I would like to rewrite so the code takes a little less time to complete. I know you should always avoid loops, but I can't think of another way to accomplish my goal.
So I've got a dataset "df_1531" containing a lot of data that I need to cut into pieces using subset() (if anyone knows a better way, let me know ;) ). I've got a vector with 21 variable names to which I'd like to assign subsets of df_1531. Furthermore, the script contains 22 variables with constraints (shift_XY_time).
So, this is my code now...
# list containing different slots
shift_time_list <- c(startdate, shift_1m_time, shift_1a_time, shift_1n_time,
                     shift_2m_time, shift_2a_time, shift_2n_time,
                     shift_3m_time, shift_3a_time, shift_3n_time,
                     shift_4m_time, shift_4a_time, shift_4n_time,
                     shift_5m_time, shift_5a_time, shift_5n_time,
                     shift_6m_time, shift_6a_time, shift_6n_time,
                     shift_7m_time, shift_7a_time, shift_7n_time)

# List with subset names
shift_sub_list <- c("shift_1m_sub", "shift_1a_sub", "shift_1n_sub",
                    "shift_2m_sub", "shift_2a_sub", "shift_2n_sub",
                    "shift_3m_sub", "shift_3a_sub", "shift_3n_sub",
                    "shift_4m_sub", "shift_4a_sub", "shift_4n_sub",
                    "shift_5m_sub", "shift_5a_sub", "shift_5n_sub",
                    "shift_6m_sub", "shift_6a_sub", "shift_6n_sub",
                    "shift_7m_sub", "shift_7a_sub", "shift_7n_sub")

# The actual loop that I'd like to rewrite
for (i in 1:21) {
  assign(shift_sub_list[i],
         subset(df_1531, df_1531$`PLS FFM` >= shift_time_list[i] &
                         df_1531$`PLS FFM` < shift_time_list[i + 1]))
}
Running the loop takes approximately 6 or 7 seconds, so if anyone knows a better, cleaner or quicker way to write my code, I'd love to hear your suggestions.
**Reproducible example**
mydata <- cars
dput(cars)
structure(list(speed = c(4, 4, 7, 7, 8, 9, 10, 10, 10, 11, 11,
12, 12, 12, 12, 13, 13, 13, 13, 14, 14, 14, 14, 15, 15, 15, 16,
16, 17, 17, 17, 18, 18, 18, 18, 19, 19, 19, 20, 20, 20, 20, 20,
22, 23, 24, 24, 24, 24, 25), dist = c(2, 10, 4, 22, 16, 10, 18,
26, 34, 17, 28, 14, 20, 24, 28, 26, 34, 34, 46, 26, 36, 60, 80,
20, 26, 54, 32, 40, 32, 40, 50, 42, 56, 76, 84, 36, 46, 68, 32,
48, 52, 56, 64, 66, 54, 70, 92, 93, 120, 85)), class = "data.frame", row.names = c(NA,
-50L))
dist_interval_list <- c( 0, 5, 10, 15,
20, 25, 30, 35,
40, 45, 50, 55,
60, 65, 70, 75,
80, 85, 90, 95,
100, 105, 110, 115, 120)
var_name_list <- c("var_name_1a", "var_name_1b", "var_name_1c", "var_name_1d",
"var_name_2a", "var_name_2b", "var_name_2c", "var_name_2d",
"var_name_3a", "var_name_3b", "var_name_3c", "var_name_3d",
"var_name_4a", "var_name_4b", "var_name_4c", "var_name_4d",
"var_name_5a", "var_name_5b", "var_name_5c", "var_name_5d",
"var_name_6a", "var_name_6b", "var_name_6c", "var_name_6d")
for (i in 1:24) {
  assign(var_name_list[i],
         subset(mydata,
                mydata$dist >= dist_interval_list[i] &
                mydata$dist < dist_interval_list[i + 1]))
}
Starting with the 'reproducible' part and the information that the final aim is to summarize another column, it is possible to exploit the fact that the intervals are non-overlapping and simply use the cut function.
library(tidyverse)

mydata %>%
  mutate(interval = cut(dist, breaks = dist_interval_list)) %>%
  group_by(interval) %>%
  summarise(sum = sum(speed))
This should be much faster and will also help you not to get lost in a messy environment full of variables (which are actually part of your data). You want to keep all your data in a single data frame as long as possible ;) At the final modeling step you probably want to follow up with something like purrrlyr::invoke_rows, if your function does not work with data frames.
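If you do need the individual subsets themselves (rather than a summary), a named list built with split() gives the same pieces as the loop without assign() cluttering the environment. A small sketch using the reproducible example above, not from the answer itself (note that cut() defaults to right-closed intervals, so right = FALSE is used to match the loop's >= / < logic):
sub_list <- split(mydata,
                  cut(mydata$dist, breaks = dist_interval_list, right = FALSE))
names(sub_list) <- var_name_list  # one element per interval, 24 in total
sub_list[["var_name_1a"]]         # the rows with 0 <= dist < 5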

predict with glmer where new data is a RasterStack of fixed effects

I have constructed models with glmer and would like to predict them on a RasterStack representing the fixed effects in my model. My glmer model has the form:
m1 <- glmer(Severity ~ x1 + x2 + x3 + (1 | Year) + (1 | Ecoregion), family = binomial(logit))
As you can see, I have random effects that I don't have as spatial layers, for example Year. So the problem is really predicting from glmer onto a RasterStack when you don't have the random effects as raster layers. If I use predict out of the box without adding my random effects, I get an error:
m1.predict=predict(object=all.var, model=m1, type='response', progress="text", format="GTiff")
Error in predict.averaging(model, blockvals, ...) :
Your question is very brief, and does not indicate what, if any, trouble you have encountered. This seems to work 'out of the box', but perhaps not in your case. See ?raster::predict for options.
library(raster)
# example data. See ?raster::predict
logo <- brick(system.file("external/rlogo.grd", package="raster"))
p <- matrix(c(48, 48, 48, 53, 50, 46, 54, 70, 84, 85, 74, 84, 95, 85,
66, 42, 26, 4, 19, 17, 7, 14, 26, 29, 39, 45, 51, 56, 46, 38, 31,
22, 34, 60, 70, 73, 63, 46, 43, 28), ncol=2)
a <- matrix(c(22, 33, 64, 85, 92, 94, 59, 27, 30, 64, 60, 33, 31, 9,
99, 67, 15, 5, 4, 30, 8, 37, 42, 27, 19, 69, 60, 73, 3, 5, 21,
37, 52, 70, 74, 9, 13, 4, 17, 47), ncol=2)
xy <- rbind(cbind(1, p), cbind(0, a))
v <- data.frame(cbind(pa=xy[,1], extract(logo, xy[,2:3])))
v$Year <- sample(2000:2001, nrow(v), replace=TRUE)
library(lme4)
m <- lmer(pa ~ red + blue + (1 | Year), data=v)
# here adding Year as a constant, as it is not a variable (RasterLayer) in the RasterStack object
x <- predict(logo, m, const=(data.frame(Year=2000)))
If you don't have the random effects, just use re.form=~0 in your predict call to predict at the population level:
x <- predict(logo, m, re.form=~0)
This works without complaint for me with @RobertH's example (although I don't know whether the results are correct).
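As a rough sanity check (a sketch built on the example objects above, not part of either answer), the population-level raster predictions should agree with predict() on the extracted data frame when re.form = ~0 is used in both:
# Compare raster-based predictions with data-frame predictions at the
# sample points; they should match up to floating-point error.
v_pred <- predict(m, newdata = v, re.form = ~0)
r_pred <- extract(x, xy[, 2:3])
all.equal(unname(r_pred), unname(v_pred))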
