Just for fun and to practice R, I tried to prove the Monty Hall rule (switching your choice after one gate is opened increases your probability of winning). I wrote this reproducible code (each step is explained in the comments):
## First I set the seed
set.seed(4)
## Then I model the presence of the prize as a random variable between gates 1, 2, 3
randomgates <- ceiling(runif(10000, min = 0, max = 3))
## I do the same with the random choice.
randomchoice <- ceiling(runif(10000, min = 0, max = 3))
## As the opening of a gate depends on the gate you chose (the gate you chose cannot be opened),
## I model the opening of the gate as a variable which cannot be equal to the choice.
options <- c(1:3)
randomopen <- rep(1, 10000)
for (i in 1:length(randomgates)) {
  realoptions <- options[options != randomchoice[i]]
  randomopen[i] <- realoptions[ceiling(runif(1, min = 0, max = 2))]
}
## Just to make the data easier to handle, I build a dataset
dataset <- cbind(randomgates, randomchoice, randomopen)
## Then I create a dataset which only keeps the realizations of the games in which we carry on
## (the opened gate wasn't the one with the prize inside)
steptwo <- dataset[randomopen != randomgates,]
## The next step is just to check that the probability of carrying on is 2/3, which it indeed is
carryon <- randomopen != randomgates
sum(carryon)/length(randomgates)
## I format the dataset as a data frame
steptwo <- as.data.frame(steptwo)
## Now we check what happens if we hold our initial choice when the game carries on
prizesholding <- steptwo$randomgates == steptwo$randomchoice
sum(prizesholding)
## Create a vector with the switched choice, dependent on the opened gate, in the dataset that
## keeps only the cases in which we carried on playing (the opened gate wasn't the one with the prize)
switchedchoice <- rep(1, length(steptwo$randomgates))
for (i in 1:length(steptwo$randomgates)) {
  choice <- options[options != steptwo$randomchoice[i]]
  switchedchoice[i] <- choice[ceiling(runif(1, min = 0, max = 2))]
}
## Now we check how many times you guess the prize gate when you switch your initial choice
prizesswitching <- steptwo$randomgates == switchedchoice
sum(prizesswitching)/length(steptwo$randomgates)
When I check the probability of winning without changing my initial choice, in the cases in which the game carried on (the opened gate didn't match the one with the prize), I obtain what I expected (a probability of winning close to 1/3), which refers to the following instruction:
carryon <- randomopen != randomgates
sum(carryon)/length(randomgates)
My problem arises when I check the probability of winning the prize after changing my choice (conditioned, obviously, on not having opened the door which holds the prize). Instead of getting 1/2 as Monty Hall states, I get 1/4. It refers to the following instruction:
prizesswitching <- steptwo$randomgates == switchedchoice
sum(prizesswitching)/length(steptwo$randomgates)
I know that I am doing something wrong, because it has already been proven many times over that the Monty Hall result holds, but I am not able to detect the flaw. Does anyone know what it is?
If you don't know what the Monty Hall problem is, you can find easy-to-read information on Wikipedia:
Monty Hall Game
Edit: As @Dason pointed out, one of the problems was that I was introducing some kind of randomness into the switch of the initial choice, which doesn't make sense as there is only one option left.
The other problem was that I was not approaching the problem under the assumption that Monty Hall knows where the prize is. I changed my code from the initial version to this, and the problem is solved:
# Prepare each variable for 10000 experiments
## First I set the seed
set.seed(4)
## Then I model the presence of the prize as a random variable between gates 1, 2, 3
randomgates <- ceiling(runif(10000, min = 0, max = 3))
## I do the same with the random choice.
randomchoice <- ceiling(runif(10000, min = 0, max = 3))
## As the opening of a gate depends on the gate you chose (neither the gate you chose
## nor the one with the prize can be opened), I model the opening of the gate as a
## variable which cannot be equal to the choice or to the prize gate.
options <- c(1:3)
randomopen <- rep(1, 10000)
for (i in 1:length(randomgates)) {
  randomopen[i] <- options[options != randomchoice[i] & options != randomgates[i]]
}
## Just to make the data easier to handle, I build a dataset
dataset <- cbind(randomgates, randomchoice, randomopen)
## I format the dataset as a data frame
steptwo <- as.data.frame(dataset)
## Now we check what happens if we hold our initial choice when the game carries on
steptwo$prizesholding <- steptwo$randomgates == steptwo$randomchoice
with(steptwo, sum(prizesholding))
## Create a vector with the switched choice, dependent on the opened gate, in the dataset that
## keeps only the cases in which we carried on playing (the opened gate wasn't the one with the prize)
steptwo$switchedchoice <- rep(1, length(steptwo$randomgates))
for (i in 1:length(steptwo$randomgates)) {
  steptwo$switchedchoice[i] <- options[options != steptwo$randomchoice[i] & options != steptwo$randomopen[i]]
}
## Now we check how many times you guess the prize gate when you switch your initial choice
steptwo$prizesswitching <- steptwo$randomgates == steptwo$switchedchoice
with(steptwo, sum(prizesswitching)/length(randomgates))
Each round, there is a prize_door and a chosen_door. Monty Hall will open a door that is neither the prize_door nor the chosen_door (the setdiff between 1:3 and the vector (prize_door, chosen_door), with a random choice between the two if the setdiff has two elements). Then the switch door is the door that was neither chosen nor opened.
n <- 1e4
set.seed(2020)
df <-
  data.frame(
    prize_door = sample(1:3, n, replace = TRUE),
    chosen_door = sample(1:3, n, replace = TRUE))
df$opened_door <-
  mapply(function(x, y){
    available <- setdiff(1:3, c(x, y))
    available[sample(length(available), 1)]
  }, df$prize_door, df$chosen_door)
df$switch_door <-
  mapply(function(x, y) setdiff(1:3, c(x, y)),
         df$chosen_door, df$opened_door)
with(df, mean(prize_door == chosen_door))
# [1] 0.3358
with(df, mean(prize_door == switch_door))
# [1] 0.6642
Plot of probabilities as n increases
probs <-
  data.frame(
    chosen_p = with(df, cumsum(prize_door == chosen_door))/(1:n),
    switch_p = with(df, cumsum(prize_door == switch_door))/(1:n))
plot(probs$switch_p, type = 'l', ylim = c(0, 1))
lines(probs$chosen_p, col = 'red')
abline(h = 1/3)
abline(h = 2/3)
This seems to do the trick:
n_iter <- 10000
set.seed(4)
doors <- 1:3
prizes <- sample.int(n = 3, size = n_iter, replace = TRUE)
your_pick <- sample.int(n = 3, size = n_iter, replace = TRUE)
open_door <- rep(0, n_iter)
switched_door <- rep(0, n_iter)
for (i in 1:n_iter) {
  remaining_choices <- setdiff(doors, c(your_pick[i], prizes[i]))
  if (length(remaining_choices) > 1) {
    open_door[i] <- sample(remaining_choices, size = 1)
  } else {
    open_door[i] <- remaining_choices
  }
  switched_door[i] <- setdiff(doors, c(your_pick[i], open_door[i]))
}
> mean(your_pick == prizes)
[1] 0.3305
> mean(switched_door == prizes)
[1] 0.6695
The sample.int and sample base functions help simplify things a bit. The remaining_choices vector contains the doors the game-show host is allowed to open; it has length 1 or 2 depending on your original choice. If the length is 2, we sample from that vector, and if it is 1, that door is opened automatically.
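As a side note, the length check matters because of a quirk in base R's sample(): given a single positive number n, it samples from 1:n rather than returning n. A minimal illustration (the resample wrapper below is the workaround suggested on the ?sample help page):
# sample() treats a single numeric as a range:
sample(5, 1)      # draws from 1:5, not necessarily 5
# Safe wrapper from ?sample:
resample <- function(x, ...) x[sample.int(length(x), ...)]
resample(5, 1)    # always 5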
I have a question from a book on Monte Carlo methods that I am working through, and I cannot figure it out. The question is as follows:
Obtain random shuffles of the cards: club
2, 3, 4, 5, 6; diamond 2, 3, 4, 5, 6; heart 2, 3, 4, 5, 6; and spade 2, 3, 4; in such
a way that no clubs or spades appear in positions 1, 4, 7, . . ., no hearts
appear in positions 2, 5, 8, . . ., and no diamonds or spades appear in positions
3, 6, 9, . . . .
My current best solution is to construct a matrix of possible cards to draw, where each row is a turn and each column a card, and to iterate down the rows. However, I am having problems with the dimensionality of the problem, whereby in some of the later draws I run out of cards meeting the constraints of the question.
# columns 1-5 club, 6-10 diamond, 11-15 heart, 16-18 spade
# no spade, no club
no_s_c <- matrix(1, nrow = 18, ncol = 18)
no_s_c[, 1:5] <- 0
no_s_c[, 16:18] <- 0
# no spade, no diamond
no_d_s <- matrix(1, nrow = 18, ncol = 18)
no_d_s[, 6:10] <- 0
no_d_s[, 16:18] <- 0
# no hearts
no_h <- matrix(1, nrow = 18, ncol = 18)
no_h[, 11:15] <- 0
turn_no_s_c <- c(1, 4, 7, 10, 13, 16)
turn_no_d_s <- c(3, 6, 9, 12, 15, 18)
turn_no_h <- c(2, 5, 8, 11, 14, 17)
# pseudo-transition matrix (base matrix() instead of pracma's zeros(), so no extra dependency)
M <- matrix(0, nrow = 18, ncol = 18)
for(i in turn_no_s_c){M[i,] <- no_s_c[i,]}
for(i in turn_no_d_s){M[i,] <- no_d_s[i,]}
for(i in turn_no_h){M[i,] <- no_h[i,]}
random_w_contraint <- function(){
  # there are problems with the dimension of this problem
  card_order <- rep(0, dim(M)[1])
  for(i in 1:dim(M)[1]){
    x <- sample(which(M[i,] != 0), 1)
    card_order[i] <- x
    M[, x] <- 0
  }
  card_order
}
Thanks for your help!
I'd recommend a two-step approach: writing helper functions for drawing cards from a deck, and then calling these functions in an order that meets your constraints.
Heads-up as you read: I'm naming the cards differently than you do (I call the two-of-clubs "C2" instead of 1), but the general advice still stands.
Helper Functions for Card Decks
You can deal with card-based problems by creating a list or data.frame to represent the deck of cards you're working with.
make_deck <- function(){
  list(club = paste0('C', 2:6),
       diamond = paste0('D', 2:6),
       heart = paste0('H', 2:6),
       spade = paste0('S', 2:4))  # the question's deck has only the 2, 3, 4 of spades
}
Then, you can write functions to draw a random card from particular suits in a deck:
draw_from_suits <- function(deck, suits){
  cards <- unlist(deck[suits], use.names = FALSE)
  # If there are no cards in the requested suits, return NA
  if (length(cards) == 0) { return(NA) }
  # Otherwise, grab a random card
  sample(cards, 1)
}
Once you know what card you've picked, you can remove it from the deck with another helper function:
get_suit <- function(card){
  switch(substr(card, 1, 1),
         C = 'club',
         D = 'diamond',
         H = 'heart',
         S = 'spade')
}
remove_from_deck <- function(deck, card){
  suit <- get_suit(card)
  deck[[suit]] <- setdiff(deck[[suit]], card)
  return(deck)
}
Now, if we want to sample a card from the hearts suit, we'd use this three-step process:
deck <- make_deck()
card <- draw_from_suits(deck, 'heart')
deck <- remove_from_deck(deck, card)
Sampling With Constraints
The second challenge you identify in this problem is that you can run into dead ends partway through. You could write the sampling function so that it resets itself and starts from scratch every time it hits a dead end.
You can do this many ways. One is to use a while loop to keep trying until you succeed:
sample_with_constraint <- function(){
  # The suits we're allowed to draw from at each step
  suit_sequence <- list(c('heart', 'diamond'),
                        c('club', 'diamond', 'spade'),
                        c('heart', 'club'))
  # We'll use this variable to track whether we're done dealing cards
  dealt <- FALSE
  while (dealt == FALSE) {
    deck <- make_deck()
    hand <- rep(NA, length(unlist(deck)))
    # Step through the hand and build it card-by-card
    for (ii in seq_along(hand)) {
      # Use the modulo operator to identify the step of the sequence (1, 2, 3, 1, 2, 3, ...)
      which_suits <- suit_sequence[[((ii - 1) %% 3) + 1]]
      card <- draw_from_suits(deck, which_suits)
      # If we failed to draw a card, this is a dead end
      # So break out of this for-loop
      if (is.na(card)) { break }
      hand[ii] <- card
      deck <- remove_from_deck(deck, card)
    }
    # If there are no more cards in the deck, we've successfully dealt a hand
    # In this case, flip 'dealt' to TRUE. Otherwise it stays FALSE and we try again.
    dealt <- length(unlist(deck)) == 0
  }
  return(hand)
}
sample_with_constraint()
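To sanity-check the result, here is a small helper (check_hand is my name, not part of the answer above) that verifies the positional constraints from the question, reusing get_suit from earlier:
check_hand <- function(hand){
  suits <- sapply(hand, get_suit)
  pos <- seq_along(hand)
  # no clubs/spades at 1, 4, 7, ...; no hearts at 2, 5, 8, ...; no diamonds/spades at 3, 6, 9, ...
  all(!(suits %in% c('club', 'spade') & pos %% 3 == 1),
      !(suits == 'heart' & pos %% 3 == 2),
      !(suits %in% c('diamond', 'spade') & pos %% 3 == 0))
}
check_hand(sample_with_constraint())  # should always be TRUE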
You could also adapt the for loop at the end of your random_w_contraint function to do something similar.
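For instance, here is a sketch of that adaptation (random_w_restart is a hypothetical name; it reuses the M matrix from the question and simply starts over whenever it hits a dead end):
random_w_restart <- function(M){
  repeat {
    Mtry <- M
    card_order <- rep(0, nrow(Mtry))
    ok <- TRUE
    for (i in 1:nrow(Mtry)) {
      candidates <- which(Mtry[i, ] != 0)
      if (length(candidates) == 0) { ok <- FALSE; break }  # dead end: restart from scratch
      x <- candidates[sample.int(length(candidates), 1)]   # safe even when only one candidate remains
      card_order[i] <- x
      Mtry[, x] <- 0  # card x can no longer be drawn
    }
    if (ok) return(card_order)
  }
}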
I have a large two-step optimization problem that I've tried to simplify for this question. The first step is to choose 10 elements to maximize utility with certain constraints. I need 200 of these sets, but due to the nature of what I'm trying to do, there needs to be 600 generated so that the correct combinations can manifest.
Wrapping around these mini-optimization problems is a larger constraint where each individual element can only be used within a certain range. The first optimization tunes each element's utility so that each one is relatively close to the bounds, but it's not possible for all of them to be within their bounds. Therefore, the second step is to choose 200 of the 600 sets such that each individual element's min/max usage is satisfied. This is what I need help with.
I made a function using lpSolve that works, but over 80% of the time it freezes RStudio and it's just becoming too much of a hassle - I need to either improve my current approach, or need a completely new approach. I don't know if lpSolve is really the best approach to begin with. While I do have an overall set-score that I can maximize, all I really care about is having each element within the bounds. I've made a simplified example to get at the essence of my problem.
I'm in charge of making 200 meals from a set of 80 different fruits. Each meal uses 10 fruits and cannot contain more than 1 of the same fruit. I'm limited in the number of each fruit that I have (and my boss is making me use a minimum of each fruit, otherwise they'll go bad), so the counts need to stay within certain bounds. I have a list of 600 meals already created (Meals), and each one has its own unique Health-Score. Ideally I would like to maximize the Health-Score, but obviously the most important piece is that each fruit is used the correct number of times, otherwise the meals can't be made in the first place.
Here's my code to 1) set up the 600 Meals (random), 2) set the min/max times each fruit must be used (random), and 3) run a linear optimization to select 200 of the 600 meals such that the individual fruit constraints are fulfilled. The program tries to choose 200 of the 600, but if the constraints don't allow it, it loosens them (e.g. if the solver doesn't work the first time, I'll decrease the minimum number of times an Apple can be used and increase the maximum number of times it can be used). It does this one fruit at a time, rather than all at once. Eventually the constraints should be loosened so much that any 200 of the 600 will work (i.e. when all fruit minPercent values are less than 0 and all fruit maxPercent values are greater than 100), but it doesn't matter because R freezes up.
library(stringr)
library(dplyr)
library(lpSolve)
# Inputs
MealsNeeded <- 200
Buffer <- 3
# Set up the meals (this is the output of another optimizer in my actual program; "Step 1" as mentioned above).
# 'fruit' is the 80-element vector of fruit names that ships with stringr.
Meals <- data.frame()
for(i in 1:(MealsNeeded*Buffer)){
  run <- i
  meal <- sample(fruit, 10)
  healthFactor <- round(runif(1, 10, 30), 0)  # health factor for the entire meal
  df <- data.frame(Run = run, Fruit = meal, healthFactor = healthFactor, stringsAsFactors = FALSE)
  Meals <- rbind(Meals, df)
}
# The minimum/maximum number of times each fruit must be used across all 200 meals (these would be inputs in my program)
set.seed(11)
fruitDF <- data.frame(Name = fruit, minSelectPct = round(runif(length(fruit), .05, .1)*100, 0), stringsAsFactors = FALSE) %>%
  mutate(maxSelectPct = round(minSelectPct/2 + runif(length(fruit), .05, .1)*100, 0))
#### Actual Program Start
# Get objective
obj <- Meals %>%
  distinct(Run, healthFactor) %>%
  ungroup() %>%
  select(healthFactor) %>%
  pull()
# Dummy LU - for each fruit give 1/0 whether or not they were in the meal
dummyLUInd <- data.frame(FruitName = fruitDF$Name, stringsAsFactors = FALSE)
for(i in unique(Meals$Run)){
  selectedFruit <- Meals %>%
    filter(Run == i) %>%
    select(Fruit) %>%
    mutate(Indicator = 1)
  dummyLUIndTemp <- fruitDF %>%
    left_join(selectedFruit, by = c('Name' = 'Fruit')) %>%
    mutate(Indicator = ifelse(is.na(Indicator), 0, Indicator)) %>%
    select(Indicator)
  dummyLUInd <- cbind(dummyLUInd, dummyLUIndTemp)
}
## Table create
dummyLUInd <- rbind(dummyLUInd, dummyLUInd)[,-1]
dummyLUInd <- as.data.frame(t(dummyLUInd))
dummyLUInd$Total = 1
## Directions
dirLT <- c(rep('<=', (ncol(dummyLUInd)-1)/2))
dirGT <- c(rep('>=', (ncol(dummyLUInd)-1)/2))
## Multiply percentages by total Meals
MinExp = round(fruitDF$minSelectPct/100 * MealsNeeded - 0.499, 0)
MaxExp = round(fruitDF$maxSelectPct/100 * MealsNeeded + 0.499, 0)
# Set up the loop controls, like the number of tries
CounterMax <- 10000
LPSum = 0
Counter = 0
# Create DF to make it easier to change constraints for each run
MinExpDF <- data.frame(Place = 1:length(MinExp), MinExp = MinExp)
MaxExpDF <- data.frame(Place = 1:length(MaxExp), MaxExp = MaxExp)
cat('\nStarting\n')
Sys.sleep(2)
# Try to get the 200 of 600 Meals that satisfy the constraints for the individual Fruit.
# If the solution doesn't exist, loosen the constraints for each fruit (one at a time) until it does work
while (LPSum == 0 & Counter <= CounterMax) {
  rowUse <- Counter %% length(MaxExp)
  # Knock one off the minimums, starting with the highest exposure, one at a time
  MinExpDF <- MinExpDF %>%
    mutate(Rank = rank(-MinExp, na.last = FALSE, ties.method = "first"),
           MinExp = ifelse(Rank == rowUse, MinExp - 1, MinExp))
  MinExp <- MinExpDF$MinExp
  # Add one to the maximums, starting with the highest exposure, one at a time
  MaxExpDF <- MaxExpDF %>%
    mutate(Rank = rank(-MaxExp, na.last = FALSE, ties.method = "first"),
           MaxExp = ifelse(Rank == rowUse, MaxExp + 1, MaxExp))
  MaxExp <- MaxExpDF$MaxExp
  # Solve
  dir <- 'max'
  f.obj <- obj
  f.mat <- t(dummyLUInd)
  f.dir <- c(dirGT, dirLT, '==')
  f.rhs <- c(MinExp, MaxExp, MealsNeeded)
  Sol <- lp(dir, f.obj, f.mat, f.dir, f.rhs, all.bin = TRUE)$solution
  LPSum <- sum(Sol)
  Counter = Counter + 1
  if(Counter %% 10 == 0) cat(Counter, ', ', sep = '')
}
# Get the Run #'s from the lpSolve
if(Counter >= CounterMax){
  cat("Unable to find right exposure, returning all Meals\n")
  MealsSolved <- Meals
} else {
  MealsSolved <- data.frame(Run = unique(Meals$Run))
  MealsSolved$selected <- Sol
  MealsSolved <- MealsSolved[MealsSolved$selected == 1,]
}
# Final Meals
FinalMeals <- Meals %>%
  filter(Run %in% MealsSolved$Run)
If you run this code enough times, eventually RStudio will freeze up on you (at least it does for me; if it doesn't for you, I suppose you could increase the number of Meals). It happens during the actual lp call, so there's really not much you can do, since that part is C code. This is where I'm lost.
A part of me thinks this isn't really an lpSolve issue, since I'm not really trying to maximize anything (the Health-Factor isn't all that important). My real "loss function" is the number of times each fruit goes above/below its min/max exposure, but I can't think of how to set something like this up. Can my current approach work, or do I need to do something completely different?
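One standard way to encode that kind of loss function is to treat the bounds as soft constraints with slack variables and minimize the total violation; the LP is then always feasible, so no loosening loop is needed. A minimal sketch, assuming A is the 80 x 600 fruit-by-meal incidence matrix (the first 80 rows of the f.mat above) and MinExp/MaxExp are the bounds computed earlier; soft_select is a hypothetical name:
library(lpSolve)
soft_select <- function(A, minExp, maxExp, mealsNeeded = 200){
  nF <- nrow(A); nM <- ncol(A)
  # Variables: nM binary meal picks, then nF below-min slacks, then nF above-max slacks
  obj <- c(rep(0, nM), rep(1, nF), rep(1, nF))  # minimize total bound violation
  I <- diag(nF); Z <- matrix(0, nF, nF)
  conMat <- rbind(
    cbind(A, I, Z),                  # A x + sLo >= minExp  (sLo absorbs any shortfall)
    cbind(A, Z, -I),                 # A x - sHi <= maxExp  (sHi absorbs any excess)
    c(rep(1, nM), rep(0, 2 * nF)))   # exactly mealsNeeded meals chosen
  conDir <- c(rep(">=", nF), rep("<=", nF), "==")
  conRhs <- c(minExp, maxExp, mealsNeeded)
  lp("min", obj, conMat, conDir, conRhs, binary.vec = 1:nM)
}
# sol <- soft_select(A, MinExp, MaxExp)
# which(sol$solution[1:600] == 1)  # the 200 chosen runs; sol$objval is the total violation
A single solve like this would replace the whole loosening loop, which also sidesteps the repeated lp calls that seem to trigger the freeze.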
I'm trying to get the shortest paths of a graph but based on its edge ids.
So having the following graph:
library(igraph)
set.seed(45)
g <- erdos.renyi.game(25, 1/10, directed = TRUE)
E(g)$id <- sample(1:3, length(E(g)), replace = TRUE)
The shortest_paths(g, 1, V(g)) function finds all the shortest paths from node 1 to all the other nodes. However, I would like to calculate this not just by the geodesic distance, but by a mix of the geodesic distance and the minimum number of edge id changes.
For example, if this were a train network and the edge ids represented trains, I would like to calculate how to get from node A to all the other nodes using the shortest path, while changing trains the fewest number of times.
OK, I think I have a working solution, although the code is a little ugly. The basic algorithm (let's call it gs(i, j)) goes like this: if we want to find the shortest train journey from i to j (gs(i, j)), we:
find the shortest path from i to j considering all trains; if this path has length 0 or 1, return it (there is either no path or a path on one train);
split the graph up by 'trains' (subset the graph by edges) so as to consider each train network separately, and find the shortest path between i and j in each individual train network;
if a single train will get you from i to j, return the train route with the fewest stops between i and j; else,
if no single train runs from i to j, call gs(i, j-1), where (j-1) is the stop before j in the shortest path between i and j on the full network.
So basically, we look to see if a single train can do it, and if it can't, we call the function recursively to look at whether a single train can get you to the stop before the last stop, and so on.
library(igraph)
# First your data
set.seed(45)
g <- erdos.renyi.game(25, 1/10, directed = TRUE)
E(g)$id <- sample(1:3, length(E(g)), replace = TRUE)
plot(g, edge.color = E(g)$id)
# The function takes as arguments the graph, and the id of the vertex
# you want to go from/to. It should work for a vector of
# destinations but I have not rigorously tested it so proceed with
# caution!
get.shortest.routes <- function(g, from, to){
  train.routes <- lapply(unique(E(g)$id), function(id){
    subgraph.edges(g, eids = which(E(g)$id == id), delete.vertices = F)})
  target.sp <- shortest_paths(g, from = from, to = to, output = 'vpath')$vpath
  single.train.paths <- lapply(train.routes, function(gs){
    shortest_paths(gs, from = from, to = to, output = 'vpath')$vpath})
  for (i in seq_along(target.sp)){
    if (length(target.sp[[i]]) > 1) {
      cands <- lapply(single.train.paths, function(l){l[[i]]})
      if (sum(unlist(lapply(cands, length))) != 0) {
        cands <- cands[lapply(cands, length) != 0]
        cands <- cands[lapply(cands, length) == min(unlist(lapply(cands, length)))]
        target.sp[[i]] <- cands[[1]]
      } else {
        target.sp[[i]] <- c(get.shortest.routes(g, from = as.numeric(target.sp[[i]][1]),
                                                to = as.numeric(target.sp[[i]][(length(target.sp[[i]]) - 1)]))[[1]],
                            get.shortest.routes(g, from = as.numeric(target.sp[[i]][(length(target.sp[[i]]) - 1)]),
                                                to = as.numeric(target.sp[[i]][length(target.sp[[i]])]))[[1]][-1])
      }
    }
  }
  target.sp
}
OK, now let's run some tests. If you squint at the graph above, you can see that the path from vertex 5 to vertex 21 has length 2 if you take two trains, but that you can get there on one train if you pass through an extra station. Our new function should return the longer path:
shortest_paths(g, 5, 21)$vpath
#> [[1]]
#> + 3/25 vertices, from b014eb9:
#> [1] 5 13 21
get.shortest.routes(g, 5, 21)
#> Warning in shortest_paths(gs, from = from, to = to, output = "vpath"): At
#> structural_properties.c:745 :Couldn't reach some vertices
#> Warning in shortest_paths(gs, from = from, to = to, output = "vpath"): At
#> structural_properties.c:745 :Couldn't reach some vertices
#> [[1]]
#> + 4/25 vertices, from c22246c:
#> [1] 5 13 15 21
Let's make a really simple graph where we are sure what we want to see: here we should get 1-2-4-5 instead of 1-3-5:
df <- data.frame(from = c(1, 1, 2, 3, 4), to = c(2, 3, 4, 5, 5))
g1 <- graph_from_data_frame(df)
E(g1)$id <- c(1, 2, 1, 3, 1)
plot(g1, edge.color = E(g1)$id)
get.shortest.routes(g1, 1, 5)
#> Warning in shortest_paths(gs, from = from, to = to, output = "vpath"): At
#> structural_properties.c:745 :Couldn't reach some vertices
#> Warning in shortest_paths(gs, from = from, to = to, output = "vpath"): At
#> structural_properties.c:745 :Couldn't reach some vertices
#> [[1]]
#> + 4/5 vertices, named, from c406649:
#> [1] 1 2 4 5
I'm sure there is a more rigorous solution, and you'll probably want to optimize the code a bit. For instance, I just realized that I don't stop the function immediately if the shortest path on the full graph has only two nodes; doing so would avoid some needless computations! This was a fun problem; I hope some other answers get posted.
Created on 2018-05-11 by the reprex package (v0.2.0).
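For what it's worth, the early exit mentioned above could be a single check placed right after target.sp is computed inside get.shortest.routes; a sketch, not tested against every edge case:
# A path with at most two vertices is either no path or a direct (one-train) edge,
# so the per-train search can be skipped entirely.
if (all(lengths(target.sp) <= 2)) return(target.sp)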
Here is my take on the problem. A few notes:
1) all_simple_paths will not scale well with large or highly connected graphs.
2) I favored fewest changes above all else, which means a path with two changes and a distance of 40 will beat a path with three changes and a distance of 3.
3) I can imagine an even faster approach in which the priority of number of changes versus distance flips whenever there is no single-id path.
library(igraph)
# First your data
set.seed(45)
g <- erdos.renyi.game(25, 1/10, directed = TRUE)
E(g)$id <- sample(1:3, length(E(g)), replace = TRUE)
plot(g, edge.color = E(g)$id)
## Option 1:
rst <- all_simple_paths(g, from = 1, to = 18, mode = "out")
rst <- lapply(rst, as_ids)
rst1 <- lapply(rst, function(x) c(x[1], rep(x[2:(length(x)-1)], each = 2), x[length(x)]))
rst2 <- lapply(rst1, function(x) data.frame(eid = get.edge.ids(graph = g, vp = x),
                                            train = E(g)$id[get.edge.ids(graph = g, vp = x)]))
rst3 <- data.frame(pathID = seq_along(rst),
                   changes = sapply(rst2, function(x) length(rle(x$train)$lengths)),
                   dist = sapply(rst2, nrow))
spath <- rst3[order(rst3$changes, rst3$dist), ][1, 1]
# Vertex IDs
rst[[spath]]
#[1] 1 23 8 18
plot(g, edge.color = E(g)$id,
     vertex.color = ifelse(V(g) %in% rst[[spath]], "firebrick", "gray80"),
     edge.arrow.size = 0.5)
I am working with the genalg library for R and am trying to save all the generations when I run a binary genetic algorithm. There does not seem to be a built-in method for that in the library, so my attempt was to save each chromosome, x, coming through the evaluation function.
To test this method I inserted print(x) in the evaluation function to be able to see all the evaluated chromosomes. However, the number of printed chromosomes does not always match what I expected.
I thought that the number of printed chromosomes would be equal to the number of iterations times the population size, but that does not seem to be true all the time.
The problem is that I want to know which generation (or iteration) each chromosome belongs to, which I can't tell if the number of chromosomes differs from iter times popSize.
What is the reason for this, and how can I "fix" it? Or is there another way of saving each chromosome along with the iteration it belongs to?
Below is an example where I thought the evaluation function would print 2 x 5 chromosomes, but it only prints 8.
library(genalg)
library(ggplot2)
dataset <- data.frame(
  item = c("pocketknife", "beans", "potatoes", "unions", "sleeping bag", "rope", "compass"),
  survivalpoints = c(10, 20, 15, 2, 30, 10, 30),
  weight = c(1, 5, 10, 1, 7, 5, 1))
weightlimit <- 20
evalFunc <- function(x) {
  print(x)
  current_solution_survivalpoints <- x %*% dataset$survivalpoints
  current_solution_weight <- x %*% dataset$weight
  if (current_solution_weight > weightlimit)
    return(0) else return(-current_solution_survivalpoints)
}
iter = 2
popSize = 5
set.seed(1)
GAmodel <- rbga.bin(size = 7, popSize = popSize, iters = iter, mutationChance = 0.1,
                    elitism = T, evalFunc = evalFunc)
Looking at the function code, it seems that at each iteration (generation) a subset of chromosomes is chosen from the population (population = 5 chromosomes in your example) with a certain probability (0.1 in your case) and mutated. The evaluation function is called only for the mutated chromosomes at each generation (and, of course, for all the chromosomes in the first iteration, to know their initial values).
Note that this subset does not include the elitist group, which in your example you have defined as 1 element big (you have erroneously passed elitism=TRUE, and TRUE is implicitly converted to 1).
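If you want to verify this empirically, one quick (hypothetical) check is to wrap your evalFunc in a counter and compare the number of calls against popSize plus the number of mutations:
calls <- 0
countingEval <- function(x) {
  calls <<- calls + 1  # count every evaluation call
  evalFunc(x)
}
set.seed(1)
GAmodel <- rbga.bin(size = 7, popSize = 5, iters = 2, mutationChance = 0.1,
                    elitism = 1, evalFunc = countingEval)
calls  # popSize calls for the first generation, plus one per mutated chromosome afterwards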
Anyway, to know the population at each generation, you can pass a monitor function through the monitorFunc parameter, e.g.:
# obj contains a lot of information; try printing it
monitor <- function(obj) {
  print(paste(" GENERATION :", obj$iter))
  print("POPULATION:")
  print(obj$population)
  print("VALUES:")
  print(obj$evaluations)
}
iter = 2
popSize = 5
set.seed(1)
GAmodel <- rbga.bin(size = 7, popSize = popSize,
                    iters = iter, mutationChance = 0.1,
                    elitism = 1, evalFunc = evalFunc, monitorFunc = monitor)