Making a for loop in R
I am just getting started with R, so I am sorry if I say things that don't make sense.
I am trying to make a for loop that does the following:
l_dtest[[1]]<-vector()
l_dtest[[2]]<-vector()
l_dtest[[3]]<-vector()
l_dtest[[4]]<-vector()
l_dtest[[5]]<-vector()
all the way up to any number, which will be assigned as n. For example, if n were chosen to be 100, then it would repeat this all the way to l_dtest[[100]]<-vector().
I have made multiple attempts at doing this; here is one of them.
n<-4
p<-(1:n)
l_dtest<-list()
for(i in p){
print((l_dtest[i]<-vector())<-i)
}
Again I am VERY new to R so I don't know what I am doing or what is wrong with this loop.
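For comparison, what I believe this step is supposed to produce is simply a list of n empty vectors, which a plain pre-allocate-and-fill loop would give (a minimal sketch, with n chosen arbitrarily):
n <- 100
l_dtest <- vector("list", n)   # a list with n empty (NULL) slots
for (i in seq_len(n)) {
  l_dtest[[i]] <- vector()     # each slot becomes an empty vector, like l_dtest[[1]] <- vector()
}
length(l_dtest)                # 100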
The detailed background for why I need to do this is that I need to write an R function that receives as input the size of the population "n", runs a simulation of the model below with that population size, and returns the number of generations it took to reach an MRCA (most recent common ancestor).
Here is the model:
We assume the population size is constant at n. Generations are discrete and non-overlapping. The genealogy is formed by this random process: in each generation, each individual chooses two parents at random from the previous generation. The choices are made randomly and equally likely over the n possibilities, and each individual chooses twice. All choices are made independently. Thus, for example, it is possible that, when an individual chooses his two parents, he chooses the same individual twice, so that in fact he ends up with just one parent; this happens with probability 1/n.
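To make sure I understand the model, here is a minimal sketch of what one generation of parent choices might look like with sample() (my own translation of the description, not code from the assignment):
n <- 5
# each of the n individuals picks two parents uniformly at random, with
# replacement, from the n individuals of the previous generation
parents <- t(replicate(n, sample(1:n, 2, replace = TRUE)))
parents   # an n x 2 matrix; a row like 3 3 means that individual ended up with just one parent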
I don't understand the specific step at the beginning of this post or why I need to do it, but my teacher said I do. I don't know if this helps, but the next step is choosing parents for the first person and then combining the lists from the step I posted with a previous step. It looks like this:
sample(1:5, 2, replace=T)
#[1] 1 2
l_dtemp[[1]]<-union(l_dtemp[[1]], l_d[[1]]) # To my understanding, l_dtemp[[1]] now receives the list of descendants from l_d[[1]], because the latter chose l_dtemp[[1]] as its first parent
l_dtemp[[2]]<-union(l_dtemp[[2]], l_d[[1]]) # Same as above, but for l_d[[1]]'s second choice, which is l_dtemp[[2]]
sample(1:5, 2, replace=T)
#[1] 1 3
l_dtemp[[1]]<-union(l_dtemp[[1]], l_d[[2]])
l_dtemp[[3]]<-union(l_dtemp[[3]], l_d[[2]])
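Since this pairing step repeats the same pattern for every individual, I assume it generalizes to a loop roughly like the following sketch (assuming l_d and l_dtemp are both lists of length n):
for (i in 1:n) {
  parents <- sample(1:n, 2, replace = TRUE)                    # individual i's two parent choices
  l_dtemp[[parents[1]]] <- union(l_dtemp[[parents[1]]], l_d[[i]])
  l_dtemp[[parents[2]]] <- union(l_dtemp[[parents[2]]], l_d[[i]])
}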
Related
Optimizing (minimizing) the number of lines in file: an optimization problem in line with permutations and agenda scheduling
I have a calendar, typically a csv file containing a number of lines. Each line corresponds to an individual and is a sequence of consecutive values '0' and '1', where '0' refers to an empty time slot and '1' to an occupied slot. There cannot be two separated sequences in a line (e.g. two runs of '1' separated by a '0', such as '1,1,1,0,1,1,1,1'). The problem is to minimize the number of lines by combining the individuals and resolving the collisions between time slots. Note that the time slots cannot overlap. For example, for 4 individuals, we have the following sequences:
id1:1,1,1,0,0,0,0,0,0,0
id2:0,0,0,0,0,0,1,1,1,1
id3:0,0,0,0,1,0,0,0,0,0
id4:1,1,1,1,0,0,0,0,0,0
One can arrange them to end up with two lines, while keeping track of the permuted individuals (for the record). In our example it yields:
1,1,1,0,1,0,1,1,1,1 (id1 + id2 + id3)
1,1,1,1,0,0,0,0,0,0 (id4)
The constraints are the following:
The number of individuals ranges from 500 to 1000.
The length of the sequence will never exceed 30.
Each sequence in the file has the exact same length.
The algorithm needs to be reasonable in execution time because this task may be repeated up to 200 times.
We don't necessarily search for the optimal solution; a near-optimal solution would suffice.
We need to keep track of the combined individuals (as in the example above).
Genetic algorithms seem a good option, but how do they scale (in terms of execution time) with the size of this problem? A suggestion in Matlab or R would be (greatly) appreciated. Here is a sample test:
id1:1,1,1,0,0,0,0,0,0,0
id2:0,0,0,0,0,0,1,1,1,1
id3:0,0,0,0,1,0,0,0,0,0
id4:1,1,1,1,1,0,0,0,0,0
id5:0,1,1,1,0,0,0,0,0,0
id6:0,0,0,0,0,0,0,1,1,1
id7:0,0,0,0,1,1,1,0,0,0
id8:1,1,1,1,0,0,0,0,0,0
id9:1,1,0,0,0,0,0,0,0,0
id10:0,0,0,0,0,0,1,1,0,0
id11:0,0,0,0,1,0,0,0,0,0
id12:0,1,1,1,0,0,0,0,0,0
id13:0,0,0,1,1,1,0,0,0,0
id14:0,0,0,0,0,0,0,0,0,1
id15:0,0,0,0,1,1,1,1,1,1
id16:1,1,1,1,1,1,1,1,0,0
Solution(s)
#Nuclearman provided a working solution in O(NT) (where N is the number of individuals (ids) and T is the number of time slots (columns)) based on the greedy algorithm.
I study algorithms as a hobby and I agree with Caduchon on this one that greedy is the way to go. Though I believe this is actually the clique cover problem, to be more accurate: https://en.wikipedia.org/wiki/Clique_cover. Some ideas on how to approach building cliques can be found here: https://en.wikipedia.org/wiki/Clique_problem. Clique problems are related to independent set problems. Considering the constraints, and that I'm not familiar with Matlab or R, I'd suggest this:
Step 1. Build the independent-set time-slot data. For each time slot, create a mapping (for fast lookup) of all records that have a 1 in that slot. None of these can be merged into the same row (they all need to be merged into different rows). For the first column (slot), that subset of the data looks like this:
id1 :1,1,1,0,0,0,0,0,0,0
id4 :1,1,1,1,1,0,0,0,0,0
id8 :1,1,1,1,0,0,0,0,0,0
id9 :1,1,0,0,0,0,0,0,0,0
id16:1,1,1,1,1,1,1,1,0,0
The data would be stored as something like 0: Set(id1,id4,id8,id9,id16) (zero-indexed rows: we start at row 0 instead of row 1, though it probably doesn't matter here). The idea is to have O(1) lookup; you may need to quickly tell that id2 is not in that set. You can also use nested hash tables for that, e.g. 0: { id1: true, id4: true }. Sets also allow set operations, which may help quite a bit when determining unassigned columns/ids. In any case, none of these 5 can be grouped together, which means that (given that column) you must have at least 5 rows, if the other rows can be merged into those 5 without conflict. Performance of this step is O(NT), where N is the number of individuals and T is the number of time slots.
Step 2. Now we have options for how to attack things. To start, we pick the time slot with the most individuals and use that as our starting point; that gives us the minimum possible number of rows. In this case we actually have a tie, where the 2nd and 5th columns both have 7. I'm going with the 2nd, which looks like:
id1 :1,1,1,0,0,0,0,0,0,0
id4 :1,1,1,1,1,0,0,0,0,0
id5 :0,1,1,1,0,0,0,0,0,0
id8 :1,1,1,1,0,0,0,0,0,0
id9 :1,1,0,0,0,0,0,0,0,0
id12:0,1,1,1,0,0,0,0,0,0
id16:1,1,1,1,1,1,1,1,0,0
Step 3. Now that we have our starting groups, we need to add to them while trying to avoid conflicts between new members and old group members (which would require an additional row). This is where we get into NP-complete territory, as there are a ton (roughly 2^N, to be more accurate) of ways to assign things. I think the best approach might be a random one, as you can theoretically run it as many times as you have time for. So here is the randomized algorithm:
1. Given the starting column and ids (1, 4, 5, 8, 9, 12, 16 above), mark this column and these ids as assigned.
2. Randomly pick an unassigned column (time slot). If you want a perhaps "better" result, pick the one with the least (or most) number of unassigned ids. For a faster implementation, just loop over the columns.
3. Randomly pick an unassigned id. For a better result, perhaps the one with the most/least groups it could be assigned to. For a faster implementation, just pick the first unassigned id.
4. Find all groups that unassigned id could be assigned to without creating a conflict, and randomly assign it to one of them. For a faster implementation, just pick the first one that doesn't cause a conflict. If there are no groups without conflict, create a new group and assign the id to that group as its first id. The optimal result is that no new groups have to be created.
5. Update the data for that row (turn 0s into 1s as needed).
6. Repeat steps 3-5 until no unassigned ids for that column remain.
7. Repeat steps 2-6 until no unassigned columns remain.
Example with the faster-implementation approach, which here gives an optimal result (there cannot be fewer than 7 rows and there are only 7 rows in the result).
First 3 columns: no unassigned ids (all have 0), skipped.
4th column: assigned id13 to the id9 group (13=>9). id9 looks like this now, showing that the group that started with id9 now also includes id13:
id9 :1,1,0,1,1,1,0,0,0,0 (+id13)
5th column: 3=>1, 7=>5, 11=>8, 15=>12. Now it looks like:
id1 :1,1,1,0,1,0,0,0,0,0 (+id3)
id4 :1,1,1,1,1,0,0,0,0,0
id5 :0,1,1,1,1,1,1,0,0,0 (+id7)
id8 :1,1,1,1,1,0,0,0,0,0 (+id11)
id9 :1,1,0,1,1,1,0,0,0,0 (+id13)
id12:0,1,1,1,1,1,1,1,1,1 (+id15)
id16:1,1,1,1,1,1,1,1,0,0
We'll just quickly look at the remaining columns and see the final result. 7th column: 2=>1, 10=>4. 8th column: 6=>5. Last column: 14=>4. So the final result is:
id1 :1,1,1,0,1,0,1,1,1,1 (+id3,id2)
id4 :1,1,1,1,1,0,1,1,0,1 (+id10,id14)
id5 :0,1,1,1,1,1,1,1,1,1 (+id7,id6)
id8 :1,1,1,1,1,0,0,0,0,0 (+id11)
id9 :1,1,0,1,1,1,0,0,0,0 (+id13)
id12:0,1,1,1,1,1,1,1,1,1 (+id15)
id16:1,1,1,1,1,1,1,1,0,0
Conveniently, even this "simple" approach allowed us to assign the remaining ids to the original 7 groups without conflict. This is unlikely to happen in practice with, as you say, "500-1000" ids and up to 30 columns, but it is far from impossible. Roughly speaking, 500 / 30 is about 17 and 1000 / 30 is about 34, so I would expect you to be able to get down to roughly 10-60 rows, with about 15-45 being likely, but it's highly dependent on the data and a bit of luck.
In theory, the performance of this method is O(NT), where N is the number of individuals (ids) and T is the number of time slots (columns). It takes O(NT) to build the data structure (basically converting the table into a graph). After that, for each column it requires checking and assigning at most O(N) individual ids, and they might be checked multiple times. In practice, since O(T) is roughly O(sqrt(N)) and performance improves as you go through the algorithm (similar to quicksort), it is likely O(N log N) or O(N sqrt(N)) on average, though it's probably more accurate to use O(E), where E is the number of 1s (edges) in the table; each edge likely gets checked and iterated over a fixed number of times, so that is probably the better indicator. The NP-hard part comes in when working out which ids to assign to which groups so that no new groups (rows) are created, or as few as possible. I would run the "fast implementation" and the "random" approach a few times and see how many extra rows (beyond the known minimum) you get, and whether it's a small amount.
This problem, contrary to some comments, is not NP-complete due to the restriction that "There cannot be two separated sequences in a line". This restriction implies that each line can be considered to be representing a single interval. In this case, the problem reduces to a minimum coloring of an interval graph, which is known to be optimally solved via a greedy approach. Namely, sort the intervals in descending order according to their ending times, then process the intervals one at a time in that order always assigning each interval to the first color (i.e.: consolidated line) that it doesn't conflict with or assigning it to a new color if it conflicts with all previously assigned colors.
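To illustrate the greedy idea, here is a small base-R sketch (not code from the original answer): it assumes the schedule is stored as a 0/1 matrix m with one row per individual, that every row contains at least one occupied slot, and the names greedy_merge and m are purely illustrative.
# m: 0/1 matrix, one row per individual, one column per time slot
greedy_merge <- function(m) {
  ends <- apply(m, 1, function(r) max(which(r == 1)))  # last occupied slot of each interval
  ord  <- order(ends, decreasing = TRUE)               # process by descending ending time
  groups  <- list()                                    # combined 0/1 occupancy per consolidated line
  members <- list()                                    # which original rows were merged into each line
  for (i in ord) {
    placed <- FALSE
    for (g in seq_along(groups)) {
      if (!any(m[i, ] == 1 & groups[[g]] == 1)) {      # first line with no slot collision
        groups[[g]]  <- pmax(groups[[g]], m[i, ])
        members[[g]] <- c(members[[g]], i)
        placed <- TRUE
        break
      }
    }
    if (!placed) {                                     # conflicts with every existing line: open a new one
      groups[[length(groups) + 1]]   <- m[i, ]
      members[[length(members) + 1]] <- i
    }
  }
  list(lines = do.call(rbind, groups), members = members)
}
# The 4-individual example from the question:
m <- rbind(c(1,1,1,0,0,0,0,0,0,0),
           c(0,0,0,0,0,0,1,1,1,1),
           c(0,0,0,0,1,0,0,0,0,0),
           c(1,1,1,1,0,0,0,0,0,0))
greedy_merge(m)$members   # two consolidated lines, the minimum for this example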
Consider a constraint programming approach. Here is a question very similar to yours: Constraint Programming: Scheduling with multiple workers. A very simple MiniZinc model could also look like this (sorry, no Matlab or R):
include "globals.mzn";
%int: jobs = 4;
int: jobs = 16;
set of int: JOB = 1..jobs;
%array[JOB] of var int: start = [0, 6, 4, 0];
%array[JOB] of var int: duration = [3, 4, 1, 4];
array[JOB] of var int: start = [0, 6, 4, 0, 1, 8, 4, 0, 0, 6, 4, 1, 3, 9, 4, 1];
array[JOB] of var int: duration = [3, 4, 1, 5, 3, 2, 3, 4, 2, 2, 1, 3, 3, 1, 6, 8];
var int: machines;
constraint cumulative(start, duration, [1 | j in JOB], machines);
solve minimize machines;
This model does not, however, tell which jobs are scheduled on which machines.
Edit: Another option would be to transform the problem into a graph coloring problem. Let each line be a vertex in a graph. Create edges for all overlapping lines (where the 1-segments overlap). Find the chromatic number of the graph. The vertices of each color then represent a combined line in the original problem. Graph coloring is a well-studied problem; for larger instances, consider a local search approach using tabu search or simulated annealing.
Generating testing and training datasets with replacement in R
I have mirrored some code to perform an analysis, and everything is working correctly (I believe). However, I am trying to understand a few lines of code related to splitting the data up into a 40% testing and 60% training set. To my current understanding, the code randomly assigns each row into group 1 or 2. Subsequently, all the rows assigned to 1 are pulled into the training set, and the 2's into the testing set. Later, I realized that sampling with replacement is not what I wanted for my data analysis, although in this case I am unsure of what is actually being replaced. Currently, I do not believe it is the actual data itself being replaced, but rather the "1" and "2" placeholders. I am looking to understand exactly how these lines of code work. Based on my results, it seems to be accomplishing what I want, but I need to confirm whether or not the data itself is being replaced.
To test the lines in question, I created a dataframe with 10 unique values (1 through 10). If the data values themselves were being sampled with replacement, I would expect to see some duplicates in "training1" or "testing2". I ran these lines of code 10 times with 10 different set.seed numbers and the data values were never duplicated. To me, this suggests the data itself is not being replaced. If I set replace = FALSE I get this error:
Error in sample.int(x, size, replace, prob) : cannot take a sample larger than the population when 'replace = FALSE'
set.seed(8)
test <- sample(2, nrow(df), replace = TRUE, prob = c(.6,.4))
training1 <- df[test==1,]
testing2 <- df[test==2,]
I'd like to split up my data into 60-40 training and testing, although I am not sure that this is actually happening. I think the prob argument is not doing what I think it should be doing. I've noticed it does not actually split the data into exactly 60 percent and 40 percent. In the case of the n=10 example, it can result in 7 training / 2 testing, or even 6 training / 4 testing. With my actual larger dataset of ~2000+ observations, it averages out to be pretty close to 60/40 (i.e., 60.3/39.7).
The way you are sampling is bound to result in an undesired/random split size unless the number of observations is huge (formally, the law of large numbers). To make a more deterministic split, decide on the size/number of observations for the training data and use it to sample from nrow(df):
set.seed(8)
# for a 60/40 train/test split
train_indx <- sample(x = 1:nrow(df), size = 0.6*nrow(df), replace = FALSE)
train_df <- df[train_indx,]
test_df <- df[-train_indx,]
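A quick sanity check on a toy data frame (the toy df and its columns are only for illustration) shows that the split sizes are now exact and that no row ends up in both sets:
df <- data.frame(id = 1:10, x = rnorm(10))   # toy data
set.seed(8)
train_indx <- sample(x = 1:nrow(df), size = 0.6*nrow(df), replace = FALSE)
train_df <- df[train_indx, ]
test_df  <- df[-train_indx, ]
c(nrow(train_df), nrow(test_df))             # always 6 and 4 for n = 10
anyDuplicated(c(train_df$id, test_df$id))    # 0: each row is used exactly once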
I recommend splitting the data based on Mankind_008's answer. Since I ran quite a bit of analysis based on the original code, I spent a few hours looking into what it does exactly. The original code:
test <- sample(2, nrow(df), replace = TRUE, prob = c(.6,.4))
Answer from https://www.datacamp.com/community/tutorials/machine-learning-in-r :
"Note that the replace argument is set to TRUE: this means that you assign a 1 or a 2 to a certain row and then reset the vector of 2 to its original state. This means that, for the next rows in your data set, you can either assign a 1 or a 2, each time again. The probability of choosing a 1 or a 2 should not be proportional to the weights amongst the remaining items, so you specify probability weights. Note also that, even though you don’t see it in the DataCamp Light chunk, the seed has still been set to 1234."
One of my main concerns was that the data values themselves were being replaced. Rather, it seems that replacement only allows the 1 and 2 placeholders to be assigned over again based on the probabilities.
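As a small demonstration of that point (a sketch using a toy data frame): the object sampled with replacement is only the vector of 1/2 labels, so no data row can ever be duplicated:
set.seed(8)
df <- data.frame(id = 1:10)                        # ten unique values, as in the test described above
test <- sample(2, nrow(df), replace = TRUE, prob = c(.6, .4))
test                                               # one 1-or-2 label per row, drawn with replacement
training1 <- df[test == 1, , drop = FALSE]
testing2  <- df[test == 2, , drop = FALSE]
anyDuplicated(c(training1$id, testing2$id))        # 0: every original row appears exactly once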
TraMineR, Extract all present combination of events as dummy variables
Let's say I have this data. My objective is to extract combinations of sequences. I have one constraint: the time between two events may not be more than 5; let's call this maxGap.
User <- c(rep(1,3)) # One user
Event <- c("C","B","C") # Say these are random events, could be anything from LETTERS[1:4]
Time <- c(1,12,13) # This is a timeline
df <- data.frame(User=User, Event=Event, Time=Time)
I want to use these sequences as binary explanatory variables for analysis. Given this dataframe, the result should look like this:
res.df <- data.frame(User=1, C=1, B=1, CB=0, BC=1, CBC=0)
(CB) and (CBC) will be 0 since the time gap there exceeds maxGap = 5. I was trying to write a function for this using many for loops, but it becomes very complex if the sequences become longer and the number of different events grows, and also if the number of different users grows to 100 000. Is it possible to do this in TraMineR with the help of seqeconstraint?
Here is how you would do that with TraMineR:
df.seqe <- seqecreate(id=df$User, timestamp=df$Time, event=df$Event)
constr <- seqeconstraint(maxGap=5)
subseq <- seqefsub(df.seqe, minSupport=0, constraint=constr)
(presence <- seqeapplysub(subseq, method="presence"))
which gives
                   (B) (B)-(C) (C)
1-(C)-11-(B)-1-(C)   1       1   1
presence is a table with a column for each subsequence that occurs at least once in the data set. So, if you have several individuals (event sequences), the table will have one row per individual and the columns will be the binary variables you are looking for. (See also TraMineR: Can I get the complete sequence if I give an event sub sequence?)
However, be aware that TraMineR works fine only with subsequences of length up to about 4 or 5. We suggest setting maxK=3 or 4 in seqefsub. The number of individuals should not be a problem, nor should the number of different possible events (the alphabet), as long as you restrict the maximal subsequence length you are looking for.
Hope this helps
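For instance, restricting the search to subsequences of length at most 3, as suggested above, should just be a matter of passing maxK to seqefsub (a sketch reusing the objects defined above):
subseq3 <- seqefsub(df.seqe, minSupport = 0, maxK = 3, constraint = constr)
presence3 <- seqeapplysub(subseq3, method = "presence")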
How to run a function to EACH of my observations in R?
My problem is as follows: I have a dataset of 6000 observations containing information on customers (each observation is one client's information). I'm optimizing a given function (in my case a profit function) in order to find an optimum for my variable of interest. In particular, I'm looking for the optimal interest rate I should offer in order to maximize my expected profits.
I don't have any doubt about my function. The problem is that I don't know how I should proceed in order to apply this function to EACH OBSERVATION in order to obtain an OPTIMAL INTEREST RATE for EACH OF MY 6000 CLIENTS (or observations, as you prefer). Until now, it has been easy to find the UNIQUE optimum (the same for all clients) for this variable that would maximize my profits (this is the global maximum, I guess). But what I need to know is how I should proceed in order to apply my optimization problem to EACH of my 6000 observations, INDIVIDUALLY, in order to have the optimal interest rate to offer to each customer (that is, 6000 optimal interest rates, one for each of them). I guess I should do something similar to a for loop, but my experience in this area is limited, and I'm quite frustrated already. What's more, I've tried to use mapply(myfunction, mydata) as usual, but I only get error messages. This is how my (really) simple code looks now:
profits <- function(Rate) sum((Amount*(Rate-1.2)/100)*
  (1/(1+exp(0.600002438-0.140799335888812*
  ((Previous.Rate - Rate)+(Competition.Rate - Rate))))))
And the results for ONE optimum for the entire sample:
> optimise(profits, lower = 0, upper = 100, maximum = TRUE)
$maximum
[1] 6.644821
$objective
[1] 1347291
So the thing is, how do I rewrite my code in order to maximize this and obtain the optimum of my variable of interest for EACH of my rows? Hope I've been clear! Thank you all in advance!
It appears each of your customers is independent, so you can just put lapply() around the optimise() call:
lapply(customer_list, function(one_customer){
  optimise(profits, lower = 0, upper = 100, maximum = TRUE)
})
This will return a very big list, where each list element has a $maximum and a $objective. You can then run lapply to total the $maximums, to find out just how rich you have become!
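Note that profits() as written in the question sums over the whole sample, so to get per-customer optima it has to be rewritten to use a single customer's values. A possible sketch (profits_one is a made-up name; mydata and the columns Amount, Previous.Rate and Competition.Rate are taken from the question):
# profits for a single customer: same formula as in the question, but using
# one row's values instead of the whole-sample sums
profits_one <- function(Rate, row) {
  (row$Amount * (Rate - 1.2) / 100) *
    (1 / (1 + exp(0.600002438 - 0.140799335888812 *
                    ((row$Previous.Rate - Rate) + (row$Competition.Rate - Rate)))))
}
# one optimisation per observation; each element has $maximum and $objective
results <- lapply(seq_len(nrow(mydata)), function(i) {
  optimise(profits_one, lower = 0, upper = 100, maximum = TRUE, row = mydata[i, ])
})
optimal_rates <- sapply(results, `[[`, "maximum")   # 6000 optimal rates, one per client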
Forming a Wright-Fisher loop with "sample()"
I am trying to create a simple loop to generate a Wright-Fisher simulation of genetic drift with the sample() function (I'm actually not dead set on using this function, but, in my naivety, it seems like the right way to go). I know that sample() randomly selects values from a vector based on certain probabilities. My goal is to create a system that will keep making random selections from successive sets. For example, if it takes some original set of values and samples a second set, I'd like the loop to take another random sample from the second set (using the probabilities that were defined earlier).
I'd like to just learn how to do this in a very general way. Therefore, the specific probabilities and elements are arbitrary at this point. The only things that matter are (1) that every element can be repeated and (2) that the size of the set stays constant across generations, per Wright-Fisher. For an example, I've been playing with the following:
V <- c(1,1,2,2,2,2)
sample(V, size=6, replace=TRUE, prob=c(1,1,1,1,1,1))
Regrettably, my issue is that I don't have any code to share yet, precisely because I'm not sure how to start writing this kind of loop. I know that for() loops are used to repeat a function multiple times, so my guess is to start there. However, from what I've researched about them, it seems that you have to start with a variable (typically i). I don't have any variables in this sampling that seem explicitly obvious, which isn't to say one couldn't be made up.
If you wanted to repeatedly sample from a population with replacement for a total of iter iterations, you could use a for loop:
set.seed(144) # For reproducibility
population <- init.population
for (i in seq_len(iter)) {
  population <- sample(population, replace=TRUE)
}
population
# [1] 1 1 1 1 1 1
Data:
init.population <- c(1, 1, 2, 2, 2, 2)
iter <- 100
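If the eventual goal is to count generations (as in the MRCA question at the top of this page), the same idea can be wrapped in a while loop that runs until the population becomes monomorphic, i.e. until drift fixes a single value. A sketch along those lines (not part of the original answer):
set.seed(144)
population <- c(1, 1, 2, 2, 2, 2)
generations <- 0
while (length(unique(population)) > 1) {   # stop once only one value remains (fixation)
  population <- sample(population, replace = TRUE)
  generations <- generations + 1
}
generations   # how many generations it took in this particular run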