How to do assignment of numbers in R (one machine, n jobs)

I am working on an assignment problem in R. I have the following dataframe:
cycle_time TAT ready_for_next ITV_no
2 10 12 0
4 12 16 0
6 13 19 0
8 11 19 0
10 15 25 0
12 17 29 0
14 13 27 0
16 13 29 0
18 12 30 0
20 16 36 0
22 13 35 0
24 12 36 0
26 15 41 0
28 14 42 0
30 17 47 0
My desired dataframe would be
cycle_time TAT ready_for_next ITV_no wait_time
2 10 12 1 0
4 12 16 2 0
6 13 19 3 0
8 11 19 4 0
10 15 25 5 0
12 17 29 1 0
14 13 27 6 0
16 13 29 2 0
18 12 30 3 1
20 16 36 4 1
22 13 35 5 3
24 12 36 6 3
26 15 41 2 3
28 14 42 3 2
30 17 47 5 5
cycle_time = crane cycle time
TAT (in mins) = turnaround time of the truck
ready_for_next (in mins) = ready to take the next container
ITV_no = ITV number to be assigned to that job
***There are only 6 unique trucks available***
The idea is to assign trucks such that waiting time is minimal.
In the first five observations, trucks 1 to 5 are assigned.
For the next container, i.e. row 6 (at the 12th minute), ITV_no 1 is coming back from its job, so it gets assigned to this job.
At the 7th observation (14th minute) no trucks are available, so we have to bring in a new truck (ITV_no 6).
At the 8th observation (16th minute) ITV_no 2 is coming back from its job, so it gets assigned to this job, and so on.
If no trucks are available, the job has to wait until the nearest truck comes back from its job.
How can I implement this in R?
I have built some logic:
cycle_time <- c(2,4,6,8,10,12,14,16,18,20,22,24,26,28,30)
ITV_no <- c(1,2,3,4,5,6,7)
temp <- c()
TAT <- c(10,12,13,11,15,17,13,13,12,16,13,12,15,14,17)
ready_for_next <- cycle_time + TAT
assignment <- data.frame(cycle_time, TAT, ready_for_next)
assignment$ITV_no <- 0

for (i in 1:nrow(assignment)) {
  for (j in 1:length(ITV_no)) {
    assignment$ITV_no[i] <- ifelse(assignment$cycle_time[i] <= assignment$ready_for_next[j],
                                   ITV_no[j],
                                   NA)  # placeholder: this is where I am stuck
    ## I am not able to update the count of trucks which are already assigned
    ## and which are free to be assigned
  }
}
Logic
1. For the first row, increment ITV_no by 1 and directly assign that truck to the job.
2. Check whether cycle_time is still <= all previous ready_for_next values (e.g. 12); if yes, all earlier trucks are still busy, so increment ITV_no by 1 (new truck); if no, assign the ITV_no of the truck that has already returned (e.g. 1).
e.g.
For row 6, cycle_time (12) is compared with all previous ready_for_next values (25, 19, 19, 16, 12); it finds a match at the first row, so that ITV_no (i.e. 1) is assigned to row 6.
For row 7, cycle_time (14) is compared with the previous ready_for_next values (25, 19, 19, 16); 12 is removed from the comparison because that truck has just been re-assigned. There is no match, so a new truck (ITV_no 6) is assigned to that job.

I have come up with a solution...
It works with the sample data:
rm(list = ls())

df <- data.frame(qc_time = seq(2, 40, 2),
                 itv_tat = c(10,15,12,18,25,19,18,16,14,10,12,15,17,19,13,12,8,15,9,14))

itv_number_vec <- 0       # numbers of the trucks introduced so far
itvno_time <- list()      # itvno_time[[k]] = time at which truck k becomes available again

for (i in 1:nrow(df)) {

  #### Initialisation ####
  if (i == 1) {
    df$itv_available_time[i] <- df$qc_time[i] + df$itv_tat[i]
    itvno_time[[i]] <- df$itv_available_time[i]
    df$delay[i] <- 0
    df$itv_number[i] <- 1
    itv_number_vec <- 1
  }

  if (i != 1) {
    if (df$qc_time[i] >= min(unlist(itvno_time))) {
      # at least one truck has already returned: reuse the first free one
      for (j in 1:length(itvno_time)) {
        if (itvno_time[[j]] <= df$qc_time[i]) {
          df$itv_number[i] <- j
          df$delay[i] <- 0
          df$itv_available_time[i] <- df$qc_time[i] + df$itv_tat[i]
          itvno_time[[j]] <- df$itv_available_time[i]
          break
        }
      }
    } else {
      if (max(itv_number_vec) < 7) {
        # no truck free yet, but the fleet limit (7 in this sample) is not reached: add a new truck
        df$itv_number[i] <- max(itv_number_vec) + 1
        itv_number_vec <- c(itv_number_vec, max(itv_number_vec) + 1)
        df$delay[i] <- 0
        df$itv_available_time[i] <- df$qc_time[i] + df$itv_tat[i]
        itvno_time[[max(itv_number_vec)]] <- df$itv_available_time[i]
      } else {
        # all trucks are out: wait for the one that returns first
        df$delay[i] <- min(unlist(itvno_time)) - df$qc_time[i]
        df$itv_number[i] <- which.min(unlist(itvno_time))
        df$itv_available_time[i] <- df$qc_time[i] + df$itv_tat[i] + df$delay[i]
        itvno_time[[which.min(unlist(itvno_time))]] <- df$itv_available_time[i]
      }
    }
  }
}
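
For reference, here is a more compact sketch of the same earliest-available-truck idea, written against the data from the original question with a 6-truck cap. It is one possible reading of the rule (availability is updated to include any wait, as in the loop above); the function name assign_trucks is only illustrative and it may not reproduce every hand-computed wait_time in the desired table.

assign_trucks <- function(cycle_time, TAT, max_trucks = 6) {
  n <- length(cycle_time)
  free_at <- numeric(0)                 # free_at[k]: time truck k becomes available
  ITV_no <- integer(n)
  wait_time <- numeric(n)
  for (i in seq_len(n)) {
    avail <- which(free_at <= cycle_time[i])
    if (length(avail) > 0) {                      # reuse the first idle truck
      k <- avail[1]
    } else if (length(free_at) < max_trucks) {    # bring in a new truck
      free_at <- c(free_at, 0)
      k <- length(free_at)
    } else {                                      # all trucks out: wait for the earliest return
      k <- which.min(free_at)
      wait_time[i] <- free_at[k] - cycle_time[i]
    }
    free_at[k] <- cycle_time[i] + wait_time[i] + TAT[i]
    ITV_no[i] <- k
  }
  data.frame(cycle_time, TAT, ready_for_next = cycle_time + TAT, ITV_no, wait_time)
}

assign_trucks(cycle_time = seq(2, 30, 2),
              TAT = c(10,12,13,11,15,17,13,13,12,16,13,12,15,14,17))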

Related

Multiple columns in one random effect GLMER

I'm trying to find the variance in an infectivity trait of animals in different herds. Each herd contains a fixed number of offspring from 5 different sires.
Example of data:
Herd  S  C DeltaT  I sire1 I1 sire2 I2 sire3 I3 sire4 I4 sire5 I5
   1 20  0     14  1    13  0    26  0    46  0    71  0    91  1
   1  1  0     14  5    13  1    26  0    46  2    71  1    91  1
  18  4  0     14 13     2  5    52  4    84  2    87  2    98  0
  19 11  3     14 27     2  6    13  7    18  3    46  5    85  6
Herd is the herd name. S is the number of susceptible animals in the herd, C is the number of cases in the time interval, and DeltaT is the time interval length. Sire# is the ID of a sire in the herd, and I# is the number of infected offspring of the corresponding Sire#. This means that the sire ID "13" in column sire1 of the first two rows refers to the same sire as the "13" in sire2 of the last row. Including these 5 sires as one random effect in a glmer from lme4 is getting me into trouble.
I tried:
glmer(data = GLMM_Data,
cbind(C, S-C) ~ (1 | Herd) + (1| (I1 | sire1) + (I2 | sire2) + (I3 | sire3) + (I4 | sire4) + (I5 | sire5)),
offset = log(GLMM_Data$I/nherds * GLMM_Data$DeltaT),
family = binomial(link="cloglog"))
This gave errors. So any help on combining these 10 columns in a single random factor would be more than welcome. Thanks in advance.
p.s. I know my offset, family and the left side of the formula are working since the analysis of susceptibility is working
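
One way to get all sire IDs into a single random-effect term is to reshape the five sire#/I# column pairs into long format first, so that every sire ID sits in one column. The sketch below is only illustrative, not a validated model: it uses base reshape(), keeps nherds from your own code, and repeats each herd-level row once per sire, which is a modelling decision you would need to justify.

library(lme4)

# one row per herd-interval-sire, with a single `sire` column and its infected count
long <- reshape(GLMM_Data, direction = "long",
                varying = list(paste0("sire", 1:5), paste0("I", 1:5)),
                v.names = c("sire", "I_sire"),
                timevar = "slot")
long$sire <- factor(long$sire)   # one factor holding every sire ID

# sketch of a model with a single random intercept per sire (nherds as in your own code)
fit <- glmer(cbind(C, S - C) ~ (1 | Herd) + (1 | sire),
             data = long,
             offset = log(long$I / nherds * long$DeltaT),
             family = binomial(link = "cloglog"))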

How to use apply functions instead of a for loop if you have multiple if conditions to be executed

1st DF:
t.d
V1 V2 V3 V4
1 1 6 11 16
2 2 7 12 17
3 3 8 13 18
4 4 9 14 19
5 5 10 15 20
names(t.d) <- c("ID","A","B","C")
t.d$FinalTime <- c("7/30/2009 08:18:35","9/30/2009 19:18:35","11/30/2009 21:18:35","13/30/2009 20:18:35","15/30/2009 04:18:35")
t.d$InitTime <- c("6/30/2009 9:18:35","6/30/2009 9:18:35","6/30/2009 9:18:35","6/30/2009 9:18:35","6/30/2009 9:18:35")
>t.d
ID A B C FinalTime InitTime
1 1 6 11 16 7/30/2009 08:18:35 6/30/2009 9:18:35
2 2 7 12 17 9/30/2009 19:18:35 6/30/2009 9:18:35
3 3 8 13 18 11/30/2009 21:18:35 6/30/2009 9:18:35
4 4 9 14 19 13/30/2009 20:18:35 6/30/2009 9:18:35
5 5 10 15 20 15/30/2009 04:18:35 6/30/2009 9:18:35
2nd DF:
> s.d
F D E Time
1 10 19 28 6/30/2009 08:18:35
2 11 20 29 8/30/2009 19:18:35
3 12 21 30 9/30/2009 21:18:35
4 13 22 31 01/30/2009 20:18:35
5 14 23 32 10/30/2009 04:18:35
6 15 24 33 11/30/2009 04:18:35
7 16 25 34 12/30/2009 04:18:35
8 17 26 35 13/30/2009 04:18:35
9 18 27 36 15/30/2009 04:18:35
Desired output:
From DF "t.d" I have to calculate the time interval for each row between "FinalTime" and "InitTime" (InitTime will always be less than FinalTime).
Another DF "temp" has to be formed from "s.d", containing only the data within the above time interval; then the most recent values of "F", "D" and "E" have to be taken and attached to the ith row of "t.d" from which the time interval was calculated.
We also have to check whether the newly formed DF "temp" satisfies the following condition,
where 'j' represents the value in each row:
if ((temp$F[j] < 35.5) + (temp$D[j] >= 100) >= 1) {
  temp$Flag <- 1
} else {
  temp$Flag <- 0
}
Originally I have 3 million rows in the dataframe and 20 columns in each DF.
I have solved the above problem using a for loop, but it takes 2 to 3 days because there are so many rows.
(Also, what if I have to add new columns to the resultant DF when multiple conditions are satisfied for a row?)
Can anybody suggest a different technique, like using the apply functions?
My suggestion is:
use lapply over row indices
handle your if branches inside the function
return either your dataframe or NULL
combine everything with rbind
by replacing lapply with mclapply from the 'parallel' package, your code gets executed in parallel.
resultList <- lapply(1:nrow(t.d), function(i){
  # do stuff
  if (condition) {
    return(df)
  } else {
    return(NULL)
  }
})
resultDF <- do.call(rbind, resultList)
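
A fuller, hedged sketch of how that skeleton could be filled in for the task described in the question. It assumes the timestamps parse with the format shown (note that some of the sample dates above contain impossible month values), uses only the column names from the example data, and applies the Flag rule to the most recent matching row; adjust if the rule should be evaluated per row of temp.

# parse the timestamp columns (format assumed from the sample data)
t.d$InitTime  <- as.POSIXct(t.d$InitTime,  format = "%m/%d/%Y %H:%M:%S")
t.d$FinalTime <- as.POSIXct(t.d$FinalTime, format = "%m/%d/%Y %H:%M:%S")
s.d$Time      <- as.POSIXct(s.d$Time,      format = "%m/%d/%Y %H:%M:%S")

resultList <- lapply(seq_len(nrow(t.d)), function(i) {
  # rows of s.d that fall inside the i-th [InitTime, FinalTime] interval
  temp <- s.d[s.d$Time >= t.d$InitTime[i] & s.d$Time <= t.d$FinalTime[i], ]
  if (nrow(temp) == 0) return(NULL)
  latest <- temp[which.max(temp$Time), ]                 # most recent F, D, E
  flag <- as.integer((latest$F < 35.5) + (latest$D >= 100) >= 1)
  cbind(t.d[i, ], latest[, c("F", "D", "E")], Flag = flag)
})
resultDF <- do.call(rbind, resultList)

# for parallel execution, parallel::mclapply() can replace lapply() above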

How to remove row wise sorting in R dataframes

I have written the following code to bind two columns and create a data frame.
complete <- function(directory, id){
  x <- vector()
  y <- vector()
  files <- list.files(directory, full.names = TRUE)
  for(i in id){
    x[i] <- i
    y[i] <- sum(complete.cases(read.csv(files[i])))
  }
  d <- na.omit(data.frame(x, y))
  colnames(d) <- c("id", "nobs")
  rownames(d) <- 1:nrow(d)
  print(d)
}
I have the following test case :
complete("specdata",30:25)
id nobs
1 25 463
2 26 586
3 27 338
4 28 475
5 29 71
6 30 932
I am not able to get the output in the order passed to the function, i.e.
id = 30 as the first value and id = 25 as the last value. How do I disable the automatic sorting by id?
We can change for(i in id) to for(i in seq_along(id)) so that we loop over the positions of 'id', and make the corresponding changes when assigning x[i] and y[i].
complete <- function(directory, id){
  x <- vector()
  y <- vector()
  files <- list.files(directory, full.names = TRUE)
  for(i in seq_along(id)){
    x[i] <- id[i]
    y[i] <- sum(complete.cases(read.csv(files[id[i]])))
  }
  d <- na.omit(data.frame(x, y))
  colnames(d) <- c("id", "nobs")
  rownames(d) <- 1:nrow(d)
  print(d)
}
Testing
complete('specdata', 25:30)
#id nobs
#1 25 4
#2 26 0
#3 27 1
#4 28 1
#5 29 2
#6 30 13
complete('specdata', 30:25)
# id nobs
#1 30 13
#2 29 2
#3 28 1
#4 27 1
#5 26 0
#6 25 4
NOTE: The values are different because the 'specdata' directory that I have is from a previous coursera link. They might have updated the data

Converting an arrival process to a count process in R

I have the data for an arrival process and I want to convert it to a count process. This is what I did:
# rpareto() is not in base R; it is assumed here to come from a package such as
# EnvStats, which uses this location/shape parameterisation
library(EnvStats)

# inter-arrival time in milliseconds
x <- rpareto(100000, location = 10, shape = 1.2)
# arrival time in milliseconds
x.cumsum <- cumsum(x)
# the last arrival
x.max <- max(x.cumsum)
# the time scale for the count data, in this case 1 second
kTimeScale <- 1000
count.length <- ceiling(x.max / kTimeScale)
counts <- rep(0, times = count.length)
for (i in x.cumsum) {
  # note: round() can yield 0 here, which silently drops those arrivals (see the answer below)
  counts[round(i / kTimeScale)] <- counts[round(i / kTimeScale)] + 1
}
This works, but for very large datasets (a few million arrivals) it is slow. I was wondering if there is a better, faster way to do this?
You can do this with table:
countsTable<-table(round(x.cumsum/kTimeScale))
counts[1:10]
## [1] 24 41 1 2 33 26 20 45 36 19
countsTable[1:10]
##
## 0 1 2 3 4 5 6 7 8 9
## 5 24 41 1 2 33 26 20 45 36
The difference is that your function misses the 0 values. The table function won't put in 0 for values where there are no observations but you can do something like this to fix that:
counts2<-rep(0,length(counts)+1)
counts2[as.integer(names(countsTable))+1]<-countsTable
identical(counts,counts2[-1])
## [1] TRUE
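
For millions of arrivals, tabulate() may also be worth a try; this small sketch (not part of the original answer) counts the same rounded, 1-shifted bins and fills empty bins with 0, so it should line up with counts2 above.

counts3 <- tabulate(round(x.cumsum / kTimeScale) + 1, nbins = length(counts) + 1)
all(counts3 == counts2)   # expected TRUE given the same rounding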

Maximum Intermediate Volatility

I have two vectors, a and b. See attached.
a is the signal and is a probability.
b is the absolute percentage change in the next period.
Signalt <- seq(0, 1, 0.05)
I would like to find the maximum absolute return occurring within each intermediate 5%-tile (Signalt) of the a vector. So if it is
0.01, 0.02, 0.03, 0.06, 0.07
then it should calculate the maximum return between
0.01 and 0.02,
0.01 and 0.03,
0.02 and 0.03.
Then move on to 0.06 and 0.07, do the same there, and so on.
The output would then be combined in a matrix or table once the entire sequence has run.
It should follow the index from vectors a and b.
i is an index that is updated by one every time that a crosses into a new percentile, and tau(i) is the bucket associated with the ith cross.
a is the probability vector, which has length tau. This vector should be analysed in its 5%-tiles, with the maximum intermediate absolute return being the output. The price change of the next period is the vector b; it is represented by P in the equation below.
l and m are indices.
Every time Signal moves from one 5% tile to another, we compute the
largest absolute return that occurs between any two intermediate
buckets, until Signal moves to another 5% tile. For example, suppose
that Signal moves into the 85th percentile and 4 volume buckets later
moves into the 90th percentile. We would then calculate absolute
returns between buckets 1 and 2, 1 and 3, 1 and 4, 2 and 3, 2 and 4, 3
and 4. We are interested in the maximum absolute return. We would then
calculate the max return in the following percentile bucket, move on
to the next, which could be an 85th percentile and so on. So we let i
be an index that is updated by 1 every time that Signal moves from one
percentile into another, and τ(i) the bucket associated with the ith
cross.
The equation I am using (the notation might vary slightly) is essentially: within each bucket, take max over all pairs m < l of abs(P[l]/P[m] - 1).
Now my question is how to go about this. Perhaps someone has an intuitive solution to this.
I hope my question is clear.
"a","b"
0,0.013013698630137
0,0.0013522650439487
0,0.00135409614082593
0,0.00203389830508471
0.27804813511593,0.00135317997293627
0.300237801284318,0
0.495965075167796,0.00405405405405412
0.523741892051237,0.000672947510094168
0.558753750296458,0.00202020202020203
0.665762829019002,0.000672043010752743
0.493106479913899,0.000671591672263272
0.344592579573497,0.000672043010752854
0.336263897823707,0.00201748486886366
0.35884763774257,0.00536912751677865
0.23662807979007,0.00133511348464632
0.212636893966841,0.00267379679144386
0.362212830513403,0.000666666666666593
0.319216408413927,0.00333555703802535
0.277670854167344,0
0.310143323100971,0
0.374104373036218,0.00267737617135211
0.190943075221511,0.00268456375838921
0.165770070508112,0.00200803212851386
0.240310208616952,0.00133600534402145
0.212418038918236,0.00200133422281523
0.204282022136019,0.00200534759358306
0.363725074298064,0.000667111407605114
0.451807761954326,0.000666666666666593
0.369296011692801,0.000666222518321047
0.37503495989363,0.0026666666666666
0.323386355686901,0.00132978723404265
0.189216171830472,0.00266311584553924
0.185252052821193,0.00199203187250996
0.174882909380997,0.000662690523525522
0.149291525540782,0.00132625994694946
0.196824215268048,0.00264900662251666
0.164611993131396,0.000660501981505912
0.125470998266484,0.00132187706543285
0.179999532586703,0.00264026402640272
0.368749638521621,0.000658327847267826
0.427799340926225,0
My interpretation of the question
I hope I understand your question correctly. Here is what I understood:
For each row you compute which 5% percentile it belongs to
Whenever that percentile changes, you start a new bucket
All rows from the same bucket result in a single resulting value
If there is only a single row in a bucket, the b value from that row is the resulting value
Otherwise, you compute all abs(b[l]/b[m]-1) where m<l and both belong to the same bucket
Basic answer
Code
This code here does what I describe above:
# read the data (shortened, full data in OP)
d <- read.table(textConnection("a,b
0,0.013013698630137
[…]
0.427799340926225,0
"), sep=",", header=TRUE)
# compute percentile number for each line
d$percentile <- floor(d$a/0.05)*5 + 5
# start a new bucket whenever the percentile changes
d$bucket <- cumsum(c(1, diff(d$percentile) != 0))
# compute a single number for all rows of the same bucket
aggregate(b ~ percentile + bucket, d, function(b) {
if(length(b) == 1) return(b); # special case of only a single row
m <- outer(b, b, function(pm, pl) abs(pl/pm - 1)) # compare all pairs
return(max(m[upper.tri(m)])) # only return pairs with m < l
})
Output
The result will look like this:
percentile bucket b
1 5 1 0.8960891071
2 30 2 0.0013531800
3 35 3 0.0000000000
4 50 4 0.0040540541
5 55 5 0.0006729475
6 60 6 0.0020202020
7 70 7 0.0006720430
8 50 8 0.0006715917
9 35 9 2.0020174849
10 40 10 0.0053691275
11 25 11 1.0026737968
12 40 12 0.0006666667
13 35 13 0.0033355570
14 30 14 0.0000000000
15 35 15 0.0000000000
16 40 16 0.0026773762
17 20 17 0.2520080321
18 25 18 0.5010026738
19 40 19 0.0006671114
20 50 20 0.0006666667
21 40 21 3.0026666667
22 35 22 0.0013297872
23 20 23 0.7511597084
24 15 24 0.0013262599
25 20 25 0.7506605020
26 15 26 0.0013218771
27 20 27 0.0026402640
28 40 28 0.0006583278
29 45 29 0.0000000000
Additional columns
Code
If you also want to know the number of items in each group, then I suggest you use the plyr library:
library(plyr)
aggB <- function(b) {
if(length(b) == 1) return(b)
m <- outer(b, b, function(pm, pl) abs(pl/pm - 1))
return(max(m[upper.tri(m)]))
}
ddply(d, .(bucket), summarise,
percentile = percentile[1], n = length(b), maxr = aggB(b))
Output
This will give you the following result:
bucket percentile n maxr
1 1 5 4 0.8960891071
2 2 30 1 0.0013531800
3 3 35 1 0.0000000000
4 4 50 1 0.0040540541
5 5 55 1 0.0006729475
6 6 60 1 0.0020202020
7 7 70 1 0.0006720430
8 8 50 1 0.0006715917
9 9 35 2 2.0020174849
10 10 40 1 0.0053691275
11 11 25 2 1.0026737968
12 12 40 1 0.0006666667
13 13 35 1 0.0033355570
14 14 30 1 0.0000000000
15 15 35 1 0.0000000000
16 16 40 1 0.0026773762
17 17 20 2 0.2520080321
18 18 25 3 0.5010026738
19 19 40 1 0.0006671114
20 20 50 1 0.0006666667
21 21 40 2 3.0026666667
22 22 35 1 0.0013297872
23 23 20 3 0.7511597084
24 24 15 1 0.0013262599
25 25 20 2 0.7506605020
26 26 15 1 0.0013218771
27 27 20 1 0.0026402640
28 28 40 1 0.0006583278
29 29 45 1 0.0000000000
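For completeness, here is a roughly equivalent sketch with dplyr (assuming it is installed and aggB() is defined as above); this is an alternative, not part of the original answer.

library(dplyr)

d %>%
  group_by(bucket) %>%
  summarise(percentile = first(percentile),
            n = n(),
            maxr = aggB(b))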
I am not sure I understand, but here is an attempt. My idea is to group the data by percentiles and then do the calculation on each group using by.
To group the data I create a new variable, split:
##dat$split <- cut(dat$a,seq(0, 1, 0.05),include.lowest=T)
dat$split <- c(0,cumsum(diff(dat$a) > 0.05))
Using by, I can perform my function on each group. I handle the degenerate cases of NULL prob values or a single value separately.
by(dat, dat$split, FUN = function(x){
  P <- x$b
  if (is.null(P) || length(P) == 1) return(0)
  nn <- length(P)
  ind <- expand.grid(1:nn, 1:nn)           ## generate index pairs here
  ret <- abs(P[ind[,1]]/P[ind[,2]] - 1)    ## perform P_l/P_m - 1 (vectorised)
  list(P = P,
       ret.max = max(ret),
       ret.ind = ind[which.max(ret),])
})
Here is the resulting list. For each interval I show:
P (the prob values),
the maximum return,
the indices from which this maximum is computed.
For example:
dat$split: 0
$P
[1] 0.0130 0.0014 0.0014 0.0020
$ret.max
[1] 8.6236
$ret.ind
Var1 Var2
5 1 2
---------------------------------------------------------------------------------------------------------------
dat$split: 1
$P
[1] 0.0014 0.0000
$ret.max
[1] 1
$ret.ind
Var1 Var2
2 2 1
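
If only the maximum return per group is needed, here is a small sketch assuming the by() result above was stored in an object called res:

# extract ret.max from each group; degenerate groups already hold a plain 0
ret.max <- sapply(res, function(z) if (is.list(z)) z$ret.max else z)
ret.max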
