Shift planning with Linear Programming in R

The book Modeling and Solving Linear Programming with R has a nice example on shift planning in Sec. 3.7. I am unable to reproduce it in R, and I am also not clear about the solution provided in the book.
Problem
A company has an emergency center that operates 24 hours a day. The table below details the minimum number of employees needed for each of the six four-hour shifts into which the day is divided.
Shift           Employees
00:00 - 04:00       5
04:00 - 08:00       7
08:00 - 12:00      18
12:00 - 16:00      12
16:00 - 20:00      15
20:00 - 00:00      10
R solution
I used the following to solve the above.
library(lpSolve)
obj.fun <- c(1, 1, 1, 1, 1, 1)
constr <- c(1, 1, 0, 0, 0, 0,
            0, 1, 1, 0, 0, 0,
            0, 0, 1, 1, 0, 0,
            0, 0, 0, 1, 1, 0,
            0, 0, 0, 0, 1, 1,
            1, 0, 0, 0, 0, 1)
constr.dir <- rep(">=", 6)
constr.val <- c(12, 25, 30, 27, 25, 15)
day.shift <- lp("min", obj.fun, constr, constr.dir, constr.val, compute.sens = TRUE)
And, I get the following result.
> day.shift$objval
[1] 1.666667
> day.shift$solution
[1] 0.000000 1.666667 0.000000 0.000000 0.000000 0.000000
This is nowhere close to the numerical solution mentioned in the book.
Numerical solution
The total number of employees required according to the book's numerical solution is 38. However, since the problem states a defined minimum number of employees for every period, how can this solution be valid?
s1 5
s2 6
s3 12
s4 0
s5 15
s6 0

Your mistake is in how you initialize the variable constr: you never define it as a matrix, so lp() cannot interpret your constraints. The second fault is the content of the constraints themselves (both the matrix rows and the right-hand-side values differ from the book's data). Just look at my example.
I was wondering why you didn't stick to the example in the book, because I wanted to check my solution against it. Mine is based on that example.
library(lpSolve)
obj.fun <- c(1, 1, 1, 1, 1, 1)
constr <- matrix(c(1, 0, 0, 0, 0, 1,
                   1, 1, 0, 0, 0, 0,
                   0, 1, 1, 0, 0, 0,
                   0, 0, 1, 1, 0, 0,
                   0, 0, 0, 1, 1, 0,
                   0, 0, 0, 0, 1, 1), ncol = 6, byrow = TRUE)
constr.dir <- rep(">=", 6)
constr.val <- c(5, 7, 18, 12, 15, 10)
day.shift <- lp("min",obj.fun,constr,constr.dir,constr.val,compute.sens = TRUE)
day.shift$objval
# [1] 38
day.shift$solution
# [1] 5 11 7 5 10 0
EDIT based on your question in the comments:
This is the distribution of the shifts over the periods (the solution x = (5, 11, 7, 5, 10, 0) corresponds to the eight-hour shifts 0-8, 4-12, 8-16, 12-20, 16-24 and 20-4):
shift | 0-4 | 4-8 | 8-12 | 12-16 | 16-20 | 20-24
------------------------------------------------
 0-8  |  5  |  5  |      |       |       |
 4-12 |     | 11  |  11  |       |       |
 8-16 |     |     |   7  |   7   |       |
12-20 |     |     |      |   5   |   5   |
16-24 |     |     |      |       |  10   |  10
20-4  |     |     |      |       |       |
------------------------------------------------
 sum  |  5  | 16  |  18  |  12   |  15   |  10
------------------------------------------------
 need |  5  |  7  |  18  |  12   |  15   |  10
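As a quick sanity check, you can multiply the constraint matrix by the reported solution and confirm that every period meets its minimum staffing need (a base-R sketch; constr, the needs and the solution vector are copied from the answer above):

```r
# Sanity check (sketch): coverage per period must meet the minimum need.
constr <- matrix(c(1, 0, 0, 0, 0, 1,
                   1, 1, 0, 0, 0, 0,
                   0, 1, 1, 0, 0, 0,
                   0, 0, 1, 1, 0, 0,
                   0, 0, 0, 1, 1, 0,
                   0, 0, 0, 0, 1, 1), ncol = 6, byrow = TRUE)
need <- c(5, 7, 18, 12, 15, 10)
x    <- c(5, 11, 7, 5, 10, 0)          # day.shift$solution from above
coverage <- as.vector(constr %*% x)
coverage                # 5 16 18 12 15 10
all(coverage >= need)   # TRUE
sum(x)                  # 38 employees in total
```

Some periods (e.g. 4-8) are over-covered, which is allowed because the constraints are ">=", not "==".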

Related

Subsetting a table in R

In R, I've created a 3-dimensional table from a dataset. The three variables are all factors and are labelled H, O, and S. This is the code I used to simply create the table:
attach(df)
test <- table(H, O, S)
Outputting the flattened table produces this table below. The two values of S were split up, so these are labelled S1 and S2:
ftable(test)
+-----------+-----------+-----+-----+
| H | O | S1 | S2 |
+-----------+-----------+-----+-----+
| Isolation | Dead | 2 | 15 |
| | Sick | 64 | 20 |
| | Recovered | 153 | 379 |
| ICU | Dead | 0 | 15 |
| | Sick | 0 | 2 |
| | Recovered | 1 | 9 |
| Other | Dead | 7 | 133 |
| | Sick | 4 | 20 |
| | Recovered | 17 | 261 |
+-----------+-----------+-----+-----+
The goal is to use this table object, subset it, and produce a second table. Essentially, I want only "Isolation" and "ICU" from H, "Sick" and "Recovered" from O, and only S1, so it basically becomes the 2-dimensional table below:
+-----------+------+-----------+
| | Sick | Recovered |
+-----------+------+-----------+
| Isolation | 64 | 153 |
| ICU | 0 | 1 |
+-----------+------+-----------+
S = S1
I know I could first subset the dataframe and then create the new table, but the goal is to subset the table object itself. I'm not sure how to retrieve certain values from each dimension and produce the reduced table.
Edit: ANSWER
I now found a much simpler method. All I needed to do was reference the specific columns in their respective directions. So a much simpler solution is below:
> test[1:2, 2:3, 1]
           O
H           Sick Recovered
  Isolation   64       153
  ICU          0         1
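Positional indices are fragile if the factor levels ever change order; name-based subsetting works the same way. A sketch using the built-in mtcars data, since the original df isn't available:

```r
# Build a 3-d table and subset it by dimension names rather than positions.
tab <- table(mtcars[, c("cyl", "gear", "vs")])
tab[c("4", "6"), c("3", "4"), "1"]   # cyl 4/6, gear 3/4, vs == 1 only
#    gear
# cyl 3 4
#   4 1 8
#   6 2 2
```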
Subset the data before running table(); for example:
ftable(table(mtcars[, c("cyl", "gear", "vs")]))
# vs 0 1
# cyl gear
# 4 3 0 1
# 4 0 8
# 5 1 1
# 6 3 0 2
# 4 2 2
# 5 1 0
# 8 3 12 0
# 4 0 0
# 5 2 0
# subset then run table
ftable(table(mtcars[ mtcars$gear == 4, c("cyl", "gear", "vs")]))
# vs 0 1
# cyl gear
# 4 4 0 8
# 6 4 2 2

R - create a Panel dataset from 2 Cross sectional datasets

Could you please help me with the following task of creating a panel dataset from two cross-sectional datasets?
Specifically, a small portion of the cross-sectional datasets are:
1) - data1
ID| Yr | D | X
-------------------
1 | 2002 | F | 25
2 | 2002 | T | 27
and 2) data2
ID | Yr | D | X
---------------------
1 | 2003 | T | 45
2 | 2003 | F | 35
And would like to create a panel of the form:
ID | Yr | D | X
-----------------------
1 | 2002 | F | 25
1 | 2003 | T | 45
2 | 2002 | T | 27
2 | 2003 | F | 35
The codes I have tried so far are:
IDvec <- data1[, 1]
ID_panel <- c()
for (i in 1:length(IDvec)) {
  x <- rep(IDvec[i], 2)
  ID_panel <- append(ID_panel, x)
}
Years_panel <- rep(2002:2003, length(IDvec))
But cannot quite figure out how to link the 3rd and 4th columns. Any help would be greatly appreciated. Thank you.
Assuming that you want to concatenate the data frames and then order by ID and Yr, here's a dplyr approach:
library(dplyr)
data1 %>%
  bind_rows(data2) %>%
  arrange(ID, Yr)
ID Yr D X
1 1 2002 F 25
2 1 2003 T 45
3 2 2002 T 27
4 2 2003 F 35
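The same result can be had in base R with rbind() and order(); a self-contained sketch using the small example data from the question:

```r
# Reconstruct the two cross-sections from the question's excerpt.
data1 <- data.frame(ID = c(1, 2), Yr = 2002, D = c("F", "T"), X = c(25, 27))
data2 <- data.frame(ID = c(1, 2), Yr = 2003, D = c("T", "F"), X = c(45, 35))

# Stack them, then sort by ID and year to get the panel layout.
panel <- rbind(data1, data2)
panel <- panel[order(panel$ID, panel$Yr), ]
panel
#   ID   Yr D  X
# 1  1 2002 F 25
# 3  1 2003 T 45
# 2  2 2002 T 27
# 4  2 2003 F 35
```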

Time Series Forecast in R - Weekly Sparse Data with Variable Starting Point

I am working on building a time series forecast model for a problem which involves dataset of manufacturers and their product offerings in a large retail outlet.
The problem is as follows:
- Let's say you have thousands of manufacturers (M1 to Mn).
- There is a retail outlet that takes goods from these manufacturers to sell.
- The manufacturers supply product offerings to the store on a weekly basis (the same product, or new products at the same or a different price; for simplicity, let's say they supply new distinct products) (from week W1 to Wn).
- Each manufacturer started working with the retail outlet on a different date (which can be any time in the past).
- From the time they started working with the retail outlet, the history shows that some manufacturers have supplied products every week and some have been sparse.
- In the dataset below, N/A means they hadn't yet started doing business with the retail outlet (e.g. manufacturer M1 joined the retailer in week W3 of our time window with a first offering count of 10).
     | W1  | W2  | W3  | W4  | W5 | W6 | W7 | W8 | W9 | . | . | Wn
M1   | N/A | N/A | 10  | 0   | 5  | 0  | 10 | 15 | 12 | . | . | 23
M2   | 10  | 5   | 12  | 8   | 4  | 9  | 0  | 0  | 0  | . | . | 4
M3   | 9   | N/A | 0   | 0   | 45 | 45 | 45 | 38 | 12 | . | . | 11
 .   | N/A | N/A | N/A | N/A | 12 | 0  | 10 | 15 | 12 | . | . | 28
 .   | N/A | N/A | N/A | 0   | 5  | 0  | 8  | 15 | 12 | . | . | 12
 .   | 5   | N/A | 60  | 0   | 5  | 0  | 40 | 67 | 23 | . | . | 46
 .   | N/A | N/A | 12  | 9   | 12 | 15 | 10 | 15 | 43 | . | . | 9
Mn   | 0   | N/A | 90  | 78  | 65 | 0  | 10 | 15 | 12 | . | . | 65
Now, for all the manufacturers, I want to forecast from point Wn the supply they would deliver over their next 8 weeks, based on this historical data (i.e. weeks Wn+1 to Wn+8).
I am trying to forecast using the Auto ARIMA and seasonal naive models in R.
library(forecast)
tsfull   <- ts(tsdata, start = c(ts_series_strt_dt, weeknum_strt), freq = 52)
tswindow <- window(tsfull, start = c(ts_series_strt_dt, weeknum_strt),
                   end = c(ts_series_end_dt, weeknum_end))
SN     <- snaive(tswindow, 8)
AR     <- forecast(auto.arima(tswindow), h = 8)  # forecast.Arima() is defunct
etsfr  <- forecast(ets(tswindow), h = 8)
stlffr <- stlf(tswindow, h = 8)
The data contains a lot of zeros, the starting point of each series is different, and there are thousands of manufacturers, so the RMSE varies erratically because each series is unique. I have also tried grouping the manufacturers by the age of their relationship with the outlet. I am not able to decide on the best forecast model for this problem.
I am not an expert in this field. Any thoughts and opinion would be really helpful.
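One way to handle the variable starting points is to trim each manufacturer's leading N/A weeks before building the ts object, so every series begins at its own first offering. A base-R sketch; the values are the M1 row from the table above, and the start week is illustrative:

```r
# M1's history for weeks W1..W9 (NA = not yet doing business).
m1 <- c(NA, NA, 10, 0, 5, 0, 10, 15, 12)

# Drop the leading NA weeks so the series starts at the first real offering.
first <- which(!is.na(m1))[1]
tsm1  <- ts(m1[first:length(m1)], start = c(1, first), frequency = 52)
tsm1   # weekly series starting at week 3, with no leading NAs
```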

Time dependent data in the coxph function of survival package

When considering time-dependent data in survival analysis, you have multiple start-stop intervals for an individual subject, with measurements of the covariates at each interval. How does the coxph function keep track of which subject the start and stop times, along with the covariates, belong to?
The function looks as follows
coxph(Surv(start, stop, event, type) ~ X)
Your data may look as follows
subject | start | stop | event | covariate |
--------+---------+--------+--------+-----------+
1 | 1 | 7 | 0 | 2 |
1 | 7 | 14 | 0 | 3 |
1 | 14 | 17 | 1 | 6 |
2 | 1 | 7 | 0 | 1 |
2 | 7 | 14 | 0 | 1 |
2 | 14 | 21 | 0 | 2 |
3 | 1 | 3 | 1 | 8 |
How can the function get away without an individual subject specifier?
My understanding is that survival analysis is not interested in following individuals through time; it looks at the total counts at each time point, so the subject specifier is irrelevant. Instead, based on those counts, probabilities can be estimated that any particular subject will be alive or dead at a certain time given certain treatments.
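For reference, here is a minimal, self-contained sketch of fitting counting-process (start, stop] data with survival::coxph, using the example table from the question (the covariate name x is made up; with so few rows the fit is only structural, not meaningful):

```r
library(survival)

# The (start, stop] rows from the question's table.
df <- data.frame(
  subject = c(1, 1, 1, 2, 2, 2, 3),
  start   = c(1, 7, 14, 1, 7, 14, 1),
  stop    = c(7, 14, 17, 7, 14, 21, 3),
  event   = c(0, 0, 1, 0, 0, 0, 1),
  x       = c(2, 3, 6, 1, 1, 2, 8)
)

# Each row contributes to the risk set only over (start, stop]; the partial
# likelihood needs no subject id. cluster(subject) could be added if you
# wanted robust standard errors that account for repeated rows per subject.
fit <- coxph(Surv(start, stop, event) ~ x, data = df)
```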

Interpolate variables on subsets of dataframe

I have a large dataframe which has observations from surveys from multiple states for several years. Here's the data structure:
state | survey.year | time1 | obs1 | time2 | obs2
CA | 2000 | 1 | 23 | 1.2 | 43
CA | 2001 | 2 | 43 | 1.4 | 52
CA | 2002 | 5 | 53 | 3.2 | 61
...
CA | 1998 | 3 | 12 | 2.3 | 20
CA | 1999 | 4 | 14 | 2.8 | 25
CA | 2003 | 5 | 19 | 4.3 | 29
...
ND | 2000 | 2 | 223 | 3.2 | 239
ND | 2001 | 4 | 233 | 4.2 | 321
ND | 2003 | 7 | 256 | 7.9 | 387
For each state/survey.year combination, I would like to interpolate obs2 so that its time location lines up with (time1, obs1).
That is, I would like to break the dataframe into state/survey.year chunks, perform my linear interpolation, and then stitch the individual state/survey.year dataframes back together into a master dataframe.
I have been trying to figure out how to use the plyr and Hmisc packages for this, but keep getting myself in a tangle.
Here's the code that I wrote to do the interpolation:
require(Hmisc)
df <- new.obs2 <- NULL
for (i in 1:(0.5 * (ncol(indirect) - 1))) {
  df[, "new.obs2"] <- approxExtrap(df[, "time1"],
                                   df[, "obs1"],
                                   xout = df[, "obs2"],
                                   method = "linear",
                                   rule = 2)
}
But I am not sure how to unleash plyr on this problem. Your generous advice and suggestions would be much appreciated. Essentially, I am just trying to interpolate obs2 within each state/survey.year combination so that its time references line up with those of obs1.
Of course if there's a slick way to do this without invoking plyr functions, then I'd be open to that...
Thank you!
This should be as simple as (note that approxExtrap returns a list, so $y extracts the interpolated values):
ddply(df, .(state, survey.year), transform,
      new.obs2 = approxExtrap(time1, obs1, xout = obs2,
                              method = "linear",
                              rule = 2)$y)
But I can't promise you anything, since I haven't the foggiest idea what the point of your for loop is. (It's overwriting df[,"new.obs2"] each time through the loop? You initialize the entire data frame df to NULL? What's indirect?)
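A base-R sketch of the same split-apply-combine, without plyr. Two assumptions to flag: the toy data frame here is made up from the question's excerpt, and base approx(..., rule = 2) clamps values at the ends instead of extrapolating linearly the way Hmisc::approxExtrap does. The call interpolates obs2 (measured at time2) onto the time1 grid, which is how I read the question's stated goal:

```r
# Toy data modeled on the question's table (values are illustrative).
df <- data.frame(
  state       = rep(c("CA", "ND"), each = 3),
  survey.year = rep(2000, 6),
  time1       = c(1, 2, 5, 2, 4, 7),
  obs1        = c(23, 43, 53, 223, 233, 256),
  time2       = c(1.2, 1.4, 3.2, 3.2, 4.2, 7.9),
  obs2        = c(43, 52, 61, 239, 321, 387)
)

# Split into state/survey.year chunks, interpolate within each, recombine.
pieces <- split(df, interaction(df$state, df$survey.year, drop = TRUE))
out <- do.call(rbind, lapply(pieces, function(d) {
  d$new.obs2 <- approx(d$time2, d$obs2, xout = d$time1, rule = 2)$y
  d
}))
```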
