I am trying to build a kNN model using caret with my dataset, where True (real sales), DOW (day of the week), and D1 to D10 (historic sales) are available.
library(caret)
library(reshape2)
library(dplyr)
library(tibble)
library(dummies)
#data
rm = matrix(rnorm(100*10, 10, 5), nrow = 100) %>% as.data.frame()
wide = cbind(
rnorm(100, 100, 1),
weekdays(seq(as.Date('2019/1/1'), by='day', length.out = 100)),
rm
)
colnames(wide) = c('true', 'DOW', paste0('D',1:10))
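Note that train and test are used below but never defined in the post; presumably a split along these lines was intended (an assumption on my part, not from the original post):
# assumed train/test split: first 90 rows for training, last 10 for testing
train = wide[1:90, ]
test = wide[91:100, ]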
#preprocessing for knn
train.true = train[,1]
dow.tr = dummy(train$DOW, sep='.')
dow.te = dummy(test$DOW, sep='.')
k.train = cbind(train[, -c(2, nearZeroVar(train))], dow.tr)
k.test = cbind(test, dow.te)[,-2]
seq.knn.pre1 = rep(0, nrow(test))
for (i in 1:10){
  this.train = k.train[, (i + 1):ncol(k.train)]
  this.test = k.test[i, (i + 1):ncol(k.test)]
  train.control = trainControl(method = 'repeatedcv', number = 10, repeats = 1)
  k = train(train.true ~ ., method = 'knn', tuneLength = 8,
            trControl = train.control, preProcess = 'scale',
            data = data.frame(train.true, this.train))
  seq.knn.pre1[i] = predict(k, this.test)
}
seq.knn.pre1 = cbind(true = test[,1], k.pred1 = seq.knn.pre1) %>% data.frame()
However, when I knit the file, it gives me the error: object 'X.Rachel.Documents.Research.file.Rmd.Friday' not found Calls: <Anonymous> ... predict.train -> model.frame -> model.frame.default -> eval -> eval Execution halted.
I am guessing the problem comes from the DOW dummy variables: when my simulated dataset does not include categorical variables, the code knits fine. Is there any way I can fix this?
Any suggestion is highly appreciated!
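One possible workaround (untested here, and swapping dummies::dummy() for base R's model.matrix()) is to build the indicator columns with explicit, stable names, so they cannot pick up the file path when knitting:
# model.matrix() builds 0/1 indicator columns whose names depend only on the factor levels
dow.tr = model.matrix(~ DOW - 1, data = train)
dow.te = model.matrix(~ DOW - 1, data = test)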
I am trying to save a ggsurvplot with a risk.table using ggsave. However, the output of ggsave is always just the risk.table. I also tried this and this; neither works.
library(data.table)
library(survival)
library(survminer)
OS <- c(c(1:100), seq(1, 75, length = 50), c(1:50))
dead <- rep(1, times = 200)
variable <- c(rep(0, times = 100), rep(1, times = 50), rep(2, times = 50))
dt <- data.table(OS = OS,
dead = dead,
variable = variable)
survfit <- survfit(Surv(OS, dead) ~ variable, data = dt)
ggsurvplot(survfit, data = dt,
risk.table = TRUE)
ggsave("test.png")
The main issue is that a ggsurvplot object is a list of plots. Hence, when using ggsave, only the last plot or element of the list is saved.
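You can see this structure by inspecting the object (element names may vary by survminer version):
p <- ggsurvplot(survfit, data = dt, risk.table = TRUE)
names(p) # includes "plot" and "table", among others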
There is already a GitHub issue on that topic with several workarounds; using one of the more recent suggestions, the following works fine for me:
library(survival)
library(survminer)
OS <- c(c(1:100), seq(1, 75, length = 50), c(1:50))
dead <- rep(1, times = 200)
variable <- c(rep(0, times = 100), rep(1, times = 50), rep(2, times = 50))
dt <- data.frame(OS = OS,
dead = dead,
variable = variable)
survfit <- survfit(Surv(OS, dead) ~ variable, data = dt)
# add a grid.draw method so that ggsave draws the full object (plot + risk table)
grid.draw.ggsurvplot <- function(x){
  survminer:::print.ggsurvplot(x, newpage = FALSE)
}
p <- ggsurvplot(survfit, data = dt, risk.table = TRUE)
ggsave("test.png", p, height = 6, width = 6)
I am using the Prophet model to forecast revenue for my company, and one of the challenges I currently face is modifying the code to leverage the hyperparameter tuning features for monthly data. From my understanding, the code on the FB Prophet site is designed to tune on daily data, not monthly. However, I have read somewhere (can't seem to find the post) that it can be tweaked for monthly data.
Has anyone been able to figure this out? Would love some help! I'm not a programmer and have been leveraging low-code platforms to build this out, so I would really appreciate a fellow coder's help in solving this issue!
Here's the code that I'm using:
# Conditional install: install a package if it is missing, then load it
cond.install <- function(package.name){
  options(repos = "http://cran.rstudio.com") # set repo
  # check for the package in the library; install it first if missing
  if (!package.name %in% rownames(installed.packages())) {
    install.packages(package.name, .libPaths()[2])
  }
  require(package.name, character.only = TRUE)
}
# conditionally install package
cond.install('forecast')
cond.install('prophet')
cond.install('rBayesianOptimization')
cond.install('dplyr')
cond.install('lubridate')
library(dplyr)
library(lubridate)
library(forecast)
library(prophet)
library(rBayesianOptimization)
#reading data
cv_set <- read.Alteryx("#1", mode="data.frame")
valid <- read.Alteryx("#2", mode="data.frame")
#make sure the date format is defined
cv_set$ds <- as.Date(cv_set$ds)
date_seq <- as.Date(valid$ds)
# define the hyperparameter search grid
rand_search_grid = data.frame(
changepoint_prior_scale = sort(runif(10, 0.01, 20)),
seasonality_prior_scale = c(sort(sample(c(runif(5, 0.01, 0.05), runif(5, 1, 20)), 5, replace = F)),
sort(sample(c(runif(5, 0.01, 0.05), runif(5, 1, 20)), 5, replace = F))),
n_changepoints = sample(5:50, 10, replace = F)
)
# Define the default objective function for Prophet (change growth from 'linear' to 'logistic' if using a cap setting)
prophet_fit_bayes = function(changepoint_prior_scale, seasonality_prior_scale, n_changepoints) {
  error = c()
  for (d in date_seq) {
    train = subset(cv_set, ds < d)
    test = subset(cv_set, ds == d)
    m = prophet(train, growth = 'linear',
                seasonality.prior.scale = seasonality_prior_scale,
                changepoint.prior.scale = changepoint_prior_scale,
                n.changepoints = n_changepoints,
                weekly.seasonality = F,
                daily.seasonality = F)
    future = make_future_dataframe(m, periods = 1)
    # NOTE: there is a function-name clash with library(caret), hence the forecast:: prefix below
    forecast = predict(m, future)
    forecast$ds = as.Date(forecast$ds)
    error_d = forecast::accuracy(forecast[forecast$ds %in% test$ds, 'yhat'], test$y)[, 'MAPE']
    error = c(error, error_d)
  }
  ## BayesianOptimization wants to _maximize_ the outcome, so we return
  ## the negative of the resampled MAPE value. `Pred` can be used
  ## to return predicted values, but we'll avoid that and use zero.
  list(Score = -mean(error), Pred = 0)
}
changepoint_bounds = range(rand_search_grid$changepoint_prior_scale)
n_changepoint_bounds = as.integer(range(rand_search_grid$n_changepoints))
seasonality_bounds = range(rand_search_grid$seasonality_prior_scale)
bayesian_search_bounds = list(changepoint_prior_scale = changepoint_bounds,
seasonality_prior_scale = seasonality_bounds,
n_changepoints = as.integer(n_changepoint_bounds))
# rBayesianOptimization parameters. Assume n_iter is 1 for demo purposes
ba_search = BayesianOptimization(prophet_fit_bayes,
bounds = bayesian_search_bounds,
init_grid_dt = rand_search_grid,
init_points = 1,
n_iter = %Question.iteration.var%,
acq = 'ucb',
kappa = 1,
eps = 0,
verbose = TRUE)
best_params_ba = c(ba_search$Best_Par)
#Start Prophet
# Holiday Setting
custom1 <- data_frame(
holiday = 'custom1',
ds = as.Date(c('1991-12-31')))
custom2 <- data_frame(
holiday = 'custom2',
ds = as.Date(c('1992-12-31', '1993-01-01')))
holidays <- bind_rows(custom1, custom2)
if ('%Question.noholiday.var%' == "True") {
m = prophet(cv_set, growth = 'linear',
seasonality.prior.scale = best_params_ba[['seasonality_prior_scale']],
changepoint.prior.scale = best_params_ba[['changepoint_prior_scale']],
n.changepoints = best_params_ba[['n_changepoints']])
}
if ('%Question.holiday.var%' == "True") {
m <- prophet(holidays = holidays, growth = 'linear',
seasonality.prior.scale = best_params_ba[['seasonality_prior_scale']],
changepoint.prior.scale = best_params_ba[['changepoint_prior_scale']],
n.changepoints = best_params_ba[['n_changepoints']])
m <- add_country_holidays(m, country_name = '%Question.country.var%')
m <- fit.prophet(m, cv_set)
}
future <- make_future_dataframe(m, periods = %Question.forecast.var%)
forecast <- predict(m, future)
yhat <- as.data.frame(forecast$yhat)
yhat_l <- as.data.frame(forecast$yhat_lower)
yhat_u <-as.data.frame(forecast$yhat_upper)
trend <- as.data.frame(forecast$trend)
df1 <- cbind(yhat, yhat_l, yhat_u, trend)
write.Alteryx(df1, 1)
AlteryxGraph(3, width=576, height=576)
plot(m, forecast) + add_changepoints_to_plot(m)
invisible(dev.off())
AlteryxGraph(4, width=576, height=576)
prophet_plot_components(m, forecast)
invisible(dev.off())
#Output best params for reference
df5 <- best_params_ba
write.Alteryx(df5, 5)
You can specify a custom seasonality. So you would just define a custom seasonality called monthly and set the period length. You can view the documentation here.
# R
m <- prophet(weekly.seasonality = FALSE)
m <- add_seasonality(m, name = 'monthly', period = 30.5, fourier.order = 5)
m <- fit.prophet(m, df)
future <- make_future_dataframe(m, periods = 12, freq = 'month') # e.g. forecast 12 months ahead at monthly frequency
forecast <- predict(m, future)
prophet_plot_components(m, forecast)
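For the tuning loop in the question, a further monthly-specific tweak (my suggestion, not from the Prophet docs) is to build the one-step-ahead future data frame at monthly frequency so the forecast lands on the next month:
# inside prophet_fit_bayes(), make the future frame step by months rather than days
future = make_future_dataframe(m, periods = 1, freq = 'month')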
I am using RStudio and trying to knit a file. The code chunk below runs fine as a chunk but throws an error when I try to knit the file.
tree.corolla <- rpart(Price ~ ., data = toyota.corolla.df, control = rpart.control(maxdepth = 5), method = "anova")
The error I am getting is:
Error in -train : invalid argument to unary operator
Calls: ... eval -> predict -> predict.rpart -> [ -> [.data.frame
I am using the ToyotaCorolla.csv dataset that is available here:
https://pitt.box.com/s/e0rhjtba8az85epqus9xu85e4q6zxuts
The entire code chunk is below:
#install.packages("rpart")
#install.packages("rpart.plot")
#install.packages("gbm")
#install.packages("randomForest")
#install.packages("dummies")
library(randomForest)
library(gbm)
library(rpart)
library(rpart.plot)
library(tree)
library(ISLR)
library(dummies)
library(adabag)
library(rpart)
library(caret)
toyota.corolla.df <- read.csv("ToyotaCorolla.csv")
#View(toyota.corolla.df)
# randomly generate training and validation sets
toyota.corolla.df <- toyota.corolla.df[ , -c(1, 2, 5, 6)]
toyota.corolla.df <- cbind(toyota.corolla.df, dummy(toyota.corolla.df$Fuel_Type, sep = "_"))
toyota.corolla.df <- cbind(toyota.corolla.df, dummy(toyota.corolla.df$Color, sep = "_"))
toyota.corolla.df <- toyota.corolla.df[ , -c(4, 7)]
set.seed(123)
inTraining <- createDataPartition(toyota.corolla.df$Price, p = .60, list = FALSE)
training <- toyota.corolla.df[ inTraining,]
testing <- toyota.corolla.df[-inTraining,]
tree.corolla <- rpart(Price ~ ., data = toyota.corolla.df, control = rpart.control(maxdepth = 5), method = "anova")
summary(tree.corolla)
plot(tree.corolla)
text(tree.corolla,pretty=0)
cv.corolla = trainControl(method = "repeatedcv", number = 10, repeats = 10)
prp(tree.corolla, type = 1, extra = 1, split.font = 1, varlen = -10)
yhat = predict(tree.corolla, newdata = toyota.corolla.df[-train, ])
corolla.test = toyota.corolla.df[-train, "Price"]
plot(yhat, corolla.test)
abline(0, 1)
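A likely cause (my reading, not confirmed in the original post): the chunk never defines a train index. It builds training and testing with createDataPartition, while library(caret) exports a function named train, so in a fresh knitr session -train applies unary minus to that function, which matches the reported error. Using the held-out partition directly avoids it:
# use the 'testing' partition created above instead of the undefined 'train' index
yhat = predict(tree.corolla, newdata = testing)
corolla.test = testing$Price
plot(yhat, corolla.test)
abline(0, 1)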
I'm new to R and ML but have a focused question that I am trying to answer.
I'm using my own data but following Matt Dancho's example here to predict attrition: http://www.business-science.io/business/2017/09/18/hr_employee_attrition.html
I have removed zero variance and scaled variables as per his update.
My issue is with the lime::explain() step on the explainer. I get variations of the two errors below: the first from the original code, the second from my modified call. Everything else runs up to that point.
explanation <- lime::explain(
as.data.frame(test_h2o[1:10,-1]),
explainer = explainer,
n_labels = 1,
n_features = 4,
kernel_width = 0.5)
gives:
Error during wrapup: arguments imply differing number of rows: 50000, 0
While
explanation <- lime::explain(
as.data.frame(test_h2o[1:500,-1]),
explainer = explainer,
n_labels = 1,
n_features = 5,
kernel_width = 1)
Gives:
ERROR: Unexpected HTTP Status code: 500 Server Error (url = http://localhost:54321/3/PostFile?destination_frame=C%3A%2FUsers%2Fsim.s%2FAppData%2FLocal%2FTemp%2FRtmpykNkl1%2Ffileb203a8d4a58.csv_sid_afd3_26)
Error: lexical error: invalid char in json text.
<html> <head> <meta http-equiv=
(right here) ------^
Please let me know if you have any ideas or insights for this problem, or need additional info from me.
Try this and let me know what you get. Note that this assumes your Excel file is stored in a folder called "data" in your working directory. Use getwd() and setwd() to get/set the working directory (or use Projects in the RStudio IDE).
library(h2o) # Professional grade ML pkg
library(tidyquant) # Loads tidyverse and several other pkgs
library(readxl) # Super simple excel reader
library(lime) # Explain complex black-box ML models
library(recipes) # Preprocessing for machine learning
hr_data_raw_tbl <- read_excel(path = "data/WA_Fn-UseC_-HR-Employee-Attrition.xlsx")
hr_data_organized_tbl <- hr_data_raw_tbl %>%
mutate_if(is.character, as.factor) %>%
select(Attrition, everything())
recipe_obj <- hr_data_organized_tbl %>%
recipe(formula = Attrition ~ .) %>%
step_rm(EmployeeNumber) %>%
step_zv(all_predictors()) %>%
step_center(all_numeric()) %>%
step_scale(all_numeric()) %>%
prep(data = hr_data_organized_tbl)
hr_data_bake_tbl <- bake(recipe_obj, newdata = hr_data_organized_tbl)
h2o.init()
hr_data_bake_h2o <- as.h2o(hr_data_bake_tbl)
hr_data_split <- h2o.splitFrame(hr_data_bake_h2o, ratios = c(0.7, 0.15), seed = 1234)
train_h2o <- h2o.assign(hr_data_split[[1]], "train" ) # 70%
valid_h2o <- h2o.assign(hr_data_split[[2]], "valid" ) # 15%
test_h2o <- h2o.assign(hr_data_split[[3]], "test" ) # 15%
y <- "Attrition"
x <- setdiff(names(train_h2o), y)
automl_models_h2o <- h2o.automl(
x = x,
y = y,
training_frame = train_h2o,
validation_frame = valid_h2o,
leaderboard_frame = test_h2o,
max_runtime_secs = 15
)
automl_leader <- automl_models_h2o@leader
explainer <- lime::lime(
as.data.frame(train_h2o[,-1]),
model = automl_leader,
bin_continuous = FALSE
)
explanation <- lime::explain(
x = as.data.frame(test_h2o[1:10,-1]),
explainer = explainer,
n_labels = 1,
n_features = 4,
n_permutations = 500,
kernel_width = 1
)
explanation
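If the explain() step succeeds, the result can be visualized with lime's plot_features():
# plot the top features driving each of the explained predictions
lime::plot_features(explanation)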
I am trying to reproduce an example from N.D. Lewis, Neural Networks for Time Series Forecasting with R. If I include the device argument, I get the error:
Error in mx.opt.sgd(...) :
unused argument (device = list(device = "cpu", device_id = 0, device_typeid = 1))
In addition: Warning message:
In mx.model.select.layout.train(X, y) :
Auto detect layout of input matrix, use rowmajor..
If I remove this parameter, I still get this warning:
Warning message:
In mx.model.select.layout.train(X, y) :
Auto detect layout of input matrix, use rowmajor..
The code is:
library(zoo)
library(quantmod)
library(mxnet)
# data
data("ecoli", package = "tscount")
data <- ecoli$cases
data <- as.zoo(ts(data, start = c(2001, 1), end = c(2013, 20), frequency = 52))
xorig <- do.call(cbind, lapply((1:4), function(x) as.zoo(Lag(data, k = x))))
xorig <- cbind(xorig, data)
xorig <- xorig[-(1:4), ]
# min-max normalization to [0, 1]
range_data <- function(x) {
  (x - min(x)) / (max(x) - min(x))
}
xnorm <- data.matrix(xorig)
xnorm <- range_data(xnorm)
# test/train
y <- xnorm[, 5]
x <- xnorm[, -5]
n_train <- 600
x_train <- x[(1:n_train), ]
y_train <- y[(1:n_train)]
x_test <- x[-(1:n_train), ]
y_test <- y[-(1:n_train)]
# mxnet:
mx.set.seed(2018)
model1 <- mx.mlp(x_train,
y_train,
hidden_node = c(10, 2),
out_node = 1,
activation = "sigmoid",
out_activation = "rmse",
num.round = 100,
array.batch.size = 20,
learning.rate = 0.07,
momentum = 0.9
#, device = mx.cpu()
)
pred1_train <- predict(model1, x_train, ctx = mx.cpu())
How can I fix this?
Regarding the second warning message: MXNet is trying to detect the row/column-major layout based on the shape of your inputs: https://github.com/apache/incubator-mxnet/blob/424143ac47ab3a38ae8aedaeb3319379887de0bc/R-package/R/model.R#L329
For the unused argument error with device = mx.cpu(), the argument name should be ctx instead of device.
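A minimal sketch of the corrected call (assuming the rest of the script above is unchanged):
# pass the compute context as 'ctx'; mx.mlp forwards it to the underlying model builder
model1 <- mx.mlp(x_train,
                 y_train,
                 hidden_node = c(10, 2),
                 out_node = 1,
                 activation = "sigmoid",
                 out_activation = "rmse",
                 num.round = 100,
                 array.batch.size = 20,
                 learning.rate = 0.07,
                 momentum = 0.9,
                 ctx = mx.cpu())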