R: make nomogram plot label italic

I am drawing a nomogram based on the dataset PimaIndiansDiabetes using the package rms:
library(mlbench)
data(PimaIndiansDiabetes)
library(rms)
ddist <- datadist(PimaIndiansDiabetes)
options(datadist='ddist')
lrm_model <- lrm(diabetes ~ .,
                 data = PimaIndiansDiabetes,
                 x = TRUE, y = TRUE)
nom <- nomogram(lrm_model,
                fun = plogis,
                fun.at = c(0.01, 0.05, seq(0.1, 0.9, by = 0.2), 0.95, 0.99),
                abbrev = TRUE,
                lp = FALSE,
                vnames = "labels",
                varname.label = TRUE,
                funlabel = "Diabetes")
plot(nom,
     fun.side = c(1, 3, 1, 3, 1, 3, 1, 3, 1),
     label.every = 1,
     lmgp = 0.15,
     xfrac = 0.4)
The resulting plot is not reproduced here.
How do I make the axis names italic? (I would actually prefer it if only the 8 predictors' names were italic.)
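One untested idea, offered only as a sketch: plot.nomogram draws its labels with base graphics, so setting the default text font to italic with par(font = 3) before plotting may italicize the labels. Note this affects all text drawn by text(), so it would not restrict the change to just the 8 predictor names.
op <- par(font = 3)   # 3 = italic; applies to all base-graphics text, not only predictor names
plot(nom,
     fun.side = c(1, 3, 1, 3, 1, 3, 1, 3, 1),
     label.every = 1,
     lmgp = 0.15,
     xfrac = 0.4)
par(op)               # restore the previous font setting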


R: converting grob objects to ggplot/plotly [duplicate]

I am working with the R programming language. I am trying to convert a "grob" object into a "ggplot" object (the goal is eventually to convert the ggplot object into a "plotly" object).
I am looking for the simplest way to convert the "grob" to "ggplot": the computer I am using has no USB port or internet connection, and it only has R with some preloaded libraries (e.g. ggplot2, ggpubr).
In my example, I generate some data, run a statistical model ("random forest") and plot the results on compressed axes (t-SNE). The code below can be copy/pasted into R, and the resulting plot ("final_plot") is the object that I want to convert to "ggplot":
library(cluster)
library(Rtsne)
library(dplyr)
library(randomForest)
library(caret)
library(ggplot2)
library(plotly)
#PART 1 : Create Data
#generate 4 random variables : response_variable ~ var_1 , var_2, var_3
var_1 <- rnorm(10000,1,4)
var_2<-rnorm(10000,10,5)
var_3 <- sample( LETTERS[1:4], 10000, replace=TRUE, prob=c(0.1, 0.2, 0.65, 0.05) )
response_variable <- sample( LETTERS[1:2], 10000, replace=TRUE, prob=c(0.4, 0.6) )
#put them into a data frame called "f"
f <- data.frame(var_1, var_2, var_3, response_variable)
#declare var_3 and response_variable as factors
f$response_variable = as.factor(f$response_variable)
f$var_3 = as.factor(f$var_3)
#create id
f$ID <- seq_along(f[,1])
#PART 2: random forest
#split data into train set and test set
index = createDataPartition(f$response_variable, p=0.7, list = FALSE)
train = f[index,]
test = f[-index,]
#create random forest statistical model
rf = randomForest(response_variable ~ var_1 + var_2 + var_3, data=train, ntree=20, mtry=2)
#have the model predict the test set
pred = predict(rf, test, type = "prob")
labels = as.factor(ifelse(pred[,2]>0.5, "A", "B"))
confusionMatrix(labels, test$response_variable)
#PART 3: Visualize in 2D (source: https://dpmartin42.github.io/posts/r/cluster-mixed-types)
gower_dist <- daisy(test[, -c(4, 5)],
                    metric = "gower")
gower_mat <- as.matrix(gower_dist)
labels = data.frame(labels)
labels$ID = test$ID
tsne_obj <- Rtsne(gower_dist, is_distance = TRUE)
tsne_data <- tsne_obj$Y %>%
  data.frame() %>%
  setNames(c("X", "Y")) %>%
  mutate(cluster = factor(labels$labels),
         name = labels$ID)
plot = ggplot(aes(x = X, y = Y), data = tsne_data) +
  geom_point(aes(color = labels$labels))
plotly_plot = ggplotly(plot)
a = tsne_obj$Y
a = data.frame(a)
data = a
data$class = labels$labels
decisionplot <- function(model, data, class = NULL, predict_type = "class",
                         resolution = 100, showgrid = TRUE, ...) {
  if(!is.null(class)) cl <- data[,class] else cl <- 1
  data <- data[,1:2]
  k <- length(unique(cl))
  plot(data, col = as.integer(cl)+1L, pch = as.integer(cl)+1L, ...)
  # make grid
  r <- sapply(data, range, na.rm = TRUE)
  xs <- seq(r[1,1], r[2,1], length.out = resolution)
  ys <- seq(r[1,2], r[2,2], length.out = resolution)
  g <- cbind(rep(xs, each = resolution), rep(ys, time = resolution))
  colnames(g) <- colnames(r)
  g <- as.data.frame(g)
  ### guess how to get class labels from predict
  ### (unfortunately not very consistent between models)
  p <- predict(model, g, type = predict_type)
  if(is.list(p)) p <- p$class
  p <- as.factor(p)
  if(showgrid) points(g, col = as.integer(p)+1L, pch = ".")
  z <- matrix(as.integer(p), nrow = resolution, byrow = TRUE)
  contour(xs, ys, z, add = TRUE, drawlabels = FALSE,
          lwd = 2, levels = (1:(k-1))+.5)
  invisible(z)
}
model <- randomForest(class ~ ., data = data, mtry = 2, ntree = 500)
#this is the final plot
final_plot = decisionplot(model, data, class = "class", main = "rf (1)")
From here, I am trying to convert this object ("final_plot") into a ggplot object:
library(ggpubr)
final = ggpubr::as_ggplot(final_plot)
But this gives me the following error:
Error in gList(...) : only 'grobs' allowed in "gList"
From here, I eventually would have wanted to use this command to convert the ggplot into a plotly object:
plotly_plot = ggplotly(final)
Does anyone know if there is a straightforward way to convert "final_plot" into a ggplot object? (and then plotly)? I don't have the ggplotify library.
Thanks
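One possible workaround, sketched here under the assumption that only ggplot2 and plotly are available: since decisionplot() draws with base graphics (which ggpubr::as_ggplot() cannot convert), you could rebuild the decision surface directly as a ggplot and then hand that to ggplotly(). The grid/contour logic below mirrors decisionplot(); the names grid_df and gg are just illustrative, and the code reuses model and data from above (columns X1, X2, class).
# Rebuild the decision plot with ggplot2 instead of converting the base-graphics plot
resolution <- 100
r <- sapply(data[, 1:2], range, na.rm = TRUE)
grid_df <- expand.grid(X1 = seq(r[1, 1], r[2, 1], length.out = resolution),
                       X2 = seq(r[1, 2], r[2, 2], length.out = resolution))
grid_df$pred <- predict(model, grid_df)   # predicted class for every grid point
grid_df$z <- as.integer(grid_df$pred)     # numeric code used to draw the boundary
gg <- ggplot() +
  geom_point(data = grid_df, aes(X1, X2, color = pred), size = 0.2, alpha = 0.3) +
  geom_contour(data = grid_df, aes(X1, X2, z = z), breaks = 1.5, colour = "black") +
  geom_point(data = data, aes(X1, X2, color = class, shape = class)) +
  ggtitle("rf (1)")
# gg is a real ggplot object, so it should work with plotly:
# plotly_plot <- ggplotly(gg)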

The method of all possible regressions in R

I want to determine which variables are best suited for the model. To do this, I use the method of all possible regressions, i.e. build models with all possible combinations of predictors. For example:
library(fpp2)
# uschange is dataset from "fpp2" package
train <- ts(uschange[1:180, ], start = 1970, frequency = 5)
fit1 <- tslm(Consumption ~ Income, data = train)
fit2 <- tslm(Consumption ~ Production, data = train)
fit3 <- tslm(Consumption ~ Savings, data = train)
fit4 <- tslm(Consumption ~ Unemployment, data = train)
fit5 <- tslm(Consumption ~ Income + Production, data = train)
fit6 <- tslm(Consumption ~ Income + Savings, data = train)
# and so on...
After that, I need to evaluate the models in two ways:
test <- data.frame(
  Income = uschange[181:187, 2],
  Production = uschange[181:187, 3],
  Savings = uschange[181:187, 4],
  Unemployment = uschange[181:187, 5]
)
y <- uschange[181:187, 1]
CV(fit1)
accuracy(forecast(fit1, test), y)
CV(fit2)
accuracy(forecast(fit2, test), y)
CV(fit3)
accuracy(forecast(fit3, test), y)
# and so on...
As a result, I want to get a model with the smallest value of AICc from CV() and with the smallest error value (for example MAE from accuracy()).
How can I do this automatically?
EDIT:
> dput(head(uschange, 20))
structure(c(0.615986218, 0.46037569, 0.876791423, -0.274245141,
1.897370758, 0.911992909, 0.794538845, 1.648587467, 1.313722178,
1.891474954, 1.530714, 2.318294715, 1.81073916, -0.041739961,
0.354235565, -0.291632155, -0.877027936, 0.351135548, 0.409597702,
-1.475808634, 0.972261043, 1.169084717, 1.55327055, -0.255272381,
1.987153628, 1.447334175, 0.531811929, 1.160125137, 0.457011505,
1.016624409, 1.904101264, 3.890258661, 0.708252663, 0.79430954,
0.433818275, 1.093809792, -1.661684821, -0.938353209, 0.094487794,
-0.122595985, -2.452700312, -0.551525087, -0.358707862, -2.185454855,
1.90973412, 0.901535843, 0.308019416, 2.291304415, 4.149573867,
1.89062398, 1.273352897, 3.436892066, 2.799076357, 0.817688618,
0.868996932, 1.472961869, -0.882483578, 0.074279194, -0.41314971,
-4.064118932, 4.810311502, 7.287992337, 7.289013063, 0.985229644,
3.657770614, 6.051341804, -0.445832214, -1.53087186, -4.35859438,
-5.054525795, 5.809959038, 16.04471706, -5.348868495, 8.426034362,
2.758795652, 11.14642986, -2.533514487, -6.592644641, 0.51717884,
11.3433954, 0.9, 0.5, 0.5, 0.7, -0.1, -0.1, 0.1, 0, -0.2, -0.1,
-0.2, -0.3, -0.3, 0, -0.1, 0.1, 0.2, 0.3, 0.5, 1.3), .Dim = c(20L,
5L), .Dimnames = list(NULL, c("Consumption", "Income", "Production",
"Savings", "Unemployment")), .Tsp = c(1970, 1974.75, 4), class = c("mts",
"ts", "matrix"))
Try this:
# get all names of predictors
cols <- colnames(uschange)[-1]
# create all combinations of predictors as right-hand sides of the formula
out <- unlist(lapply(seq_along(cols), function(n) {
  combn(cols, n, FUN = function(row) {
    paste0("Consumption ~ ", paste0(row, collapse = "+"))
  })
}))
# fit models:
mods <- lapply(out, function(frml) tslm(frml, data = train))
# define helper function:
cv_this <- function(x) {
  list('cv' = CV(x), 'acc' = accuracy(forecast(x, test), y))
}
# run helper function over all models to get evaluations out:
lapply(mods, cv_this)
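That last call just prints every evaluation. To pick the winners automatically, you can collect the AICc and the test-set MAE from the same list and index back into out; a small follow-up sketch (using the test and y objects defined in the question):
evals <- lapply(mods, cv_this)
aicc <- sapply(evals, function(e) e$cv["AICc"])
mae <- sapply(evals, function(e) e$acc["Test set", "MAE"])
out[which.min(aicc)]     # formula with the smallest AICc
out[which.min(mae)]      # formula with the smallest test-set MAE
mods[[which.min(aicc)]]  # the corresponding fitted model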

How to extract the Prediction Intervals of a Gaussian Process Regression via caret kernlab package?

I am trying to use a Gaussian Process Regression (GPR) model to predict hourly streamflow discharges in a river. I've got good results applying the caret::train() function with the kernlab method (thanks Kuhn!).
Since uncertainty quantification is one of the main inherent advantages of GPR, I would like to know if anyone could help me access the prediction intervals for the test dataset.
I'll put an extract of the code I've been working on below. Since my real data are huge (and, honestly, I don't know how to post them here), I'll illustrate with data(airquality). The main goal in this particular example is to predict airquality$Ozone, using lagged versions of airquality$Temp as predictors.
rm(list = ls())
data(airquality)
airquality = na.omit(as.data.frame(airquality)); str(airquality)
library(tidyverse)
library(magrittr)
airquality$Ozone %>% plot(type = 'l')
lines(airquality$Temp, col = 2)
legend("topleft", legend = c("Ozone", "Temperature"),
col=c(1, 2), lty = 1:1, cex = 0.7, text.font = 4, inset = 0.01,
box.lty=0, lwd = 1)
attach(airquality)
df_lags <- airquality %>%
  mutate(Temp_lag1 = lag(n = 1L, Temp)) %>%
  na.omit()
ESM_train = data.frame(df_lags[1:81, ]) # Training Observed 75% dataset
ESM_test = data.frame(df_lags[82:nrow(df_lags), ]) # Testing Observed 25% dataset
grid_gaussprRadial = expand.grid(.sigma = c(0.001, 0.01, 0.05, 0.1, 0.5, 1, 2)) # Sigma parameters searching for GPR
# TRAIN MODEL ############################
# Tuning set
library(caret)
set.seed(111)
cvCtrl <- trainControl(
  method = "repeatedcv",
  repeats = 1,
  number = 20,
  allowParallel = TRUE,
  verboseIter = TRUE,
  savePredictions = "final")
# Train (aprox. 4 seconds time-simulation)
attach(ESM_train)
set.seed(111)
system.time(Model_train <- caret::train(Ozone ~ Temp + Temp_lag1,
                                        trControl = cvCtrl,
                                        data = ESM_train,
                                        metric = "MAE",           # using MAE since minimum values are my focus
                                        preProcess = c("center", "scale"),
                                        method = "gaussprRadial", # RBF kernel function
                                        tuneGrid = grid_gaussprRadial,
                                        maxit = 1000,
                                        linout = 1))              # regression type
plot(Model_train)
Model_train
ESM_results_train <- Model_train$resample %>% mutate(Model = "") # K-fold Training measures
# Select the interested TRAIN data and arrange them as dataframe
Ozone_Obs_Tr = Model_train$pred$obs
Ozone_sim = Model_train$pred$pred
Resid = Ozone_Obs_Tr - Ozone_sim
train_results = data.frame(Ozone_Obs_Tr,
                           Ozone_sim,
                           Resid)
# Plot Obs x Simulated train results
library(ggplot2)
ggplot(data = train_results, aes(x = Ozone_Obs_Tr, y = Ozone_sim)) +
  geom_point() +
  geom_abline(intercept = 0, slope = 1, color = "black")
# TEST MODEL ############################
# From "ESM_test" dataframe, we predict ESM Ozone time series, adding it in "ESM_forecasted" dataframe
ESM_forecasted = ESM_test %>%
  mutate(Ozone_Pred = predict(Model_train, newdata = ESM_test, variance.model = TRUE))
str(ESM_forecasted)
# Select the interested TEST data and arrange them as a dataframe
Ozone_Obs = ESM_forecasted$Ozone
Ozone_Pred = ESM_forecasted$Ozone_Pred
# Plot Obs x Predicted TEST results
ggplot(data = ESM_forecasted, aes(x = Ozone_Obs, y = Ozone_Pred)) +
  geom_point() +
  geom_abline(intercept = 0, slope = 1, color = "black")
# Model performance #####
library(hydroGOF)
gof_TR = gof(Ozone_sim, Ozone_Obs_Tr)
gof_TEST = gof(Ozone_Pred,Ozone_Obs)
Performances = data.frame(
  Train = gof_TR,
  Test = gof_TEST
); Performances
# Plot the TEST prediction
attach(ESM_forecasted)
plot(Ozone_Obs, type = "l", xlab = "", ylab = "", ylim = range(Ozone_Obs, Ozone_Pred))
lines(Ozone_Pred , col = "coral2", lty = 2, lwd = 2)
legend("top", legend = c("Ozone Obs Test", "Ozone Pred Test"),
col=c(1, "coral2"), lty = 1:2, cex = 0.7, text.font = 4, inset = 0.01, box.lty=0, lwd = 2)
These last lines generate a plot of the observed vs. predicted Ozone test series (not reproduced here).
The next, and last, step would be to extract the prediction intervals, which are based on a Gaussian distribution around each prediction point, and plot them together with this last plot.
The caret::train() call with the kernlab method returned better predictions than, for instance, plain kernlab::gausspr() or tgp::bgp(). For both of those I could find the prediction interval.
For example, to pick up the prediction intervals via tgp::bgp(), it could be done typing:
Upper_Bound <- Ozone_Pred$ZZ.q2   # upper quantile, roughly Ozone_Pred + 2 * sigma
Lower_Bound <- Ozone_Pred$ZZ.q1   # lower quantile, roughly Ozone_Pred - 2 * sigma
Therefore, via caret::train() with the kernlab method, I hope the required standard deviations could be found by typing something like
Model_train$...
or maybe, with
Ozone_Pred$...
Moreover, at this link: https://stats.stackexchange.com/questions/414079/can-mad-median-absolute-deviation-or-mae-mean-absolute-error-be-used-to-calc,
Stephan Kolassa explained that prediction intervals can be estimated from the MAE, or even the RMSE. But I'm not sure that applies here, since the MAE I have is just the comparison between observed and predicted Ozone in this example.
Please, this solution is very important to me! I think I am close to obtaining my main results, but I don't know what else to try.
Thanks a lot, friends!
I don't really know how the caret framework works, but getting a prediction interval for a GP regression with a Gaussian likelihood is easy enough to do manually.
First we just need a function for the squared exponential kernel, also called the radial basis function kernel, which is what you were using. sf here is the scale factor (unused in the kernlab implementation), and ell is the length scale, called sigma in the kernlab implementation:
covSEiso <- function(x1, x2 = x1, sf = 1.0, ell = 1.0) {
  sf  <- sf^2
  ell <- -0.5 * (1 / (ell^2))
  n <- nrow(x1)
  m <- nrow(x2)
  d <- ncol(x1)
  result <- matrix(0, nrow = n, ncol = m)
  for ( j in 1:m ) {
    for ( i in 1:n ) {
      result[i, j] <- sf * exp(ell * sum((x1[i, ] - x2[j, ])^2))
    }
  }
  return(result)
}
I'm not sure what your code says about which length scale to use; below I will use a length scale of 25 and scale factor of 50 (obtained via GPML's hyperparameter optimization routines). Then we use the covSEiso() function above to get the relevant covariances, and the rest is application of basic Gaussian identities. I would refer you to Chapter 2 of Rasmussen and Williams (2006) (graciously provided for free online).
data(airquality)
library(tidyverse)
library(magrittr)
df_lags <- airquality %>%
  mutate(Temp_lag1 = lag(n = 1L, Temp)) %>%
  na.omit()
ESM_train <- data.frame(df_lags[1:81, ]) # Training Data 75% dataset
ESM_test <- data.frame(df_lags[82:nrow(df_lags), ]) # Testing Data 25% dataset
## For convenience I'll define separately the training and test inputs
X <- ESM_train[ , c("Temp", "Temp_lag1")]
Xstar <- ESM_test[ , c("Temp", "Temp_lag1")]
## Get the kernel manually
K <- covSEiso(X, ell = 25, sf = 50)
## We also need covariance between the test cases
Kstar <- covSEiso(Xstar, X, ell = 25, sf = 50)
Ktest <- covSEiso(Xstar, ell = 25, sf = 50)
## Now the 95% credible region for the posterior is
predictive_mean <- Kstar %*% solve(K + diag(nrow(K))) %*% ESM_train$Ozone
predictive_var <- Ktest - (Kstar %*% solve(K + diag(nrow(K))) %*% t(Kstar))
## Then for the prediction interval we only need to add the observation noise
z <- sqrt(diag(predictive_var)) + 25
interval_high <- predictive_mean + 2 * z
interval_low <- predictive_mean - 2 * z
Then we can plot the predictive mean and these prediction intervals to check them; the plotting code at the end of this answer produces the plot.
This is all pretty easy to do via my gpmlr package (available on GitHub), which can call GPML from R if you have Octave installed:
data(airquality)
library(tidyverse)
library(magrittr)
library(gpmlr)
df_lags <- airquality %>%
  mutate(Temp_lag1 = lag(n = 1L, Temp)) %>%
  na.omit()
ESM_train <- data.frame(df_lags[1:81, ]) # Training Data 75% dataset
ESM_test <- data.frame(df_lags[82:nrow(df_lags), ]) # Testing Data 25% dataset
X <- as.matrix(ESM_train[ , c("Temp", "Temp_lag1")])
y <- ESM_train$Ozone
Xs <- as.matrix(ESM_test[ , c("Temp", "Temp_lag1")])
ys <- ESM_test$Ozone
hyp0 <- list(mean = numeric(), cov = c(0, 0), lik = 0)
hyp <- set_hyperparameters(hyp0, "infExact", "meanZero", "covSEiso", "likGauss",
                           X, y)
gp_res <- gp(hyp, "infExact", "meanZero", "covSEiso", "likGauss", X, y, Xs, ys)
predictive_mean <- gp_res$YMU
interval_high <- gp_res$YMU + 2 * sqrt(gp_res$YS2)
interval_low <- gp_res$YMU - 2 * sqrt(gp_res$YS2)
Then just plot the predictions (this is the plotting code referred to above):
plot(NULL, xlab = "", ylab = "", xaxt = "n", yaxt = "n",
xlim = range(ESM_test$Temp), ylim = range(c(interval_high, interval_low)))
axis(1, tick = FALSE, line = -0.75)
axis(2, tick = FALSE, line = -0.75)
mtext("Temp", 1, 1.5)
mtext("Ozone", 2, 1.5)
idx <- order(ESM_test$Temp)
polygon(c(ESM_test$Temp[idx], rev(ESM_test$Temp[idx])),
c(interval_high[idx], rev(interval_low[idx])),
border = NA, col = "#80808080")
lines(ESM_test$Temp[idx], predictive_mean[idx])
points(ESM_test$Temp, ESM_test$Ozone, pch = 19)
plot(NULL, xlab = "", ylab = "", xaxt = "n", yaxt = "n",
xlim = range(ESM_test$Temp_lag1), ylim = range(c(interval_high, interval_low)))
axis(1, tick = FALSE, line = -0.75)
axis(2, tick = FALSE, line = -0.75)
mtext("Temp_lag1", 1, 1.5)
mtext("Ozone", 2, 1.5)
idx <- order(ESM_test$Temp_lag1)
polygon(c(ESM_test$Temp_lag1[idx], rev(ESM_test$Temp_lag1[idx])),
c(interval_high[idx], rev(interval_low[idx])),
border = NA, col = "#80808080")
lines(ESM_test$Temp_lag1[idx], predictive_mean[idx])
points(ESM_test$Temp_lag1, ESM_test$Ozone, pch = 19)
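A possible shortcut worth checking, offered only as an untested sketch: newer versions of kernlab let gausspr() fit a variance model, in which case predict() can return predictive standard deviations directly. Whether your installed kernlab supports variance.model and the "sdeviation" prediction type is an assumption to verify; also note that caret centered and scaled the predictors, so the tuned sigma may not transfer exactly.
library(kernlab)
best_sigma <- Model_train$bestTune$sigma           # sigma chosen by caret's tuning
gp_fit <- gausspr(Ozone ~ Temp + Temp_lag1, data = ESM_train,
                  kernel = "rbfdot", kpar = list(sigma = best_sigma),
                  variance.model = TRUE)            # ask kernlab to model the variance
pred_mean <- predict(gp_fit, ESM_test)
pred_sd <- predict(gp_fit, ESM_test, type = "sdeviation")  # predictive standard deviation
interval_high <- pred_mean + 2 * pred_sd
interval_low <- pred_mean - 2 * pred_sd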

R Visualization of markov chains | change values in transition matrix by hand

I run a Markov model in R, primarily to get the Markov graph.
I want to exclude all edges with a probability < 0.4 from the transition matrix (in this case, the edge from start to c2 should be deleted). I tried doing this by setting those values to 0, but changing values in the transition matrix results in an error. Please see below: I marked the position of interest with "#######################".
# packages used below: markov_model() is from ChannelAttribution, dcast() from reshape2
library(ChannelAttribution)
library(reshape2)
library(markovchain)
# creating a data sample
df1 <- data.frame(path = c('c1 > c2 > c3', 'c1', 'c2 > c3'),
                  conv = c(1, 0, 0),
                  conv_null = c(0, 1, 1)) # original
df1
# calculating the models
mod1 <- markov_model(df1,
                     var_path = 'path',
                     var_conv = 'conv',
                     var_null = 'conv_null',
                     out_more = TRUE)
mod1
# extracting the results of attribution:
df_res1 <- mod1$result
df_res1
# extracting a transition matrix:
df_trans1 <- mod1$transition_matrix
df_trans1
df_trans1 <- dcast(df_trans1, channel_from ~ channel_to, value.var = 'transition_probability')
df_trans1
### plotting the Markov graph ###
df_trans <- mod1$transition_matrix
df_trans
# adding dummies in order to plot the graph
df_dummy <- data.frame(channel_from = c('(start)', '(conversion)', '(null)'),
                       channel_to = c('(start)', '(conversion)', '(null)'),
                       transition_probability = c(0, 1, 1)) # enter each state's transition probability to itself
df_dummy
df_trans <- rbind(df_trans, df_dummy)
df_trans
# ordering channels
df_trans$channel_from <- factor(df_trans$channel_from,
                                levels = c('(start)', '(conversion)', '(null)',
                                           'c1', 'c2', 'c3'))
df_trans$channel_from
df_trans$channel_to <- factor(df_trans$channel_to,
                              levels = c('(start)', '(conversion)', '(null)',
                                         'c1', 'c2', 'c3'))
df_trans$channel_to
df_trans <- dcast(df_trans, channel_from ~ channel_to, value.var = 'transition_probability')
df_trans
# creating the markovchain object
trans_matrix <- matrix(data = as.matrix(df_trans[, -1]),
                       nrow = nrow(df_trans[, -1]), ncol = ncol(df_trans[, -1]),
                       dimnames = list(as.character(df_trans[, 1]), colnames(df_trans[, -1])))
trans_matrix[is.na(trans_matrix)] <- 0
trans_matrix
####################### I want to delete transition probabilities < 0.4 from the Markov graph by setting these values to 0.
trans_matrix[trans_matrix < 0.4] <- 0
####################### After doing this, the following call gives me an error: Error! Row sums not equal to one check positions: 1
trans_matrix1 <- new("markovchain", transitionMatrix = trans_matrix)
trans_matrix1
# plotting the graph
plot(trans_matrix1, edge.arrow.size = 0.5, size = 100, cex.main = 0.11, cex.lab = 0.5, cex.axis = 0.5)
The transition matrix is no longer a transition matrix if you set some positive entries to 0, because the row sums must equal one. So new("markovchain", ....) does not work with such a matrix.
But if you want the plot only, this is possible by modifying the slot transitionMatrix:
library(markovchain)
tm <- rbind(c(0.3, 0.5, 0.2), c(0.1, 0.1, 0.8), c(0.6, 0.2, 0.2))
states <- c("a", "b", "c")
mc <- new("markovchain", states=states, transitionMatrix=tm, name="X")
tm[tm<0.4] <- 0
dimnames(tm) <- list(states, states)
mc@transitionMatrix <- tm
plot(mc)
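Applied to the trans_matrix built in the question, the same pattern would look roughly like this (a sketch, assuming the unmodified matrix passes the markovchain validity check):
trans_matrix1 <- new("markovchain", transitionMatrix = trans_matrix)  # build from the valid matrix
tm1 <- trans_matrix1@transitionMatrix
tm1[tm1 < 0.4] <- 0                     # drop small probabilities for plotting only
trans_matrix1@transitionMatrix <- tm1
plot(trans_matrix1, edge.arrow.size = 0.5, size = 100,
     cex.main = 0.11, cex.lab = 0.5, cex.axis = 0.5)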

Anesrake algorithm doesn't work with zero as a weight

I tried using the anesrake package, but it won't accept a weight of zero, giving the error message:
Error in while (range(weightvec)[2] > cap + 1e-04) { :
missing value where TRUE/FALSE needed
Sample code:
ipfdata<- read.csv("dummydata.csv", header = T)
ipfdata$caseid <- 1:length(ipfdata$age)
sex <- c(0.30, 0.70)
age <- c(0.2, 0.1, 0.05, 0.05, 0.05, 0.05, 0.3, 0.2)
ses <- c(0.20, 0.20, 0.0)
targets <- list(sex, age, ses)
names(targets) <- c("sex", "age", "ses")
outsave <- anesrake(targets, ipfdata, caseid = ipfdata$caseid, weightvec = NULL,
                    cap = 10, verbose = TRUE, maxit = 50, choosemethod = "total",
                    type = "nolim", pctlim = 0.0001, nlim = 10, iterate = TRUE,
                    force1 = TRUE)
(sample code modified from this question: https://stackoverflow.com/questions/19458306/ipf-raking-using-anesrake-in-r-error)
The package was never updated despite my contacting the author to address this issue. The only workaround is to remove any rows with the variable set to zero before raking.
In the given sample above, you would have to remove any rows with the third SES factor, and then change the SES vector to c(0.20, 0.20) instead of c(0.20, 0.20, 0.0).
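A rough sketch of that workaround on the sample code above (assuming ipfdata$ses is a factor and its third level is the one with the zero target):
drop_lvl <- levels(factor(ipfdata$ses))[3]                  # the category with a zero target
ipfdata <- droplevels(ipfdata[ipfdata$ses != drop_lvl, ])   # remove those rows
targets$ses <- c(0.20, 0.20)                                # SES target without the zero entry
outsave <- anesrake(targets, ipfdata, caseid = ipfdata$caseid, cap = 10,
                    verbose = TRUE, maxit = 50, choosemethod = "total",
                    type = "nolim", pctlim = 0.0001, nlim = 10,
                    iterate = TRUE, force1 = TRUE)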
