I want to discretize a variable in R, preferably with SparkR, so that the results look like the output of the following:
library(arules)
library(dplyr) # for %>% and mutate()

mtcars %>% mutate(bins = discretize(x = mpg, method = "interval", breaks = 4))
I checked the documentation but found only non-R solutions at https://spark.apache.org/docs/2.2.0/ml-features.html#bucketizer.
Please advise.
In general, SparkR provides only a very limited subset of ML functions (full support is planned for Spark 3.0 as a separate R package; see SPARK-24359 SPIP: ML Pipelines in R). However, a simple discretization like this one can be performed with CASE ... WHEN ... logic.
First compute the breaks:
df <- createDataFrame(mtcars)

min_max <- df %>%
  select(min(df$mpg), max(df$mpg)) %>%
  collect() %>%
  unlist()

n <- 4
# Note: length.out = n yields n break points, i.e. n - 1 buckets; to mirror
# the arules call above (4 bins) you would use length.out = n + 1.
breaks <- seq(min_max[[1]], min_max[[2]], length.out = n)
Then generate the expression:
bucket <- purrr::map2(
  breaks[-n], breaks[-1],
  function(x, y) between(column("mpg"), c(x, y))) %>%
  purrr::reduce2(
    ., seq(length(.)),
    function(acc, x, y) otherwise(when(x, y), acc),
    .init = lit(NA))
df %>% withColumn("bucket", bucket)
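The when/otherwise chain above builds the bucket column through the Column API; the same CASE ... WHEN logic can also be written as a raw SQL string and passed to selectExpr. A minimal sketch, assuming the breaks and n computed above (the generated expression string and the output column name are my own illustration):

# Hypothetical alternative: assemble one SQL CASE ... WHEN expression.
# BETWEEN is inclusive on both ends, so a boundary value matches the
# first WHEN clause that covers it.
case_expr <- paste0(
  "CASE ",
  paste(sprintf("WHEN mpg BETWEEN %f AND %f THEN %d",
                breaks[-n], breaks[-1], seq_len(n - 1)),
        collapse = " "),
  " ELSE NULL END AS bucket")
head(selectExpr(df, "*", case_expr))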
I have the following code, in which I have three lists and am essentially trying to calculate the MSE.
seg_mean <- list(c(3.5,2.5),c(6.5,5.5),c(9.2,8.8),c(5.3,4.7))
mean_tot <- list(c(3,3),c(6,6),c(9,9),c(5,5))
seg_len <- list(20,18,17,15)
MSE <- mapply('-', seg_mean, mean_tot, SIMPLIFY = FALSE) #element wise difference between seg_mean and mean_tot
MSE <- lapply(MSE, function(x) x^2) #element wise squaring
MSE <- mapply('/', MSE, seg_len, SIMPLIFY = FALSE) #dividing each list element by seg_len
This works. My question is: can I simplify these lines using the %>% pipe (from magrittr, re-exported by dplyr)?
I am not sure whether it would work, but I am hoping it could look like this:
MSE <- mapply('-', seg_mean, mean_tot, SIMPLIFY = FALSE) %>% lapply(function(x) x^2) %>% mapply('/', seg_len, SIMPLIFY = FALSE)
With |> (base R's native pipe), you can do:
mapply('-', seg_mean, mean_tot, SIMPLIFY = FALSE) |>
  lapply(function(x) x^2) |>
  mapply(FUN = '/', seg_len, SIMPLIFY = FALSE)
It also works with %>%. The difference is in the second mapply, where one needs to pass the first argument by name (FUN = '/'); otherwise the piped value gets interpreted as mapply(., '/', seg_len, ...) instead of mapply('/', ., seg_len, ...).
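For reference, here is the same pipeline written with %>%, with FUN passed by name in the last step so the piped value lands in ... as the first vectorized argument:

library(magrittr) # or library(dplyr); both provide %>%

mapply('-', seg_mean, mean_tot, SIMPLIFY = FALSE) %>%
  lapply(function(x) x^2) %>%
  mapply(FUN = '/', seg_len, SIMPLIFY = FALSE)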
It seems no one has been able to tackle this, so I'm re-asking:
I would like to use an external function from sjstats package within my own function.
The external function takes 4 arguments, as documented here: https://strengejacke.github.io/sjstats/reference/weighted_sd.html
weighted_mannwhitney(data, x, grp, weights, ...)
A simple call to that external function looks like this:
weighted_mannwhitney(c12hour ~ c161sex + weight, efc)
I have built a simple function below, which currently works only when provided with data as its input; adding further arguments produces errors.
I would like to be able to pass on some arguments (data, x, grp, weights) to the internally called external function, so I can change those arguments for different datasets.
The Question
My attempts at generalizing the function have not been successful. Can anyone show me how to do this? Below I show how I would like to call the generalized function.
library(sjstats) # has weighted Mann-Whitney function which I would like to automate within own function
library(tidyverse)
data(efc)
efc$weight <- abs(rnorm(nrow(efc), 1, .3))
# Own working function
own_funk <- function(data, ...) {
  # Weighted Mann-Whitney test p-value
  res <- data %>%
    summarise(
      across(
        c(c12hour, e17age),
        ~ sjstats:::weighted_mannwhitney.formula(.x ~ c161sex + weight, data)$p.value[[1]]
      )
    ) %>%
    #round(., 3) %>%
    tibble::rownames_to_column() %>%
    pivot_longer(-rowname) %>%
    pivot_wider(names_from = rowname, values_from = value) %>%
    rename("Outcome" = 1, "P-value" = 2) %>%
    as.data.frame()
  return(res)
}
Call to own working function
own_funk(efc)
How I would like to call the function
# NB: The x can take on many variables
own_funk(data = efc, x = c(c12hour,e17age), grp = c161sex, weights = weight)
Somewhat similar to @snaut's solution, I'd use map to loop over your x columns/formulas. The main difference is that we probably don't want to hard-code the variables inside the function, but rather pass them in as arguments.
library(sjstats) # has weighted Mann-Whitney function which I would like to automate within own function
library(tidyverse)
library(survey)
# Own working function
own_funk <- function(data, x, grp, weights) {
  # Weighted Mann-Whitney test p-value
  formula <- paste0(x, " ~ ", grp, " + ", weights)
  res <- map2(.x = formula,
              .y = x,
              .f = ~ data |>
                summarize(!!sym(.y) := sjstats:::weighted_mannwhitney.formula(as.formula(.x), data = data)$p.value[[1]]) %>%
                #round(., 3) %>%
                tibble::rownames_to_column() %>%
                pivot_longer(-rowname) %>%
                pivot_wider(names_from = rowname, values_from = value) %>%
                rename("Outcome" = 1, "P-value" = 2) %>%
                as.data.frame()) |>
    bind_rows()
  return(res)
}
own_funk(data = efc, x = c("c12hour", "e17age"), grp = "c161sex", weights = "weight")
Which gives:
  Outcome     P-value
1 c12hour 0.006806572
2  e17age 0.187765467
sjstats:::weighted_mannwhitney.formula is clearly meant for interactive use and not for programming. sjstats:::weighted_mannwhitney.default seems better suited for programming, but it still does something with unquoting variable names that I did not quite figure out.
What I do most of the time when I have to program with functions designed for interactive use is to create a temporary column with a fixed name, and then rename each column I want to iterate over to that name.
This is what I have done in my solution. I also don't use summarise and across but instead use map_dfr to iterate over the column names and output rows of a tibble.
I did not quite understand what you want to achieve with the pivot_longer and pivot_wider, but I'm sure you can format the output of my solution to your needs.
library(sjstats) # has weighted Mann-Whitney function which I would like to automate within own function
library(tidyverse)
data(efc)
efc$weight <- abs(rnorm(nrow(efc), 1, .3))
# Own working function
own_funk <- function(mydata, ...) {
  # Weighted Mann-Whitney test p-value
  map_dfr(
    c("c12hour", "e17age"),
    function(column) {
      tmp_data <- mydata %>%
        rename(tmp_col = {column})
      tibble(
        variable = column,
        `P-value` = sjstats:::weighted_mannwhitney.formula(tmp_col ~ c161sex + weight, data = tmp_data)$p.value[[1]]
      )
    }
  )
}
own_funk(efc)
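And if you want the column names to match the output shown earlier, a one-line tweak (assuming dplyr is attached) could be:

own_funk(efc) %>% rename(Outcome = variable)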
I have an issue where, after replicating data for a training and testing set, a large amount of memory shows as allocated to my user in RStudio, but it is not being used in my R session. I've created a small example to reproduce my situation :)
This code runs a bunch of models, based on the different formulas, algorithms, and parameter sets I give it. It is a function, but I've created a simple script for the reprex.
library(dplyr)
library(purrr)
library(modelr)
library(tidyr)
library(pryr)
# set my inputs
data <- mtcars
formulas <- c(test1 = mpg ~ cyl + wt + hp,
              test2 = mpg ~ cyl + wt)
params <- list()
methods <- "lm"
n <- 20     # number of cv splits
mult <- 10  # number of times I want to replicate some of the data
frac <- .25 # how much I want to cut down other data (fractional)
### the next few chunks get the unique combos of the inputs.
if (length(params) != 0) {
  cross_params <- params %>%
    map(cross) %>%
    map_df(enframe, name = "param_set", .id = "method") %>%
    list
} else cross_params <- NULL

methods_df <- tibble(method = methods) %>%
  list %>%
  append(cross_params) %>%
  reduce(left_join, by = "method") %>%
  split(1:nrow(.))
# wrangle formulas into a split dataframe
formulas_df <- tibble(formula = formulas,
                      name = names(formulas)) %>%
  split(.$name)
# split out the data into n random train-test combos
cv_data <- data %>%
  crossv_kfold(n) %>% # rsample?
  mutate_at(vars(train:test), ~ map(.x, as_tibble))
# sample out if needed
cv_data_samp <- cv_data %>%
  mutate(train = modify(train,
                        ~ .x %>%
                          split(.$gear == 4) %>%
                          # take a sample of the non-vo data
                          modify_at("FALSE", sample_frac, frac) %>%
                          # multiply out the vo-on data
                          modify_at("TRUE", function(.df) {
                            map_df(seq_along(1:mult), ~ .df)
                          }) %>%
                          bind_rows))
# get all unique combos of formula and method
model_combos <- list(cv = list(cv_data_samp),
                     form = formulas_df,
                     meth = methods_df) %>%
  cross %>%
  map_df(~ bind_cols(nest(.x$cv), .x$form, .x$meth)) %>%
  unnest(data, .preserve = matches("formula|param|value")) %>%
  {if ("value" %in% names(.)) . else mutate(., value = list(NULL))}
# run the models
model_combos %>%
  # put all arguments into a single params column
  mutate(params = pmap(list(formula = formula, data = train), list)) %>%
  mutate(params = map2(params, value, ~ append(.x, .y))) %>%
  mutate(params = modify(params, discard, is.null)) %>%
  # run the models
  mutate(model = invoke_map(method, params))

mem_change(rm(data, cv_data, cv_data_samp))
mem_used()
Now after I do this, mem_used() comes out to 77.3 MB, but I'm seeing roughly double that (160 MB) allocated to my R user. This really explodes when my data is 3 GB, which is my real-life case: I end up using 100 GB and tying up a whole server :(.
What is going on and how can I optimize?
Any help appreciated!!!
I figured this out! The issue was that converting my series of modelr resample objects to tibbles exploded the memory, even though I was subsequently sampling them down. The solution? Write methods for dealing with resample objects so that I never have to convert them to tibbles. They look like this:
# this function just samples the indexes instead of the data
sample_frac.resample <- function(data, frac) {
  data$idx <- sample(data$idx, frac * length(data$idx))
  data
}
# this function replicates the indexes. I should probably call it something else.
augment.resample <- function(data, n) {
  data$idx <- unlist(map(seq_along(1:n), ~ data$idx))
  data
}
# This function does simple splitting (logical only) of resample objects;
# .p is a logical vector over the rows of data$data
split.resample <- function(data, .p) {
  pos <- list(data = data$data, idx = which(.p))
  neg <- list(data = data$data, idx = which(!.p))
  class(pos) <- "resample"
  class(neg) <- "resample"
  list("TRUE" = pos,
       "FALSE" = neg)
}
# This function does the equivalent of a `bind_rows` for resample objects.
# Since `bind_rows` does not call `UseMethod`, I had to call it something else
bind <- function(data) {
  out <- list(data = data[[1]]$data, idx = unlist(map(data, pluck, "idx")))
  class(out) <- "resample"
  out
}
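As a quick, hypothetical usage sketch of these methods on a toy resample object (the object and variable names here are mine, not from the original pipeline):

library(modelr) # for resample()
library(purrr)  # augment.resample() above uses map()

rs <- resample(mtcars, 1:20)              # a toy resample of 20 rows
rs_small <- sample_frac.resample(rs, 0.5) # keep half of the indexes
rs_big   <- augment.resample(rs, 3)       # replicate the indexes 3x
parts    <- split.resample(rs, mtcars$gear == 4) # logical over the data's rows
rs_bound <- bind(list(parts$`TRUE`, parts$`FALSE`))
as.data.frame(rs_bound)                   # materialize only at the very end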
Then I just converted to a tibble in the same purrr closure in which my model is run for that CV. Problem solved! My memory usage is VERY low now.
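For illustration, a hedged sketch of what that could look like using the names from the reprex (and assuming the single lm method from the example; the real code may differ):

# Hypothetical: each resample is materialized as a tibble only inside
# the fitting closure, so the full copy can be garbage-collected after the fit.
model_combos %>%
  mutate(model = pmap(list(formula = formula, data = train),
                      function(formula, data) lm(formula, data = as_tibble(data))))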
I wanted to write a function that composes two functions using the pipe operator %>%, so that, for instance, the following two calls would be equivalent (imagine I call the operator %O%):
iris[1:4] %>% lapply(FUN = function(x) hist(sample(x)))
iris[1:4] %>% lapply(FUN = sample %O% hist)
I want it in this direction (not hist %O% sample) because it would follow the same logic as %>%.
I've come up with something like this:
'%O%' <- function(lhs1, rhs1) {
  return(
    function(x) {
      return(x %>% lhs1 %>% rhs1)
    })
}
However, it raises errors when I try
iris[1:4] %>% lapply(FUN = sample(size = 100, replace = TRUE) %O% hist)
What should I do to allow the parameters to be understood by %>% inside the %O% function? Is it an eval/quote problem? I also don't really understand how %>% is able to read the arguments on its lhs and rhs.
Any help will be appreciated. Thanks a lot!
Use a magrittr functional sequence instead? A pipeline that starts with the . placeholder is itself a function, so it can be passed straight to lapply:
f <- . %>% sample() %>% hist()
iris[1:4] %>% lapply(f)
Or just
iris[1:4] %>% lapply(. %>% sample() %>% hist())
Here are some possibilities:
iris[1:4] %>% lapply(FUN = function(x) x %>% sample %>% hist)
library(gsubfn)
iris[1:4] %>% fn$lapply(FUN = x ~ x %>% sample %>% hist)
library(functional)
iris[1:4] %>% lapply(FUN = Compose(sample, hist))
library(functional)
`%c%` <- Compose
iris[1:4] %>% lapply(FUN = sample %c% hist)
Update: Added additional solutions.
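Another possibility, if purrr is an option (my addition, not part of the original list): purrr::compose() with .dir = "forward" composes left to right, matching the reading order of %>%:

library(purrr)
`%O%` <- function(lhs, rhs) compose(lhs, rhs, .dir = "forward")
iris[1:4] %>% lapply(FUN = sample %O% hist)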
I am trying to identify correlated explanatory variables and eliminate them. I'm using sapply to apply the regression to the variables I am interested in, and manually deleting the ones with VIFs > 10. However, when I try to reproduce this so it runs quickly for many VIFs, I cannot get my regression script to run with a pasted formula object containing the names I want to keep. Below:
regressiondata <- data.frame(matrix(ncol = 9, nrow = 100, runif(900, 1, 100)))
colnames(regressiondata) <- c("indep1", "indep2", "indep3", "indep4",
                              "var1", "var2", "var3", "var4", "var5")

vifs1_model <- sapply(regressiondata[, indep_variables],
                      function(x) vif(lm(x ~ var1 + var2 + var3 + var4 + var5,
                                         data = regressiondata,
                                         na.action = na.exclude)))
vifs1 <- rowMeans(vifs1_model)
formula_variables <- paste(names(vifs1),collapse="+")
final_model <- t(round(sapply(regressiondata[, indep_variables],
                              function(x) lm(x ~ formula_variables,
                                             data = regressiondata,
                                             na.action = na.exclude)$coef), 2))
When I run final_model I get this error:
Error in t(round(sapply(regressiondata[, indep_variables], function(x) lm(x ~ :
error in evaluating the argument 'x' in selecting a method for function 't': Error in model.frame.default(formula = x ~ formula_variables, data = regressiondata, :
variable lengths differ (found for 'formula_variables')
I think you have a couple of issues:
You are using sapply over a dataframe when it looks like you simply want to sapply over the vector of independent variable names
Your last nested call to lm seems to mix expressions and strings
Here is my walk-through. Your code refers to some missing objects, so I have added in some lines I think you left out:
library(car) # for vif()

regressiondata <- data.frame(matrix(ncol = 9, nrow = 100, runif(900, 1, 100)))
colnames(regressiondata) <- c("indep1",
                              "indep2",
                              "indep3",
                              "indep4",
                              "var1",
                              "var2",
                              "var3",
                              "var4",
                              "var5")
indep_variables <- names(regressiondata)[1:4] # object did not exist
I broke out the anonymous functions for clarity:
f1 <- function(x) {
  vif(lm(x ~ var1 + var2 + var3 + var4 + var5,
         data = regressiondata,
         na.action = na.exclude))
}
Now run your regressions:
vifs1_model <- sapply(regressiondata[,indep_variables], f1)
vifs1 <- rowMeans(vifs1_model)
formula_variables <- paste(names(vifs1),collapse="+")
I named the function that pulls the coefficients, and handed lm a character string containing the whole formula:
getCoefs <- function(x) {
  lm(paste(x, "~", formula_variables), data = regressiondata,
     na.action = na.exclude)$coef
}
Now, just sapply over the vector of names, then transpose and round:
final_model <- sapply(indep_variables, getCoefs)
final_model <- t(round(final_model, 2))
Here is a dplyr way of doing things. The bulk of the work is done by the sub_regression function, which runs a regression, filters the independent variables by VIF, and then redoes the regression:
library(dplyr)
library(tidyr)
library(magrittr)
library(car)
sub_regression <- function(sub_data_frame)
  lm(independent_value ~ var1 + var2 + var3 + var4 + var5,
     data = sub_data_frame,
     na.action = "na.exclude") %>%
    vif %>%
    Filter(function(x) x <= 10, .) %>%
    names %>%
    paste(collapse = " + ") %>%
    paste("independent_value ~ ", .) %>%
    as.formula %>%
    lm(., sub_data_frame, na.action = "na.exclude") %>%
    coefficients %>%
    round(3) %>%
    as.list %>%
    data.frame(check.names = FALSE)
matrix(ncol = 9, nrow = 100, runif(900, 1, 100)) %>%
  data.frame %>%
  setNames(c("indep1", "indep2", "indep3", "indep4",
             "var1", "var2", "var3", "var4", "var5")) %>%
  gather(independent_variable, independent_value,
         indep1, indep2, indep3, indep4) %>%
  group_by(independent_variable) %>%
  do(sub_regression(.))