Conditionally mutating a variable in R (tricky)

We are currently working on a project for school, and we do not have that much experience with coding and R. The dataset that we are working on contains the variable operationtype, which has a lot of combinations of several operation types. We want to recode this into the variable operationcategory. These are the categories we want to recode the many operations into:
"AVR/P+other"
"AVR/P+MVP/R+other"
"MVR/P+other"
"CABG+other"
"CABG+AVR/P+other"
"CABG+MVR/P+other"
If none of the above, then "Remaining"
We were wondering if this can be done somewhat automatically, where we could specify, for "AVR/P+other": if it includes AVR/P but does not include MVP/R, classify as "AVR/P+other"; if it does include MVP/R, classify as "AVR/P+MVP/R+other", since these two categories are closely related. Doing this by hand would take forever, so hopefully this is possible.
Thank you for your help in advance.
Koen

Assuming that operationtype contains the exact string, what I would probably do is something like this:
library(dplyr)
library(stringr)
transformed_df <- df %>%
  mutate(operationcategory = case_when(
    str_detect(operationtype, "AVR/P") & str_detect(operationtype, "MVP/R") ~ "AVR/P+MVP/R+other",
    str_detect(operationtype, "AVR/P") ~ "AVR/P+other",
    TRUE ~ "Remaining"
  ))
Just beware that the conditions are evaluated in the order they appear, so the most restrictive conditions should be on top.
You could use regular expressions to use a single str_detect, but this is probably easier to understand and use.
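A sketch extending the same pattern to all six categories; this assumes operationtype literally contains the substrings "CABG", "AVR/P", "MVR/P", and "MVP/R" (the toy rows below are invented, so adjust the patterns to however your data is actually coded):

```r
library(dplyr)
library(stringr)

# Invented example rows standing in for the real dataset
df <- tibble(operationtype = c("CABG+AVR/P", "AVR/P", "MVR/P+TVP", "PM implant"))

# Most restrictive (combination) conditions first, fallback last
transformed_df <- df %>%
  mutate(operationcategory = case_when(
    str_detect(operationtype, "CABG") & str_detect(operationtype, "AVR/P")  ~ "CABG+AVR/P+other",
    str_detect(operationtype, "CABG") & str_detect(operationtype, "MVR/P")  ~ "CABG+MVR/P+other",
    str_detect(operationtype, "CABG")                                       ~ "CABG+other",
    str_detect(operationtype, "AVR/P") & str_detect(operationtype, "MVP/R") ~ "AVR/P+MVP/R+other",
    str_detect(operationtype, "AVR/P")                                      ~ "AVR/P+other",
    str_detect(operationtype, "MVR/P")                                      ~ "MVR/P+other",
    TRUE ~ "Remaining"
  ))
```

With the toy rows above, this yields "CABG+AVR/P+other", "AVR/P+other", "MVR/P+other", and "Remaining".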

Related

How to change a dataframe's column types using tidy selection principles

I'm wondering what the best practices are to change a dataframe's column types, ideally using tidy selection language.
Ideally you would set the col types correctly up front when you import the data but that isn't always possible for various reasons.
So the next best pattern that I could identify is the below:
# random dataframe
library(tibble)
library(lubridate)
df <- tibble(a_col = 1:10,
             b_col = letters[1:10],
             c_col = seq.Date(ymd("2022-01-01"), by = "day", length.out = 10))
My current favorite pattern involves using across() because I can use tidy selection verbs to select the variables I want and then "map" a function to those.
# current favorite pattern
df <- df %>%
  mutate(across(starts_with("a"), as.character))
Does anyone have any other favorite patterns or useful tricks here? It doesn't have to use mutate. Oftentimes I have to change the column types of dataframes with 100s of columns, so it becomes quite tedious.
Yes, this happens. The pain point is when dates are stored as character: if you convert them once and then try to convert them again (say in a mutate / summarise), you get an error.
In such cases, change the datatype only once you know what kind of data is there.
Select columns by their names if the names carry meaning.
Before applying as.*, check with is.* whether the column is already that type.
Apply the conversion with map / lapply / a for loop, whatever is comfortable.
But it would be difficult to have a single approach for "all dataframes", as people name fields as per their own choice or convenience.
Shared mine. Hope others help.
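The "check with is.* before applying as.*" advice can be folded into across() with where(), so the conversion is safe to re-run (the column names below are invented):

```r
library(dplyr)

df <- tibble::tibble(id  = 1:3,
                     day = c("2022-01-01", "2022-01-02", "2022-01-03"))

# Convert only columns that are still character, so running the pipeline
# twice does not error on already-converted Date columns.
df <- df %>%
  mutate(across(where(is.character), as.Date))

df <- df %>%   # second run selects no columns, so it is a harmless no-op
  mutate(across(where(is.character), as.Date))
```

The where(is.character) predicate is what makes this idempotent: already-converted Date columns are simply skipped.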

Collapse some categorical variables in tidyverse

I'm working with a large dataset that has several locations. However, for one of my analyses, two locations "Wells1" and "Wells2", need to be collapsed into a single location "Wells". All other locations should keep their current names.
There are several excellent questions showing how to do this using different basic R functions (#1, #2), but I was wondering if anyone knows which tidyverse function would achieve the same goal.
The only thing I've come up with so far is:
case_when(recvDeployName %in% c("Wells1", "Wells2") ~ "Wells")
However, I get the following error message:
Error: Case 1 (.) must be a two-sided formula, not a list
I suspect I need to specify what should be done with the other categories, but I'm not sure what that is.
The case_when can be written as
case_when(recvDeployName %in% c("Wells1", "Wells2") ~ "Wells",
          TRUE ~ recvDeployName)
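Another tidyverse option is forcats::fct_collapse, which exists for exactly this kind of category merging and sidesteps the need for a TRUE ~ fallback (the "Harbor" value below is invented):

```r
library(forcats)

recvDeployName <- c("Wells1", "Wells2", "Harbor")

# Merge the two Wells sites; all other levels keep their names
collapsed <- fct_collapse(recvDeployName, Wells = c("Wells1", "Wells2"))
as.character(collapsed)  # "Wells" "Wells" "Harbor"
```

Note that fct_collapse returns a factor, so wrap it in as.character() if you want to stay with character columns.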

Generating a dummy variable from lots of categories

So...I have a large data set with a variable that has many categories. I want to create new variables that group some of those categories into one.
I could do that with a conditional statement, but given the amount of categories it would take me forever to go one line at a time. Also, while my original variable is numeric, the values themselves are random so I can't use logical or range statements.
How do I create this conditional variable based on many particular values?
I tried the following, but without success. Below is an example of the different categories I want to group into one.
classes <- c(549,162,210,222,44,96,62,208,525,202,149,442,427,
564,423,106,422,546,205,560,127,536,34,261,568,
366,524,401,548,95,156,8,528, 430,527,556,203,554,523,
501,530,55,252,585,19,540,71,204,502,504, 196,436,48,
102,526,201,521,23,558,552,118,416,117,216,510,494,
516,544,518)
So this seemed pretty intuitive to me, but it doesn't work.
df$chem <- cbind(ifelse(df$class == classes, 1, 0))
Needless to say I'm a beginner, and this is probably not so hard to do, but I've been looking for a solution to this particular problem and I can't seem to find it. What am I missing? Thanks!
You are looking for %in% not ==
eg
df$chem <- cbind(ifelse(df$class %in% classes, 1, 0))
or using the logical to numeric conversion
df$chem <- as.numeric(df$class %in% classes)
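To see why == misbehaves here: == compares the two vectors element by element, recycling the shorter one, whereas %in% tests membership of each element (the toy numbers below are made up):

```r
class_col <- c(5, 7, 9, 5)
classes   <- c(5, 9)

class_col == classes                 # recycled pairwise comparison: TRUE FALSE FALSE FALSE
class_col %in% classes               # membership test: TRUE FALSE TRUE TRUE
as.numeric(class_col %in% classes)   # the 0/1 dummy: 1 0 1 1
```

With a 68-element classes vector and a data column of arbitrary length, the recycled == comparison lines up essentially at random, which is why the original attempt silently gives wrong answers.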
if you want individual dummy variables for all the categories in df$class then you can use the class.ind function in the package nnet (which is shipped as a recommended package)
library(nnet)
class_ind <- class.ind(df$class)
# add if you want to combine with the original
df_ind <- do.call(cbind, list(df, class.ind(df$class)))

Unable to filter a data frame?

I am using something like this to filter my data frame:
d1 = data.frame(data[data$ColA == "ColACat1" & data$ColB == "ColBCat2", ])
When I print d1, it works as expected. However, when I type d1$ColB, it still prints everything from the original data frame.
> print(d1)
ColA ColB
-----------------
ColACat1 ColBCat2
ColACat1 ColBCat2
> print(d1$ColA)
[1] ColACat1 ColACat1
Levels: ColACat1 ColACat2
Maybe this is expected, but when I pass d1 to ggplot, it messes up my graph and does not use the filter. Is there any way I can filter the data frame and get only the records that match the filter? I want d1 to not know about the existence of data.
As you allude to, the default behavior in R is to treat character columns in data frames as a special data type, called a factor. This is a feature, not a bug, but like any useful feature if you're not expecting it and don't know how to properly use it, it can be quite confusing.
factors are meant to represent categorical (rather than numerical, or quantitative) variables, which comes up often in statistics.
The subsetting operations you used do in fact work normally. Namely, they will return the correct subset of your data frame. However, the levels attribute of that variable remains unchanged, and still has all the original levels in it.
This means that any method written in R that is designed to take advantage of factors will treat that column as a categorical variable with a bunch of levels, many of which just aren't present. In statistics, one often wants to track the presence of 'missing' levels of categorical variables.
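A minimal illustration of these lingering levels after subsetting:

```r
f <- factor(c("ColACat1", "ColACat1", "ColACat2"))
sub <- f[f == "ColACat1"]

levels(sub)              # still "ColACat1" "ColACat2" -- the level survives
table(sub)               # the absent level shows up as a zero count
levels(droplevels(sub))  # just "ColACat1" after dropping unused levels
```

It is exactly those zero-count levels that ggplot (and other level-aware functions) will still draw or tabulate.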
I actually also prefer to work with stringsAsFactors = FALSE, but many people frown on that since it can reduce code portability. (TRUE is the default, so sharing your code with someone else may be risky unless you preface every single script with a call to options).
A potentially more convenient solution, particularly for data frames, is to combine the subset and droplevels functions:
subsetDrop <- function(...) {
  droplevels(subset(...))
}
and use this function to extract subsets of your data frames in a way that is assured to remove any unused levels in the result.
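Usage would look something like this (the data frame and column values below are placeholders echoing the question):

```r
subsetDrop <- function(...) {
  droplevels(subset(...))
}

df <- data.frame(ColA = factor(c("ColACat1", "ColACat1", "ColACat2")),
                 ColB = factor(c("ColBCat2", "ColBCat2", "ColBCat1")))

d1 <- subsetDrop(df, ColA == "ColACat1")
levels(d1$ColA)  # only "ColACat1" -- no ghost levels passed on to ggplot
```

droplevels() on a data frame drops unused levels in every factor column at once, so the helper cleans ColB as well.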
This was such a pain! ggplot messes up if you don't do this right. Using this option at the beginning of my script solved it:
options(stringsAsFactors = FALSE)
Looks like it is the intended behavior but unfortunately I had turned this feature on for some other purpose and it started causing trouble for all my other scripts.

How can I structure and recode messy categorical data in R?

I'm struggling with how to best structure categorical data that's messy, and comes from a dataset I'll need to clean.
The Coding Scheme
I'm analyzing data from a university science course exam. We're looking at patterns in
student responses, and we developed a coding scheme to represent the kinds of things
students are doing in their answers. A subset of the coding scheme is shown below.
Note that within each major code (1, 2, 3) are nested non-unique sub-codes (a, b, ...).
What the Raw Data Looks Like
I've created an anonymized, raw subset of my actual data which you can view here.
Part of my problem is that those who coded the data noticed that some students displayed
multiple patterns. The coders' solution was to create enough columns (reason1, reason2,
...) to hold students with multiple patterns. That becomes important because the order
(reason1, reason2) is arbitrary--two students (like student 41 and student 42 in my
dataset) who correctly applied "dependency" should both register in an analysis, regardless of
whether 3a appears in the reason1 column or the reason2 column.
How Can I Best Structure Student Data?
Part of my problem is that in the raw data, not all students display the same
patterns, or the same number of them, in the same order. Some students may do just one
thing, others may do several. So, an abstracted representation of example students might
look like this:
Note in the example above that student002 and student003 both are coded as "1b", although I've deliberately shown the order as different to reflect the reality of my data.
My (Practical) Questions
Should I concatenate reason1, reason2, ... into one column?
How can I (re)code the reasons in R to reflect the multiplicity for some students?
Thanks
I realize this question is as much about good data conceptualization as it is about specific features of R, but I thought it would be appropriate to ask it here. If you feel it's inappropriate for me to ask the question, please let me know in the comments, and stackoverflow will automatically flood my inbox with sadface emoticons. If I haven't been specific enough, please let me know and I'll do my best to be clearer.
Make it "long":
library(reshape)
dnow <- read.csv("~/Downloads/catsample20100504.csv")
dnow <- melt(dnow, id.vars=c("Student", "instructor"))
dnow$variable <- NULL ## since ordering does not matter
subset(dnow, Student%in%c(41,42)) ## see the results
What to do next will depend on the kind of analysis you would like to do, but the long format is generally the most useful for irregular data such as yours.
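The same long-format reshape can also be sketched with the newer tidyr, assuming columns named as in the snippet above (the toy reason values are invented):

```r
library(tidyr)
library(dplyr)

dnow <- tibble(Student    = c(41, 42),
               instructor = c("A", "B"),
               reason1    = c("3a", "1b"),
               reason2    = c(NA, "3a"))

long <- dnow %>%
  pivot_longer(cols = -c(Student, instructor), values_to = "reason") %>%
  select(-name) %>%        # which reason column it came from does not matter
  filter(!is.na(reason))   # drop empty reason slots
```

After this, students 41 and 42 both carry a "3a" row, so the arbitrary reason1/reason2 ordering no longer affects the analysis.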
You should use ddply from plyr and split on all of the relevant columns if you want to take the different reasons into account; if you want to ignore them, leave those columns out of the split. You'll need to clean up some of the question marks and extra stuff first, though.
x <- ddply(data, c("split_column1", "split_column2"),  # columns to split on
           summarise,
           n = length(reason1))                        # whatever stats you want
What's the (bigger picture) question you're attempting to answer? Why is this information interesting to you?
Are you just trying to find patterns such as 'if the student does this, then they also likely do this'?
Something I'd consider if that's the case - split the data set into smaller random samples for your analysis to reduce the risk of false positives.
Interesting problem though!
