Convert several levels of a factor into 2 in R

I have 197 factor levels relating to location. I want to simplify this by creating a new variable "INSIDE" that stores 1 when the location is a building/home/etc. and 0 when the location is outside. I have tried grepl(), but it gives a warning:
data$Inside<-ifelse(grepl(data$Premise.Description,pattern = c("BUILDING","ROOM","AUTO","BALCONY","BANK","BAR","STORE","CHURCH","COLLEGE","CONDOMINIUM","CENTER","DAY CARE","SCHOOL","HOSPITAL","LIBRARY","PARLOR","OFFICE","MOSQUE","CLUB","PORCH","MALL","WAREHOUSE")),1,0)
Warning message:
In grepl(crime_3yr$Premise.Description, pattern = c("BUILDING", :
argument 'pattern' has length > 1 and only the first element will be used
I have tried using lapply(), but that did not work either.
I want the output to be like this:
BUILDING 1
SHOP 1
Street 0

grepl() takes a single regex rather than a vector of options; try this:
data$Inside<-ifelse(grepl(data$Premise.Description,pattern = "BUILDING|ROOM|AUTO|BALCONY|BANK|BAR|STORE|CHURCH|COLLEGE|CONDOMINIUM|CENTER|DAY CARE|SCHOOL|HOSPITAL|LIBRARY|PARLOR|OFFICE|MOSQUE|CLUB|PORCH|MALL|WAREHOUSE"),1,0)

If you want to keep the code similar to what you listed, you need to look into regular expressions, which is what the pattern argument of grepl() expects:
data$Inside<-ifelse(grepl(data$Premise.Description,pattern = "BUILDING|ROOM|AUTO|BALCONY|BANK|BAR|STORE|CHURCH|COLLEGE|CONDOMINIUM|CENTER|DAY CARE|SCHOOL|HOSPITAL|LIBRARY|PARLOR|OFFICE|MOSQUE|CLUB|PORCH|MALL|WAREHOUSE"),1,0)

Try this code:
Your data.frame:
data<-data.frame(Premise.Description= c("BUILDING 1","MY ROOM","AUTO","BALCONY","OTHER"))
The solution:
toMatch<-c("BUILDING","ROOM","AUTO","BALCONY","BANK","BAR","STORE","CHURCH","COLLEGE","CONDOMINIUM","CENTER","DAY CARE","SCHOOL","HOSPITAL","LIBRARY","PARLOR","OFFICE","MOSQUE","CLUB","PORCH","MALL","WAREHOUSE")
data$Inside<-grepl(paste(toMatch,collapse="|"), data$Premise.Description)
data
Premise.Description Inside
1 BUILDING 1 TRUE
2 MY ROOM TRUE
3 AUTO TRUE
4 BALCONY TRUE
5 OTHER FALSE
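If you prefer the 1/0 coding from the question rather than TRUE/FALSE, wrapping the same grepl() call in as.integer() is enough. A minimal sketch, reusing the toMatch vector from above:
# as.integer() maps TRUE -> 1 and FALSE -> 0
data$Inside <- as.integer(grepl(paste(toMatch, collapse = "|"), data$Premise.Description))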

You might be better off using data.table:
library(data.table)
setDT(data)
data[
grepl("BUILDING|ROOM|AUTO|BALCONY|BANK|BAR|STORE|CHURCH|COLLEGE|CONDOMINIUM|CENTER|DAY CARE|SCHOOL|HOSPITAL|LIBRARY|PARLOR|OFFICE|MOSQUE|CLUB|PORCH|MALL|WAREHOUSE", Premise.Description),
Inside := TRUE
]
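Note that assigning Inside := TRUE on a filtered subset leaves NA (not FALSE) in Inside for the rows that don't match. A minimal sketch that instead fills every row with a 0/1 flag (using an abbreviated pattern for brevity):
library(data.table)
setDT(data)
# grepl() returns TRUE/FALSE for every row; as.integer() converts that to 1/0
data[, Inside := as.integer(grepl("BUILDING|ROOM|AUTO|BALCONY", Premise.Description))]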


How to deselect many variables without removing specific variables in dplyr

Say there is a data frame that has a structure like this:
df <- data.frame(x.1 = rnorm(n=100),
x.2 = rnorm(n=100),
x.3 = rnorm(n=100),
x.special = rnorm(n=100),
x.y.z = rnorm(n=100))
Inspecting the head, we get this output:
x.1 x.2 x.3 x.special x.y.z
1 1.01014580 -1.4047666 1.50374721 -0.8339784 -0.0831983
2 0.44307253 -0.4695634 -0.71951820 1.5758893 1.2163749
3 -0.87051845 0.1793721 -0.26838489 -1.0477929 -1.0813926
4 -0.28491936 0.4186763 -0.07494088 -0.2177471 0.3490200
5 -0.03769566 -0.3656822 0.12478667 -0.7975811 -0.4481193
6 -0.83808036 0.6842561 0.71231627 -0.3348798 1.7418141
Suppose I want to remove all the numbered variables but keep the x.special and x.y.z variables. I know that I can easily deselect with:
df %>%
select(-x.1,
-x.2,
-x.3)
However, for something like 50 or 100 variables like this, it would become cumbersome. Similarly, I know I can pick patterns like so:
df %>%
select(-contains("x."))
But this of course removes everything, because the special variables also contain "x.". Is there a smarter way of picking these variables? I feel like there should be an option for matching the numeric part of the name.
# use a regex to flag the columns to keep (names without a digit)
colsBool <- !grepl(x=names(df), pattern="\\d")
Result:
> head(df[, colsBool])
x.special x.y.z
1 1.1145156 -0.4911891
2 0.7059937 0.4500111
3 -0.6566422 1.6085353
4 -0.6322514 -0.8017260
5 0.4785106 0.6014765
6 -0.8508830 -0.5078307
Regular expressions are your best friend in this situation.
For instance, if you wanted to remove only columns whose names end with a number, use !grepl(pattern = "\\d$", ...); the $ at the end of the expression anchors the match to the end of the name. The ! in front of the grepl() call negates the matches, that is, TRUE becomes FALSE and vice versa.
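If you would rather stay inside the dplyr pipeline from the question, the same regex works with the matches() selection helper. A minimal sketch, assuming dplyr is loaded:
library(dplyr)
# drop every column whose name contains a digit
df %>% select(-matches("\\d"))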

If/else if on a text column in R

I'm new to R and need help with the following.
Have               Need
Male_18_24_pn      18_24
Male_25_39_pn      25_39
Male_40_64_pn      40_64
Male_65_84_pn      65_84
Male_85_plus_pn    85_plus
Female_18_24_pn    18_24
I need to create the "Need" column from the "Have" column and am wondering how I can achieve this in R. As an initial effort, I tried the following code, but got a warning message and every cell of "Need" populated with "18_24":
if (str_detect(pe_1P_new$Have,"18_24")) {pe_1P_new$Need= "18_24"}
Warning message:
In if (str_detect(pe_1P_new$Have, "18_24")) { :
the condition has length > 1 and only the first element will be used
Your help is greatly appreciated. Thanks in advance!
You can do:
require(data.table)
dt = data.table(
have = c("Male_18_24_pn", "Male_25_39_pn",
"Male_40_64_pn", "Male_65_84_pn",
"Male_85_plus_pn", "Female_18_24_pn")
)
dt[ , need := gsub('(Male|Female)_(.+)(_pn)', '\\2', have) ]
Base R solution:
dt$need = gsub('(Male|Female)_(.+)(_pn)', '\\2', dt$have)
No need for a loop or any conditional statements. You can extract the needed information with the help of a vectorized function, such as gsub() and a simple regex.
[Volunteer edit] This is an attempt to explain the regular-expression logic of that substitution pattern (see ?regex for a very terse but complete description). You need to understand what a capture group can do.
gsub('(Male|Female)  # matches "Male" or "Female", the first capture group
_(.+)                # second capture group, matching anything after an underscore
(_pn)',              # ... up to but not including "_pn"
'\\2',               # replace everything matched with only the second capture group
dt$have)
You will not be able to run this annotated version because the carriage returns and spaces get in the way of the regex engine.
We could use trimws as well
dt[, need := trimws(have, whitespace = '[[:alpha:]]+_|_[[:alpha:]]+')]
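For context, the whitespace argument of trimws() (added in R 4.0.0) defines what counts as "whitespace" to strip from each end of the string. A quick check on a single value:
trimws("Male_18_24_pn", whitespace = "[[:alpha:]]+_|_[[:alpha:]]+")
#> [1] "18_24"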
We can try gsub like below
> dt[, need := gsub(".*?_(.*)_.*", "\\1", have)][]
have need
1: Male_18_24_pn 18_24
2: Male_25_39_pn 25_39
3: Male_40_64_pn 40_64
4: Male_65_84_pn 65_84
5: Male_85_plus_pn 85_plus
6: Female_18_24_pn 18_24
You can use the package {unglue}
df <- data.frame(
Have = c("Male_18_24_pn", "Male_25_39_pn",
"Male_40_64_pn", "Male_65_84_pn",
"Male_85_plus_pn", "Female_18_24_pn")
)
library(unglue)
unglue_unnest(df, Have, "{}_{Need}_pn", remove = FALSE)
#> Have Need
#> 1 Male_18_24_pn 18_24
#> 2 Male_25_39_pn 25_39
#> 3 Male_40_64_pn 40_64
#> 4 Male_65_84_pn 65_84
#> 5 Male_85_plus_pn 85_plus
#> 6 Female_18_24_pn 18_24
Created on 2021-07-20 by the reprex package (v0.3.0)

R: Replace all Values that are not equal to a set of values

Hi all,
I've been trying to solve a problem on a large data set for some time and could use some of your wisdom.
I have a DF (1.3M obs) with a column called customer along with 30 other columns. Let's say it contains multiple instances of customers Customer1 through Customer3000. I know that I have issues with 30 of those customers. I need to find all the customers that are NOT the customers I have issues with and replace the value in the 'customer' column with the text 'Supported Customer'. That seems like it should be a simple thing... if it weren't for the number of obs, I would have loaded it up in Excel, filtered all the bad customers out, and copy/pasted the text 'Supported Customer' over what remained.
I've tried replace() and str_replace_all() with grepl() and paste()/paste0(), but to no avail. My current code looks like this:
#All the customers that have issues
out <- c("Customer123", "Customer124", "Customer125", "Customer126", "Customer127",
"Customer128", ..... , "Customer140")
#Look for everything that is NOT in the list above and replace with "Enabled"
orderData$customer <- str_replace_all(orderData$customer, paste0("[^", paste(out, collapse =
"|"), "]"), "Enabled Customers")
That code gets me this error:
Error in stri_replace_all_regex(string, pattern, fix_replacement(replacement), :
In a character range [x-y], x is greater than y. (U_REGEX_INVALID_RANGE)
I've tried the inverse of this approach and pulled a list of all obs that dont match the list of out customers. Something like this:
in <- orderData %>% filter(!customer %in% out) %>% select(customer) %>%
distinct(customer)
This gets me a much larger list of customers that ARE enabled (~3,100). Using the str_replace_all and paste approach seems to have issues, though: at this number of patterns, paste no longer collapses using the "|" operator; instead I get a string that looks like:
"c(\"Customer1\", \"Customer2345\", \"Customer54\", ......)
When passed into str_replace_all, this does not match any patterns.
Anyways, there's got to be an easier way to do this. Thanks for any/all help.
Here is a data.table approach.
First, some example data since you didn't provide any.
customer <- sample(paste0("Customer",1:300),5000,replace = TRUE)
orderData <- data.frame(customer = sample(paste0("Customer",1:300),5000,replace = TRUE),stringsAsFactors = FALSE)
orderData <- cbind(orderData,matrix(runif(0,100,n=5000*30),ncol=30))
out <- c("Customer123", "Customer124", "Customer125", "Customer126", "Customer127", "Customer128","Customer140")
library(data.table)
setDT(orderData)
result <- orderData[!(customer %in% out),customer := gsub("Customer","Supported Customer ",customer)]
result
customer 1 2 3 4 5 6 7 8 9
1: Supported Customer 134 65.35091 8.57117 79.594166 84.88867 97.225276 84.563997 17.15166 41.87160 3.717705
2: Supported Customer 225 72.95757 32.80893 27.318046 72.97045 28.698518 60.709381 92.51114 79.90031 7.311200
3: Supported Customer 222 39.55269 89.51003 1.626846 80.66629 9.983814 87.122153 85.80335 91.36377 14.667535
4: Supported Customer 184 24.44624 20.64762 9.555844 74.39480 49.189537 73.126275 94.05833 36.34749 3.091072
5: Supported Customer 194 42.34858 16.08034 34.182737 75.81006 35.167769 23.780069 36.08756 26.46816 31.994756
---
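For what it's worth, since the goal is simply to overwrite every customer that is not in out with one fixed label, a plain vectorized replacement (base R, no regex at all) may be the most direct route. A minimal sketch on the original data frame:
# %in% gives a logical vector; negate it to select the rows to relabel
orderData$customer[!orderData$customer %in% out] <- "Supported Customer"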

R, getting an invalid argument to unary operator when using order function

I'm essentially doing the exact same thing 3 times, and when adding a new variable I get this error:
Error in -emps$EV : invalid argument to unary operator
The code chunk causing this is
evps<-aggregate(EV~player,s1k,mean)
sort2<-evps[order(-evps$EV),]
head(sort2,10)
s1k$EM<-s1k$points-s1k$EV
emps<-aggregate(EM~player,s1k,mean)
sort3<-emps[order(-emps$EV),]
head(sort3,10)
Works like a charm for the first list, but the identical code thereafter causes the error.
This specific line is causing the error
sort3<-emps[order(-emps$EV),]
How can I fix/workaround this?
Full Code
url <- getURL("https://raw.githubusercontent.com/M-ttM/Basketball/master/class.csv")
shots <- read.csv(text = url)
shots$make<-shots$points>0
shots2<-shots[which(!(shots$player=="Luc Richard Mbah a Moute")),]
fit1<-glm(make~factor(type)+factor(period), data=shots2,family="binomial")
summary(fit1)
shots2$makeodds<-fitted(fit1)
shots2$EV<-shots2$makeodds*ifelse(shots2$type=="3pt",3,2)
shots3<-shots2[which(shots2$y>7),]
locmakes<-data.frame(table(shots3[, c("x", "y")]))
s1k <- shots2[with(shots2, player %in% names(which(table(player)>=1000))), ]
pps<-aggregate(points~player,s1k,mean)
sort<-pps[order(-PPS$points),]
head(sort,10)
evps<-aggregate(EV~player,s1k,mean)
sort2<-evps[order(-evps$EV),]
head(sort2,10)
s1k$EM<-s1k$points-s1k$EV
emps<-aggregate(EM~player,s1k,mean)
sort3<-emps[order(-emps$EV),]
head(sort3,10)
The error message occurs when trying to order on a character column, because the unary minus is not defined for character data. A possible workaround is to wrap the column in xtfrm(), which converts it to sortable numeric ranks that can then be negated (rev() is sometimes suggested instead, but it does not give a true descending sort in general), like so:
column_a = c("a","a","b","b","c","c")
column_b = seq(6)
df = data.frame(column_a, column_b)
df$column_a = as.character(df$column_a)
df[with(df, order(-column_a, column_b)),]
> Error in -column_a : invalid argument to unary operator
df[with(df, order(-xtfrm(column_a), column_b)),]
column_a column_b
5 c 5
6 c 6
3 b 3
4 b 4
1 a 1
2 a 2
Let me know if it works in your case.
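If only one sort key is involved, a simpler alternative is the decreasing argument of order(), which avoids the unary minus entirely; for example:
# descending order on the character column, no negation needed
df[order(df$column_a, decreasing = TRUE), ]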
In the code below, emps$EV doesn't exist: aggregate(EM ~ player, s1k, mean) produces a data frame whose only columns are player and EM.
s1k$EM<-s1k$points-s1k$EV
emps<-aggregate(EM~player,s1k,mean)
sort3<-emps[order(-emps$EV),]
head(sort3,10)
You probably meant
s1k$EM<-s1k$points-s1k$EV
emps<-aggregate(EM~player,s1k,mean)
sort3<-emps[order(-emps$EM),]
head(sort3,10)

Subset by function's variable using $variable

I am having trouble subsetting from a list using a variable of my function.
rankhospital <- function(state,outcome,num = "best") {
#code here
e3<-dataframe(...,state.name,...)
if (num=="worst"){ return(worst(state,outcome))
}else if((num%in%b=="TRUE" & outcome=="heart attack")=="TRUE"){
sep<-split(e3,e3$state.name)
hosp.estado<-sep$state
hospital<-hosp.estado[num,1]
return(as.character(hospital))
I split my data frame by state (which is a variable of my function)
But hosp.estado<-sep$state doesn't work. I have also tried as.data.frame.
A call such as rankhospital("NY", ....) returns character(0).
When I replace sep$state with sep$"NY" directly in the code, it works perfectly, so I guess the problem is that I can't use a function's variable with $. Am I right? What could I use instead?
Thank you!!
Thank you!!
If state is a variable in your function, you can refer to a column whose name is given by state using sep[state] or sep[[state]]. The first produces a data frame with one column, named according to the value of state. The second produces the column as an unnamed vector.
df=data.frame(NY=rnorm(10),CA=rnorm(10), IL=rnorm(10))
state="NY"
df[state]
# NY
# 1 -0.79533912
# 2 -0.05487747
# 3 0.25014132
# 4 0.61824329
# 5 -0.17262350
# 6 -2.22390027
# 7 -1.26361438
# 8 0.35872890
# 9 -0.01104548
# 10 -0.94064916
df[[state]]
# [1] -0.79533912 -0.05487747 0.25014132 0.61824329 -0.17262350 -2.22390027 -1.26361438 0.35872890 -0.01104548 -0.94064916
class(df[state])
# [1] "data.frame"
class(df[[state]])
# [1] "numeric"
It seems like you are trying to get the top hospital in a state. You don't want to split here (see the result of sep to see what I mean). Instead, use:
as.character(e3[e3$state.name==state, 1][num])
This hopefully does what you want.
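To illustrate the row-subsetting idea on a hypothetical toy version of e3 (the column names here are assumptions matching the question):
# toy stand-in for e3: first column is the hospital id, state.name identifies the state
e3 <- data.frame(hospital = c("A", "B", "C", "D"),
                 state.name = c("NY", "NY", "TX", "TX"))
state <- "NY"
num <- 2
as.character(e3[e3$state.name == state, 1][num])
#> [1] "B"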
You need sep[[state]] instead of sep$state to pull out of your sep list the data frame that matches the state parameter of your function. Like this:
e3 <- read.csv("https://raw.github.com/Hindol/data-analysis-coursera/master/HW3/hospital-data.csv")
state <- "WY"
num <- 1:5
sep<-split(e3,e3$State)
hosp.estado<-sep[[state]]
hospital<-hosp.estado[num,1]
as.character(hospital)
# [1] "530002" "530006" "530008" "530010" "530011"
