How to speed up pattern matching using vectors in R

I have a column in one dataframe with city and state names in it:
ac <- c("san francisco ca", "pittsburgh pa", "philadelphia pa", "washington dc", "new york ny", "aliquippa pa", "gainesville fl", "manhattan ks")
ac <- as.data.frame(ac)
I would like to search for the values in ac$ac in another data frame column, df$description, and return the value of column id if there is a match.
dput(df)
structure(list(month = c(202110L, 201910L, 202005L, 201703L,
201208L, 201502L), id = c(100559687L, 100558763L, 100558934L,
100558946L, 100543422L, 100547618L), description = c("residential local telephone service local with more san francisco ca flat rate with eas package plan includes voicemail call forwarding call waiting caller id call restriction three way calling id block speed dialing call return call screening modem rental voip transmission telephone access line 34 95 modem rental 7 00 total 41 95",
"digital video programming service multilatino ultra bensalem pa service includes digital economy multilatino digital preferred tier and certain additonal digital channels coaxial cable transmission",
"residential all distance telephone service unlimited voice only harrisburg pa flat rate with eas only features call waiting caller id caller id with call waiting call screening call forwarding call forwarding selective call return 69 3 way calling anonymous call rejection repeat dialing speed dial caller id blocking coaxial cable transmission",
"residential all distance telephone service unlimited voice only pittsburgh pa flat rate with eas only features call waiting caller id caller id with call waiting call screening call forwarding call forwarding selective call return 69 3 way calling anonymous call rejection repeat dialing speed dial caller id blocking",
"local spot advertising 30 second advertisement austin tx weekday 6 am 6 pm other audience demographic w18 49 number of rating points for daypart 0 29 average cpp 125",
"residential public switched toll interstate manhattan ks ks plan area residence switched toll base period average revenue per minute 0 18 minute online"
)), row.names = c(1L, 1245L, 3800L, 10538L, 20362L, 50000L), class = "data.frame")
I have tried to access the row indexes of the matches via the following methods:
which(ac$ac %in% df$description) -- this returns integer(0) (%in% tests exact equality, not substring matching).
grep(ac$ac, df$description, value = FALSE) -- this returns the first index, 1, but it isn't vectorized (grep only uses the first pattern).
str_detect(string = ac$ac, pattern = df$description) -- but this returns all FALSE, which is incorrect (note the string and pattern arguments are also swapped).
My question: how do I search for ac$ac in df$description and return the corresponding value of df$id in the event of a match? Note that the vectors are not of the same length. I am looking for ALL matches, not just the first. I would prefer something simple and fast, because the actual datasets that I will be using have over 100k rows each but any suggestions or ideas are welcome. Thanks.
Edit: Following Andre's initial answer below, the title of the question was changed to reflect the expanded scope.
Edit (12/7): Bounty added to generate additional interest in a fast, efficient, scalable solution.
Edit (12/8): Clarification: I would like to be able to add the id variable from df to the ac dataframe, as in ac$id.

The simplest solutions are usually the fastest!
Here is my suggestion:
str = paste0(ac, collapse="|")
df$id[grep(str, df$description)]
But you can also do it this way:
df$id[as.logical(rowSums(!is.na(sapply(ac, function(x) stringr::str_match(df$description, x)))))]
Or this way
df$id[grepl(str, df$description, perl=T)]
However, these need to be compared. For the benchmark below I also included the suggestions from @Andre Wildberg and @Martina C. Arnolda.
Below is the benchmark.
str = paste0(ac, collapse="|")
fFiolka1 = function() df$id[grep(str, df$description)]
fFiolka2 = function() df$id[as.logical(rowSums(!is.na(sapply(ac, function(x) stringr::str_match(df$description, x)))))]
fFiolka3 = function() df$id[grepl(str, df$description, perl=T)]
fWildberg1 = function() df$id[unlist(sapply(ac, function(x) grep(x, df$description)))]
fWildberg2 = function() df$id[as.logical(rowSums(sapply(ac, function(x) stri_detect_regex(df$description, x))))]
fArnolda1 = function() df[grep(str, df$description), ]["id"]
fArnolda2 = function() df[stringi::stri_detect_regex(df$description, str), ]["id"]
fArnolda3 = function() df %>% filter(description %>% str_detect(str)) %>% select(id)
library(microbenchmark)
ggplot2::autoplot(microbenchmark(
fFiolka1(), fFiolka2(), fFiolka3(),
fWildberg1(), fWildberg2(),
fArnolda1(), fArnolda2(), fArnolda3(),
times=100))
Note: for the sake of simplicity I left ac as a vector!
ac <- c("san francisco ca", "pittsburgh pa", "philadelphia pa", "washington dc", "new york ny", "aliquippa pa", "gainesville fl", "manhattan ks")
Special update for @jvalenti
OK, now I understand better what you want to achieve. However, in order to fully show the best solution, I have slightly modified your data. Here it is:
library(tidyverse)
ac <- c("san francisco ca", "pittsburgh pa", "philadelphia pa", "washington dc", "new york ny", "aliquippa pa", "gainesville fl", "manhattan ks")
ac = tibble(ac = ac)
df = structure(list(
month = c(202110L, 201910L, 202005L, 201703L, 201208L, 201502L),
id = c(100559687L, 100558763L, 100558934L, 100558946L, 100543422L, 100547618L),
description = c(
"residential local telephone pittsburgh pa local with more san francisco ca flat rate with eas philadelphia pa plan includes voicemail call forwarding call waiting caller id call restriction three way calling id block speed dialing call return call screening modem rental voip transmission telephone access line 34 95 modem rental 7 00 total 41 95",
"digital video san francisco ca pittsburgh pa multilatino ultra bensalem pa service includes digital economy multilatino digital preferred tier and certain additonal digital channels coaxial cable transmission",
"residential all distance telephone pittsburgh pa unlimited voice only harrisburg pa flat rate with eas only features call waiting caller id caller id with call waiting call screening call forwarding call forwarding selective call return 69 3 way calling anonymous call rejection repeat dialing speed dial caller id blocking coaxial cable transmission",
"residential all distance telephone pittsburgh pa unlimited voice philadelphia pa san francisco ca pa flat rate with eas only features call waiting caller id caller id with call waiting call screening call forwarding call forwarding selective call return 69 3 way calling anonymous call rejection repeat dialing speed dial caller id blocking",
"local spot advertising 30 second advertisement austin tx weekday 6 am 6 pm other audience demographic w18 49 number of rating points for daypart 0 29 average cpp 125",
"residential public switched toll pittsburgh pa manhattan ks ks plan area residence switched toll base san philadelphia pa ca average revenue per minute 0 18 minute online"
)), row.names = c(1L, 1245L, 3800L, 10538L, 20362L, 50000L), class = "data.frame")
Below you will find four different solutions: one based on a for loop, two based on functions from the dplyr package, and one using a function from the collapse package.
fSolution1 = function(){
id = vector("list", nrow(ac))
for(i in seq_along(ac$ac)){
id[[i]] = df$id[grep(ac$ac[i], df$description)]
}
ac %>% mutate(id = id) %>% unnest(id)
}
fSolution1()
fSolution2 = function(){
ac %>% group_by(ac) %>%
mutate(id = list(df$id[grep(ac, df$description)])) %>%
unnest(id)
}
fSolution2()
fSolution3 = function(){
ac %>% rowwise(ac) %>%
mutate(id = list(df$id[grep(ac, df$description)])) %>%
unnest(id)
}
fSolution3()
fSolution4 = function(){
ac %>%
collapse::ftransform(id = lapply(ac, function(x) df$id[grep(x, df$description)])) %>%
unnest(id)
}
fSolution4()
Note that for the given data, all the functions return the following table as a result:
# A tibble: 12 x 2
ac id
<chr> <int>
1 san francisco ca 100559687
2 san francisco ca 100558763
3 san francisco ca 100558946
4 pittsburgh pa 100559687
5 pittsburgh pa 100558763
6 pittsburgh pa 100558934
7 pittsburgh pa 100558946
8 pittsburgh pa 100547618
9 philadelphia pa 100559687
10 philadelphia pa 100558946
11 philadelphia pa 100547618
12 manhattan ks 100547618
It's time for a benchmark
library(microbenchmark)
ggplot2::autoplot(microbenchmark(
fSolution1(), fSolution2(), fSolution3(), fSolution4(), times=100))
It is perhaps no surprise to anyone that the collapse-based solution is the fastest. However, second place may be a big surprise: the good old for loop!! Anyone still want to say that for is slow?
Special update for @Gwang-Jin Kim
Operating on pre-extracted vectors does not change much. Look below.
df_ac = ac$ac
df_description = df$description
df_id = df$id
fSolution5 = function(){
id = vector("list", length = length(df_ac))
for(i in seq_along(df_ac)){
id[[i]] = df_id[grep(df_ac[i], df_description)]
}
ac %>% mutate(id = id) %>% unnest(id)
}
fSolution5()
library(microbenchmark)
ggplot2::autoplot(microbenchmark(
fSolution1(), fSolution2(), fSolution3(), fSolution4(), fSolution5(), times=100))
But the combination of for and ftransform can be surprising!!!
fSolution6 = function(){
id = vector("list", nrow(ac))
for(i in seq_along(ac$ac)){
id[[i]] = df$id[grep(ac$ac[i], df$description)]
}
ac %>% collapse::ftransform(id = id) %>% unnest(id)
}
fSolution6()
library(microbenchmark)
ggplot2::autoplot(microbenchmark(
fSolution1(), fSolution2(), fSolution3(), fSolution4(), fSolution5(), fSolution6(), times=100))
Last update for @jvalenti
Dear jvalenti, in your question you wrote "I have a column in one dataframe with city and state names" and that your actual datasets "have over 100k rows each". My conclusion is that it is very likely that a given city will appear several times in your description variable.
However, in a comment you wrote "I don't want to change the number of rows in ac".
So what kind of results do you expect? Let's see what can be done.
Solution 1 - we return all ids as a list of vectors
ac %>% collapse::ftransform(id = map(ac, ~df$id[grep(.x, df$description)]))
# # A tibble: 8 x 2
# ac id
# * <chr> <list>
# 1 san francisco ca <int [3]>
# 2 pittsburgh pa <int [5]>
# 3 philadelphia pa <int [3]>
# 4 washington dc <int [0]>
# 5 new york ny <int [0]>
# 6 aliquippa pa <int [0]>
# 7 gainesville fl <int [0]>
# 8 manhattan ks <int [1]>
Solution 2 - we return only the first id
ac %>% collapse::ftransform(id = map_int(ac, ~df$id[grep(.x, df$description)][1]))
# # A tibble: 8 x 2
# ac id
# * <chr> <int>
# 1 san francisco ca 100559687
# 2 pittsburgh pa 100559687
# 3 philadelphia pa 100559687
# 4 washington dc NA
# 5 new york ny NA
# 6 aliquippa pa NA
# 7 gainesville fl NA
# 8 manhattan ks 100547618
Solution 3 - we return only the last id
ac %>%
collapse::ftransform(id = map_int(ac, function(x) {
idx = grep(x, df$description)
ifelse(length(idx)>0, df$id[idx[length(idx)]], NA)}))
# # A tibble: 8 x 2
# ac id
# * <chr> <int>
# 1 san francisco ca 100558946
# 2 pittsburgh pa 100547618
# 3 philadelphia pa 100547618
# 4 washington dc NA
# 5 new york ny NA
# 6 aliquippa pa NA
# 7 gainesville fl NA
# 8 manhattan ks 100547618
Solution 4 - or maybe you would like to pick a random id from all possible matches
ac %>%
collapse::ftransform(id = map_int(ac, function(x) {
idx = grep(x, df$description)
ifelse(length(idx)==0, NA, ifelse(length(idx)==1, df$id[idx], df$id[sample(idx, 1)]))}))
# # A tibble: 8 x 2
# ac id
# * <chr> <int>
# 1 san francisco ca 100558763
# 2 pittsburgh pa 100559687
# 3 philadelphia pa 100547618
# 4 washington dc NA
# 5 new york ny NA
# 6 aliquippa pa NA
# 7 gainesville fl NA
# 8 manhattan ks 100547618
Solution 5 - if you want to see all the ids while keeping the number of rows in ac unchanged
ac %>%
collapse::ftransform(id = map(ac, function(x) {
idx = grep(x, df$description)
if(length(idx)==0) tibble(id = NA, idn = "id1") else tibble(
id = df$id[idx],
idn = paste0("id",1:length(id)))})) %>%
unnest(id) %>%
pivot_wider(ac, names_from = idn, values_from = id)
# # A tibble: 8 x 6
# ac id1 id2 id3 id4 id5
# <chr> <int> <int> <int> <int> <int>
# 1 san francisco ca 100559687 100558763 100558946 NA NA
# 2 pittsburgh pa 100559687 100558763 100558934 100558946 100547618
# 3 philadelphia pa 100559687 100558946 100547618 NA NA
# 4 washington dc NA NA NA NA NA
# 5 new york ny NA NA NA NA NA
# 6 aliquippa pa NA NA NA NA NA
# 7 gainesville fl NA NA NA NA NA
# 8 manhattan ks 100547618 NA NA NA NA
Unfortunately, your description does not indicate which of the above five solutions is acceptable for you. You will have to decide for yourself.

Perhaps this is an option?
ac$id <- sapply(ac$ac, function(x) df$id[grep(x, df$description)])
# ac id
# 1 san francisco ca 100559687
# 2 pittsburgh pa 100558946
# 3 philadelphia pa
# 4 washington dc
# 5 new york ny
# 6 aliquippa pa
# 7 gainesville fl
# 8 manhattan ks 100547618

Try this sapply with grep.
df$id[ unlist( sapply( ac$ac, function(x) grep(x, df$description ) ) ) ]
[1] 100559687 100558946 100547618
EDIT: try stri_detect_regex from stringi. It should be 2-5 times faster.
library(stringi)
df$id[ as.logical( rowSums( sapply( ac$ac, function(x)
stri_detect_regex( df$description, x ) ) ) ) ]
[1] 100559687 100558946 100547618
Microbenchmark on an extended data set with 1.728M rows:
Memory should not be a problem unless you are using a system with less than 4Gb RAM total.
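The post doesn't show how the data set was extended; a minimal sketch, assuming plain replication of the six example rows (288000 copies gives the row count below):
# hypothetical reconstruction of the extended benchmark data
df <- df[rep(seq_len(nrow(df)), times = 288000), ]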
nrow(df)
[1] 1728000
library(microbenchmark)
microbenchmark(
"grep1" = { res <- sapply(ac$ac, function(x) df$id[grep(x, df$description)]) },
"grep2" = { res <- df$id[ unlist( sapply( ac$ac, function(x) grep(x, df$description ) ) ) ] },
"stringi" = { res <- df$id[ as.logical( rowSums( sapply( ac$ac, function(x) stri_detect_regex( df$description, x ) ) ) ) ] }, times=10 )
Unit: seconds
expr min lq mean median uq max neval cld
grep1 96.90757 97.98706 100.13299 99.05837 101.99050 107.04312 10 b
grep2 97.51382 97.66425 100.00610 99.20753 101.17921 106.86661 10 b
stringi 46.15548 46.65894 48.68073 47.29635 50.15713 53.50351 10 a
Memory footprint during microbenchmark:
Path: /Library/Frameworks/R.framework/Versions/4.0/Resources/bin/exec/R
Physical footprint: 638.3M
Physical footprint (peak): 1.8G

You can use regex_inner_join from package fuzzyjoin
> library(fuzzyjoin)
> regex_inner_join(df, ac, by = c(description = "ac"))
month id
1 202110 100559687
2 201703 100558946
3 201502 100547618
description
1 residential local telephone service local with more san francisco ca flat rate with eas package plan includes voicemail call forwarding call waiting caller id call restriction three way calling id block speed dialing call return call screening modem rental voip transmission telephone access line 34 95 modem rental 7 00 total 41 95
2 residential all distance telephone service unlimited voice only pittsburgh pa flat rate with eas only features call waiting caller id caller id with call waiting call screening call forwarding call forwarding selective call return 69 3 way calling anonymous call rejection repeat dialing speed dial caller id blocking
3 residential public switched toll interstate manhattan ks ks plan area residence switched toll base period average revenue per minute 0 18 minute online
ac
1 san francisco ca
2 pittsburgh pa
3 manhattan ks
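If, per the question's edit, you want the ids attached to ac instead, a hedged sketch using the right-join variant from the same package (my addition): regex_right_join keeps every row of ac (the regex side), yielding NA ids for cities with no match; a city matching several descriptions will appear once per match.
library(fuzzyjoin)
library(dplyr)
# keep every row of ac; select only the columns the OP asked for
regex_right_join(df, ac, by = c(description = "ac")) %>% select(ac, id)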

Checking with a regular expression and inexpensive functions should be fast:
First, we generate the pattern to be checked: ac_regex <- paste(ac$ac, collapse = "|").
There are several ways to detect matches in description and subset. Here are three:
# 1 grep()
df[grep(ac_regex, df$description), ]["id"]
# 2 stringi::stri_detect_*()
df[stri_detect_regex(df$description, ac_regex), ]["id"]
# 3 stringr::str_detect() + tidy subsetting
df %>% filter(description %>% str_detect(ac_regex)) %>% select(id)
All three return the desired subset of df:
id
1 100559687
2 100558946
3 100547618
(You need the package stringi for option 2 and the tidyverse for option 3.)
Let's benchmark (using package bench):
bench::mark(
base_grep = df[grep(ac_regex, df$description), ]["id"],
base_stringi = df[stringi::stri_detect_regex(df$description, ac_regex), ]["id"],
tidy = df %>% filter(description %>% str_detect(ac_regex)) %>% select(id),
check = F
)
expression median
<bch:expr> <bch:tm>
1 base_grep 146.61µs
2 base_stringi 119.6µs
3 tidy 1.99ms
I'd go with stringi!
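And if, per the question's later edit, you want the ids attached to ac rather than a subset of df, a small sketch building on the same idea (my addition, not part of the original benchmark; a list-column, since a city can match several descriptions):
library(stringi)
# one list entry per city: all matching ids (possibly empty)
ac$id <- lapply(ac$ac, function(p) df$id[stri_detect_regex(df$description, p)])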

First, there is no c$c assignment in the provided code: all the data is assigned to a variable called c, and that variable does not have the c column (c$c) you are trying to work with. (This apparently refers to an earlier revision of the question, in which the dataframe was named c.)
Second, it is very bad practice to assign data to a variable named after a basic R function, as in c <- c(...).

Related

How to remove values in a column based on other column values equaling the column values above it?

I am currently coding in R and merged two dataframes so I could keep all the information together, but I don't want the column "Cost" to be duplicated multiple times (it was, due to the unique values of the last 3 columns). I want the cost 100 to appear only in the first row, and then be blanked out for every other instance where the columns "State", "Market", "Date", and "Cost" are the same as above. I attached what the dataframe looks like and what I want it to be changed to. Thank you!
[attached screenshots: current dataframe vs. desired result]
Please use indexing, as in this example:
name_of_your_dataset[nrow_init:nrow_fin, ncol] <- NA
In your case, assuming the name of your dataset is 'data':
data[2:4, 4] <- NA
Here is a solution using duplicated with your dataframe (df):
State Market Date Cost Word format Type
1 AZ Phoenix 10-20-2020 100 HELLO AM Sports related
2 AZ Phoenix 10-21-2020 100 GOODBYE PM Non Sports related
3 AZ Phoenix 10-22-2020 100 YES FM Country
4 AZ Phoenix 10-23-2020 100 NONE CM Rock
Set duplicates to NA
df$Cost[duplicated(df$Cost)] <- NA
Output:
State Market Date Cost Word format Type
1 AZ Phoenix 10-20-2020 100 HELLO AM Sports related
2 AZ Phoenix 10-21-2020 NA GOODBYE PM Non Sports related
3 AZ Phoenix 10-22-2020 NA YES FM Country
4 AZ Phoenix 10-23-2020 NA NONE CM Rock
The Date column differs across rows, so I think you want to replace duplicated Cost values within every State and Market combination.
library(dplyr)
df <- df %>%
group_by(State, Market) %>%
mutate(Cost = replace(Cost, duplicated(Cost), NA)) %>%
ungroup
df
# State Market Date Cost Word format Type
# <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
#1 AZ Phoenix 10-20-2020 100 HELLO AM Sports related
#2 AZ Phoenix 10-21-2020 NA GOODBYE PM Non Sports related
#3 AZ Phoenix 10-22-2020 NA YES FM Country
#4 AZ Phoenix 10-23-2020 NA NONE CM Rock
data
It is easier to help if you provide data in a reproducible format
df <- structure(list(State = c("AZ", "AZ", "AZ", "AZ"), Market = c("Phoenix",
"Phoenix", "Phoenix", "Phoenix"), Date = c("10-20-2020", "10-21-2020",
"10-22-2020", "10-23-2020"), Cost = c(100, 100, 100, 100), Word = c("HELLO",
"GOODBYE", "YES", "NONE"), format = c("AM", "PM", "FM", "CM"),
Type = c("Sports related", "Non Sports related", "Country",
"Rock")), row.names = c(NA, -4L), class = "data.frame")

Mutate DF1 based on DF2 with a check

Newbie here with a dataframe/mutate question. I want to update a dataframe (df1) based on data in another dataframe (df2). For one-offs I've used mutate, so I figure this is the way to go. Additionally, I would like a check column (TRUE/FALSE) indicating whether the field in df1 was updated.
For Example..
df1-
State
<chr>
1 N.Y.
2 FL
3 AL
4 MS
5 IL
6 WS
7 WA
8 N.J.
9 N.D.
10 S.D.
11 CALL
df2
State New_State
<chr> <chr>
1 N.Y. New York
2 FL Florida
3 AL Alabama
4 MS Mississippi
5 IL Illinois
6 WS Wisconsin
7 WA Washington
8 N.J. New Jersey
9 N.D. North Dakota
10 S.D. South Dakota
11 CAL California
I want the output to look like this
df3
New_State Test
<chr>
1 New York TRUE
2 Florida TRUE
3 Alabama TRUE
4 Mississippi TRUE
5 Illinois TRUE
6 Wisconsin TRUE
7 Washington TRUE
8 New Jersey TRUE
9 North Dakota TRUE
10 South Dakota TRUE
11 CALL FALSE
In essence, I want R to read the data in df1, change df1 based on the match in df2 by swapping in the full state name, and mark the row TRUE if it was updated (N.Y. to New York) and FALSE if not (CALL vs CAL).
Thanks in advance for any and all help.
This should give you the result you're looking for:
match_vec <- match(df1$State, table = df2$State)
This vector matches the abbreviated state names in df1 against those in df2; where there's no match, you end up with a missing value:
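For the example data, the printed vector (omitted above) would look like this:
match_vec
# [1]  1  2  3  4  5  6  7  8  9 10 NA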
Then the following code using dplyr should produce the df3 you requested.
library(dplyr)
df3 <- df1 %>%
mutate(New_State = df2$New_State[match_vec]) %>%
mutate(Test = !is.na(match_vec)) %>%
mutate(New_State = ifelse(is.na(New_State),
State, New_State)) %>%
select(New_State, Test)

Need to ID states from mixed names /IDs in location data

Need to ID states from mixed location data
Need to search for 50 states abbreviations & 50 states full names, and return state abbreviation
N <- 1:10
Loc <- c("Los Angeles, CA", "Manhattan, NY", "Florida, USA", "Chicago, IL" , "Houston, TX",
+ "Texas, USA", "Corona, CA", "Georgia, USA", "WV NY NJ", "qwerty uy PO DOPL JKF" )
df <- data.frame(N, Loc)
# Objective: create a variable state such that
# state contains the abbreviated names of states from Loc:
# for "Los Angeles, CA", state = CA
# for "Florida, USA", state = FL
# for "WV NY NJ", state = NA
# for "qwerty NJuy PO DOPL JKF", state = NA (in spite of containing the string NJ, it is not wrapped in spaces)
# End result should be Newdf
State <- c("CA", "NY", "FL", "IL", "TX","TX", "CA", "GA", NA, NA)
Newdf <- data.frame(N, Loc, State)
Newdf
N Loc State
1 1 Los Angeles, CA CA
2 2 Manhattan, NY NY
3 3 Florida, USA FL
4 4 Chicago, IL IL
5 5 Houston, TX TX
6 6 Texas, USA TX
7 7 Corona, CA CA
8 8 Georgia, USA GA
9 9 WV NY NJ <NA>
10 10 qwerty uy PO DOPL JKF <NA>
Is there a package? or can a loop be written? Even if the schema could be demonstrated with a few states, that would be sufficient - I will post the full solution when I get to it. Btw, this is for a Twitter dataset downloaded using rtweet package, and the variable is: place_full_name
There are built-in constants in R, state.abb and state.name, which can be used.
vars <- stringr::str_extract(df$Loc, paste0('\\b',c(state.abb, state.name),
'\\b', collapse = '|'))
#[1] "CA" "NY" "Florida" "IL" "TX" "Texas" "CA" "Georgia" "WV" NA
If you want everything as abbreviations, we can go further and do :
inds <- vars %in% state.name
vars[inds] <- state.abb[match(vars[inds], state.name)]
vars
#[1] "CA" "NY" "FL" "IL" "TX" "TX" "CA" "GA" "WV" NA
However, we can see that in the 9th row you expect NA, but this returns "WV" because WV is a valid state abbreviation. In such cases, you need to prepare rules strict enough that they extract only true state references and nothing else.
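For instance, one possible stricter rule (a sketch of my own, not a complete solution): accept a two-letter code only when it directly follows a comma, while full state names count anywhere.
abb_re <- paste0(", (", paste(state.abb, collapse = "|"), ")\\b")
name_re <- paste0("\\b(", paste(state.name, collapse = "|"), ")\\b")
vars <- dplyr::coalesce(
stringr::str_match(df$Loc, abb_re)[, 2],
stringr::str_match(df$Loc, name_re)[, 2]
)
With this rule the 9th row ("WV NY NJ") yields NA, as requested.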
Utilising the built-in R constants, state.abb and state.name, we can try to extract these from the Loc with regular expressions.
state.abbs <- sub('.+, ([A-Z]{2})', '\\1', df$Loc)
state.names <- sub('^(.+),.+', '\\1', df$Loc)
Now, if a state abbreviation is not among the built-in ones, we can use match to find the positions of our state.names in the built-in state.name vector and use that to index state.abb; otherwise we keep what we already have. Those that match neither return NA.
df$state.abb <- ifelse(!state.abbs %in% state.abb,
state.abb[match(state.names, state.name)], state.abbs)
df
N Loc state.abb
1 1 Los Angeles, CA CA
2 2 Manhattan, NY NY
3 3 Florida, USA FL
4 4 Chicago, IL IL
5 5 Houston, TX TX
6 6 Texas, USA TX
7 7 Corona, CA CA
8 8 Georgia, USA GA
9 9 WV NY NJ <NA>
10 10 qwerty uy PO DOPL JKF <NA>

If/Else statement in R

I have two dataframes in R:
city price bedroom
San Jose 2000 1
Barstow 1000 1
NA 1500 1
Code to recreate:
data = data.frame(city = c('San Jose', 'Barstow', NA), price = c(2000, 1000, 1500), bedroom = c(1, 1, 1))
and:
Name Density
San Jose 5358
Barstow 547
Code to recreate:
population_density = data.frame(Name=c('San Jose', 'Barstow'), Density=c(5358, 547));
I want to create an additional column named city_type in the data dataset based on a condition: if the city's population density is above 1000 it's Urban, 1000 or lower is Suburb, and NA stays NA.
city price bedroom city_type
San Jose 2000 1 Urban
Barstow 1000 1 Suburb
NA 1500 1 NA
I am using a for loop for conditional flow:
for (row in 1:length(data)) {
if (is.na(data[row,'city'])) {
data[row, 'city_type'] = NA
} else if (population[population$Name == data[row,'city'],]$Density>=1000) {
data[row, 'city_type'] = 'Urban'
} else {
data[row, 'city_type'] = 'Suburb'
}
}
The for loop runs with no error on my original dataset with over 20000 observations; however, it yields a lot of wrong results (NA for the most part).
What has gone wrong here, and how can I do better to achieve my desired result?
The main bug: length(data) returns the number of columns (3), not rows, so your loop only ever visits the first three rows and every later row's city_type stays NA; seq_len(nrow(data)) is the safe way to iterate over rows. (Your loop also refers to population, while you defined population_density.) That aside, I have become quite a fan of dplyr pipelines for this type of join/filter/mutate workflow. So here is my suggestion:
library(dplyr)
# example data, including the NA city in the third row
data <- data.frame(city = c('San Jose', 'Barstow', NA), price = c(2000, 1000, 1500), bedroom = c(1, 1, 1))
population <- data.frame(Name = c('San Jose', 'Barstow'), Density = c(5358, 547))
data %>%
# join the two dataframes by matching up the city name columns
left_join(population, by = c("city" = "Name")) %>%
# add your new column based on the desired condition
mutate(
city_type = ifelse(Density >= 1000, "Urban", "Suburb")
)
Output:
city price bedroom Density city_type
1 San Jose 2000 1 5358 Urban
2 Barstow 1000 1 547 Suburb
3 <NA> 1500 1 NA <NA>
Using ifelse, create city_type in population_density, then use match:
population_density$city_type=ifelse(population_density$Density>1000,'Urban','Suburb')
data$city_type=population_density$city_type[match(data$city,population_density$Name)]
data
city price bedroom city_type
1 San Jose 2000 1 Urban
2 Barstow 1000 1 Suburb
3 <NA> 1500 1 <NA>

Showing multiple columns in aggregate function including strings/characters in R

R noob question here.
Let's say I have this data frame:
City State Pop
Fresno CA 494
San Franciso CA 805
San Jose CA 945
San Diego CA 1307
Los Angeles CA 3792
Reno NV 225
Henderson NV 257
Las Vegas NV 583
Gresham OR 105
Salem OR 154
Eugene OR 156
Portland OR 583
Fort Worth TX 741
Austin TX 790
Dallas TX 1197
San Antonio TX 1327
Houston TX 2100
I want to get, let's say, every 3rd lowest population per State, which would give:
City State Pop
San Jose CA 945
Las Vegas NV 583
Eugene OR 156
Dallas TX 1197
I tried this one:
ord_pop_state <- aggregate(Pop ~ State , data = ord_pop, function(x) { x[3] } )
And I get this one:
State Pop
CA 945
NV 583
OR 156
TX 1197
What am I missing here, in order to get the desired output that includes the City?
I would suggest trying the data.table package for such a task, as the syntax is easier and the code is more efficient. I would also suggest adding the order function to make sure that the data is sorted:
library(data.table)
setDT(ord_pop)[order(Pop), .SD[3L], keyby = State]
# State City Pop
# 1: CA San Jose 945
# 2: NV Las Vegas 583
# 3: OR Eugene 156
# 4: TX Dallas 1197
So basically, the data is first ordered by Pop, then we subset .SD (Subset of Data, i.e. the data for each group) by State.
Though this is easily solvable with base R too (assuming here that the data is sorted), we can just create an index per group and then do a simple subset by that index:
ord_pop$indx <- with(ord_pop, ave(Pop, State, FUN = seq))
ord_pop[ord_pop$indx == 3L, ]
# City State Pop indx
# 3 San Jose CA 945 3
# 8 Las Vegas NV 583 3
# 11 Eugene OR 156 3
# 15 Dallas TX 1197 3
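Note that this base-R approach assumes ord_pop is already sorted by Pop within State; if it might not be, sort first (a one-line guard):
ord_pop <- ord_pop[order(ord_pop$State, ord_pop$Pop), ]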
Here is a dplyr version:
df2 <- df %>%
group_by(state) %>% # Group observations by state
arrange(pop) %>% # Within those groups, sort in ascending order by pop
slice(3) # Extract the third row (the 3rd lowest) in each arranged group
Here's the toy data I used to test it:
set.seed(1)
df <- data.frame(state = rep(LETTERS[1:3], each = 5), city = rep(letters[1:5], 3), pop = round(rnorm(15, 1000, 100), digits=0))
And here's the output from that; it's a coincidence that 'b' was third-lowest in each case, not a glitch in the code:
> df2
Source: local data frame [3 x 3]
Groups: state
state city pop
1 A b 1018
2 B b 1049
3 C b 1039
In R, the same end result can be achieved using different packages; the choice of package is a trade-off between efficiency and simplicity of code.
Since you come from a strong SQL background, this might be easier to use:
library(sqldf)
#Example to return 3rd lowest population of a State
result <-sqldf('Select City,State,Pop from data order by Pop limit 1 offset 2;')
#Note the SQL query is a sample and needs to be modified to get the desired result.
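For completeness, a hedged sketch of the modified query: assuming a recent SQLite with window functions (3.25+, as bundled by current RSQLite), the per-State 3rd lowest can be expressed directly:
result <- sqldf("
  SELECT City, State, Pop FROM (
    SELECT City, State, Pop,
           ROW_NUMBER() OVER (PARTITION BY State ORDER BY Pop) AS rn
    FROM ord_pop)
  WHERE rn = 3;")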
