I have two columns and need a third that "subtracts" one string from the other using dplyr.
This is a very simple example for the sake of clarity; a split/separate approach is not valid in my case.
x <- c("FRANCE","GERMANY","RUSSIA")
y <- c("Paris FRANCE", "Berlin GERMANY", "Moscow RUSSIA")
cities <- data.frame(x,y)
cities
x y
1 FRANCE Paris FRANCE
2 GERMANY Berlin GERMANY
3 RUSSIA Moscow RUSSIA
Expected results:
x y new
1 FRANCE Paris FRANCE Paris
2 GERMANY Berlin GERMANY Berlin
3 RUSSIA Moscow RUSSIA Moscow
What I've tried so far (to no avail):
This returns the very same data frame but with the city removed instead (the opposite of what I want):
cities %>% mutate(new = setdiff(x,y))
x y new
1 FRANCE Paris FRANCE FRANCE
2 GERMANY Berlin GERMANY GERMANY
3 RUSSIA Moscow RUSSIA RUSSIA
Conversely, setdiff() in the reverse order just gives back the initial data:
cities %>% mutate(new = setdiff(y,x))
x y new
1 FRANCE Paris FRANCE Paris FRANCE
2 GERMANY Berlin GERMANY Berlin GERMANY
3 RUSSIA Moscow RUSSIA Moscow RUSSIA
Using gsub() to remove the match only worked for the first row and issued a warning:
cities %>% mutate(new = gsub(x,"",y))
Warning message:
In gsub(x, "", y) :
argument 'pattern' has length > 1 and only the first element will be used
x y new
1 FRANCE Paris FRANCE Paris
2 GERMANY Berlin GERMANY Berlin GERMANY
3 RUSSIA Moscow RUSSIA Moscow RUSSIA
We can use stringr::str_replace:
library(tidyverse)
cities %>%
  mutate_if(is.factor, as.character) %>%
  mutate(new = trimws(str_replace(y, x, "")))
# x y new
#1 FRANCE Paris FRANCE Paris
#2 GERMANY Berlin GERMANY Berlin
#3 RUSSIA Moscow RUSSIA Moscow
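For what it's worth, str_replace() succeeds here because it is vectorised over both the string and the pattern, which is exactly what gsub() is not. A base-R sketch of the same idea (assuming the columns are plain character, e.g. built with stringsAsFactors = FALSE):
# gsub() is not vectorised over 'pattern', but mapply() applies it pair-wise;
# fixed = TRUE treats the country as a literal substring, trimws() drops the
# leftover space
cities$new <- trimws(mapply(function(p, s) gsub(p, "", s, fixed = TRUE),
                            cities$x, cities$y, USE.NAMES = FALSE))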
Here is a solution with base R:
x <- c("FRANCE","GERMANY","RUSSIA")
y <- c("Paris FRANCE", "Berlin GERMANY", "Moscow RUSSIA")
cities <- data.frame(x, y, stringsAsFactors = FALSE)
cities$new <- mapply(function(a, b)
  setdiff(strsplit(a, ' ')[[1]], strsplit(b, ' ')[[1]]),
  cities$y, cities$x)
Output:
x y new
1 FRANCE Paris FRANCE Paris
2 GERMANY Berlin GERMANY Berlin
3 RUSSIA Moscow RUSSIA Moscow
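One small note on the mapply() call: by default it uses the values of cities$y as names of the result. Passing USE.NAMES = FALSE (or wrapping the call in unname()) keeps the new column free of those names:
cities$new <- mapply(function(a, b)
  setdiff(strsplit(a, ' ')[[1]], strsplit(b, ' ')[[1]]),
  cities$y, cities$x, USE.NAMES = FALSE)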
Hope this helps!
I'm new to R and I'm not sure how to phrase the question, but basically I have this dataset coming from the following code:
data_url <- 'https://prod-scores-api.ausopen.com/year/2021/stats'
dat <- jsonlite::fromJSON(data_url)
men_aces <- bind_rows(dat$statistics$rankings[[1]]$players[1])
men_aces_table <- dat$players %>%
  inner_join(men_aces, by = c('uuid' = 'player_id')) %>%
  select(full_name, nationality)
Which resulted in this data frame:
full_name nationality.uuid nationality.name nationality.code
1 Novak Djokovic 99da9b29-eade-4ac3-a7b0-b0b8c2192df7 Serbia SRB
2 Alexander Zverev 99d83e85-3173-4ccc-9d91-8368720f4a47 Germany GER
3 Milos Raonic 07779acb-6740-4b26-a664-f01c0b54b390 Canada CAN
4 Daniil Medvedev fa925d2d-337f-4074-a0bd-afddb38d66e1 Russia RUS
5 Nick Kyrgios 9b11f78c-47c1-43c4-97d0-ba3381eb9f07 Australia AUS
nationality is the nested object inside each player object (check the JSON URL); it contains the properties shown above (uuid, name, code). If I select the full_name property, I get the character value right back.
I'm not sure how to select name out of that nested data frame (nationality) and rename it to country.
My expected outcome is:
full_name country
1 Novak Djokovic Serbia
2 Alexander Zverev Germany
3 Milos Raonic Canada
4 Daniil Medvedev Russia
5 Nick Kyrgios Australia
I would appreciate some help. Sorry I was unclear.
Use purrr::pmap_chr, which walks over the nested nationality data frame row by row, passing its columns as ..1, ..2, ..3; here ..2 picks out the second column, name:
library(tidyverse)
dat$players %>%
  inner_join(men_aces, by = c('uuid' = 'player_id')) %>%
  select(full_name, nationality) %>%
  mutate(nationality = pmap_chr(nationality, ~ ..2))
full_name nationality
1 Novak Djokovic Serbia
2 Alexander Zverev Germany
3 Milos Raonic Canada
4 Daniil Medvedev Russia
5 Nick Kyrgios Australia
6 Alexander Bublik Kazakhstan
7 Reilly Opelka United States of America
8 Jiri Vesely Czech Republic
9 Andrey Rublev Russia
10 Lloyd Harris South Africa
11 Aslan Karatsev Russia
12 Taylor Fritz United States of America
13 Matteo Berrettini Italy
14 Grigor Dimitrov Bulgaria
15 Feliciano Lopez Spain
16 Stefanos Tsitsipas Greece
17 Felix Auger-Aliassime Canada
18 Thanasi Kokkinakis Australia
19 Ugo Humbert France
20 Borna Coric Croatia
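Since the expected output calls the column country, the conversion and the rename can be done in one step; a minimal variation of the same pipeline (still assuming name is the second column of the nested frame):
dat$players %>%
  inner_join(men_aces, by = c('uuid' = 'player_id')) %>%
  transmute(full_name, country = pmap_chr(nationality, ~ ..2))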
You could do:
bind_cols(full_name = dat$players$full_name, country = dat$players$nationality$name)
# A tibble: 169 x 2
full_name country
<chr> <chr>
1 Novak Djokovic Serbia
2 Alexander Zverev Germany
3 Milos Raonic Canada
4 Daniil Medvedev Russia
5 Nick Kyrgios Australia
6 Alexander Bublik Kazakhstan
7 Reilly Opelka United States of America
8 Jiri Vesely Czech Republic
9 Andrey Rublev Russia
10 Lloyd Harris South Africa
Just add this line at the end:
newdf <- data.frame(full_name = men_aces_table$full_name, country = men_aces_table$nationality$name)
After splitting a data frame into multiple data frames by country, I wanted to calculate the mean of the column centralization in each country's data frame. I used tapply, which worked, and then tried sapply(), but the weird thing is that every country's mean repeats the mean of the first country. I cannot figure out why, and since I am asked to use sapply as an exercise, I would like to know how I can improve my code. Any pointer would be appreciated (it might be a dumb mistake).
INPUT/my code:
strikes.df = read.csv("http://www.stat.cmu.edu/~pfreeman/strikes.csv")
strikes.by.country=split(strikes.df,strikes.df$country)
my.fun = function(x = strikes.by.country){
  l = length(strikes.by.country)
  for (i in 1:l){
    return(strikes.by.country[[i]]$centralization %>% mean)
  }
}
sapply(strikes.by.country, my.fun)
#using tapply()
tapply(strikes.df[, "centralization"], INDEX = strikes.df[, "country"], FUN = mean)
OUTPUT
# sapply() result: every country repeats the first country's mean
  Australia     Austria     Belgium      Canada     Denmark
   0.374644    0.374644    0.374644    0.374644    0.374644
    Finland      France     Germany     Ireland       Italy
   0.374644    0.374644    0.374644    0.374644    0.374644
      Japan Netherlands New.Zealand      Norway      Sweden
   0.374644    0.374644    0.374644    0.374644    0.374644
Switzerland          UK         USA
   0.374644    0.374644    0.374644
# tapply() result: the expected per-country means
  Australia     Austria     Belgium      Canada     Denmark
0.374644022 0.997670495 0.749485177 0.002244134 0.499958552
    Finland      France     Germany     Ireland       Italy
0.750374065 0.002729909 0.249968231 0.499711882 0.250699502
      Japan Netherlands New.Zealand      Norway      Sweden
0.124675342 0.749602699 0.375940378 0.875341821 0.875253817
Switzerland          UK         USA
0.499990005 0.375946785 0.002390639
I was instructed to use sapply() after using split(); that's why the only thing that occurred to me was a for loop.
The problem is that my.fun ignores its argument x and returns on the first pass through the loop, so every country gets the mean of the first list element. Better to use sapply on the unique country names; actually there's no need to split anything.
sapply(unique(strikes.df$country), function(x)
  mean(strikes.df[strikes.df$country == x, "centralization"]))
# Australia Austria Belgium Canada Denmark Finland France
# 0.374644022 0.997670495 0.749485177 0.002244134 0.499958552 0.750374065 0.002729909
# Germany Ireland Italy Japan Netherlands New.Zealand Norway
# 0.249968231 0.499711882 0.250699502 0.124675342 0.749602699 0.375940378 0.875341821
# Sweden Switzerland UK USA
# 0.875253817 0.499990005 0.375946785 0.002390639
But if you depend on using split as well, you may do:
sapply(split(strikes.df$centralization, strikes.df$country), mean)
# Australia Austria Belgium Canada Denmark Finland France
# 0.374644022 0.997670495 0.749485177 0.002244134 0.499958552 0.750374065 0.002729909
# Germany Ireland Italy Japan Netherlands New.Zealand Norway
# 0.249968231 0.499711882 0.250699502 0.124675342 0.749602699 0.375940378 0.875341821
# Sweden Switzerland UK USA
# 0.875253817 0.499990005 0.375946785 0.002390639
Or write it in two lines:
s <- split(strikes.df$centralization, strikes.df$country)
sapply(s, mean)
Edit
If splitting the whole data frame is required, do
s <- split(strikes.df, strikes.df$country)
sapply(s, function(x) mean(x[, "centralization"]))
or
foo <- function(x) mean(x[, "centralization"])
sapply(s, foo)
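For completeness, the original my.fun can be repaired in the same spirit: it should use its argument x (one element of the split list) instead of looping over the global list and returning on the first iteration. A minimal sketch:
# each call receives one country's data frame, so just take its mean
my.fun <- function(x) mean(x$centralization)
sapply(strikes.by.country, my.fun)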
Using the gapminder::gapminder dataset as example data, this can be achieved like so. The example code computes mean life expectancy (lifeExp) by continent.
# sapply: simplifies. returns a vector
sapply(split(gapminder::gapminder, gapminder::gapminder$continent), function(x) mean(x$lifeExp, na.rm = TRUE))
#> Africa Americas Asia Europe Oceania
#> 48.86533 64.65874 60.06490 71.90369 74.32621
# lapply: returns a list
lapply(split(gapminder::gapminder, gapminder::gapminder$continent), function(x) mean(x$lifeExp, na.rm = TRUE))
#> $Africa
#> [1] 48.86533
#>
#> $Americas
#> [1] 64.65874
#>
#> $Asia
#> [1] 60.0649
#>
#> $Europe
#> [1] 71.90369
#>
#> $Oceania
#> [1] 74.32621
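If you want the return type checked as well, vapply() is a stricter drop-in for sapply() here; a sketch of the same computation:
# vapply: like sapply, but the expected type and length of each result is declared
vapply(split(gapminder::gapminder, gapminder::gapminder$continent),
       function(x) mean(x$lifeExp, na.rm = TRUE),
       numeric(1))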
I would like to do a group_by or aggregate. I have something like:
> head(affiliation_clean)
Affiliation_ID Affiliation_Name City Country
1 000001 New Mexico State University Las Cruces Las Cruces United States
2 000001 New Mexico State University Las Cruces Las Cruces <NA>
3 000001 New Mexico State University Las Cruces <NA> <NA>
4 000002 Palo Alto Research Center Incorporated Palo Alto <NA>
5 000002 Palo Alto Research Center Incorporated <NA> United States
6 000002 Palo Alto Research Center Incorporated <NA> <NA>
Grouping by "Affiliation_ID" and taking the longest string of "Affiliation_Name", "City" and "Country", I would like to get:
> head(affiliation_clean)
Affiliation_ID Affiliation_Name City Country
1 000001 New Mexico State University Las Cruces Las Cruces United States
2 000002 Palo Alto Research Center Incorporated Palo Alto United States
Thanks in advance.
Here is a dplyr solution based on your description: for each Affiliation_ID, keep the longest string in every column.
library(dplyr)
dat2 <- dat %>%
  group_by(Affiliation_ID) %>%
  summarise_all(funs(.[which.max(nchar(.))][1]))
dat2
# # A tibble: 2 x 4
# Affiliation_ID Affiliation_Name City Country
# <int> <chr> <chr> <chr>
# 1 1 New Mexico State University Las Cruces Las Cruces United States
# 2 2 Palo Alto Research Center Incorporated Palo Alto United States
DATA
dat <-read.table(text = " Affiliation_ID Affiliation_Name City Country
1 '000001' 'New Mexico State University Las Cruces' 'Las Cruces' 'United States'
2 '000001' 'New Mexico State University Las Cruces' 'Las Cruces' NA
3 '000001' 'New Mexico State University Las Cruces' NA NA
4 '000002' 'Palo Alto Research Center Incorporated' 'Palo Alto' NA
5 '000002' 'Palo Alto Research Center Incorporated' NA 'United States'
6 '000002' 'Palo Alto Research Center Incorporated' NA NA",
header = TRUE, stringsAsFactors = FALSE)
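Note that funs() has since been deprecated in dplyr; with dplyr 1.0 or later the same idea can be written with across(), e.g. as a sketch:
dat %>%
  group_by(Affiliation_ID) %>%
  # per group and per column, keep the longest (non-NA) string
  summarise(across(everything(), ~ .x[which.max(nchar(.x))][1]))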
Assuming there is a single unique City/Country for each Affiliation_ID/Affiliation_Name pair: after grouping by the first two columns, take the unique non-NA element of every other column with summarise_all.
library(dplyr)
affiliation_clean %>%
  group_by(Affiliation_ID, Affiliation_Name) %>%
  summarise_all(funs(unique(.[!is.na(.)])))
# A tibble: 2 x 4
# Groups: Affiliation_ID [?]
# Affiliation_ID Affiliation_Name City Country
# <chr> <chr> <chr> <chr>
#1 000001 New Mexico State University Las Cruces Las Cruces United States
#2 000002 Palo Alto Research Center Incorporated Palo Alto United States
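If a group ever held more than one distinct non-NA value, unique() would return more than one element, which (depending on the dplyr version) either errors or expands the group into extra rows. A slightly more defensive sketch (my variation, not part of the answer above) keeps the first non-NA value per column:
affiliation_clean %>%
  group_by(Affiliation_ID, Affiliation_Name) %>%
  # first non-NA value per column; returns NA if a column is all NA in a group
  summarise_all(~ first(na.omit(.)))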
I am relatively new to R.
I have a data frame locs that has one variable, V1, and looks like this:
V1
edmonton general hospital
cardiovascular institute, hospital san carlos, madrid spain
hospital of santa maria, lisbon, portugal
and another data frame cities, with two variables, that looks like this:
city country
edmonton canada
san carlos spain
los angeles united states
santa maria united states
tokyo japan
madrid spain
santa maria portugal
lisbon portugal
I want to create two new variables in locs that relate any string match of city found within V1, so that locs looks like this:
V1 city country
edmonton general hospital edmonton canada
hospital san carlos, madrid spain san carlos, madrid spain
hospital of santa maria, lisbon, portugal santa maria, lisbon portugal, united states
A few things to note: V1 may have multiple country names. Also, if there is a repeat country (for instance, both san carlos and madrid are in spain), then I only want one instance of the country.
Please advise.
Thanks.
A solution using tidyverse and stringr. locs2 is the final output.
library(tidyverse)
library(stringr)
locs2 <- locs %>%
  rowwise() %>%
  mutate(city = list(str_match(V1, cities$city))) %>%
  unnest() %>%
  drop_na(city) %>%
  left_join(cities, by = "city") %>%
  group_by(V1) %>%
  summarise_all(funs(toString(sort(unique(.)))))
Result
locs2 %>% as.data.frame()
V1 city country
1 cardiovascular institute, hospital san carlos, madrid spain madrid, san carlos spain
2 edmonton general hospital edmonton canada
3 hospital of santa maria, lisbon, portugal lisbon, santa maria portugal, united states
DATA
library(tidyverse)
locs <- data_frame(V1 = c("edmonton general hospital",
"cardiovascular institute, hospital san carlos, madrid spain",
"hospital of santa maria, lisbon, portugal"))
cities <- read.table(text = "city country
edmonton canada
'san carlos' spain
'los angeles' 'united states'
'santa maria' 'united states'
tokyo japan
madrid spain
'santa maria' portugal
lisbon portugal",
header = TRUE, stringsAsFactors = FALSE)
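A rough alternative sketch that avoids rowwise() (not part of the answer above): detect which entries of cities$city occur in each V1 string with str_detect(), keep the matching rows, then collapse as before. It assumes the city names can be treated as plain patterns and that every V1 matches at least one city.
locs %>%
  # for each V1, keep the rows of 'cities' whose city name appears in the string
  mutate(matches = map(V1, ~ cities[str_detect(.x, cities$city), ])) %>%
  unnest(matches) %>%
  group_by(V1) %>%
  summarise(city = toString(sort(unique(city))),
            country = toString(sort(unique(country))))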
I have a dataset Data as below:
Region Country Market Price
EUROPE France France 30.4502
EUROPE Israel Israel 5.14110965
EUROPE France France 8.99665
APAC CHINA CHINA 2.6877232
APAC INDIA INDIA 60.9004
AFME SL SL 54.1729685
LA BRAZIL BRAZIL 56.8606917
EUROPE RUSSIA RUSSIA 11.6843732
APAC BURMA BURMA 63.5881232
AFME SA SA 115.0733685
I would like to summarize the data at the Region level and get the sum of Price for every Region.
I want the output to be like below.
Data Output
Region Country Price
EUROPE France 30.4502
EUROPE Israel 5.14110965
EUROPE France 8.99665
EUROPE RUSSIA 11.6843732
Europe 56.27233285
APAC BURMA 63.5881232
APAC CHINA 2.6877232
APAC INDIA 60.9004
Apac 127.1762464
AFME BAHARAIN 54.1729685
AFME SA 115.0733685
AFME 169.246337
LA BRAZIL 56.8606917
LA 56.8606917
I have used the summaryBy function from the doBy package; I tried the code below.
myfun1 <- function(x){ c(s = sum(x)) }
DB <- summaryBy(Data$Price ~ Region + Country, data = Data, FUN = myfun1)
Any help in this regard is very much appreciated.
You can do this by using dplyr to generate a summary table:
library(dplyr)
totals <- data %>% group_by(Region) %>% summarise(Country="",Price=sum(Price))
And then merging the summary with the rest of the data (data[-3] drops the Market column so the two frames have matching columns):
summary <- rbind(data[-3], totals)
Then you can sort by Region to put the summary with the region:
summary <- summary %>% arrange(Region)
Output:
Region Country Price
1 AFME SL 54.1730
2 AFME SA 115.0734
3 AFME 169.2463
4 APAC CHINA 2.6877
5 APAC INDIA 60.9004
6 APAC BURMA 63.5881
7 APAC 127.1762
8 EUROPE France 30.4502
9 EUROPE Israel 5.1411
10 EUROPE France 8.9967
11 EUROPE RUSSIA 11.6844
12 EUROPE 56.2723
13 LA BRAZIL 56.8607
14 LA 56.8607
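The two steps can also be written as a single pipe; a sketch (sorting on the empty Country keeps each region's total after its detail rows):
data %>%
  select(Region, Country, Price) %>%
  # append one total row per Region
  bind_rows(data %>%
              group_by(Region) %>%
              summarise(Country = "", Price = sum(Price))) %>%
  arrange(Region, Country == "")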
You have to split the data by the Region factor and sum Price for each level:
lapply(split(data, data$Region), function(x) sum(x$Price))
Or, if you need to present result as you have shown:
totals = lapply(split(data, data$Region), function(x)
  rbind(x, data.frame(Region = unique(x$Region), Country = "",
                      Market = "", Price = sum(x$Price))))
do.call(rbind, totals)
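And if only the per-Region sums are needed (without the presentation rows), base R's aggregate() gives them directly:
# one row per Region with the total Price
aggregate(Price ~ Region, data = data, FUN = sum)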