I'm trying to scrape a wiki table with this code:
library(tidyverse)
library(rvest)
my_url <- "https://en.wikipedia.org/wiki/List_of_Australian_Open_men%27s_singles_champions"
mytable <- read_html(my_url) %>% html_nodes("table") %>% .[[4]]
mytable <- mytable %>% html_table()
The problem is that in the returned table, the values in both name columns (Champion and Runner-up) are doubled. Well, not exactly doubled: each cell contains two forms of the same name, once as "surname, name" and once as "name surname". It does not look like that on the original wiki page, where only "name surname" is visible. Why does this happen, and how do I get rid of it? I need those columns to contain "name surname" only.
head(mytable)
Year[f] Country Champion Country Runner-up Score in the final[4][14]
1 1969 AUS Laver, RodRod Laver[b] ESP Gimeno, AndrésAndrés Gimeno 6–3, 6–4, 7–5
2 1970 USA Ashe, ArthurArthur Ashe AUS Crealy, DickDick Crealy 6–4, 9–7, 6–2
3 1971 AUS Rosewall, KenKen Rosewall USA Ashe, ArthurArthur Ashe 6–1, 7–5, 6–3
4 1972 AUS Rosewall, KenKen Rosewall AUS Anderson, MalcolmMalcolm Anderson 7–6(7–2), 6–3, 7–5
5 1973 AUS Newcombe, JohnJohn Newcombe NZL Parun, OnnyOnny Parun 6–3, 6–7, 7–5, 6–1
6 1974 USA Connors, JimmyJimmy Connors AUS Dent, PhilPhil Dent 7–6(9–7), 6–4, 4–6, 6–3
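The duplication most likely comes from hidden sort-key markup in Wikipedia's sortable tables: each name cell also contains a hidden "Surname, Name" element that html_table() concatenates with the visible text. A minimal rvest-only sketch (assuming the duplicates really do come from spans styled display:none) removes that markup before parsing:
library(rvest)
library(xml2)
page <- read_html(my_url)
tbl_node <- html_nodes(page, "table")[[4]]
# drop the hidden sort-key spans so html_table() only returns the visible "Name Surname" text
# (assumption: the duplicates come from <span style="display:none"> elements in each cell)
xml_remove(xml_find_all(tbl_node, ".//span[contains(@style, 'display:none')]"))
mytable <- html_table(tbl_node)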
htmltab could be used to scrape these Wiki tables.
library(htmltab)
#data cleaning steps
bFun <- function(node) {
  x <- XML::xmlValue(node)
  # convert the text encoding (needed on Windows) and strip everything from a trailing †/‡/html marker onward
  gsub("\\s[<†‡].*$", "", iconv(x, from = 'UTF-8', to = "Windows-1252", sub = "byte"))
}
df1 <- htmltab(doc = "https://en.wikipedia.org/wiki/List_of_Australian_Open_men%27s_singles_champions",
               which = 4,
               rm_superscript = FALSE,
               bodyFun = bFun) # the bodyFun cleanup is not required if you are running the code on a Mac
head(df1)
which gives
# Year[f] Country Champion Country Runner-up Score in the final[4][14]
#2 1969 AUS Rod Laver[b] ESP Andrés Gimeno 6–3, 6–4, 7–5
#3 1970 USA Arthur Ashe AUS Dick Crealy 6–4, 9–7, 6–2
#4 1971 AUS Ken Rosewall USA Arthur Ashe 6–1, 7–5, 6–3
#5 1972 AUS Ken Rosewall AUS Malcolm Anderson 7–6(7–2), 6–3, 7–5
#6 1973 AUS John Newcombe NZL Onny Parun 6–3, 6–7, 7–5, 6–1
#7 1974 USA Jimmy Connors AUS Phil Dent 7–6(9–7), 6–4, 4–6, 6–3
and
df2 <- htmltab(doc = "https://en.wikipedia.org/wiki/List_of_Wimbledon_gentlemen%27s_singles_champions",
               which = 3,
               rm_superscript = FALSE,
               bodyFun = bFun) # as above, bodyFun is not required if you are running the code on a Mac
head(df2)
gives
# Year[d] Country Champion Country Runner-up Score in the final[4]
#2 1877 BRI[e] Spencer Gore BRI William Marshall 6–1, 6–2, 6–4
#3 1878 BRI Frank Hadow BRI Spencer Gore 7–5, 6–1, 9–7
#4 1879 BRI John Hartley BRI Vere St. Leger Goold 6–2, 6–4, 6–2
#5 1880 BRI John Hartley BRI Herbert Lawford 6–3, 6–2, 2–6, 6–3
#6 1881 BRI William Renshaw BRI John Hartley 6–0, 6–1, 6–1
#7 1882 BRI William Renshaw BRI Ernest Renshaw 6–1, 2–6, 4–6, 6–2, 6–2
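If the bracketed footnote markers that remain in some cells (e.g. Rod Laver[b]) are unwanted, a small base-R cleanup sketch is:
# strip footnote markers such as "[b]" from every column of the scraped table
df1[] <- lapply(df1, function(col) gsub("\\[[a-z0-9]+\\]", "", col))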
Related
I have a text file containing information on book title, author name, and country of birth, which appear on separate lines as shown below:
Oscar Wilde
De Profundis
Ireland
Nathaniel Hawthorn
Birthmark
USA
James Joyce
Ulysses
Ireland
Walt Whitman
Leaves of Grass
USA
Is there any way to convert the text to a dataframe with these three items appearing as different columns:
ID Author Book Country
1 "Oscar Wilde" "De Profundis" "Ireland"
2 "Nathaniel Hawthorn" "Birthmark" "USA"
There are built-in functions for dealing with this kind of data:
# xx holds the raw text from the question (it is defined as test data further down in the thread)
data.frame(scan(text = xx, multi.line = TRUE,
                what = list(Author = "", Book = "", Country = ""), sep = "\n"))
# Author Book Country
#1 Oscar Wilde De Profundis Ireland
#2 Nathaniel Hawthorn Birthmark USA
#3 James Joyce Ulysses Ireland
#4 Walt Whitman Leaves of Grass USA
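The same call works directly on a file (a sketch, assuming the text is saved as test.txt as in another answer below), and the ID column asked for in the question can be added afterwards:
result <- data.frame(scan("test.txt", multi.line = TRUE,
                          what = list(Author = "", Book = "", Country = ""),
                          sep = "\n"))
result <- cbind(ID = seq_len(nrow(result)), result)
result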
You can create a 3-column matrix from one column of data.
# sep = ',' keeps each whole line as a single field, since the data contain no commas
dat <- read.table('data.txt', sep = ',')
result <- matrix(dat$V1, ncol = 3, byrow = TRUE) |>
  data.frame() |>
  setNames(c('Author', 'Book', 'Country'))
result <- cbind(ID = 1:nrow(result), result)
result
# ID Author Book Country
#1 1 Oscar Wilde De Profundis Ireland
#2 2 Nathaniel Hawthorn Birthmark USA
#3 3 James Joyce Ulysses Ireland
#4 4 Walt Whitman Leaves of Grass USA
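If you would rather not rely on a separator character that never occurs in the data, readLines() reads each line as-is (a small sketch; the rest of the code stays the same):
# each line becomes one element of V1, with no separator assumptions
dat <- data.frame(V1 = readLines('data.txt'))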
There aren't any built-in functions that handle data laid out like this directly, but you can reshape the data after importing it.
#Test data
xx <- "Oscar Wilde
De Profundis
Ireland
Nathaniel Hawthorn
Birthmark
USA
James Joyce
Ulysses
Ireland
Walt Whitman
Leaves of Grass
USA"
writeLines(xx, "test.txt")
And then the code
library(dplyr)
library(tidyr)
lines <- read.csv("test.txt", header=FALSE)
lines %>%
  mutate(
    rid = ((row_number() - 1) %% 3) + 1,   # position within each block of three lines (1 = author, 2 = book, 3 = country)
    pid = (row_number() - 1) %/% 3 + 1     # id of the block, i.e. one id per book
  ) %>%
  mutate(col = case_when(rid == 1 ~ "Author", rid == 2 ~ "Book", rid == 3 ~ "Country")) %>%
  select(-rid) %>%
  pivot_wider(names_from = col, values_from = V1)
Which returns
# A tibble: 4 x 4
pid Author Book Country
<dbl> <chr> <chr> <chr>
1 1 Oscar Wilde De Profundis Ireland
2 2 Nathaniel Hawthorn Birthmark USA
3 3 James Joyce Ulysses Ireland
4 4 Walt Whitman Leaves of Grass USA
I'm new to R and I'm not sure how to rephrase the question, but basically, I have this dataset coming from the following code:
data_url <- 'https://prod-scores-api.ausopen.com/year/2021/stats'
dat <- jsonlite::fromJSON(data_url)
men_aces <- bind_rows(dat$statistics$rankings[[1]]$players[1])
men_aces_table <- dat$players %>%
  inner_join(men_aces, by = c('uuid' = 'player_id')) %>%
  select(full_name, nationality)
Which resulted in this data frame:
full_name nationality.uuid nationality.name nationality.code
1 Novak Djokovic 99da9b29-eade-4ac3-a7b0-b0b8c2192df7 Serbia SRB
2 Alexander Zverev 99d83e85-3173-4ccc-9d91-8368720f4a47 Germany GER
3 Milos Raonic 07779acb-6740-4b26-a664-f01c0b54b390 Canada CAN
4 Daniil Medvedev fa925d2d-337f-4074-a0bd-afddb38d66e1 Russia RUS
5 Nick Kyrgios 9b11f78c-47c1-43c4-97d0-ba3381eb9f07 Australia AUS
nationality is a nested object inside each player object (check the JSON at the URL); it contains the properties shown above (uuid, name, code). If I select the full_name property I get a plain character value back, but I'm not sure how to select name out of that nested data frame (nationality) and rename it to country.
My expected outcome is:
full_name country
1 Novak Djokovic Serbia
2 Alexander Zverev Germany
3 Milos Raonic Canada
4 Daniil Medvedev Russia
5 Nick Kyrgios Australia
I would appreciate some help. Sorry I was unclear.
Use purrr::pmap_chr
library(tidyverse)
dat$players %>%
  inner_join(men_aces, by = c('uuid' = 'player_id')) %>%
  select(full_name, nationality) %>%
  mutate(nationality = pmap_chr(nationality, ~ ..2))  # ..2 picks the second field of each row, i.e. name
full_name nationality
1 Novak Djokovic Serbia
2 Alexander Zverev Germany
3 Milos Raonic Canada
4 Daniil Medvedev Russia
5 Nick Kyrgios Australia
6 Alexander Bublik Kazakhstan
7 Reilly Opelka United States of America
8 Jiri Vesely Czech Republic
9 Andrey Rublev Russia
10 Lloyd Harris South Africa
11 Aslan Karatsev Russia
12 Taylor Fritz United States of America
13 Matteo Berrettini Italy
14 Grigor Dimitrov Bulgaria
15 Feliciano Lopez Spain
16 Stefanos Tsitsipas Greece
17 Felix Auger-Aliassime Canada
18 Thanasi Kokkinakis Australia
19 Ugo Humbert France
20 Borna Coric Croatia
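To get the column named country as in the expected output, the same idea can be applied to men_aces_table from the question (a small variation of the pipeline above):
men_aces_table %>%
  mutate(country = pmap_chr(nationality, ~ ..2)) %>%
  select(full_name, country)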
You could do:
bind_cols(full_name = dat$players$full_name, country = dat$players$nationality$name)
# A tibble: 169 x 2
full_name country
<chr> <chr>
1 Novak Djokovic Serbia
2 Alexander Zverev Germany
3 Milos Raonic Canada
4 Daniil Medvedev Russia
5 Nick Kyrgios Australia
6 Alexander Bublik Kazakhstan
7 Reilly Opelka United States of America
8 Jiri Vesely Czech Republic
9 Andrey Rublev Russia
10 Lloyd Harris South Africa
Just add this line at the end:
newdf <- data.frame(full_name = men_aces_table$full_name, country = men_aces_table$nationality$name)
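Alternatively, the nested data-frame column can be spread into regular columns with tidyr (a sketch, assuming tidyr >= 1.0 for unpack()):
library(tidyr)
men_aces_table %>%
  unpack(nationality) %>%            # turns uuid, name and code into top-level columns
  select(full_name, country = name)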
I have a huge database (more than 65M rows) and I noticed that some cells are misplaced. As an example, let's say I have this:
library("tidyverse")
DATA <- tribble(
~SURNAME,~NAME,~STATE,~COUNTRY,
'Smith','Emma','California','USA',
'Johnson','Oliia','Texas','USA',
'Williams','James','USA','California',
'Jones','Noah','Pennsylvania','USA',
'Williams','Liam','Illinois','USA',
'Brown','Sophia','USA','Louisiana',
'Daves','Evelyn','USA','Oregon',
'Miller','Jacob','New Mexico','USA',
'Williams','Lucas','Connecticut','USA',
'Daves','John','California','USA',
'Jones','Carl','USA','Illinois'
)
> DATA
# A tibble: 11 x 4
SURNAME NAME STATE COUNTRY
<chr> <chr> <chr> <chr>
1 Smith Emma California USA
2 Johnson Oliia Texas USA
3 Williams James USA California
4 Jones Noah Pennsylvania USA
5 Williams Liam Illinois USA
6 Brown Sophia USA Louisiana
7 Daves Evelyn USA Oregon
8 Miller Jacob New Mexico USA
9 Williams Lucas Connecticut USA
10 Daves John California USA
11 Jones Carl USA Illinois
As you can see, the Country and State values are misplaced in some rows. How can I efficiently swap them?
Kind Regards,
Luiz.
Using data.table and the built-in state.name vector:
setDT(DATA)
# both right-hand sides are evaluated before assignment, so the two columns swap cleanly
DATA[COUNTRY %in% state.name, `:=`(COUNTRY = STATE, STATE = COUNTRY)]
DATA
# SURNAME NAME STATE COUNTRY
# 1: Smith Emma California USA
# 2: Johnson Oliia Texas USA
# 3: Williams James California USA
# 4: Jones Noah Pennsylvania USA
# 5: Williams Liam Illinois USA
# 6: Brown Sophia Louisiana USA
# 7: Daves Evelyn Oregon USA
# 8: Miller Jacob New Mexico USA
# 9: Williams Lucas Connecticut USA
# 10: Daves John California USA
# 11: Jones Carl Illinois USA
Check this solution (it assumes that the COUNTRY column uses ISO3 codes, e.g. MEX, CAN):
DATA %>%
  mutate(
    # rows where COUNTRY already looks like an ISO3 code keep their values; otherwise the two fields are swapped
    COUNTRY_TMP = if_else(str_detect(COUNTRY, '[A-Z]{3}'), COUNTRY, STATE),
    STATE = if_else(str_detect(COUNTRY, '[A-Z]{3}'), STATE, COUNTRY),
    COUNTRY = COUNTRY_TMP
  ) %>%
  select(-COUNTRY_TMP)
Assuming all country names follow the ISO3 format, we can first install the countrycode package. In this package there is a data frame called codelist with a column iso3c containing the ISO3 country codes. We can use that as follows to swap the country name.
library(tidyverse)
library(countrycode)
DATA2 <- DATA %>%
mutate(STATE2 = ifelse(STATE %in% codelist$iso3c &
!COUNTRY %in% codelist$iso3c, COUNTRY, STATE),
COUNTRY2 = ifelse(!STATE %in% codelist$iso3c &
COUNTRY %in% codelist$iso3c, COUNTRY, STATE)) %>%
select(-STATE, -COUNTRY) %>%
rename(STATE = STATE2, COUNTRY = COUNTRY2)
DATA2
# # A tibble: 11 x 4
# SURNAME NAME STATE COUNTRY
# <chr> <chr> <chr> <chr>
# 1 Smith Emma California USA
# 2 Johnson Oliia Texas USA
# 3 Williams James California USA
# 4 Jones Noah Pennsylvania USA
# 5 Williams Liam Illinois USA
# 6 Brown Sophia Louisiana USA
# 7 Daves Evelyn Oregon USA
# 8 Miller Jacob New Mexico USA
# 9 Williams Lucas Connecticut USA
# 10 Daves John California USA
# 11 Jones Carl Illinois USA
Hi, I have data where the year is embedded in the column name, as shown below, and I would like to reshape it to long format.
state<- c('MN', 'PA', 'NY')
city<- c('Minessota', 'Pittsburgh','Newyork')
POPEST2010<- c(2899, 344,4555)
POPEST2011<- c(4444, 348,8999)
POPEST2012<- c(555, 55,77665)
df<- data.frame(state,city, POPEST2010, POPEST2011, POPEST2012)
Any suggestions on how I can reshape to long format so I can see the data as follows:
state city year POPEST
MN Minessota 2010 2899
MN Minessota 2011 4444
MN Minessota 2012 555
and similarly for the other states. Any ideas? Thanks so much!
A solution using rename_all and gather (funs() is deprecated in recent dplyr, so a lambda is used instead):
df %>%
  rename_all(~ gsub('POPEST', '', .x)) %>%
  gather(year, POPEST, -state, -city)
Similar:
df %>%
  tidyr::gather(year, POPEST, matches("POPEST")) %>%
  mutate(year = sub("[^0-9]+", "", year))
# state city year POPEST
#1 MN Minessota 2010 2899
#2 PA Pittsburgh 2010 344
#3 NY Newyork 2010 4555
#4 MN Minessota 2011 4444
#5 PA Pittsburgh 2011 348
#6 NY Newyork 2011 8999
#7 MN Minessota 2012 555
#8 PA Pittsburgh 2012 55
#9 NY Newyork 2012 77665
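For completeness, the same reshape with tidyr's newer pivot_longer() (gather() is superseded); this is a sketch of the equivalent call:
library(tidyr)
df %>%
  pivot_longer(starts_with("POPEST"),
               names_to = "year",
               names_prefix = "POPEST",   # drop the prefix so only the year is kept
               values_to = "POPEST")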
I am working with maps and ggplot2 to visualize the number of certain crimes in each state for different years. The data set I am working with was produced by the FBI and can be downloaded from their site or from here. (If you don't want to download the data set I don't blame you; it is far too big to copy and paste into this question, and including only a fraction of it would not give enough information to recreate the graph.)
The problem is easier seen than described.
As you can see, California is missing a large chunk, as are a few other states. Here is the code that produced this plot:
# load libraries
library(maps)
library(ggplot2)
# load data
fbi <- read.csv("http://www.hofroe.net/stat579/crimes-2012.csv")
fbi <- subset(fbi, state != "United States")
states <- map_data("state")
# merge data sets by region
fbi$region <- tolower(fbi$state)
fbimap <- merge(fbi, states, by="region")
# plot robbery numbers by state for year 2012
fbimap12 <- subset(fbimap, Year == 2012)
qplot(long, lat, geom = "polygon", data = fbimap12,
      facets = ~Year, fill = Robbery, group = group)
This is what the states data looks like:
long lat group order region subregion
1 -87.46201 30.38968 1 1 alabama <NA>
2 -87.48493 30.37249 1 2 alabama <NA>
3 -87.52503 30.37249 1 3 alabama <NA>
4 -87.53076 30.33239 1 4 alabama <NA>
5 -87.57087 30.32665 1 5 alabama <NA>
6 -87.58806 30.32665 1 6 alabama <NA>
And this is what the fbi data looks like:
Year Population Violent Property Murder Forcible.Rape Robbery
1 1960 3266740 6097 33823 406 281 898
2 1961 3302000 5564 32541 427 252 630
3 1962 3358000 5283 35829 316 218 754
4 1963 3347000 6115 38521 340 192 828
5 1964 3407000 7260 46290 316 397 992
6 1965 3462000 6916 48215 395 367 992
Aggravated.Assault Burglary Larceny.Theft Vehicle.Theft abbr state region
1 4512 11626 19344 2853 AL Alabama alabama
2 4255 11205 18801 2535 AL Alabama alabama
3 3995 11722 21306 2801 AL Alabama alabama
4 4755 12614 22874 3033 AL Alabama alabama
5 5555 15898 26713 3679 AL Alabama alabama
6 5162 16398 28115 3702 AL Alabama alabama
I then merged the two sets along region. The subset I am trying to plot is
region Year Robbery long lat group
8283 alabama 2012 5020 -87.46201 30.38968 1
8284 alabama 2012 5020 -87.48493 30.37249 1
8285 alabama 2012 5020 -87.95475 30.24644 1
8286 alabama 2012 5020 -88.00632 30.24071 1
8287 alabama 2012 5020 -88.01778 30.25217 1
8288 alabama 2012 5020 -87.52503 30.37249 1
... ... ... ...
Any ideas on how I can create this plot without those ugly missing spots?
I played with your code. One thing I can tell is that something happened when you used merge. I drew the states map using geom_path and confirmed that there were a couple of stray lines which do not exist in the original map data. I then investigated further by comparing merge and inner_join, which do the same job here, and found one difference: merge changed the row order, so the polygon vertices were no longer in the right sequence, whereas inner_join preserved it. You will see this in the bit of California data below. Your approach was right, but merge somehow did not work in your favour; I am not sure why the function changed the order, though.
library(dplyr)
### Call US map polygon
states <- map_data("state")
### Get crime data
fbi <- read.csv("http://www.hofroe.net/stat579/crimes-2012.csv")
fbi <- subset(fbi, state != "United States")
fbi$state <- tolower(fbi$state)
### Check if both files have identical state names: The answer is NO
### states$region does not have Alaska, Hawaii, and Washington D.C.
### fbi$state does not have District of Columbia.
setdiff(fbi$state, states$region)
#[1] "alaska" "hawaii" "washington d. c."
setdiff(states$region, fbi$state)
#[1] "district of columbia"
### Select data for 2012 and choose two columns (i.e., state and Robbery)
fbi2 <- fbi %>%
filter(Year == 2012) %>%
select(state, Robbery)
Now I created two data frames with merge and inner_join.
### Create two data frames with merge and inner_join
ana <- merge(fbi2, states, by.x = "state", by.y = "region")
bob <- inner_join(fbi2, states, by = c("state" ="region"))
ana %>%
filter(state == "california") %>%
slice(1:5)
# state Robbery long lat group order subregion
#1 california 56521 -119.8685 38.90956 4 676 <NA>
#2 california 56521 -119.5706 38.69757 4 677 <NA>
#3 california 56521 -119.3299 38.53141 4 678 <NA>
#4 california 56521 -120.0060 42.00927 4 667 <NA>
#5 california 56521 -120.0060 41.20139 4 668 <NA>
bob %>%
filter(state == "california") %>%
slice(1:5)
# state Robbery long lat group order subregion
#1 california 56521 -120.0060 42.00927 4 667 <NA>
#2 california 56521 -120.0060 41.20139 4 668 <NA>
#3 california 56521 -120.0060 39.70024 4 669 <NA>
#4 california 56521 -119.9946 39.44241 4 670 <NA>
#5 california 56521 -120.0060 39.31636 4 671 <NA>
ggplot(data = bob, aes(x = long, y = lat, fill = Robbery, group = group)) +
geom_polygon()
The problem is the order of the arguments to merge:
fbimap <- merge(fbi, states, by="region")
has the thematic data first and the geo data second. Switching the order with
fbimap <- merge(states, fbi, by="region")
should make the polygons close up.
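If the polygons still look broken after the merge, a more robust variant (a sketch, using the objects from the question) is to join with dplyr, which preserves the map's row order, and to sort by the order column before plotting:
library(dplyr)
library(ggplot2)
fbimap <- states %>%
  left_join(fbi, by = "region") %>%
  arrange(group, order)   # keep the polygon vertices in drawing order
fbimap12 <- subset(fbimap, Year == 2012)
ggplot(fbimap12, aes(long, lat, group = group, fill = Robbery)) +
  geom_polygon() +
  facet_wrap(~ Year)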