I'm trying to 're-count' a column in R and running into issues while cleaning up the data. I'm cleaning the data by location, starting by changing CA to California.
library(dplyr)  # needed for count()
all_location <- read.csv("all_location.csv", stringsAsFactors = FALSE)
all_location <- count(all_location, location)
all_location <- all_location[with(all_location, order(-n)), ]
all_location
# A tibble: 100 x 2
location n
<chr> <int>
1 CA 3216
2 Alaska 2985
3 Nevada 949
4 Washington 253
5 Hawaii 239
6 Montana 218
7 Puerto Rico 149
8 California 126
9 Utah 83
10 NA 72
From the above, there's both CA and California. Below, I use grep to find CA and replace it with California. However, the result still shows two separate California rows instead of grouping them into one.
ca1 <- grep("CA", all_location$location)
all_location$location <- replace(all_location$location, ca1, "California")
all_location
# A tibble: 100 x 2
location n
<chr> <int>
1 California 3216
2 Alaska 2985
3 Nevada 949
4 Washington 253
5 Hawaii 239
6 Montana 218
7 Puerto Rico 149
8 California 126
9 Utah 83
10 NA 72
My goal is to combine both rows into a single total under n.
all_location$location[substr(all_location$location, 1, 5) %in% "Calif" ] <- "California"
This makes sure that everything starting with "Calif" gets turned into "California".
I am assuming that the "California" entries already present contain a trailing space (e.g. "California "), which is why this is happening.
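Even with consistent labels, the two rows stay separate until the counts are re-aggregated. A minimal sketch with dplyr, assuming the counted tibble from above:
library(dplyr)
# Collapse duplicate location labels and sum their counts
all_location <- all_location %>%
  group_by(location) %>%
  summarise(n = sum(n)) %>%
  arrange(desc(n))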
I am trying to merge two data frames in R, and an error message keeps coming up even though the variable types should all be correct.
Here is my code:
library(dplyr)  # for %>% and rename()
team_info <- baseballr::mlb_teams(season = 2022)
team_info_mlb <- subset(team_info, sport_name == 'Major League Baseball')
tim2 <- team_info_mlb %>%
  rename('home_team' = club_name)
tim3 <- subset(tim2, select = c('team_full_name', 'home_team'))
new_pf <- baseballr::fg_park(yr = 2022)
new_pf <- subset(new_pf, select = c('home_team', '1yr'))
info_pf <- merge(tim3, new_pf, by = 'home_team')
The final line is where the problem happens. Let me know if anyone has advice.
The problem is that the data have some fancy class attributes.
> class(tim3)
[1] "baseballr_data" "tbl_df" "tbl" "data.table" "data.frame"
> class(new_pf)
[1] "baseballr_data" "tbl_df" "tbl" "data.table" "data.frame"
Just wrap them in as.data.frame(). Since both data sets share the same by variable, you may omit the explicit specification.
info_pf <- merge(as.data.frame(tim3), as.data.frame(new_pf))
info_pf
# home_team team_full_name 1yr
# 1 Angels Los Angeles Angels 102
# 2 Astros Houston Astros 99
# 3 Athletics Oakland Athletics 94
# 4 Blue Jays Toronto Blue Jays 106
# 5 Braves Atlanta Braves 105
# 6 Brewers Milwaukee Brewers 102
# 7 Cardinals St. Louis Cardinals 92
# 8 Cubs Chicago Cubs 103
# 9 Diamondbacks Arizona Diamondbacks 103
# 10 Dodgers Los Angeles Dodgers 98
# 11 Giants San Francisco Giants 99
# 12 Guardians Cleveland Guardians 97
# 13 Mariners Seattle Mariners 94
# 14 Marlins Miami Marlins 97
# 15 Mets New York Mets 91
# 16 Nationals Washington Nationals 97
# 17 Orioles Baltimore Orioles 108
# 18 Padres San Diego Padres 96
# 19 Phillies Philadelphia Phillies 98
# 20 Pirates Pittsburgh Pirates 101
# 21 Rangers Texas Rangers 98
# 22 Rays Tampa Bay Rays 89
# 23 Red Sox Boston Red Sox 111
# 24 Reds Cincinnati Reds 112
# 25 Rockies Colorado Rockies 112
# 26 Royals Kansas City Royals 108
# 27 Tigers Detroit Tigers 94
# 28 Twins Minnesota Twins 99
# 29 White Sox Chicago White Sox 100
# 30 Yankees New York Yankees 99
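The failure most likely comes from merge() dispatching on the "data.table" class attribute. If you would rather fix the objects than the call, a sketch under the assumption that they only carry the class labels (not real data.table internals):
# Strip the extra classes in place, then merge as plain data frames
class(tim3) <- "data.frame"
class(new_pf) <- "data.frame"
info_pf <- merge(tim3, new_pf, by = "home_team")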
I have three datasets:
one containing a bunch of information about storms.
one that contains full names of the cities and the abbreviations.
and one that contains the year and population for each state.
What I want to do is add a column called population to the first data frame, storms, containing the population per year for each state, using the other two data frames, state_codes and states.
Can anyone point me in the right direction? Below is some sample data.
> head(storms)
num yr mo dy time state magnitude injuries fatalities crop_loss
1 1 1950 1 3 11:00:00 MO 3 3 0 0
2 1 1950 1 3 11:10:00 IL 3 0 0 0
3 2 1950 1 3 11:55:00 IL 3 3 0 0
4 3 1950 1 3 16:00:00 OH 1 1 0 0
5 4 1950 1 13 05:25:00 AR 3 1 1 0
6 5 1950 1 25 19:30:00 MO 2 5 0 0
> head(state_codes)
Name Abbreviation
1 Alabama AL
2 Alaska AK
3 Arizona AZ
4 Arkansas AR
5 California CA
6 Colorado CO
> head(states)
Year Alabama Arizona Arkansas California Colorado Connecticut Delaware
1 1900 1830 124 1314 1490 543 910 185
2 1901 1907 131 1341 1550 581 931 187
3 1902 1935 138 1360 1623 621 952 188
4 1903 1957 144 1384 1702 652 972 190
5 1904 1978 151 1419 1792 659 987 192
6 1905 2012 158 1447 1893 680 1010 194
You didn't provide much data to test with, but this should do it.
First, join storms to state_codes, so that it will have state names that are in states. We can rename yr to match states$Year at the same time.
Then pivot states to be in long form.
Finally, join the new version of storms to the long version of states.
library(dplyr)
library(tidyr)
storms %>%
  left_join(state_codes, by = c("state" = "Abbreviation")) %>%
  rename(Year = yr) -> storms.with.names
states %>%
  pivot_longer(-Year, names_to = "Name",
               values_to = "Population") -> long.states
storms.with.names %>%
  left_join(long.states) -> result
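By default left_join() matches on every shared column name (here Year and Name); spelling the keys out gives the same result and is self-documenting:
storms.with.names %>%
  left_join(long.states, by = c("Year", "Name")) -> result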
This answer doesn't use dplyr, but I'm offering it because I know that it's very fast on large datasets.
It follows the same first step as the accepted answer: merge state names into the storms dataset. But then it does something clever (I stole the idea): it creates a matrix of row and column numbers, and then uses that to extract the elements from the "states" dataset that you want for the new column.
# Add the state names to storms
storms <- merge(storms, state_codes, by.x = 6, by.y = 2, all.x = TRUE)
# Get row and column indexes for the elements in 'states'
r <- match(storms$yr, states$Year)  # the sample data uses yr/Year, not year
c <- match(storms$Name, names(states))  # Name is the column merged in from state_codes
smat <- cbind(r, c)
# And grab them into a new vector
storms$population <- states[smat]
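A quick spot-check of the new column, using the sample column names from the question:
head(storms[, c("state", "Name", "yr", "population")])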
I have the following dataset for California housing data:
head(calif_cluster,15)
MedianHouseValue MedianIncome MedianHouseAge TotalRooms TotalBedrooms Population
1 190300 4.20510 16 2697.00 490.00 1462
2 150800 2.54810 33 2821.00 652.00 1206
3 252600 6.08290 17 6213.20 1276.05 3288
4 269700 4.03680 52 919.00 213.00 413
5 91200 1.63680 28 3072.00 790.00 1375
6 66200 2.18980 30 744.00 156.00 410
7 148800 2.63640 39 620.95 136.00 348
8 384800 4.46150 20 2270.00 498.00 1070
9 153200 2.75000 22 1931.00 445.00 1009
10 66200 1.60057 36 973.00 219.00 613
11 461500 3.78130 43 3070.00 668.00 1240
12 144600 2.85000 22 5175.00 1213.00 2804
13 143700 5.09410 8 6213.20 1276.05 3288
14 195500 5.30620 16 2918.00 444.00 1697
15 268800 2.42110 22 620.95 136.00 348
Households Latitude Longitude cluster_kmeans gender_dom marital race edu_level rental
1 515 38.48 -122.47 1 M other black jrcollege rented
2 640 38.00 -122.13 1 F other hispanic doctorate owned
3 1162 33.88 -117.79 3 M other white jrcollege owned
4 193 37.85 -122.25 1 M single others jrcollege owned
5 705 38.13 -122.26 1 F single white doctorate rented
6 165 38.96 -122.21 1 F single others jrcollege owned
7 125 34.01 -118.18 2 M married others postgrad owned
8 521 33.83 -118.38 2 F single white graduate rented
9 407 38.95 -121.04 1 M married others postgrad leased
10 187 35.34 -119.01 2 M single hispanic doctorate owned
11 646 33.76 -118.12 2 F other others highschl leased
12 1091 37.95 -122.05 3 M other white graduate rented
13 1162 36.87 -119.75 3 M other others postgrad leased
14 444 32.93 -117.13 2 M other asian jrcollege owned
15 125 37.71 -120.98 1 F single asian postgrad leased
As I have latitude & longitude information in the dataset, I would like to extract the corresponding county for each row using R. Also, is it possible to get the capital city (or largest city) for each of the extracted counties? These could make my stratified analysis more insightful; I intend to do some clustering/mapping exercises.
Take a look at ggmap::revgeocode.
code
library(ggmap)
revgeocode(c(-122.47, 38.48))  # longitude then latitude
# [1] "2233 Sulphur Springs Ave, St Helena, CA 94574, USA"
library(dplyr)
library(magrittr)
library(tidyr)  # for separate()
# add the full address using the Google API through ggmap
df12 %<>% rowwise %>% mutate(address = revgeocode(c(Longitude, Latitude))) %>% ungroup
# structure all the info you need
df12 %<>% separate(address, c("street_address", "city", "county", "country"), remove = F, sep = ",")
result
df12 %>% select(Longitude,Latitude,address,county)
# A tibble: 15 x 4
# Longitude Latitude address county
# * <dbl> <dbl> <chr> <chr>
# 1 -122.47 38.48 2233 Sulphur Springs Ave, St Helena, CA 94574, USA CA 94574
# 2 -122.13 38.00 3400-3410 Brookside Dr, Martinez, CA 94553, USA CA 94553
# 3 -117.79 33.88 19721 Bluefield Plaza, Yorba Linda, CA 92886, USA CA 92886
# 4 -122.25 37.85 6365 Florio St, Oakland, CA 94618, USA CA 94618
# 5 -122.26 38.13 119 Mimosa Ct, Vallejo, CA 94589, USA CA 94589
# 6 -122.21 38.96 Unnamed Road, Arbuckle, CA 95912, USA CA 95912
# 7 -118.18 34.01 4360-4414 Noakes St, Los Angeles, CA 90023, USA CA 90023
# 8 -118.38 33.83 903 Serpentine St, Redondo Beach, CA 90277, USA CA 90277
# 9 -121.04 38.95 14666-14690 Musso Rd, Auburn, CA 95603, USA CA 95603
# 10 -119.01 35.34 800 Ming Ave, Bakersfield, CA 93307, USA CA 93307
# 11 -118.12 33.76 6211-6295 E Marina Dr, Long Beach, CA 90803, USA CA 90803
# 12 -122.05 37.95 1120 Carey Dr, Concord, CA 94520, USA CA 94520
# 13 -119.75 36.87 1815-1899 E Pryor Dr, Fresno, CA 93720, USA CA 93720
# 14 -117.13 32.93 9010-9016 Danube Ln, San Diego, CA 92126, USA CA 92126
# 15 -120.98 37.71 748-1298 Claribel Rd, Modesto, CA 95356, USA CA 95356
data
df1 <- read.table(text = "MedianHouseValue MedianIncome MedianHouseAge TotalRooms TotalBedrooms Population
1 190300 4.20510 16 2697.00 490.00 1462
2 150800 2.54810 33 2821.00 652.00 1206
3 252600 6.08290 17 6213.20 1276.05 3288
4 269700 4.03680 52 919.00 213.00 413
5 91200 1.63680 28 3072.00 790.00 1375
6 66200 2.18980 30 744.00 156.00 410
7 148800 2.63640 39 620.95 136.00 348
8 384800 4.46150 20 2270.00 498.00 1070
9 153200 2.75000 22 1931.00 445.00 1009
10 66200 1.60057 36 973.00 219.00 613
11 461500 3.78130 43 3070.00 668.00 1240
12 144600 2.85000 22 5175.00 1213.00 2804
13 143700 5.09410 8 6213.20 1276.05 3288
14 195500 5.30620 16 2918.00 444.00 1697
15 268800 2.42110 22 620.95 136.00 348",header=T,stringsAsFactors=F)
df2 <- read.table(text = "Households Latitude Longitude cluster_kmeans gender_dom marital race edu_level rental
1 515 38.48 -122.47 1 M other black jrcollege rented
2 640 38.00 -122.13 1 F other hispanic doctorate owned
3 1162 33.88 -117.79 3 M other white jrcollege owned
4 193 37.85 -122.25 1 M single others jrcollege owned
5 705 38.13 -122.26 1 F single white doctorate rented
6 165 38.96 -122.21 1 F single others jrcollege owned
7 125 34.01 -118.18 2 M married others postgrad owned
8 521 33.83 -118.38 2 F single white graduate rented
9 407 38.95 -121.04 1 M married others postgrad leased
10 187 35.34 -119.01 2 M single hispanic doctorate owned
11 646 33.76 -118.12 2 F other others highschl leased
12 1091 37.95 -122.05 3 M other white graduate rented
13 1162 36.87 -119.75 3 M other others postgrad leased
14 444 32.93 -117.13 2 M other asian jrcollege owned
15 125 37.71 -120.98 1 F single asian postgrad leased",header=T,stringsAsFactors=F)
df12 <- cbind(df1,df2)
I don't think the library offers an option to get the capital or largest city of each county, but I think you won't have too much trouble building a lookup table from online info.
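If the Google API is a hurdle, one offline alternative (a sketch, not a drop-in for the code above) is the maps package, whose map.where() maps coordinates onto a "state,county" string:
library(maps)
# Offline county lookup: returns strings like "california,napa"
county_full <- map.where(database = "county", x = df12$Longitude, y = df12$Latitude)
df12$county_offline <- sub("^[^,]*,", "", county_full)  # keep the part after the comma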
Suppose this table:
Browse[2]> tra_all_data
ID CITY COUNTRY PRODUCT CATEGORY YEAR INDICATOR COUNT
1 1 VAL ES Tomato Vegetables 1999 10 10
2 2 MAD ES Beer Alcohol 1999 20 20
3 3 LON UK Whisky Alcohol 1999 30 30
4 4 VAL ES Tomato Vegetables 2000 100 100
5 5 VAL ES Beer Alcohol 2000 121 121
6 6 LON UK Whisky Alcohol 2000 334 334
7 7 MAD ES Tomato Vegetables 2000 134 134
8 8 LON UK Tomato Vegetables 2000 451 451
17 17 BIL ES Pincho Meat 1999 180 180
18 18 VAL ES Orange Vegetables 1999 110 110
19 19 MAD ES Wine Alcohol 1999 120 120
20 20 LON UK Wine Alcohol 1999 230 230
21 21 VAL ES Orange Vegetables 2000 100 100
22 22 VAL ES Wine Alcohol 2000 122 122
23 23 LON UK JB Alcohol 2000 133 133
24 24 MAD ES Orange Vegetables 2000 113 113
25 25 MAD ES Orange Vegetables 2000 113 113
26 26 LON UK Orange Vegetables 2000 145 145
And this piece of code:
CURRENT_COLS <- c("PRODUCT", "YEAR", "CITY")
tra_dAGG <- tra_all_data %>%
  regroup(as.list(CURRENT_COLS)) %>%
  # group_by(PRODUCT, YEAR, CITY) %>%
  summarise(Percent = sum(COUNT)) %>%
  mutate(Percent = Percent / sum(Percent))
If I use this code as it is, I get the following warning:
Warning message:
'regroup' is deprecated.
Use 'group_by_' instead.
See help("Deprecated")
If I comment out the regroup line and use the group_by line instead, it works, but the point is that CURRENT_COLS changes in each iteration, so I need to use the variable (I have explicitly defined CURRENT_COLS in this code to better explain my question).
Can anyone help me on this issue? How can I use a variable in the group_by?
Thank you so much in advance.
My R version: 3.1.2 (2014-10-31)
You need to use the newer standard evaluation versions of dplyr's functions. They are denoted by an additional _ at the end of the function name, for example select_().
In your case, you can change your code to:
CURRENT_COLS <- c("PRODUCT", "YEAR", "CITY")
tra_dAGG <- tra_all_data %>%
  group_by_(.dots = CURRENT_COLS) %>%
  summarise(Percent = sum(COUNT)) %>%
  mutate(Percent = Percent / sum(Percent))
Make sure you have the latest versions of dplyr installed and loaded.
To learn more about standard/non-standard evaluation in dplyr, see the NSE vignette.
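Note that in later dplyr releases (0.7 and up) the underscore verbs were themselves deprecated; assuming a current dplyr, the equivalent idiom would be:
library(dplyr)
tra_dAGG <- tra_all_data %>%
  group_by(across(all_of(CURRENT_COLS))) %>%
  summarise(Percent = sum(COUNT)) %>%
  mutate(Percent = Percent / sum(Percent))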
My df2:
League freq
18 England 108
27 Italy 79
20 Germany 74
43 Spain 64
19 France 49
39 Russia 34
31 Mexico 27
47 Turkey 24
32 Netherlands 23
37 Portugal 21
49 United States 18
29 Japan 16
25 Iran 15
7 Brazil 13
22 Greece 13
14 Costa 11
45 Switzerland 11
5 Belgium 10
17 Ecuador 10
23 Honduras 10
42 South Korea 9
2 Argentina 8
48 Ukraine 7
3 Australia 6
11 Chile 6
12 China 6
15 Croatia 6
35 Norway 6
41 Scotland 6
34 Nigeria 5
I tried to select Europe:
europe <- subset(df2, nrow(x=18, 27, 20) select=c(1, 2))
What is the most effective way to select Europe, Africa, Asia, ... from df2?
You either need to identify which countries are on which continents by hand, or you might be able to scrape this information from somewhere:
(basic strategy from Scraping html tables into R data frames using the XML package)
library(XML)
theurl <- "http://en.wikipedia.org/wiki/List_of_European_countries_by_area"
tables <- readHTMLTable(theurl)
library(stringr)
europe_names <- str_extract(as.character(tables[[1]]$Country),"[[:alpha:] ]+")
head(sort(europe_names))
## [1] "Albania" "Andorra" "Austria" "Azerbaijan" "Belarus"
## [6] "Belgium"
## there's also a 'Total' entry in here but it's probably harmless ...
subset(df2,League %in% europe_names)
Of course you'd have to figure this out again for Asia, America, etc.
So here's a slightly different approach from @BenBolker's, using the countrycode package.
library(countrycode)
cdb <- countrycode_data # database of countries
df2[toupper(df2$League) %in% cdb[cdb$continent=="Europe",]$country.name,]
# League freq
# 27 Italy 79
# 20 Germany 74
# 43 Spain 64
# 19 France 49
# 32 Netherlands 23
# 37 Portugal 21
# 22 Greece 13
# 45 Switzerland 11
# 5 Belgium 10
# 48 Ukraine 7
# 15 Croatia 6
# 35 Norway 6
One problem you're going to have is that "England" is not a country in any database (rather, "United Kingdom"), so you'll have to deal with that as a special case.
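A minimal patch for that special case, assuming you want the UK leagues counted under United Kingdom (Scotland, also in df2, has the same issue):
# Hypothetical recode before the continent lookup
df2$League[df2$League %in% c("England", "Scotland")] <- "United Kingdom"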
Also, this database considers the "Americas" as a continent.
df2[toupper(df2$League) %in% cdb[cdb$continent=="Americas",]$country.name,]
so to get just South America you have to use the region field:
df2[toupper(df2$League) %in% cdb[cdb$region=="South America",]$country.name,]
# League freq
# 7 Brazil 13
# 17 Ecuador 10
# 2 Argentina 8
# 11 Chile 6