How do I consolidate ddply across two columns?

I have some data for sites across a bunch of cities that looks about like this:
CITY STATE LAT LON SCORE
Jacksonville FL 30.328539 -81.65101 5
Jacksonville FL 30.392888 -81.67933 6
Jacksonville FL 30.268572 -81.73987 4
Jacksonville FL 30.348585 -81.49965 3
Lake Worth FL 26.579714 -80.07437 6
Lake Worth FL 26.609226 -80.12874 3
Miami FL 25.813808 -80.2058 3
Miami FL 25.753927 -80.27034 2
Miami FL 25.786326 -80.2029 6
Miami FL 25.817325 -80.19046 8
Miami FL 25.812625 -80.2369 9
Miami FL 25.885739 -80.23264 4
Miami FL 25.962069 -80.14465 5
I want to count the records for each city and average the score. I know I could do that with ddply if the cities were unique, but they aren't. There's a "Miami, KS" or something in there. So I need to do ddply on the combined city and state. Something like:
ddply(sometable, .(CITY, STATE), summarise,
      mean.score = mean(SCORE),
      record.count = length(SCORE)
)
Is there a way to do that? I also need to grab one of the lat/lon pairs for each city. Doesn't matter which one.

library(plyr)
ddply(data, .(CITY, STATE), summarise, count = length(SCORE), mean = mean(SCORE))
or you can use:
library(data.table)
data <- data.table(data)
data[, list(count=length(SCORE), mean=mean(SCORE)), by=c("CITY", "STATE")]
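As a side note (not in the original answer), with current data.table versions the same thing is usually written with .N for the per-group row count; a minimal equivalent sketch:
data[, .(count = .N, mean = mean(SCORE)), by = .(CITY, STATE)]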
or this:
aggregate(SCORE ~ CITY + STATE, data, function(x) cbind(length(x), mean(x)))
CITY STATE count mean
1 Jacksonville FL 4 4.500000
2 Lake Worth FL 2 4.500000
3 Miami FL 7 5.285714
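The question also asks for one of the lat/lon pairs per city, which none of the answers above cover. A small sketch with ddply, keeping the first LAT/LON in each CITY/STATE group (any row would do, per the question); sometable is the question's placeholder name:
library(plyr)
ddply(sometable, .(CITY, STATE), summarise,
      mean.score   = mean(SCORE),
      record.count = length(SCORE),
      LAT          = LAT[1],
      LON          = LON[1])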

Related

Mutate DF1 based on DF2 with a check

Newbie here with a dataframe/mutate question... I want to update a dataframe (df1) based on data in another dataframe (df2). For one-offs I've used mutate, so I figure this is the way to go. Additionally, I would like a check column (TRUE/FALSE) added to indicate whether the field in df1 was updated.
For Example..
df1
State
<chr>
1 N.Y.
2 FL
3 AL
4 MS
5 IL
6 WS
7 WA
8 N.J.
9 N.D.
10 S.D.
11 CALL
df2
State New_State
<chr> <chr>
1 N.Y. New York
2 FL Florida
3 AL Alabama
4 MS Mississippi
5 IL Illinois
6 WS Wisconsin
7 WA Washington
8 N.J. New Jersey
9 N.D. North Dakota
10 S.D. South Dakota
11 CAL California
I want the output to look like this
df3
New_State Test
<chr> <lgl>
1 New York TRUE
2 Florida TRUE
3 Alabama TRUE
4 Mississippi TRUE
5 Illinois TRUE
6 Wisconsin TRUE
7 Washington TRUE
8 New Jersey TRUE
9 North Dakota TRUE
10 South Dakota TRUE
11 CALL FALSE
In essence, I want R to read the data in df1 and, wherever there is a match in df2, replace the abbreviation with the full state name. Lastly, if the value in df1 was updated, mark it TRUE (e.g. N.Y. to New York), and FALSE if it was not updated (CALL has no match, since df2 only has CAL).
Thanks in advance for any and all help.
This should give you the result you're looking for:
match_vec <- match(df1$State, table = df2$State)
This vector matches each abbreviated state name in df1 against those in df2; where there is no match, you get a missing value (NA).
Then the following code using dplyr should produce the df3 you requested.
library(dplyr)
df3 <- df1 %>%
  mutate(New_State = df2$New_State[match_vec]) %>%
  mutate(Test = !is.na(match_vec)) %>%
  mutate(New_State = ifelse(is.na(New_State), State, New_State)) %>%
  select(New_State, Test)
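An alternative sketch (not from the original answer) that gets the same result with a join instead of match(); it assumes df1 and df2 exactly as shown above:
library(dplyr)
df3 <- df1 %>%
  left_join(df2, by = "State") %>%
  mutate(Test = !is.na(New_State),
         New_State = ifelse(is.na(New_State), State, New_State)) %>%
  select(New_State, Test)
Unmatched abbreviations (CALL) get NA from the join, so Test comes out FALSE and the original value is kept.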

Map zip codes to their respective city and state in R?

I have a data frame of zip codes that I'm looking to map to a city & state for each specific zip code. Currently, I have played around with the zipcode package a bit but I'm not sure that can solve this specific issue.
Here's sample data of what I have now:
str(all_key$zip)
chr [1:406] "43031" "24517" "43224" "43832" "53022" "60185" "84104" "43081"
"85226" "85193" "54656" "43215" "94533" "95826" "64804" "49548" "54467"
The expected output would be adding a city & state column to each row of the data frame referring to the individual zips:
head(all_key)
zip city state
1 43031 city1 state1
2 24517 city2 state2
3 43224 city3 state3
4 43832 city4 state4
5 53022 city5 state5
6 60185 city6 state6
Thanks in advance for your help.
Another Update - February 2023
Another package (zipcodeR) has been added that makes this easier. See below.
Answer updated - January 2020
The zipcode package seems to have disappeared, so this answer has been updated to show how to add lat-lon from an external file. New answer at bottom.
Original answer
You can get the data from the zipcode package and just do a merge to look things up.
zip = c("43031", "24517", "43224", "43832", "53022",
"60185", "84104", "43081", "85226", "85193", "54656",
"43215", "94533", "95826", "64804", "49548", "54467")
ZC = data.frame(zip)
library(zipcode)
data(zipcode)
merge(ZC, zipcode)
zip city state latitude longitude
1 24517 Altavista VA 37.12754 -79.27409
2 43031 Johnstown OH 40.15198 -82.66944
3 43081 Westerville OH 40.10951 -82.91606
4 43215 Columbus OH 39.96513 -83.00431
5 43224 Columbus OH 40.03991 -82.96772
6 43832 Newcomerstown OH 40.27738 -81.59662
7 49548 Grand Rapids MI 42.86823 -85.66391
8 53022 Germantown WI 43.21916 -88.12043
9 54467 Plover WI 44.45228 -89.54399
10 54656 Sparta WI 43.96977 -90.80796
11 60185 West Chicago IL 41.89198 -88.20502
12 64804 Joplin MO 37.04716 -94.51124
13 84104 Salt Lake City UT 40.75063 -111.94077
14 85193 Casa Grande AZ 32.86000 -111.83000
15 85226 Chandler AZ 33.31221 -111.93177
16 94533 Fairfield CA 38.26958 -122.03701
17 95826 Sacramento CA 38.55010 -121.37492
If you need to keep the rows in the same order, you can just set the rownames on the zipcode data and use that to select the desired rows and columns.
rownames(zipcode) = zipcode$zip
zipcode[zip, 1:3]
zip city state
43031 43031 Johnstown OH
24517 24517 Altavista VA
43224 43224 Columbus OH
43832 43832 Newcomerstown OH
53022 53022 Germantown WI
60185 60185 West Chicago IL
84104 84104 Salt Lake City UT
43081 43081 Westerville OH
85226 85226 Chandler AZ
85193 85193 Casa Grande AZ
54656 54656 Sparta WI
43215 43215 Columbus OH
94533 94533 Fairfield CA
95826 95826 Sacramento CA
64804 64804 Joplin MO
49548 49548 Grand Rapids MI
54467 54467 Plover WI
Updated Answer - January 2020
Since the zipcode package has disappeared, this shows how to add lat-lon information from a downloaded data set. The file that I am using exists today but the method should work for other files. See the GIS StackExchange for some leads on where to download data.
## Original Data to match
zip = c("43031", "24517", "43224", "43832", "53022",
"60185", "84104", "43081", "85226", "85193", "54656",
"43215", "94533", "95826", "64804", "49548", "54467")
ZC = data.frame(zip)
## Download source file, unzip and extract into table
ZipCodeSourceFile = "http://download.geonames.org/export/zip/US.zip"
temp <- tempfile()
download.file(ZipCodeSourceFile , temp)
ZipCodes <- read.table(unz(temp, "US.txt"), sep="\t")
unlink(temp)
names(ZipCodes) = c("CountryCode", "zip", "PlaceName",
"AdminName1", "AdminCode1", "AdminName2", "AdminCode2",
"AdminName3", "AdminCode3", "latitude", "longitude", "accuracy")
## merge extra info onto original data
ZC_Info = merge(ZC, ZipCodes[, c(2:6, 10:11)])
head(ZC_Info)
zip PlaceName AdminName1 AdminCode1 AdminName2 latitude longitude
1 24517 Altavista Virginia VA Campbell 37.1222 -79.2911
2 43031 Johnstown Ohio OH Licking 40.1445 -82.6973
3 43081 Westerville Ohio OH Franklin 40.1146 -82.9105
4 43215 Columbus Ohio OH Franklin 39.9671 -83.0044
5 43224 Columbus Ohio OH Franklin 40.0425 -82.9689
6 43832 Newcomerstown Ohio OH Tuscarawas 40.2739 -81.5940
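One caveat on the merge() call above: with its defaults it silently drops any zip in ZC that is not present in the GeoNames file. If you would rather keep those rows with NA values, pass all.x = TRUE (a small variation on the code above):
ZC_Info = merge(ZC, ZipCodes[, c(2:6, 10:11)], all.x = TRUE)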
Second Update - February 2023
Another package, zipcodeR, is now available that makes this easier. Here is some simple code to demonstrate it.
library(zipcodeR)
zip = c("43031", "24517", "43224", "43832", "53022",
"60185", "84104", "43081", "85226", "85193", "54656",
"43215", "94533", "95826", "64804", "49548", "54467")
reverse_zipcode(zip)[,c(1,3,7)]
# A tibble: 17 × 3
zipcode major_city state
<chr> <chr> <chr>
1 85193 Casa Grande AZ
2 85226 Chandler AZ
3 94533 Fairfield CA
4 95826 Sacramento CA
5 60185 West Chicago IL
6 49548 Grand Rapids MI
7 64804 Joplin MO
8 43031 Johnstown OH
9 43081 Westerville OH
10 43215 Columbus OH
11 43224 Columbus OH
12 43832 Newcomerstown OH
13 84104 Salt Lake City UT
14 24517 Altavista VA
15 53022 Germantown WI
16 54467 Plover WI
17 54656 Sparta WI
You can still use the "zipcode" package by downloading it from the archives
https://cran.r-project.org/src/contrib/Archive/zipcode/
Once you download the tar.gz file to your computer, you can install it from the RStudio GUI Packages pane. After clicking "Install", you can change the option to "Package Archive File" and point to the downloaded tar.gz file.
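Equivalently, if you prefer code to the RStudio dialog, a local tarball can be installed from the console; the file name below is just an example, adjust it to whichever version you downloaded:
install.packages("zipcode_1.0.tar.gz", repos = NULL, type = "source")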
Install and use the usa package, which contains a tibble (zips and lats/longs) taken from the archived zipcode package.
library(usa)
zcs <- usa::zipcodes
head(zcs)
# A tibble: 6 x 5
zip city state lat long
<chr> <chr> <chr> <dbl> <dbl>
1 00210 Portsmouth NH 43.0 -71.0
2 00211 Portsmouth NH 43.0 -71.0
3 00212 Portsmouth NH 43.0 -71.0
4 00213 Portsmouth NH 43.0 -71.0
5 00214 Portsmouth NH 43.0 -71.0
6 00215 Portsmouth NH 43.0 -71.0
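To actually attach the city and state to the question's all_key data frame, you can then join on zip. A minimal sketch, assuming all_key$zip is character as shown in the question:
library(dplyr)
all_key_with_city_st <- left_join(all_key, zcs[, c("zip", "city", "state")], by = "zip")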
You can use the data frame in the R package zipcodeR.
To add the city and state to your data frame, you can select the variables you want from the data frame provided in zipcodeR (called zip_code_db), then join it with your data frame:
library(dplyr)
library(zipcodeR)
zip_code_db_selected = zip_code_db %>%
  select(zipcode, major_city, state)

all_key_with_city_st = left_join(all_key, zip_code_db_selected,
                                 by = c("zip" = "zipcode"))

How can I separate one column into two in R so that the all capital letter words are in one column?

I have a one column like this:
x <- c('WV West Virginia','FL Florida','CA California','SC South Carolina')
# [1] WV West Virginia FL Florida
# [3] CA California SC South Carolina
How can I separate the abbreviation from the full state name, and give the two new columns different headers? I think I can only solve this by splitting off the all-capital-letter words.
With tidyr we can use separate to split the column in two while specifying the new names. The argument extra = "merge" merges any extra pieces into the last column, and the separator defaults to non-alphanumeric characters:
library(tidyr)
separate(df, x, c("Abb", "State"), extra="merge")
# Abb State
#1 WV West Virginia
#2 FL Florida
#3 CA California
#4 SC South Carolina
Data
x = c('WV West Virginia', 'FL Florida', 'CA California', 'SC South Carolina')
df = data.frame(x)
Two approaches without external packages:
Approach 1: you could use substring in combination with nchar.
dat <-data.frame(raw=c("WV West Virginia","FL Florida", "CA California","SC South Carolina"),
stringsAsFactors=F)
dat$code <- substr(dat$raw,1,2)
dat$state <- substr(dat$raw, 4, nchar(dat$raw))
> dat
raw code state
1 WV West Virginia WV West Virginia
2 FL Florida FL Florida
3 CA California CA California
4 SC South Carolina SC South Carolina
Approach two: you could use regular expressions to replace parts of your strings:
##approach two: regex
dat$code <- sub(" .+","",dat$raw)
dat$state <- sub("[A-Z]{2} ","",dat$raw)
Use the state.* constants that come with the base datasets package
DF = data.frame(raw=c("WV West Virginia","FL Florida","CA California","SC South Carolina"))
DF$state.abbr <- substr(DF$raw, 1, 2)
DF$state.name <- state.name[ match(DF$state.abbr, state.abb) ]
# raw state.abbr state.name
# 1 WV West Virginia WV West Virginia
# 2 FL Florida FL Florida
# 3 CA California CA California
# 4 SC South Carolina SC South Carolina
This way, you can afford to have typos or other oddities in the state names.
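Unmatched abbreviations simply come back as NA in state.name, so if you want to see which rows failed to match, a quick check on the DF built above:
DF[is.na(DF$state.name), ]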
Use the reshape2 package.
library(reshape2)
x <- rbind('WV West Virginia','FL Florida','CA California','SC South Carolina')
colsplit(x," ",c("Code","State"))
Output:
Code State
1 WV West Virginia
2 FL Florida
3 CA California
4 SC South Carolina
Based on #rawr's comment, we could split 'x' at the whitespace that follows the first two characters, i.e. with the regex lookbehind (?<=^.{2}). The output will be a list, which we rbind, convert to a data.frame, and then cbind with the original vector 'x'.
cbind(x, as.data.frame(do.call(rbind,strsplit(x, '(?<=^.{2})\\s+', perl=TRUE)),
stringsAsFactors=FALSE))
# x V1 V2
#1 WV West Virginia WV West Virginia
#2 FL Florida FL Florida
#3 CA California CA California
#4 SC South Carolina SC South Carolina
Or instead of the regex lookaround, we could use stri_split with n=2 and split at whitespace.
library(stringi)
cbind(x,as.data.frame(do.call(rbind,stri_split(x, regex='\\s+', n=2))))
Here's a data.table/ gsub approach:
x <- c('WV West Virginia','FL Florida','CA California','SC South Carolina')
data.table::data.table(x)[,
abb := gsub("(^[A-Z]{2})( .+)", "\\1", x)][,
state := gsub("(^[A-Z]{2})( .+)", "\\2", x)][]
## x abb state
## 1: WV West Virginia WV West Virginia
## 2: FL Florida FL Florida
## 3: CA California CA California
## 4: SC South Carolina SC South Carolina

Arrange dataframe for pairwise correlations

I am working with data in the following form:
Country Player Goals
"USA" "Tim" 0
"USA" "Tim" 0
"USA" "Dempsey" 3
"USA" "Dempsey" 5
"Brasil" "Neymar" 6
"Brasil" "Neymar" 2
"Brasil" "Hulk" 5
"Brasil" "Luiz" 2
"England" "Rooney" 4
"England" "Stewart" 2
Each row represents the number of goals that a player scored per game, and also contains that player's country. I would like to have the data in the form such that I can run pairwise correlations to see whether being from the same country has some association with the number of goals that a player scores. The data would look like this:
Player_1 Player_2
0 8 # Tim Dempsey
8 5 # Neymar Hulk
8 2 # Neymar Luiz
5 2 # Hulk Luiz
4 2 # Rooney Stewart
(You can ignore the comments, they are there simply to clarify what each row contains).
How would I do this?
table(df$Player)
gets me the number of records per player, but then how do I generate these pairwise combinations?
This is a pretty classic self-join problem. I'm gonna start by summarizing your data to get the total goals for each player. I like dplyr for this, but aggregate or data.table work just fine too.
library(dplyr)
df <- df %>% group_by(Player, Country) %>% dplyr::summarize(Goals = sum(Goals))
> df
Source: local data frame [7 x 3]
Groups: Player
Player Country Goals
1 Dempsey USA 8
2 Hulk Brasil 5
3 Luiz Brasil 2
4 Neymar Brasil 8
5 Rooney England 4
6 Stewart England 2
7 Tim USA 0
Then, using good old merge, we join it to itself on Country. So we don't get each pair twice (Dempsey/Tim and Tim/Dempsey, not to mention Dempsey/Dempsey), we'll subset it so that Player.x is alphabetically before Player.y. Since I already loaded dplyr I'll use filter, but subset would do the same thing.
df2 <- merge(df, df, by.x = "Country", by.y = "Country")
df2 <- filter(df2, as.character(Player.x) < as.character(Player.y))
> df2
Country Player.x Goals.x Player.y Goals.y
2 Brasil Hulk 5 Luiz 2
3 Brasil Hulk 5 Neymar 8
6 Brasil Luiz 2 Neymar 8
11 England Rooney 4 Stewart 2
15 USA Dempsey 8 Tim 0
The self-join could be done in dplyr if we made a little copy of the data and renamed the Player and Goals columns so they wouldn't be joined on. Since merge is pretty smart about the renaming, it's easier in this case.
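With recent dplyr versions the renamed copy is not strictly necessary: if you specify by = "Country", the other shared columns just get the .x/.y suffixes. A sketch of the same self-join (same df as above, result equivalent to df2):
library(dplyr)
df2 <- inner_join(df, df, by = "Country") %>%
  filter(as.character(Player.x) < as.character(Player.y))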
There is probably a smarter way to get from the aggregated data to the pairs, but assuming your data is not too big (national soccer data), you can always do something like:
A <- aggregate(df$Goals ~ df$Player + df$Country, data = df, sum)
players_in_c <- table(A[, 2])
dat <- NULL
for (i in levels(df$Country)) {
  count <- players_in_c[i]
  pair <- combn(count, m = 2)
  B <- A[A[, 2] == i, ]
  dat <- rbind(dat, cbind(B[pair[1, ], ], B[pair[2, ], ]))
}
dat
> dat
df$Player df$Country df$Goals df$Player df$Country df$Goals
1 Hulk Brasil 5 Luiz Brasil 2
1.1 Hulk Brasil 5 Neymar Brasil 8
2 Luiz Brasil 2 Neymar Brasil 8
4 Rooney England 4 Stewart England 2
6 Dempsey USA 8 Tim USA 0
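Whichever route you take, the pairwise correlation the question is after is then just cor() on the two goal columns; for the merged df2 above, something like:
cor(df2$Goals.x, df2$Goals.y)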

Clustering list for hclust function

Using the plot(hclust(dist(x))) approach, I was able to draw a cluster dendrogram. It works. Yet I would like to get a list of all clusters, not a tree diagram, because I have a huge amount of data (about 150K nodes) and the plot gets messy.
In other words, let's say a, b, c form one cluster and d, e, f, g form another; then I would like to get something like this:
1 a,b,c
2 d,e,f,g
Please note that this is not exactly the output I want; it is just an example. I would like to get a list of clusters instead of a tree plot. It could be a vector, a matrix, or just numbers that show which group each element belongs to.
How is this possible?
I will use the USArrests dataset that ships with R to demonstrate how to cut a tree into the desired number of groups. The result is a named vector of cluster memberships.
Construct a hclust object.
hc <- hclust(dist(USArrests), "ave")
#plot(hc)
You can now cut the tree into as many branches as you want. Here I will split the tree into two groups. You set the number of groups with the k parameter. See ?cutree and its h parameter, which may be more useful to you (note that cutree(hc, k = 2) gives the same result as cutree(hc, h = 110)).
cutree(hc, k = 2)
Alabama Alaska Arizona Arkansas California
1 1 1 2 1
Colorado Connecticut Delaware Florida Georgia
2 2 1 1 2
Hawaii Idaho Illinois Indiana Iowa
2 2 1 2 2
Kansas Kentucky Louisiana Maine Maryland
2 2 1 2 1
Massachusetts Michigan Minnesota Mississippi Missouri
2 1 2 1 2
Montana Nebraska Nevada New Hampshire New Jersey
2 2 1 2 2
New Mexico New York North Carolina North Dakota Ohio
1 1 1 2 2
Oklahoma Oregon Pennsylvania Rhode Island South Carolina
2 2 2 2 1
South Dakota Tennessee Texas Utah Vermont
2 2 2 2 2
Virginia Washington West Virginia Wisconsin Wyoming
2 2 2 2 2
Let's say:
y <- dist(x)
clust <- hclust(y)
groups <- cutree(clust, k = 3)
x <- cbind(x, groups)
Now, for each record, you get its cluster group.
You can subset the dataset as well:
x1<- subset(x, groups==1)
x2<- subset(x, groups==2)
x3<- subset(x, groups==3)
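If you literally want the "1 a,b,c / 2 d,e,f,g" style listing from the question rather than a per-record column, you can split the labels by the cutree result. A small sketch using the USArrests tree (hc) from the answer above:
groups <- cutree(hc, k = 2)
split(names(groups), groups)  # list with one character vector of labels per cluster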
