Reshape long to wide where most columns have multiple values - r

I have data as below:
IDnum zipcode City County State
10011 36006 Billingsley Autauga AL
10011 36022 Deatsville Autauga AL
10011 36051 Marbury Autauga AL
10011 36051 Prattville Autauga AL
10011 36066 Prattville Autauga AL
10011 36067 Verbena Autauga AL
10011 36091 Selma Autauga AL
10011 36703 Jones Autauga AL
10011 36749 Plantersville Autauga AL
10011 36758 Uriah Autauga AL
10011 36480 Atmore Autauga AL
10011 36502 Bon Secour Autauga AL
I have a list of zipcodes, the cities they encompass, and the counties/states they are located in. IDnum is a numeric value for the county and state combined. The list is in the format you see now; I need to reshape it from long to wide (vertical to horizontal), so that IDnum becomes the unique identifier and all other possible value combinations become wide variables:
IDnum zip1 city1 county1 state1 zip2 city2 county2
10011 36006 Billingsley Autauga AL 36022 Deatsville Autauga
This is just a sample of the dataset; it encompasses every zip in the USA and includes more variables. I have seen other questions and answers similar to this one, but not where there are multiple values in almost every column.
There are commands in SPSS and Stata that will reshape data this way. In SPSS I can run a Restructure/Cases to Vars command that turns the 11 variables in my initial dataset into about 1750, because one county has over 290 zips and it replicates most of the other variables 290+ times. This will create many blanks, but I need it reshaped into one very long horizontal file.
I have looked at reshape and reshape2, and am hung up on the 'defaulting to length' message. I did get melt/dcast to sort of work, but it creates one variable that is a list of all values, rather than creating a variable for each value.
library(reshape2)
melted_dupes <- melt(zip_code_list_dupes, id.vars = c("IDnum"))
HRZ_dupes <- dcast(melted_dupes, IDnum ~ variable, fun.aggregate = list)
I have tried tidyr and dplyr but got lost in the syntax. I am a little surprised there isn't a command to reshape the data like the built-in commands in other packages, which makes me assume there is one and I just haven't figured it out.
Any help is appreciated.

You can do this with the base function reshape after adding in a consecutive count by IDnum. Assuming your data is stored in a data.frame named df:
df2 <- within(df, count <- ave(rep(1,nrow(df)),df$IDnum,FUN=cumsum))
This provides a new column named "count" holding the consecutive count within each IDnum. Now we can reshape to wide format:
reshape(df2,direction="wide",idvar="IDnum",timevar="count")
IDnum zipcode.1 City.1 County.1 State.1 zipcode.2 City.2 County.2 State.2 zipcode.3 City.3 County.3 State.3 zipcode.4 City.4 County.4 State.4
1 10011 36006 Billingsley Autauga AL 36022 Deatsville Autauga AL 36051 Marbury Autauga AL 36051 Prattville Autauga AL
(output truncated, goes all the way to zipcode.12, etc.)
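If the real dataset is large (every zip in the USA), a data.table version may also be worth a look, since dcast() there accepts several value.var columns at once. This is just a sketch, with column names taken from the sample above:
library(data.table)
dtbl <- as.data.table(df)                  # df as in the question
dtbl[, count := seq_len(.N), by = IDnum]   # consecutive count per IDnum
wide <- dcast(dtbl, IDnum ~ count,
              value.var = c("zipcode", "City", "County", "State"))
# columns come out as zipcode_1, City_1, ..., zipcode_2, City_2, ...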

There might be a more efficient way, but try the following.
I used my own (example) dataset, very similar to yours.
Run the process step by step to see how it works, as you'll have to modify some things in the code.
library(dplyr)
library(tidyr)
# get example data
dt = data.frame(id = c(1, 1, 1, 2, 2),
                zipcode = c(4, 5, 6, 7, 8),
                city = c("A", "B", "C", "A", "C"),
                county = c("A", "B", "C", "A", "C"),
                state = c("A", "B", "C", "A", "C"))
dt
# id zipcode city county state
# 1 1 4 A A A
# 2 1 5 B B B
# 3 1 6 C C C
# 4 2 7 A A A
# 5 2 8 C C C
# get maximum number of rows for a single id
# this will help you get the wide format
max_num_rows = max((dt %>% count(id))$n)
# get names of columns to reshape
col_names = names(dt)[-1]
dt %>%
  group_by(id) %>%
  mutate(nrow = paste0("row", row_number())) %>%
  unite_("V", col_names) %>%
  spread(nrow, V) %>%
  unite("z", matches("row")) %>%
  separate(z, paste0(col_names, sort(rep(1:max_num_rows, ncol(dt) - 1))), convert = TRUE) %>%
  ungroup()
# # A tibble: 2 × 13
# id zipcode1 city1 county1 state1 zipcode2 city2 county2 state2 zipcode3 city3 county3 state3
# * <dbl> <int> <chr> <chr> <chr> <int> <chr> <chr> <chr> <int> <chr> <chr> <chr>
# 1 1 4 A A A 5 B B B 6 C C C
# 2 2 7 A A A 8 C C C NA <NA> <NA> <NA>
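For what it's worth, on tidyr 1.0.0 or later the unite/spread/separate steps can be collapsed into a single pivot_wider() call on the same example data. A sketch, not tested against the full dataset:
library(dplyr)
library(tidyr)   # needs >= 1.0.0 for pivot_wider()
dt %>%
  group_by(id) %>%
  mutate(row = row_number()) %>%   # consecutive count within each id
  ungroup() %>%
  pivot_wider(names_from = row,
              values_from = c(zipcode, city, county, state),
              names_sep = "")
# gives zipcode1, zipcode2, zipcode3, city1, city2, ... (column order
# differs from the output above, but the content is the same)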

Related

Mutate DF1 based on DF2 with a check

Newbie here with a dataframe/mutate question... I want to update a dataframe (df1) based on data in another dataframe (df2). For one-offs I've used mutate, so I figure this is the way to go. Additionally I would like a check added (TRUE/FALSE) to indicate whether the field in df1 was updated.
For example:
df1-
State
<chr>
1 N.Y.
2 FL
3 AL
4 MS
5 IL
6 WS
7 WA
8 N.J.
9 N.D.
10 S.D.
11 CALL
df2
State New_State
<chr> <chr>
1 N.Y. New York
2 FL Florida
3 AL Alabama
4 MS Mississippi
5 IL Illinois
6 WS Wisconsin
7 WA Washington
8 N.J. New Jersey
9 N.D. North Dakota
10 S.D. South Dakota
11 CAL California
I want the output to look like this
df3
New_State Test
<chr>
1 New York TRUE
2 Florida TRUE
3 Alabama TRUE
4 Mississippi TRUE
5 Illinois TRUE
6 Wisconsin TRUE
7 Washington TRUE
8 New Jersey TRUE
9 North Dakota TRUE
10 South Dakota TRUE
11 CALL FALSE
In essence I want R to read the data in df1 and change df1 based on the match in df2, swapping in the full state name. Lastly, if the value in df1 was updated, mark it "TRUE" (e.g. N.Y. to New York) and "FALSE" if not updated (CALL has no match for CAL).
Thanks in advance for any and all help.
This should give you the result you're looking for:
match_vec <- match(df1$State, table = df2$State)
This vector matches each abbreviated state name in df1 against those in df2. Where there's no match, you end up with a missing value (NA).
Then the following code using dplyr should produce the df3 you requested.
library(dplyr)
df3 <- df1 %>%
  mutate(New_State = df2$New_State[match_vec]) %>%
  mutate(Test = !is.na(match_vec)) %>%
  mutate(New_State = ifelse(is.na(New_State), State, New_State)) %>%
  select(New_State, Test)
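A quick reproducible check with a trimmed-down version of the question's data (only three states here, purely for illustration):
library(dplyr)
df1 <- data.frame(State = c("N.Y.", "FL", "CALL"))
df2 <- data.frame(State     = c("N.Y.", "FL", "CAL"),
                  New_State = c("New York", "Florida", "California"))
match_vec <- match(df1$State, table = df2$State)
match_vec
# [1]  1  2 NA    <- "CALL" has no counterpart in df2$State
df3 <- df1 %>%
  mutate(New_State = df2$New_State[match_vec]) %>%
  mutate(Test = !is.na(match_vec)) %>%
  mutate(New_State = ifelse(is.na(New_State), State, New_State)) %>%
  select(New_State, Test)
df3
#   New_State  Test
# 1  New York  TRUE
# 2   Florida  TRUE
# 3      CALL FALSE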

Replacing items in a list with items from another list in R

I have a column in a list with country codes stored as characters, and I want to replace these with numeric codes. For the "decoding" I have a second list where the character country codes are associated with the numeric codes.
I tried gsub:
for (i in 1:nrow(countries)) {
  gsub(countries$code3[i], countries$numcode[i], doc_report$nationality)
}
I tried a for loop:
i <- NULL
n <- NULL
for (i in 1:nrow(doc_report)) {
  for (n in 1:nrow(countries)) {
    if (doc_report$nationality[i] == countries$code3[n])
      doc_report$nationality[i] <- countries$numcode[n]
    else if (doc_report$nationality[i] == "NA")
      doc_report$nationality[i] <- 000
  }
}
and I had something in mind with merge().
This is what the column that has to be replaced looks like:
[nationality] IRL GBR ITA FRA POL BRA ESP GBR GBR GBR
This is what the second table used for decoding looks like:
[code3] AFG ALB DZA ASM AGO AIA ATG ARG ARM
[numcode] 4 8 12 16 24 660 NA 28 32 51
So in table one I want the numcode from table two rather than the code3 value.
Updated Answer
Here's an example with data formatted like yours to make it clearer that it does work despite duplicate country codes.
library(tidyverse)
country <- c("IRL", "GBR", "ITA", "FRA", "POL", "BRA", "ESP")
codes <- c(1,2,3,4,5,6,7)
countries <- tibble(country, codes)
doc_report <- tibble(x = c("a", "b", "c", "d", "e"),
                     country = c("ITA", "ITA", "POL", "BRA", "ESP"))
left_join(doc_report, countries, by="country")
The output of this code is:
# A tibble: 5 x 3
x country codes
<chr> <chr> <dbl>
1 a ITA 3
2 b ITA 3
3 c POL 5
4 d BRA 6
5 e ESP 7
Which I believe is the behavior you're looking for.
Original Answer
A simple solution would be to use the left_join() function in the dplyr package and then select() to remove the unneeded column.
Let's say doc_report keys countries by code3, and country_codes is a tibble with one column of character country codes and one column of corresponding numeric codes. You could do something like this:
## join the country codes
doc_report <- left_join(doc_report, country_codes, by="code3")
## remove the unneeded column
doc_report <- select(doc_report, -code3)
Does this make sense? Happy to expand otherwise.
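To map that back onto the question's own frames (names assumed from the post: doc_report$nationality, countries$code3 and countries$numcode), a sketch with toy data might look like this:
library(dplyr)
# Toy versions of the two tables described in the question
doc_report <- data.frame(nationality = c("IRL", "GBR", "ITA", "GBR"))
countries  <- data.frame(code3   = c("IRL", "GBR", "ITA"),
                         numcode = c(372, 826, 380))
doc_report <- doc_report %>%
  left_join(countries, by = c("nationality" = "code3")) %>%
  mutate(nationality = numcode) %>%   # overwrite the 3-letter code
  select(-numcode)
doc_report
#   nationality
# 1         372
# 2         826
# 3         380
# 4         826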

How to Merge Shapefile and Dataset?

I want to create a spatial map showing drug mortality rates by US county, but I'm having trouble merging the drug mortality dataset, crude_rate, with the shapefile, usa_county_df. Can anyone help out?
I've created a key variable, "County", in both sets to merge on but I don't know how to format them to make the data mergeable. How can I make the County variables correspond? Thank you!
head(crude_rate, 5)
Notes County County.Code Deaths Population Crude.Rate
1 Autauga County, AL 1001 74 975679 7.6
2 Baldwin County, AL 1003 440 3316841 13.3
3 Barbour County, AL 1005 16 524875 Unreliable
4 Bibb County, AL 1007 50 420148 11.9
5 Blount County, AL 1009 148 1055789 14.0
head(usa_county_df, 5)
long lat order hole piece id group County
1 -97.01952 42.00410 1 FALSE 1 0 0.1 1
2 -97.01952 42.00493 2 FALSE 1 0 0.1 2
3 -97.01953 42.00750 3 FALSE 1 0 0.1 3
4 -97.01953 42.00975 4 FALSE 1 0 0.1 4
5 -97.01953 42.00978 5 FALSE 1 0 0.1 5
crude_rate$County <- as.factor(crude_rate$County)
usa_county_df$County <- as.factor(usa_county_df$County)
merge(usa_county_df, crude_rate, "County")
[1] County long lat order hole
[6] piece id group Notes County.Code
[11] Deaths Population Crude.Rate
<0 rows> (or 0-length row.names)
My take on this. First, you cannot expect a full answer with code because you did not provide a link to your data. Next time, please provide a full description of the problem along with the data.
I just used the data you provided here to illustrate.
require(tidyverse)
# Load the data
crude_rate = read.csv("county_crude.csv", header = TRUE)
usa_county = read.csv("usa_county.csv", header = TRUE)
# Create the variable "county_join" within crude_rate (the county_crude data)
# to left_join on with the usa_county data. Note that the join columns must
# have the same data type and the same values in both tables.
crude_rate = crude_rate %>%
  mutate(county_join = c(1:5))

# Join the dataframes using a left join on the county_join and County variables
df_all = usa_county %>%
  left_join(crude_rate, by = c("County" = "county_join")) %>%
  distinct(order, hole, piece, id, group, .keep_all = TRUE)
Data link: county_crude
Data link: usa_county
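If usa_county_df actually carries county names somewhere (the sample only shows an integer id), a more durable join key than the row position is the cleaned-up county name itself. A sketch, assuming crude_rate$County looks like "Autauga County, AL":
library(dplyr)
library(tidyr)
crude_rate <- crude_rate %>%
  separate(County, into = c("county_name", "state_abb"),
           sep = ", ", remove = FALSE) %>%                  # split "Autauga County, AL"
  mutate(county_name = sub(" County$", "", county_name))    # drop the " County" suffix
# ...then left_join(usa_county_df, crude_rate, by = ...) on whichever
# column in usa_county_df holds the county name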

Having trouble merging/joining two datasets on two variables in R

I realize there have already been many asked and answered questions about merging datasets here, but I've been unable to find one that addresses my issue.
What I'm trying to do is merge two datasets on two variables while keeping all data from each. I've tried merge and all of the join operations from dplyr, as well as cbind, and have not gotten the result I want. Usually what happens is that one column from one of the datasets gets overwritten with NAs. Another thing that happens, as when I do full_join in dplyr or all = TRUE in merge, is that I get double the number of rows.
Here's my data:
Primary_State Primary_County n
<fctr> <fctr> <int>
1 AK 12
2 AK Aleutians West 1
3 AK Anchorage 961
4 AK Bethel 1
5 AK Fairbanks North Star 124
6 AK Haines 1
Primary_County Primary_State Population
1 Autauga AL 55416
2 Baldwin AL 208563
3 Barbour AL 25965
4 Bibb AL 22643
5 Blount AL 57704
6 Bullock AL 10362
So I want to merge or join based on Primary_State and Primary_County, which is necessary because there are a lot of duplicate county names in the U.S. and retain the data from both n and Population. From there I can then divide the Population by n and get a per capita figure for each county. I just can't figure out how to do it and keep all of the data, so any help would be appreciated. Thanks in advance!
EDIT: Adding code examples of what I've already described above.
This code (as well as left_join):
countyPerCap <- merge(countyLicense, countyPops, all.x = TRUE)
Produces this:
Primary_State Primary_County n Population
1 AK 12 NA
2 AK Aleutians West 1 NA
3 AK Anchorage 961 NA
4 AK Bethel 1 NA
5 AK Fairbanks North Star 124 NA
6 AK Haines 1 NA
This code:
countyPerCap <- right_join(countyLicense, countyPops)
Produces this:
Primary_State Primary_County n Population
<chr> <chr> <int> <int>
1 AL Autauga NA 55416
2 AL Baldwin NA 208563
3 AL Barbour NA 25965
4 AL Bibb NA 22643
5 AL Blount NA 57704
6 AL Bullock NA 10362
Hope that's helpful.
EDIT: This is what happens with the following code:
countyPerCap <- merge(countyLicense, countyPops, all = TRUE)
Primary_State Primary_County n Population
1 AK 12 NA
2 AK Aleutians East NA 3296
3 AK Aleutians West 1 NA
4 AK Aleutians West NA 5647
5 AK Anchorage 961 NA
6 AK Anchorage NA 298192
It duplicates state and county and then adds n to one record and Population in another. Is there a way to deduplicate the dataset and remove the NAs?
You can give the join columns explicitly in merge via the by argument:
merge(x, y, by = c("Primary_State", "Primary_County"))
I figured it out. There were trailing whitespaces in the Census data's county names, so they weren't matching with the other dataset's county names. (Note to self: Always check that factors match when trying to merge datasets!)
trim.trailing <- function (x) sub("\\s+$", "", x)
countyPops$Primary_County <- trim.trailing(countyPops$Primary_County)
countyPerCap <- full_join(countyLicense, countyPops,
                          by = c("Primary_State", "Primary_County"), copy = TRUE)
Those three lines did the trick. Thanks everyone!
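As a side note, base R (3.2.0 and later) ships trimws(), so the custom helper isn't strictly needed. A sketch reusing the same object and column names as above:
library(dplyr)
# Right-trim the Census county names, then join on both keys as before
countyPops$Primary_County <- trimws(countyPops$Primary_County, which = "right")
countyPerCap <- full_join(countyLicense, countyPops,
                          by = c("Primary_State", "Primary_County"))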

Using dplyr to make a dataframe look like output from ftable -- assigning a value to certain elements using group_by

Say for instance I have the following tall dataframe df:
state <- state.abb[1:10]
county <- letters[1:10]
zipcode <- sample(1000:9999, 5)
library(data.table)
df <- data.frame(CJ(state, county, zipcode))
colnames(df) <- c("state", "county", "zip")
df[1:15,]
state county zip
1 AK a 2847
2 AK a 2913
3 AK a 3886
4 AK a 6551
5 AK a 8447
6 AK b 2847
7 AK b 2913
8 AK b 3886
9 AK b 6551
10 AK b 8447
11 AK c 2847
12 AK c 2913
13 AK c 3886
14 AK c 6551
15 AK c 8447
For purposes of presentation, it might look nicer like this:
state county zip
1 AK a 2847
2 2913
3 3886
4 6551
5 8447
6 b 2847
7 2913
8 3886
9 6551
10 8447
11 c 2847
12 2913
13 3886
14 6551
15 8447
I use dplyr frequently to create crosstabs instead of using base R's table or ftable functions so that I can pipe the output into xtable to make a nice HTML presentation.
To make this look like output from ftable, I want to set every element but the first unique one in each of the grouping columns to "". I know I can use group_by to perform operations like this in dplyr, but it doesn't seem to play nicely with indices, which is the only approach I can envision for accomplishing this task:
library(dplyr)
df <- group_by(df, state, county)
df[-1,] <- ""
Should I be thinking about this differently, or is there some handy dplyr syntax to do this? Thanks.
Here is one way. First, group the data by state; any duplicated county becomes "" in the first mutate(). Then ungroup the data. Since county "a" appears at the beginning of each state block, the rows where county is still "a" are the ones that should keep their state name; every other row gets "". This is done in the second mutate().
group_by(df, state) %>%
  mutate(county = order_by(county, ifelse(!duplicated(county), county, ""))) %>%
  ungroup() %>%
  mutate(state = ifelse(county == "a", state, ""))
# state county zip
#1 AK a 2429
#2 3755
#3 6108
#4 8364
#5 9577
#6 b 2429
#7 3755
#8 6108
#9 8364
#10 9577
In data.table, the code above could be written in either of these ways:
setDT(df)[, county := ifelse(!duplicated(county), county, ""), by = state][,
  state := ifelse(county == "a", state, "")]

setDT(df)[, county := ifelse(!duplicated(county), county, ""), by = state][
  county != "a", state := ""]
