Order multiple columns in R

Sample data:
now <- data.frame(id=c(123,123,123,222,222,222,135,135,135),year=c(2002,2001,2003,2006,2007,2005,2001,2002,2003),freq=c(3,1,2,2,3,1,3,1,2))
Desired output:
wanted <- data.frame(id=c(123,123,123,222,222,222,135,135,135),year=c(2001,2002,2003,2005,2006,2007,2001,2002,2003),freq=c(1,2,3,1,2,3,1,2,3))
This solution works, but I'm getting a memory error (cannot assign 134kb...):
ddply(now, .(id), transform, year = sort(year))
Please note I need a speed-efficient solution, as my real data frame has around 300K rows and 50 columns. Thanks.

You can use dplyr to sort the data (sorting is called arrange in dplyr); dplyr is also generally faster than plyr.
library(dplyr)
wanted <- now %>% arrange(id, year)
# or equivalently: wanted <- arrange(now, id, year)
> wanted
# id year freq
#1 123 2001 1
#2 123 2002 3
#3 123 2003 2
#4 135 2001 3
#5 135 2002 1
#6 135 2003 2
#7 222 2005 1
#8 222 2006 2
#9 222 2007 3
You could do the same with base R:
wanted <- now[order(now$id, now$year),]
However, note that there is a difference between your now and wanted data frames for id == 123 and year == 2002 (in your now df the freq is 3, while it is 2 in the wanted df). Based on your question, I assume this is a typo and that you did not actually intend to change the freq values.

You could use the base R order function here:
now <- now[order(now$id, now$year), ]
or data.table for faster performance:
library(data.table)
setDT(now)[order(id, year)]
or
now <- data.table(now, key = c("id", "year"))
or
setDT(now)
setkey(now, id, year)
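If speed is the main concern, it may be worth benchmarking the three approaches on data shaped like yours. Below is a rough sketch, not a definitive comparison: it assumes the microbenchmark, dplyr, and data.table packages are installed, and the simulated data (10,000 ids, 300K rows) is illustrative only.
library(microbenchmark)
library(dplyr)
library(data.table)

# simulate something roughly the size of the real data (300K rows)
big    <- data.frame(id   = sample(1e4, 3e5, replace = TRUE),
                     year = sample(2000:2010, 3e5, replace = TRUE))
big_dt <- as.data.table(big)

microbenchmark(
  base  = big[order(big$id, big$year), ],
  dplyr = arrange(big, id, year),
  dt    = big_dt[order(id, year)],
  times = 10
)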

Related

converting an abbreviation into a full word

I am trying to avoid writing a long nested ifelse statement in Excel.
I am working with two datasets: one contains county abbreviations and county names.
  Abbre COUNTY_NAME
1    AD       Adams
2    AS      Asotin
3    BE      Benton
4    CH      Chelan
5    CM     Clallam
6    CR       Clark
And another data set that contains the county abbreviation and votes.
CountyCode Votes
1 WM 97
2 AS 14
3 WM 163
4 WM 144
5 SJ 21
For the second table, how do I convert the countycode (abbreviation) into the full spelled-out text and add that as a new column?
I have been trying to solve this unsuccessfully using grep, match, and %in%. Clearly I am missing something and any insight would be greatly appreciated.
We can use a join
library(dplyr)
library(tidyr)
df2 <- df2 %>%
  left_join(Abbre %>%
              separate(COUNTY_NAME, into = c("CountyCode", "FullName")),
            by = "CountyCode")
Or use base R
tmp <- read.table(text = Abbre$COUNTY_NAME, header = FALSE,
                  col.names = c("CountyCode", "FullName"))
df2 <- merge(df2, tmp, by = 'CountyCode', all.x = TRUE)
Another base R option using match
df2$COUNTY_NAME <- with(
df1,
COUNTY_NAME[match(df2$CountyCode, Abbre)]
)
gives
> df2
CountyCode Votes COUNTY_NAME
1 WM 97 <NA>
2 AS 14 Asotin
3 WM 163 <NA>
4 WM 144 <NA>
5 SJ 21 <NA>
A data.table option
> setDT(df1)[setDT(df2), on = .(Abbre = CountyCode)]
Abbre COUNTY_NAME Votes
1: WM <NA> 97
2: AS Asotin 14
3: WM <NA> 163
4: WM <NA> 144
5: SJ <NA> 21
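For a simple two-column lookup like this, a named vector also works. A minimal base R sketch, assuming (as in the match()-based answer above) that the lookup data frame is df1 with columns Abbre and COUNTY_NAME:
lookup <- setNames(df1$COUNTY_NAME, df1$Abbre)      # e.g. "AS" -> "Asotin"
df2$COUNTY_NAME <- unname(lookup[df2$CountyCode])   # codes not in the lookup become NA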

Programmatically Finding, Correcting IDs in Dataframes with Different Column and Row Lengths

I have two data frames of differing lengths and widths. Both contain panel data on sites across several years, with each site having a unique ID code. However, these unique ID codes were altered for some sites between data frames. For example:
Year <- c(2006,2006,2006,2006)
Name <- as.character(c("A","B","C","D.B"))
Qtr.2 <- as.numeric(c(14,32,62,40))
Code <- as.character(c(123,456,789,101))
DF1 <- data.frame(Year,Name,Qtr.2,Code,stringsAsFactors = FALSE)
Year2 <- c(2007,2007,2007,2007,2007,2007)
Name2 <- as.character(c("A","B","C","E","D.B","D.A"))
Qtr.3 <- as.numeric(c(14,32,62,11,40,20))
Code2 <- as.character(c("W33","456","789","121","W133","W111"))
Type <- as.character(c("Blue","Red","Red","Green","Blue","Red"))
DF2 <- data.frame(Year2,Name2,Qtr.3,Code2,Type,stringsAsFactors = FALSE)
> DF1
Year Name Qtr.2 Code
1 2006 A 14 123
2 2006 B 32 456
3 2006 C 62 789
4 2006 D.B 40 101
> DF2
Year2 Name2 Qtr.3 Code2 Type
1 2007 A 14 W33 Blue
2 2007 B 32 456 Red
3 2007 C 62 789 Red
4 2007 E 11 121 Green
5 2007 D.B 40 W133 Blue
6 2007 D.A 20 W111 Red
Here, site “A's” code has changed from “123” in DF1 to “W33” in DF2.
I am having trouble programmatically finding and converting the altered ID codes to match their prior ID code. In other words, I want to match names from DF1 to DF2, and replace "Code2" in DF2 with "Code" from DF1 when a matching name is found. My approach thus far has involved a rather convoluted padding and for-loop process. However, I feel this must be a fairly common wrangling problem and there must be a simpler approach.
Ideally, my second DF would look as follows:
Year2_fixed <- c(2007,2007,2007,2007,2007,2007)
Name2_fixed <- as.character(c("A","B","C","E","D.B","D.A"))
Qtr.3_fixed <- as.numeric(c(14,32,62,11,40,20))
Code2_fixed <- as.character(c("123","456","789","121","101","W111"))
Type <- as.character(c("Blue","Red","Red","Green","Blue","Red"))
DF2_fixed <-data.frame(Year2_fixed,Name2_fixed,Qtr.3_fixed,Code2_fixed,Type,stringsAsFactors = FALSE)
> DF2_fixed
Year2_fixed Name2_fixed Qtr.3_fixed Code2_fixed Type
1 2007 A 14 123 Blue
2 2007 B 32 456 Red
3 2007 C 62 789 Red
4 2007 E 11 121 Green
5 2007 D.B 40 101 Blue
6 2007 D.A 20 W111 Red
I have done some looking but I haven't found a clear answer on SO that gets at this problem. It is possible I am not asking the question clearly enough in searches. Please point it out if it is out there, or let me know if I can clarify my question.
A few last points: I want to be able to perform an inner_join BY the code, preserving those observations that appear in both sets. I am providing a toy example, but, as is often the case, the true problem is too large to manually check these names.
Edit
As pointed out by others, stringsAsFactors = FALSE has been added to prevent errors.
Try using the match command:
DF2 <- within(DF2, {
  ind <- match(Name2, DF1$Name)
  new_code <- DF1$Code[ind]
  Code_fixed <- ifelse(is.na(ind), as.character(Code2), as.character(new_code))
  rm(ind, new_code)
})
DF2
A solution is to use dplyr::coalesce along with left_join to get the desired result.
library(dplyr)
DF2 %>%
  left_join(select(DF1, Name, Code), by = c("Name2" = "Name")) %>%
  mutate(Code2 = coalesce(Code, Code2)) %>%
  select(-Code)
# Year2 Name2 Qtr.3 Code2 Type
# 1 2007 A 14 123 Blue
# 2 2007 B 32 456 Red
# 3 2007 C 62 789 Red
# 4 2007 E 11 121 Green
# 5 2007 D.B 40 101 Blue
# 6 2007 D.A 20 W111 Red
Note: stringsAsFactors = FALSE has been added in OP's code to create data.frames, otherwise it would generate unnecessary warnings.
Data:
Year <- c(2006,2006,2006,2006)
Name <- as.character(c("A","B","C","D.B"))
Qtr.2 <- as.numeric(c(14,32,62,40))
Code <- as.character(c(123,456,789,101))
DF1 <- data.frame(Year,Name,Qtr.2,Code, stringsAsFactors = FALSE)
Year2 <- c(2007,2007,2007,2007,2007,2007)
Name2 <- as.character(c("A","B","C","E","D.B","D.A"))
Qtr.3 <- as.numeric(c(14,32,62,11,40,20))
Code2 <- as.character(c("W33","456","789","121","W133","W111"))
Type <- as.character(c("Blue","Red","Red","Green","Blue","Red"))
DF2 <- data.frame(Year2,Name2,Qtr.3,Code2,Type, stringsAsFactors = FALSE)
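For completeness, a data.table update join can do the replacement in place. This is a sketch assuming DF1 and DF2 as defined above; Code2 is overwritten only where Name2 matches a Name in DF1, so unmatched rows (like D.A) keep their original code.
library(data.table)
setDT(DF2)[setDT(DF1), Code2 := i.Code, on = .(Name2 = Name)]  # update join by name
DF2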

efficiently creating a panel data.frame from cross sections with unharmonized column names

I need to create a panel data set (long format) from multiple yearly (cross-sectional) data sets. The variables of interest have different names in the individual data sets, and I need to harmonize them.
I loaded the data frames into a list and now want to manipulate the names using lapply or a chunk of code that allows binding the data frames. I can see several ways of doing this, but would like one that works with little code on a large list of data.frames, so that I can do this for several variables and easily change specifics later on.
So what I am looking for is either a way to rename the columns, so that I am able to simply use bind_rows() from dplyr or an equivalent method, or a way to rename and bind the datasets in one step. Since I need to do this for several variables it might be safer to keep the two steps apart.
To illustrate, here an example:
a <- data.frame(id=c("Marc", "Julia", "Rico"), year=2000:2002, laborincome=1:3)
b <- data.frame(id=c("Marc", "Julia", "Rico"), earningsfromlabor=2:4, year=2003:2005)
dflist <- list(a, b)
equivalent_vars <- c("laborincome", "earningsfromlabor")
newnanme <- "income"
Desired result:
data.frame(id=c("Marc", "Julia", "Rico"), income=c(1,2,3,2,3,4), year=2000:2005)
id income year
1 Marc 1 2000
2 Julia 2 2001
3 Rico 3 2002
4 Marc 2 2003
5 Julia 3 2004
6 Rico 4 2005
We could use setnames from data.table
library(data.table)
do.call(rbind, Map(setnames, dflist, old = equivalent_vars, new = newnanme))
# id year income
#1 Marc 2000 1
#2 Julia 2001 2
#3 Rico 2002 3
#4 Marc 2003 2
#5 Julia 2004 3
#6 Rico 2005 4
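Note that setnames works by reference, so the data frames inside dflist are renamed in place as a side effect of the Map call; if you need the originals untouched, rename copies instead.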
Or we can use rename with := (from dplyr, mapped over the list with purrr):
library(dplyr)
library(purrr)
map2_df(dflist, equivalent_vars, ~ .x %>%
          rename(!! (newnanme) := !! .y)) %>%
  select(id, income, year)
# id income year
#1 Marc 1 2000
#2 Julia 2 2001
#3 Rico 3 2002
#4 Marc 2 2003
#5 Julia 3 2004
#6 Rico 4 2005
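If you prefer to keep the renaming and the binding as two explicit steps, as suggested in the question, a base R rename followed by bind_rows also works. This is a sketch assuming each data frame in dflist contains exactly one of the names listed in equivalent_vars:
library(dplyr)
renamed <- lapply(dflist, function(df) {
  hit <- intersect(names(df), equivalent_vars)   # which old name this data frame uses
  names(df)[names(df) == hit] <- newnanme        # rename it to the harmonized name
  df
})
bind_rows(renamed)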

Aggregate function in R using two columns simultaneously

Data:
df <- data.frame(Name = c("John","John","Stacy","Stacy","Kat","Kat"),
                 Year = c(2016,2015,2014,2016,2006,2006),
                 Balance = c(100,150,65,75,150,10))
Name Year Balance
1 John 2016 100
2 John 2015 150
3 Stacy 2014 65
4 Stacy 2016 75
5 Kat 2006 150
6 Kat 2006 10
Code:
aggregate(cbind(Year, Balance) ~ Name, data = df, FUN = max)
Output:
Name Year Balance
1 John 2016 150
2 Kat 2006 150
3 Stacy 2016 75
I want to aggregate/summarize the above data frame using two columns, Year and Balance. I used the base function aggregate to do this. I need the maximum balance of the latest/most recent year. In the first row of the output, John has the latest year (2016) but the balance from 2015, which is not what I need: it should output 100, not 150. Where am I going wrong?
Somewhat ironically, aggregate is a poor tool for aggregating. You could make it work, but I'd instead do:
library(data.table)
setDT(df)[order(-Year, -Balance), .SD[1], by = Name]
# Name Year Balance
#1: John 2016 100
#2: Stacy 2016 75
#3: Kat 2006 150
I suggest using the dplyr library:
library(dplyr)
data.frame(Name = c("John","John","Stacy","Stacy","Kat","Kat"),
           Year = c(2016,2015,2014,2016,2006,2006),
           Balance = c(100,150,65,75,150,10)) %>%   # create the data frame
  tbl_df() %>%                                      # convert it to a dplyr tbl
  group_by(Name, Year) %>%                          # group by Name and Year
  summarise(maxBalance = max(Balance)) %>%          # maximum Balance for each Name/Year group
  group_by(Name) %>%                                # regroup by Name only
  top_n(1, maxBalance)                              # keep the row with the highest maxBalance per Name
Here is another solution without the data.table package.
First sort the data frame:
df <- df[order(-df$Year, -df$Balance), ]
then select the first row within each Name:
df[!duplicated(df$Name), ]
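For the goal as stated (the maximum Balance within each Name's most recent Year), a dplyr sketch that filters to the latest year first may be clearer; it assumes dplyr >= 1.0 for slice_max and the df defined above:
library(dplyr)
df %>%
  group_by(Name) %>%
  filter(Year == max(Year)) %>%                      # keep only each Name's latest year
  slice_max(Balance, n = 1, with_ties = FALSE) %>%   # then the highest Balance within that year
  ungroup()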

How to create a data frame subset of the one patient observation with the lowest score on a variable

Hello, I have a dataset with multiple patients, each with multiple observations.
I want to select the earliest observation for each patient.
Example:
Patient ID Tender Swollen pt_visit
101 1 10 6
101 6 12 12
101 4 3 18
102 9 5 18
102 3 6 24
103 5 2 12
103 2 1 18
103 8 0 24
The pt_visit variable is the number of months the patient had been in the study at the time of the observation. What I need is the earliest observation for each patient, i.e. the row with the lowest pt_visit value per patient ID.
My desired results:
Patient ID Tender Swollen pt_visit
101 1 10 6
102 9 5 18
103 5 2 12
Assuming your data frame is called df, use the ddply function in the plyr package:
require(plyr)
firstObs <- ddply(df, "PatientID", function(x) x[x$pt_visit == min(x$pt_visit), ])
I would use the data.table package:
Data <- data.table(Data)
setkey(Data, Patient_ID, pt_visit)
Data[,.SD[1], by=Patient_ID]
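Because the key sorts the table by Patient_ID and then pt_visit, .SD[1] returns the row with the smallest pt_visit within each patient.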
Assuming that the Patient ID column is actually named Patient_ID, here are a few approaches. DF is assumed to be the name of the input data frame:
sqldf
library(sqldf)
sqldf("select Patient_ID, Tender, Swollen, min(pt_visit) pt_visit
from DF
group by Patient_ID")
or
sqldf("select *, min(pt_visit) pt_visit from DF group by Patient_ID")[-ncol(DF)]
Note: The above two alternatives use an extension to SQL only found in SQLite, so be sure you are using the SQLite backend. (SQLite is the default backend for sqldf unless RH2, RPostgreSQL or RMySQL is loaded.)
subset/ave
subset(DF, ave(pt_visit, Patient_ID, FUN = rank) == 1)
Note: This makes use of the fact that there are no duplicate pt_visit values within the same Patient_ID. If there were we would need to specify the ties= argument to rank.
I almost think there should be a subset parameter named "by" that would do the same as it does in data.table. This is a base R solution:
do.call(rbind, lapply(split(dfr, dfr$PatientID),
                      function(x) x[which.min(x$pt_visit), ]))
PatientID Tender Swollen pt_visit
101 101 1 10 6
102 102 9 5 18
103 103 5 2 12
I guess you can see why @hadley built 'plyr'.
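For a more current tidyverse take, slice_min does the same per-group selection. This is a sketch assuming dplyr >= 1.0 and that the data frame is named df with the ID column named Patient_ID:
library(dplyr)
df %>%
  group_by(Patient_ID) %>%
  slice_min(pt_visit, n = 1) %>%   # earliest visit per patient
  ungroup()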
