Data.frame merge usage for selective row replacement [duplicate]

Possible Duplicate: how to use merge() to update a table in R
What is the proper use of merge for this kind of operation in R? See below.
older <- data.frame(Member = c("first", "second", "third", "fourth"),
                    VAL    = c(NA, NA, NA, NA))
newer <- data.frame(Member = c("third", "first"),
                    VAL    = c(2125, 4587))
#
merge.data.frame(older, newer, all = TRUE)
  Member  VAL
1  first 4587
2  first   NA
3 fourth   NA
4 second   NA
5  third 2125
6  third   NA
That is not exactly what I expect: I want the newer entries to replace the older ones, not to be added as extra rows, like below. I can't get merge.data.frame to do that.
my.merge.fu(older, newer)
  Member  VAL
1  first 4587
2 second   NA
3  third 2125
4 fourth   NA
It is a kind of selective row replacement, where newer takes precedence and may not contain any Members other than those already in older.
Is there a proper English term for such an operation in R, and is there a prebuilt function for it?
Thank you.

You have effectively answered your own question.
If you want to deal with Matthew Ploude's point you could use
older$VAL[match(newer[newer$Member %in% older$Member, ]$Member, older$Member)] <-
  newer[newer$Member %in% older$Member, ]$VAL
This also has the effect that, where newer contains multiple values for the same Member, it is the latest one which ends up in older. So, for example,
older <- data.frame(Member = c("first", "second", "third", "fourth"),
                    VAL    = c(1234, NA, NA, 5678))
newer <- data.frame(Member = c("third", "first", "fifth", "first"),
                    VAL    = c(2125, 4587, 2233, 9876))
older$VAL[match(newer[newer$Member %in% older$Member, ]$Member, older$Member)] <-
  newer[newer$Member %in% older$Member, ]$VAL
gives
> older
  Member  VAL
1  first 9876
2 second   NA
3  third 2125
4 fourth 5678
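For completeness: recent versions of dplyr (1.0.0 or later; this is an addition, not part of the original answer) expose exactly this operation as rows_update(), which updates matching rows in place and errors if newer contains Members absent from older:
library(dplyr)
# VAL is created as NA_real_ so the column is numeric, matching newer$VAL
older <- data.frame(Member = c("first", "second", "third", "fourth"),
                    VAL    = NA_real_)
newer <- data.frame(Member = c("third", "first"),
                    VAL    = c(2125, 4587))
rows_update(older, newer, by = "Member")
#   Member  VAL
# 1  first 4587
# 2 second   NA
# 3  third 2125
# 4 fourth   NA
Note that rows_update() requires the keys in newer to be unique, so it won't reproduce the last-one-wins behaviour of the match() approach above.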


Populate multiple columns by values in one column [duplicate]

This question already has answers here: How to reshape data from long to wide format
I haven't been able to find a solution to this so far... This one came the closest: 1
Here is a small subset of my dataframe, df:
ANIMAL(chr) MARKER(int) GENOTYPE(int)
"1012828"       1550978             0
"1012828"       1550982             2
"1012828"       1550985             1
"1012830"       1550982             0
"1012830"       1550985             2
"1012830"       1550989             2
And what I want is this...
ANIMAL      MARKER_1550978 MARKER_1550982 MARKER_1550985 MARKER_1550989
"1012828"                0              2              1             NA
"1012830"               NA              0              2              2
My initial thought was to create columns for each marker, following the referenced question:
markers <- unique(df$MARKER)
df[,markers] <- NA
But since I can't have integers for column names in R, I added "MARKER_" to each marker so it would work:
df$MARKER <- paste0("MARKER_", df$MARKER)  # paste0, so no space is inserted between prefix and number
markers <- unique(df$MARKER)
df[,markers] <- NA
Now I have all my new columns, but with the same number of rows. I'll have no problem getting rid of unnecessary rows and columns, but how do I correctly populate the new columns with the right GENOTYPE by MARKER and ANIMAL? I'm guessing it involves some mix of indexing, match(), and %in%, but I don't know where to start; searching Stack Overflow for those didn't turn up anything that seemed pertinent to my challenge.
What you're asking is a very common dataframe operation, commonly called "spreading", or "widening". The inverse of this operation is "gathering". Check out this handy cheatsheet, specifically the part on reshaping data.
library(tidyr)
df %>% spread(MARKER, GENOTYPE)
#>    ANIMAL 1550978 1550982 1550985 1550989
#> 1 1012828       0       2       1      NA
#> 2 1012830      NA       0       2       2
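In current tidyr, spread() has been superseded by pivot_wider(). A sketch of the equivalent call, starting from the original df (pivot_wider's names_prefix argument also takes care of the "MARKER_" prefix, so the paste step isn't needed):
library(tidyr)
df %>% pivot_wider(names_from  = MARKER,
                   values_from = GENOTYPE,
                   names_prefix = "MARKER_")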

How to use lookup in R? [duplicate]

This question already has answers here:
How should I deal with "package 'xxx' is not available (for R version x.y.z)" warning?
(18 answers)
Closed 7 years ago.
I was looking for an R equivalent function to the vlookup function in Excel and I came across lookup in R, however when I try to use it I keep getting errors.
I have 2 data frames (myresday and Assorted). myresday contains 2 columns: one with codes (column name is Res.Code) and the other with corresponding days of the week (colname = ContDay). Each of the codes represents a person and each person is matched with a day of the week they are supposed to be in work. Assorted contains the record of when each person actually came in over the course of a year. It is a dataframe similar to myresday, however it is much bigger. I want to see if the codes in Assorted are matched with the correct days or if the days corresponding to each code is incorrect.
I was trying to use lookup but kept coming across several errors. Here is my code:
Assorted$Cont_Day <- lookup(Assorted$VISIT_PROV_ID, myresday[, 1:2])
# the codes in myresday are in column 1, the days in column 2
R kept saying the function couldn't be found. I looked into it and someone suggested using the qdapTools library, so I put:
library('qdapTools')
before my code, and it said there is no package called qdapTools.
Does anyone know how to do this or know of a better way to solve this?
You need to install qdapTools before loading it with library().
This code works:
install.packages("qdapTools")
library('qdapTools')
Assorted$Cont_Day <- lookup(Assorted$VISIT_PROV_ID, myresday[, 1:2])
The base R function merge() is likely to do what you need, without involving any extra packages.
Let's make some toy data
set.seed(100)
myresday <- data.frame(
  Res.Code = 1:30,
  ContDay  = sample(1:7, 30, replace = TRUE))
Assorted <- data.frame(
  date = sample(seq(as.Date('2010-01-01'), as.Date('2011-01-01'), by = 'day'),
                100, replace = TRUE),
  VISIT_PROV_ID = sample(1:30, 100, replace = TRUE))
head(Assorted)
        date VISIT_PROV_ID
1 2010-06-28             8
2 2010-12-06            26
3 2010-05-08            23
4 2010-12-16            15
5 2010-09-12            18
6 2010-11-22             1
And then do the merge
checkDay <- merge(Assorted, myresday, by.x='VISIT_PROV_ID', by.y='Res.Code')
head(checkDay)
  VISIT_PROV_ID       date ContDay
1             1 2010-06-16       3
2             1 2010-08-07       3
3             1 2010-11-22       3
4             1 2010-03-18       3
5             2 2010-08-19       2
6             2 2010-11-04       2
Edit: Updated column names
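If all you need is the one vlookup-style column rather than a full merge, base R's match() also works. A sketch using the same toy data as above (this is an alternative, not part of the original answer):
# For each visit, find the matching row of myresday and pull its ContDay;
# codes with no match become NA
Assorted$Cont_Day <- myresday$ContDay[match(Assorted$VISIT_PROV_ID, myresday$Res.Code)]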

Understanding the syntax for Column vs Row indexing in R

I'm a bit confused on the filtering scheme on an R data frame.
For example, let's say we have the following data frame titled dframe:
> str(dframe)
'data.frame': 143 obs. of 3 variables:
 $ Year     : int 1999 2005 2007 2008 2009 2010 2005 2006 2007 2008 ...
 $ Name     : Factor w/ 18 levels "AADAM","AADEN",..: 1 1 2 2 2 2 3 3 3 3 ...
 $ Frequency: int 5 6 10 34 38 12 10 6 10 5 ...
Now if I want to filter dframe where the values of Name is of "AADAM", the proper filter is:
dframe[dframe$Name=="AADAM",]
The part where I'm confused is why the comma doesn't come first. Why isn't it this: dframe[,dframe$Name=="AARUSH"]
UPDATE: You clarified your question is really "Please give examples of what sort of logical expressions are valid for filtering columns?"
I agree with you the syntax appears weird initially, but it has the following logic.
The bottom line is that column-filter expressions are typically less rich and expressive than row-filtering expressions, and in particular you can't chain logical indexing the way you do with rows.
Best way is to think of indexing expressions as the general form:
dframe[<row-index-expression>,<col-index-expression>]
where either index-expression is optional, so you can just do one and we (crucially!) need the comma to disambiguate whether it's row- or column-indexing:
dframe[<row-index-expression>,] # such as dframe[dframe$Name=="ADAM",]
dframe[,<col-index-expression>]
Before we look at examples of col-index-expression and what's valid (and invalid) to include in one, let's review and discuss how R does indexing - I had the same confusion when I started with it.
In this example, you have three columns. You can refer to them by their string names 'Year', 'Name', 'Frequency'. You can also refer to them by the column indices 1, 2, 3, where the numbers correspond to the entries of colnames(dframe). R does indexing with the '[' operator (and also '[['). Here are some valid examples of column-indexing:
dframe[,2] # column 2 / Name
dframe[,'Name'] # column 2 / Name
dframe[,c('Name','Frequency')] # string vector - very common
dframe[,c(2,3)] # integer vector - also very common
dframe[,c(F,T,T)] # logical vector - very rarely seen, and a pain in the butt to compute
Now, if you choose to use a logical vector as the column-index, it must have one entry per column; you can't refer to the columns by bare name inside it the way row-expressions refer to dframe$Name, because the expression isn't evaluated with the columns in scope.
Suppose you wanted to dynamically filter "give me only the factor columns from dframe".
Something like:
dframe[, sapply(dframe, is.factor)]  # sapply over the columns gives one TRUE/FALSE per column
(Note that apply() is the wrong tool here: it coerces the data frame to a matrix first, so is.factor would always come back FALSE.)
For more help and examples on indexing look at the '[' operator help-page:
Type ?'['
dframe[,dframe$Name=="ADAM"] is an invalid attempt at column-indexing: dframe$Name=="ADAM" is a logical vector of length 143 (one per row), but a logical column-index needs one entry per column.
Addendum: code to generate example dataframe (because you didn't dump us a dput output)
set.seed(123)
N <- 10
# paste(..., collapse='') returns the assembled string; cat() would only print it and return NULL
randomName <- function() paste(sample(letters, size = runif(1)*6 + 2, replace = TRUE), collapse = '')
dframe <- data.frame(Year = round(runif(N, 1980, 2014)),
                     Name = as.factor(replicate(N, randomName())),
                     Frequency = round(runif(N, 2, 40)))
You have to remember that when you're sub-setting, the part before the comma specifies which rows you want, and the part after the comma specifies which columns you want, i.e.:
dframe[rowsyouwant, columnsyouwant]
You're filtering based on columns, but you want all of the columns in your result, so the space after the comma is blank. You want some sub-set of rows, so your filtering specification goes before the comma, where the rows you want are specified.
As others have indicated, requesting a certain subset of a data frame requires the syntax [rows, columns]. Since dframe[has 143 rows, has 3 columns], any request for some part of dframe should be of the form
dframe[which of the 143 rows do I want?, which of the 3 columns do I want?].
Because dframe$Name is a vector of length 143, the comparison dframe$Name=='AADAM' is a vector of T/F values that also has length 143. So,
dframe[dframe$Name=='AADAM',]
is like saying
dframe[of the 143 rows I want these ones, I want all columns]
whereas
dframe[,dframe$Name=='AADAM']
generates an error because it's like saying
dframe[I want all rows, of the 143 columns I want these ones]
On a side note, you may want to look into the subset() function if you're not already familiar with it. You could get the same result by writing subset(dframe, Name=='AADAM')
As others have said, the structure within brackets is row, then column.
One way I think of the syntax of selecting data from a data.frame using:
dframe[dframe$Name=="AADAM",]
is to think of a noun, then a verb where:
dframe[] is the noun. It is the object on which you want to perform an action
and
[dframe$Name=="AADAM",] is the verb. It is the action you want to perform.
I have a silly way of expressing this to myself, but it keeps things straight in my mind:
Hey, you! dframe! I am going to... ...in this case, select all of your rows in which Name is equal to AADAM!
By keeping the column portion of [dframe$Name=="AADAM",] blank you are saying you want to keep all columns.
Sometimes it can be a little difficult to remember that you have to write dframe both inside and outside the brackets.
As for exactly why row comes first and column comes second, I do not know, but row had to be either first or second.
dframe <- read.table(text = '
  Year  Name Frequency
     1  ADAM         4
     3   BOB        10
     7 SALLY         5
     2  ADAM        12
     4   JIM         3
    12  ADAM         7
', header = TRUE)
dframe[,dframe$Name=="ADAM"]
# Error in `[.data.frame`(dframe, , dframe$Name == "ADAM") :
# undefined columns selected
dframe[dframe$Name=="ADAM",]
# Year Name Frequency
# 1 1 ADAM 4
# 4 2 ADAM 12
# 6 12 ADAM 7
dframe[,'Name']
# [1] ADAM BOB SALLY ADAM JIM ADAM
# Levels: ADAM BOB JIM SALLY
dframe[dframe$Name=="ADAM",'Name']
# [1] ADAM ADAM ADAM
# Levels: ADAM BOB JIM SALLY
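A small follow-up sketch (not from the original answers) showing both index positions used at once with the toy dframe above, a row condition before the comma and a column selection after it:
dframe[dframe$Name == "ADAM", c("Year", "Frequency")]
#   Year Frequency
# 1    1         4
# 4    2        12
# 6   12         7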

Excluding values in cross table [duplicate]

Possible Duplicate: R filtering out a subset
I have an R dataset. In this dataset, I wish to create a crosstable for two categorical variables using the package gmodels, and then run a chisq.test on them.
The two variables are witness and agegroup. witness takes the values 1, 2 and 9; agegroup takes the values 1 and 2.
I wish to exclude rows where witness = 9 and/or where a third variable, EMS, equals 2, but I am not sure how to proceed.
library(gmodels)
CrossTable (mydata$witness, mydata$agegroup)
chisq.test (mydata$witness, mydata$agegroup)
...so my question is, how can I do the above with the conditions witness != 9 and EMS != 2?
data:
witness agegroup EMS
      1        1   2
      2        2   2
      1        1   2
      2        1   2
      9        2   2
      2        2   2
      1        2   2
      9        2   2
      2        1   2
#save the data in your current working directory
data <- read.table("data", header=TRUE, sep = " ")
data$witness[data$witness == 9] <- NA  # witness is numeric, so compare against 9, not "9"
mydata <- data[!is.na(data$witness),]
library("gmodels")
CrossTable(mydata$witness, mydata$agegroup, chisq=TRUE)
You can leave the variable "EMS" in "mydata". It does no harm to your analysis!
HTH
I expect this question to be closed as it really seems like a duplicate. But as both Chase and I suggested, I think some form of subsetting is the simplest way to go about this, e.g.
mydata[mydata$witness !=9 & mydata$EMS !=2,]
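Spelled out with the original calls, a sketch (note that in the posted sample every row has EMS == 2, so on that snippet the EMS condition would empty the table; the assumption is that the full dataset also contains other EMS values):
library(gmodels)
sub <- mydata[mydata$witness != 9 & mydata$EMS != 2, ]
CrossTable(sub$witness, sub$agegroup, chisq = TRUE)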

filtering large data sets to exclude an identical element across all columns

I am a relatively new R user, and most of the complex coding (and packages) looks like Greek to me. It has been a long time since I used a programming language (Java/Perl) and I have only used R for very simple manipulations in the past (basic loading data from file, subsetting, ANOVA/T-Test). However, I am working on a project where I had no control over the data layout and the data file is very lengthy.
In my data, I have 172 rows which feature the Participant to a survey and 158 columns, each which represents the question number. The answers for each are 1-5. The raw data includes the number "99" to indicate that a question was not answered. I need to exclude any questions where a Participant did not answer without excluding the entire participant.
Part Q001 Q002 Q003 Q004
   1    2    4   99    2
   2    3   99    1    3
   3    4    4    2    5
   4   99    1    3    2
   5    1    3    4    2
In the past I have used the subset feature to filter my data
data.filter <- subset(data, Q001 != 99)
Which works fine when I am working with sets where all my answers are contained in one column. Then this would just delete the whole row where the answer was not available.
However, with the answers in this set spread across 158 columns, if I subset out 99 in column 1 (Q001), I also filter out that entire Participant.
I'd like to know if there is a way to filter/subset the data so that the large data set ends up with blanks where the 99s occurred, so they don't inflate or otherwise interfere with the statistics I run on the rest of the numbers. I need to be able to calculate means per question and run ANOVAs and t-tests on various questions.
Resp Q001 Q002 Q003 Q004
   1    2    4         2
   2    3         1    3
   3    4    4    2    5
   4         1    3    2
   5    1    3    4    2
Is this possible to do in R? I've tried filtering the file before loading it, but R won't read the data in when there are blanks, and I'd like to use the whole data set without creating a subset for each question (which I will do if I have to... it's just time consuming if there is a better approach or package to use).
Any assistance would be greatly appreciated!
You could replace the 99s with NA and then calculate the column means omitting NAs:
df <- replicate(20, sample(c(1, 2, 3, 99), 4))  # toy matrix: 4 respondents x 20 questions
colMeans(df)                 # no good: the 99s inflate the means
dfc <- df
dfc[dfc == 99] <- NA         # recode 99 as missing
colMeans(dfc, na.rm = TRUE)  # per-question means over the answered values only
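Applied to a data frame shaped like yours rather than the toy matrix, the same replacement works; a sketch that skips the Part column so the participant IDs are left alone (dat is a hypothetical name standing in for your survey frame):
dat <- data.frame(Part = 1:5,
                  Q001 = c(2, 3, 4, 99, 1),
                  Q002 = c(4, 99, 4, 1, 3))   # a fragment of the posted data
qcols <- setdiff(names(dat), "Part")   # the question columns only
dat[qcols][dat[qcols] == 99] <- NA     # blank out unanswered questions
colMeans(dat[qcols], na.rm = TRUE)     # per-question means, NAs ignored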
You can also indicate which values should be treated as NA when you read your data in. For your particular case:
mydata <- read.table('dat_base', na.strings = "99")  # add header = TRUE if the first row holds the column names
