Combining data using R (or maybe Excel) -- looping to match stimuli
I have two sets of data, which correspond to different experiment tasks that I want to merge for analysis. The problem is that I need to search and match up certain rows for particular stimuli and for particular participants. I'd like to use a script to save some trouble. This is probably quite simple, but I've never done it before.
Here's my problem more specifically:
In the first data set, each row corresponds to a two-alternative forced choice task where two stimuli are presented at a time and the participant selects one. In the second data set, each row corresponds to a single-item task where the participants are asked if they have ever seen the stimulus before. The stimuli in the second task match the stimuli in the pairs of the first task (so the second set has twice as many rows). I want to match them up and add two columns to the first dataset--one that states whether the left-side stimulus was recognized later and one for the right-side stimulus.
I assume this could be done with nested loops, but I'm not sure if there is an elegant way to do this, or perhaps a package.
As I understand it, your first dataset looks something like this:
(dat1 <- data.frame(person=1:2, stim1=1:2, stim2=3:4))
# person stim1 stim2
# 1 1 1 3
# 2 2 2 4
This would mean person 1 got stimuli 1 and 3 and person 2 got stimuli 2 and 4. Then your second dataset looks something like this:
(dat2 <- data.frame(person=c(1, 1, 2, 2), stim=c(1, 3, 4, 2), responded=c(0, 1, 0, 1)))
# person stim responded
# 1 1 1 0
# 2 1 3 1
# 3 2 4 0
# 4 2 2 1
This gives information about how each person responded to each stimulus they were given.
You can merge these two by matching person/stimulus pairs with the match function:
dat1$response1 <- dat2$responded[match(paste(dat1$person, dat1$stim1), paste(dat2$person, dat2$stim))]
dat1$response2 <- dat2$responded[match(paste(dat1$person, dat1$stim2), paste(dat2$person, dat2$stim))]
dat1
# person stim1 stim2 response1 response2
# 1 1 1 3 0 1
# 2 2 2 4 1 0
Another option (starting from the original dat1 and dat2) would be to merge twice with the merge function. You have a little less control over the names of the output columns, but it requires a bit less typing:
merged <- merge(dat1, dat2, by.x=c("person", "stim1"), by.y=c("person", "stim"))
merged <- merge(merged, dat2, by.x=c("person", "stim2"), by.y=c("person", "stim"))
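For reference, here is roughly what the doubly-merged result should look like -- merge's default suffixes (".x" and ".y") disambiguate the two responded columns, and the by columns move to the front:

merged
#   person stim2 stim1 responded.x responded.y
# 1      1     3     1           0           1
# 2      2     4     2           1           0

Here responded.x is the response to stim1 (from the first merge) and responded.y is the response to stim2, so you may want to rename them, e.g. names(merged)[4:5] <- c("response1", "response2").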
Related
Complex data calculation for consecutive zeros at row level in R (lag v/s lead)
I have a complex calculation that needs to be done. It is basically at the row level, and I am not sure how to tackle it. If you can help me with the approach or any functions, that would be really great. I will break my problem into two sub-problems for simplicity. Below is what my data looks like:

Group,Date,Month,Sales,lag7,lag6,lag5,lag4,lag3,lag2,lag1,lag0(reference),lead1,lead2,lead3,lead4,lead5,lead6,lead7
Group1,42005,1,2503,1,1,0,0,0,0,0,0,0,0,0,0,1,0,1
Group1,42036,2,3734,1,1,1,1,1,0,0,0,0,1,1,0,0,0,0
Group1,42064,3,6631,1,0,0,1,0,0,0,0,0,0,1,1,1,1,0
Group1,42095,4,8606,0,1,0,1,1,0,1,0,1,1,1,0,0,0,0
Group1,42125,5,1889,0,1,1,0,1,0,0,0,0,0,0,0,1,1,0
Group1,42156,6,4819,0,1,0,0,0,1,0,0,1,0,1,1,1,1,0
Group1,42186,7,5120,0,0,1,1,1,1,1,0,0,1,1,0,1,1,0

I have data for each Group at a monthly level. I would like to capture the two things below.

1. The count of consecutive zeros for each row, extending to either side of lag0(reference). The relevant cases are the runs of zeros contiguous with lag0(reference), in each direction up to the point where the first 1 is reached. I want to capture the count of zeros at the row level, along with the corresponding Sales value. Below is the output I am looking for in part 1:

Output:
Month,Sales,Count
1,2503,9
2,3734,3
3,6631,5
4,8606,0
5,1889,6
6,4819,1
7,5120,1

2. Identify the consecutive rows (rows 1, 2 and 3, and similarly rows 5 and 6) where any lag or lead overlaps for any 0 within the lag0(reference) range, and capture their Sales and Month values. For example, for rows 1, 2 and 3, the overlap happens at least at lag3, lag2, lag1 and lead1, lead2; this needs to be captured and tagged as case 1. Similarly, for rows 5 and 6, at least lag1 overlaps, so these need to be captured and tagged as case 2, along with their Sales and Month values. Row 7 does not overlap with the previous or following consecutive row, so it will not be captured. Below is the result I am looking for in part 2:

Month,Sales,Case
1,2503,1
2,3734,1
3,6631,1
5,1889,2
6,4819,2

I want to run this for multiple groups, so I will incorporate either dplyr or a loop to get the result. Currently, I am simply looking for the approach; I am not sure how to solve this problem. This is the first time I am trying to capture things at the row level in R. I am not looking for a complete solution, simply a first step to tackle this problem. I would appreciate any leads.
An option using rle for the first part of the calculation:

df$count <- apply(df[, -c(1:4)], 1, function(x){
  first <- rle(x[1:7])   # runs within lag7..lag1
  second <- rle(x[9:15]) # runs within lead1..lead7
  count <- 0
  if(first$values[length(first$values)] == 0){
    count <- first$lengths[length(first$values)]
  }
  if(second$values[1] == 0){
    count <- count + second$lengths[1]
  }
  count
})

df[, c("Month", "Sales", "count")]
#   Month Sales count
# 1     1  2503     9
# 2     2  3734     3
# 3     3  6631     5
# 4     4  8606     0
# 5     5  1889     6
# 6     6  4819     1
# 7     7  5120     1

Data:

df <- read.table(text = "Group,Date,Month,Sales,lag7,lag6,lag5,lag4,lag3,lag2,lag1,lag0(reference),lead1,lead2,lead3,lead4,lead5,lead6,lead7
Group1,42005,1,2503,1,1,0,0,0,0,0,0,0,0,0,0,1,0,1
Group1,42036,2,3734,1,1,1,1,1,0,0,0,0,1,1,0,0,0,0
Group1,42064,3,6631,1,0,0,1,0,0,0,0,0,0,1,1,1,1,0
Group1,42095,4,8606,0,1,0,1,1,0,1,0,1,1,1,0,0,0,0
Group1,42125,5,1889,0,1,1,0,1,0,0,0,0,0,0,0,1,1,0
Group1,42156,6,4819,0,1,0,0,0,1,0,0,1,0,1,1,1,1,0
Group1,42186,7,5120,0,0,1,1,1,1,1,0,0,1,1,0,1,1,0",
header = TRUE, stringsAsFactors = FALSE, sep = ",")
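To see why the function inspects the ends of the run-length encodings, here is rle called on row 1's lags (a quick illustrative call, not part of the answer's code): the last run is the block of zeros touching lag0(reference), so its length is the lag-side count.

rle(c(1, 1, 0, 0, 0, 0, 0))
# Run Length Encoding
#   lengths: int [1:2] 2 5
#   values : num [1:2] 1 0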
Mutate Cumsum with Previous Row Value
I am trying to run a cumsum on a data frame on two separate columns. They are essentially tabulations of events for two different variables. Only one variable can have an event recorded per row in the data frame. The way I attacked the problem was to create a new variable holding the value 1, and create two new columns to sum the variables' totals. This works fine, and I can get the correct total number of occurrences, but the problem is that in my current ifelse statement, if the event recorded is for variable "A", then variable "B" is assigned 0. Instead, for every row, I want to carry the other variable's previous value forward to the current row, so that I don't end up with gaps where it goes from 1 to 2, to 0, to 3. I don't want to run summarize on this either; I would prefer to keep each recorded instance and build the new columns with mutate.

CURRENT DF:

Event Value Variable Total.A Total.B
    1     1        A       1       0
    2     1        A       2       0
    3     1        B       0       1
    4     1        A       3       0

DESIRED RESULT:

Event Value Variable Total.A Total.B
    1     1        A       1       0
    2     1        A       2       0
    3     1        B       2       1
    4     1        A       3       1

Thanks!
You can use the property of booleans that they sum as ones and zeroes. Therefore, you can use the cumsum function:

DF$Total.A <- cumsum(DF$Variable == "A")

Or, as a more general approach provided by @Frank, you can do:

uv <- unique(as.character(DF$Variable))
DF[, paste0("Total.", uv)] <- lapply(uv, function(x) cumsum(DF$Variable == x))
If you have many levels to your factor, you can get this in one line by dummy coding and then cumsum-ing the matrix:

X <- model.matrix(~ Variable + 0, DF)
apply(X, 2, cumsum)
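A quick check on a hypothetical reconstruction of the question's data (the column names are assumed from the post) shows this reproduces the desired running totals:

DF <- data.frame(Event = 1:4, Value = 1, Variable = c("A", "A", "B", "A"))
X <- model.matrix(~ Variable + 0, DF)
apply(X, 2, cumsum)
#   VariableA VariableB
# 1         1         0
# 2         2         0
# 3         2         1
# 4         3         1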
create new dataframe based on 2 columns
I have a large dataset totaldata containing multiple rows relating to each animal. Some of them are LactationNo 1 readings, and others are LactationNo 2 readings. I want to extract all animals that have readings from both LactationNo 1 and LactationNo 2 and store them in another dataframe lactboth. There are 16 other columns of variables of varying types in each row that I need to preserve in the new dataframe. I have tried merge, aggregate and %in%, but perhaps I'm using them incorrectly, e.g.:

lactboth <- totaldata[totaldata$LactationNo %in% c(1, 2), ]

Animal ID is column 1, and LactationNo is column 2. I can't figure out how to select only those AnimalId with both LactationNo 1 and 2. I have also tried:

lactboth <- totaldata[which(totaldata$LactationNo == 1 & totaldata$LactationNo == 2), ]

I feel like this should be simple, but I couldn't find an example to follow that was quite the same. Help appreciated!!
If I understand your question correctly, then your dataset looks something like this:

  AnimalId LactationNo
1        A           1
2        B           2
3        E           2
4        A           2
5        E           2

and you'd like to select animals that happen to have both lactation numbers 1 and 2 (like A in this particular example). If that's the case, then you can simply use merge:

lactboth <- merge(totaldata[totaldata$LactationNo == 1, ],
                  totaldata[totaldata$LactationNo == 2, ],
                  by.x = "AnimalId", by.y = "AnimalId")[, "AnimalId"]
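Note that this yields the qualifying AnimalId values rather than the full rows. A possible follow-up step (a sketch, assuming the ID column really is named AnimalId) to recover all 16 other columns for those animals:

both_ids <- unique(lactboth)
lactboth_full <- totaldata[totaldata$AnimalId %in% both_ids, ]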
finding "almost" duplicates indices in a data table and calculate the delta
I have a smallish (2k) data set that contains questionnaire answers filled out by students who were sampled twice a year. Not all the students that were present for the first wave were there for the second wave, and vice versa. For each student, a unique ID was created that consisted of the school code, the class code, the student number, and the wave as a decimal point. For example, 100612.1 is a student from school 10, grade 6, number 12 on the names list, and this was the first wave. The idea behind the decimal point was to have a way to identify the same student again in the data set (the only value which differs by less than abs(1) from a given ID is the same student in the other wave). At least, that was the idea.

I was thinking of a script that would do the following:
- find the rows whose unique IDs are less than abs(1) from one another
- for those rows, generate a new row (in a new table) that consists of the student ID and the delta of the measured variables (i.e. value in wave 2 - value in wave 1)

I am new to R, but I have a tiny bit of background in other OOP languages. I thought about creating a for loop that runs from 1 to length(df) and just looks for each row's "brother". My gut feeling tells me that this is not the way things are done in R. Any ideas? All I need is a quick way of sifting through the data looking for the second-wave row; I think the rest should be straightforward from there. Thank you for helping.

PS. Since this is my first post here, I apologize beforehand for any wrongdoings in this post... :)
The question alludes to data.table, so here is a way to adapt @jed's answer using that package.

ids <- c(100612.1, 100612.2, 100613.1, 100613.2, 110714.1, 201802.2)
answers <- c(5, 4, 3, 4, 1, 0)

Example data as before; now, instead of data.frame and tapply, you can do this:

library(data.table)
surveyDT <- data.table(ids, answers)
surveyDT[, `:=` (child = substr(ids, 1, 6), wave = substr(ids, 8, 8))] # split IDs
# note the multiple assign-by-reference := syntax above
setkey(surveyDT, child, wave) # order data

Calculate delta on the keyed data, grouping by child:

surveyDT[, delta := diff(answers), by = child]

unique(surveyDT[, delta, by = child]) # list results
#     child delta
# 1: 100612    -1
# 2: 100613     1
# 3: 110714    NA
# 4: 201802    NA

To remove rows with NA values for delta:

unique(surveyDT[, .SD[!is.na(delta)], by = child])
#     child      ids answers wave delta
# 1: 100612 100612.1       5    1    -1
# 2: 100613 100613.1       3    1     1

Use .SDcols to output only specific columns (in addition to the by columns), for example:

unique(surveyDT[, .SD[!is.na(delta)], by = child, .SDcols = 'delta'])
#     child delta
# 1: 100612    -1
# 2: 100613     1

It took me some time to get acquainted with data.table syntax, but now I find it more intuitive, and it's fast for big data.
There are two ways that come to mind. The easiest is to use the function floor(), which returns the integer part of each ID. For example:

floor(100612.1)
# [1] 100612
floor(9.9)
# [1] 9

Alternatively, you could write a fairly simple regular expression to get rid of the decimal part. Then you can use unique() to find the rows that are or are not duplicated entries.
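A possible sketch of the regex route (here duplicated() is used in both directions to flag students present in both waves):

ids <- c(100612.1, 100612.2, 100613.1, 100613.2, 110714.1, 201802.2)
base_ids <- sub("\\..*$", "", as.character(ids)) # strip the wave suffix
ids[duplicated(base_ids) | duplicated(base_ids, fromLast = TRUE)]
# [1] 100612.1 100612.2 100613.1 100613.2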
Let's make some fake data so we can see our problem easily:

ids <- c(100612.1, 100612.2, 100613.1, 100613.2, 110714.1, 201802.2)
answers <- c(5, 4, 3, 4, 1, 0)
survey <- data.frame(ids, answers)

Now let's split our ids into two different columns:

survey$child_id <- substr(survey$ids, 1, 6)
survey$wave_id <- substr(survey$ids, 8, 8)

Then we'll order by child and wave, and compute differences within each child (note that the reordered frame has to be assigned back):

survey <- survey[order(survey$child_id, survey$wave_id), ]
survey$delta <- unlist(tapply(survey$answers, survey$child_id, function(x) c(NA, diff(x))))

Output:

       ids answers child_id wave_id delta
1 100612.1       5   100612       1    NA
2 100612.2       4   100612       2    -1
3 100613.1       3   100613       1    NA
4 100613.2       4   100613       2     1
5 110714.1       1   110714       1    NA
6 201802.2       0   201802       2    NA
Unit of Analysis Conversion
We are working on a social capital project so our data set has a list of an individual's organizational memberships. So each person gets a numeric ID and then a sub ID for each group they are in. The unit of analysis, therefore, is the group they are in. One of our variables is a three point scale for the type of group it is. Sounds simple enough? We want to bring the unit of analysis to the individual level and condense the type of group it is into a variable signifying how many different types of groups they are in. For instance, person one is in eight groups. Of those groups, three are (1s), three are (2s), and two are (3s). What the individual level variable would look like, ideally, is 3, because she is in all three types of groups. Is this possible in the least?
## simulate data
## individuals
n <- 10
## groups
g <- 5
## group types
gt <- 3
## individuals * group memberships
N <- 20

## individuals data frame
di <- data.frame(individual = sample(1:n, N, replace = TRUE),
                 group = sample(1:g, N, replace = TRUE))
## groups data frame
dg <- data.frame(group = 1:g, type = sample(1:gt, g, replace = TRUE))

## merge
dm <- merge(di, dg)
## order - not necessary, but nice
dm <- dm[order(dm$individual), ]

## number of distinct group types per individual
library(plyr)
dr <- ddply(dm, "individual", function(x) length(unique(x$type)))

> head(dm)
   group individual type
2      2          1    2
8      2          1    2
20     5          1    1
9      3          3    2
12     3          3    2
17     4          3    2
> head(dr)
  individual V1
1          1  2
2          3  1
3          4  2
4          5  1
5          6  1
6          7  1
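For reference, a minimal dplyr equivalent of the ddply step (a sketch, assuming the dplyr package is available):

library(dplyr)
dr2 <- dm %>%
  group_by(individual) %>%
  summarise(n_types = n_distinct(type))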
I think what you're asking is whether it is possible to count the number of unique types of group to which an individual belongs. If so, then that is certainly possible. I wouldn't be able to tell you how to do it in R since I don't know a lot of R, and I don't know what your data looks like. But there's no reason why it wouldn't be possible. Is this data coming from a database? If so, then it might be easier to write a SQL query to compute the value you want, rather than to do it in R. If you describe your schema, there should be lots of people here who could give you the query you need.