Creating a new column in R

I have a data.frame like the following:
regions admit men_age group
1 1234 34 2
2 3416 51 1
3 2463 26 3
4 1762 29 2
5 2784 31 4
6 999 42 1
7 2111 23 2
8 1665 36 3
9 2341 21 4
10 1723 33 1
I would like to create new columns using admit and group as follows:
regions admit men_age group admit1 admit2 admit3 admit4
1 1234 34 2 0 1234 0 0
2 3416 51 1 3416 0 0 0
3 2463 26 3 0 0 2463 0
4 1762 29 2 0 1762 0 0
5 2784 31 4 0 0 0 2784
6 999 42 1 999 0 0 0
7 2111 23 2 0 2111 0 0
8 1665 36 3 0 0 1665 0
9 2341 21 4 0 0 0 2341
10 1723 33 1 1723 0 0 0
In fact, what I want to do is create four new admit columns according to the group column: in the admit1 column, for rows where group is 1, put the corresponding admit value, otherwise put zero; in the admit2 column, for rows where group is 2, put the corresponding admit value, otherwise put zero; and the same applies to the other two columns.
I tried a couple of ways to solve it, but failed.
Could someone please help me solve this?

A solution using the tidyverse: we can create the columns and then spread them with fill = 0.
library(tidyverse)

dat2 <- dat %>%
  mutate(group2 = str_c("admit", group), admit2 = admit) %>%
  spread(group2, admit2, fill = 0)
dat2
# regions admit men_age group admit1 admit2 admit3 admit4
# 1 1 1234 34 2 0 1234 0 0
# 2 2 3416 51 1 3416 0 0 0
# 3 3 2463 26 3 0 0 2463 0
# 4 4 1762 29 2 0 1762 0 0
# 5 5 2784 31 4 0 0 0 2784
# 6 6 999 42 1 999 0 0 0
# 7 7 2111 23 2 0 2111 0 0
# 8 8 1665 36 3 0 0 1665 0
# 9 9 2341 21 4 0 0 0 2341
# 10 10 1723 33 1 1723 0 0 0
DATA
dat <- read.table(text = "regions admit men_age group
1 1234 34 2
2 3416 51 1
3 2463 26 3
4 1762 29 2
5 2784 31 4
6 999 42 1
7 2111 23 2
8 1665 36 3
9 2341 21 4
10 1723 33 1",
header = TRUE)
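Note that spread() has since been superseded in tidyr; with a newer tidyr the same reshape can be written with pivot_wider(). A minimal sketch, assuming the same dat as above:
library(tidyverse)

dat2 <- dat %>%
  mutate(group2 = str_c("admit", group), admit2 = admit) %>%
  pivot_wider(names_from = group2, values_from = admit2, values_fill = 0)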

A base R solution would be to use ifelse(). Supposing your data.frame is x, you could do this:
# create the columns with the selected values
for( i in 1:4 ) x[ i + 4 ] <- ifelse( x$group == i, x$admit, 0 )
# rename the columns to your liking
colnames( x )[ 5:8 ] <- c( "admit1", "admit2", "admit3", "admit4" )
This gives you
> x
regions admit men_age group admit1 admit2 admit3 admit4
1 1 1234 34 2 0 1234 0 0
2 2 3416 51 1 3416 0 0 0
3 3 2463 26 3 0 0 2463 0
4 4 1762 29 2 0 1762 0 0
5 5 2784 31 4 0 0 0 2784
6 6 999 42 1 999 0 0 0
7 7 2111 23 2 0 2111 0 0
8 8 1665 36 3 0 0 1665 0
9 9 2341 21 4 0 0 0 2341
10 10 1723 33 1 1723 0 0 0
If you don't like the explicit naming, you could do it in the for() loop already:
for( i in 1:4 )
{
adm <- paste ( "admit", i, sep = "" )
x[ adm ] <- ifelse( x$group == i, x$admit, 0 )
}
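If you prefer to avoid the explicit loop entirely, the same columns can be built in one assignment, e.g. with sapply(); a sketch assuming the same x:
x[paste0("admit", 1:4)] <- sapply(1:4, function(i) ifelse(x$group == i, x$admit, 0))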

Related

R: table frequencies of letters in string based on Alphabet

I need to compute letter frequencies for a large list of words. For each position in a word (first, second, ...), I need to find how many times each letter (a-z) appears in the list at that position, and then tabulate the data by word position.
For example, if my word list is: words <- c("swims", "seems", "gills", "draws", "which", "water")
then the result table should look like this:
letter  first position  second position  third position  fourth position  fifth position
a       0               1                1               0                0
b       0               0                0               0                0
c       0               0                0               1                0
d       1               0                0               0                0
e       0               1                1               1                0
f       0               0                0               0                0
... (continued until z)
All words are of the same length (5).
What I have so far is:
library(dplyr)

alphabet <- letters[1:26]
words.df <- data.frame("Words" = words)
# note: the column is "Words" (capital W), so $Words, not $words
words.df <- words.df %>% mutate("First_place"  = substr(words.df$Words, 1, 1))
words.df <- words.df %>% mutate("Second_place" = substr(words.df$Words, 2, 2))
words.df <- words.df %>% mutate("Third_place"  = substr(words.df$Words, 3, 3))
words.df <- words.df %>% mutate("Fourth_place" = substr(words.df$Words, 4, 4))
words.df <- words.df %>% mutate("Fifth_place"  = substr(words.df$Words, 5, 5))
x1 <- table(factor(words.df$First_place,  alphabet))
x2 <- table(factor(words.df$Second_place, alphabet))
x3 <- table(factor(words.df$Third_place,  alphabet))
x4 <- table(factor(words.df$Fourth_place, alphabet))
x5 <- table(factor(words.df$Fifth_place,  alphabet))
My code is not effective and produces a separate table for each letter position. All help will be appreciated.
In base R, use table():
table(let = unlist(strsplit(words, '')), pos = sequence(nchar(words)))
pos
let 1 2 3 4 5
a 0 1 1 0 0
c 0 0 0 1 0
d 1 0 0 0 0
e 0 1 1 1 0
g 1 0 0 0 0
h 0 1 0 0 1
i 0 1 2 0 0
l 0 0 1 1 0
m 0 0 0 2 0
r 0 1 0 0 1
s 2 0 0 0 4
t 0 0 1 0 0
w 2 1 0 1 0
Note that if you need all the values from a-z then use
table(factor(unlist(strsplit(words,'')), letters), sequence(nchar(words)))
Also to get a dataframe you could do:
d <- table(factor(unlist(strsplit(words,'')), letters), sequence(nchar(words)))
cbind(letters = rownames(d), as.data.frame.matrix(d))
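The trick is that sequence(nchar(words)) generates the within-word positions (1 through 5 for each five-letter word), so each character produced by strsplit() is paired with its position before table() cross-tabulates them. For example:
sequence(c(5, 5))
# [1] 1 2 3 4 5 1 2 3 4 5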
Here is a tidyverse solution using dplyr, purrr, and tidyr:
strsplit(words.df$Words, "") %>%
  map_dfr(~ setNames(.x, seq_along(.x))) %>%
  pivot_longer(everything(),
               values_drop_na = TRUE,
               names_to = "pos",
               values_to = "letter") %>%
  count(pos, letter) %>%
  pivot_wider(names_from = pos,
              names_glue = "pos{pos}",
              id_cols = letter,
              values_from = n,
              values_fill = 0L)
Output (produced from a larger word list than the six example words, hence positions up to pos11):
letter pos1 pos2 pos3 pos4 pos5 pos6 pos7 pos8 pos9 pos10 pos11
1 a 65 127 88 38 28 17 14 5 3 0 0
2 b 58 4 7 9 2 4 2 0 1 0 0
3 c 83 14 45 37 20 19 8 3 2 0 0
4 C 2 0 0 0 0 0 0 0 0 0 0
5 d 43 8 33 47 21 22 9 3 1 1 0
6 e 45 156 81 132 114 69 48 23 14 2 2
7 f 54 11 18 10 5 2 1 0 0 0 0
8 g 23 7 27 21 15 8 7 1 0 0 0
9 h 38 56 6 28 21 10 3 3 1 1 0
10 i 25 106 51 58 38 28 8 4 1 0 0
11 j 6 0 2 2 0 0 0 0 0 0 0
12 k 9 1 6 22 12 0 0 0 0 0 0
13 l 45 41 54 54 36 9 7 6 0 2 0
14 m 45 8 31 19 8 8 4 2 0 0 0
15 n 23 42 75 53 34 41 16 16 4 2 0
16 o 28 167 76 41 38 9 11 2 1 0 0
17 p 72 20 34 30 8 3 1 1 1 0 0
18 q 7 2 1 0 0 0 0 0 0 0 0
19 r 46 74 92 59 56 45 12 9 1 1 0
20 s 119 8 67 35 31 22 18 4 1 0 0
21 t 65 30 73 83 57 42 31 9 6 3 1
22 u 12 66 39 36 20 7 7 2 0 0 0
23 v 8 7 20 12 5 5 1 0 0 0 0
24 w 53 8 13 10 2 3 0 1 0 0 0
25 y 6 4 16 15 17 15 10 5 6 1 1
26 x 0 12 5 0 0 0 0 0 0 0 0
27 z 0 0 1 0 0 0 1 1 0 0 0

Use value in column as argument in function

I have two data frames: a small one keyed by three index variables (User, Log and Pass), and a big one that has many values for each combination of these variables.
I'm trying to pull those many values from the big data frame into a list column in the smaller one, so that I can compute summary statistics later.
Small.DF
User,Log,Pass,Valid.Event.Pass
1 11 76 Yes
1 11 46 Yes
1 15 38 Yes
1 15 47 Yes
1 15 386 Yes
1 15 388 Yes
1 8 119 Yes
1 8 120 Yes
1 8 121 Yes
1 8 122 Yes
1 8 123 Yes
1 16 35 Yes
1 16 37 Yes
1 17 22 Yes
1 17 102 Yes
1 12 203 Yes
1 12 205 Yes
1 12 207 Yes
1 12 209 Yes
1 12 24 Yes
2 13 29 Yes
2 1 31 Yes
Big.DF
User,Log,Pass,Passing.Distance
1 11 0 739.5
1 11 0 411.5
1 11 0 0
1 11 0 739.5
1 11 0 0
1 11 0 739.5
1 11 0 0
1 0 0 739.5
1 0 0 0
1 0 0 739.5
1 0 0 0
1 0 0 739.5
1 0 0 0
1 0 0 739.5
1 15 76 371.5
1 15 76 371.5
1 15 76 370.5
1 15 767 368.5
1 15 76 367.5
1 15 76 366.5
1 15 76 365.5
1 15 76 364.5
1 15 76 364.5
1 15 76 363.5
1 15 76 364.5
1 15 76 0
1 15 76 739.5
1 15 76 369.5
1 15 76 0
1 15 76 739.5
1 15 0 0
1 15 0 739.5
1 15 0 0
1 15 0 739.5
1 15 0 0
1 15 0 739.5
1 15 0 0
1 15 0 739.5
1 15 0 0
1 15 0 739.5
1 15 0 0
1 15 0 739.5
1 15 0 0
I'm interested in subsetting the values in Big.DF that match these three variables, plus the 100 values before and the 100 values after.
To achieve this I've written a function that will create such a list:
newfn <- function(User, Log, Pass) {
  idx <- which(Big.DF$User == User & Big.DF$Log == Log & Big.DF$Pass == Pass)
  subset(Big.DF[(min(idx) - 100):(max(idx) + 100), ], select = Passing.Distance)
}
But I can't figure out how to apply this function over each row in Small.DF.
The simplest formulation I can think of would be
Small.DF$listofvalues <- newfn(Small.DF$User, Small.DF$Log, Small.DF$Pass)
But that won't work for several reasons I can see....
If it were apply(), it would be something like this:
Small.DF$listofvalues <- apply(Small.DF, 1, newfn)
But this doesn't quite work, and sweep() doesn't seem quite right either. Is there a function I'm missing?
Figured it out....
rowfinder <- function(User, Log, Pass) {
  idx <- which(Sensor.Data$User == User & Sensor.Data$Log == Log & Sensor.Data$Pass == Pass)
  subset(Sensor.Data[(min(idx) - 100):(max(idx) + 100), ], select = LH.passing.distance)
}
SmallDF$LHvalues <- apply(SmallDF[, c('User', 'Log', 'Pass')], 1,
                          function(y) rowfinder(y['User'], y['Log'], y['Pass']))
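A note on the design: apply() coerces the data frame to a matrix first, so an elementwise Map() over the three columns is a slightly safer equivalent; a sketch assuming the same rowfinder():
SmallDF$LHvalues <- Map(rowfinder, SmallDF$User, SmallDF$Log, SmallDF$Pass)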

How to combine rows into one row in TermDocumentMatrix?

I am trying to combine rows into one row in a TermDocumentMatrix
(I know every row represents one word),
e.g. cabin, staff -> crews.
Because 'cabin', 'staff' and 'crew' mean the same thing,
I am trying to combine the rows that represent 'cabin' and 'staff'
into the one row that represents 'crew',
but it doesn't work at all.
R says: argument "weighting" is missing, with no default
The code I tried is below:
library(httr)   # GET()
library(rvest)  # read_html(), html_nodes(), html_text()
library(tm)     # Corpus(), TermDocumentMatrix()
library(slam)   # rollup()

base_url <- 'http://www.airlinequality.com/airline-reviews/cathay-pacific-airways/'
all.reviews <- c()
for (i in 1:10) {
  print(i)
  url <- paste(base_url, 'page/', i, '/', sep = "")
  r <- GET(url)
  h <- read_html(r)
  comment_area <- html_nodes(h, '.tc_mobile')
  comments <- html_nodes(comment_area, '.text_content')
  reviews <- html_text(comments)
  all.reviews <- c(all.reviews, reviews)
}
cps <- Corpus(VectorSource(all.reviews))
cps <- tm_map(cps, content_transformer(tolower))
cps <- tm_map(cps, content_transformer(stripWhitespace))
cps <- tm_map(cps, content_transformer(removePunctuation))
cps <- tm_map(cps, content_transformer(removeNumbers))
cps <- tm_map(cps, removeWords, stopwords("english"))
tdm <- TermDocumentMatrix(cps, control = list(
  wordLengths = c(3, 20),
  weighting = weightTf))
rows.cabin = grep('cabin|staff', row.names(tdm))
rows.cabin
# [1] 235 1594
count.cabin = as.array(rollup(tdm[rows.cabin,], 1))
count.cabin
#Docs
#Terms 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
#1 0 1 1 0 0 2 2 0 0 1 1 0 4 0 1 0 1 0 2 1 0 0 1 3 1 4 2 0 3 0 1 1 4 0 0 2 1 0 0 2 1 0 2 1 3 3 1
#Docs
#Terms 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91
#1 0 1 0 1 2 3 2 2 1 1 0 2 0 0 0 0 0 2 0 1 0 0 4 0 2 2 1 3 1 1 1 1 0 0 0 5 3 0 2 1 0 1 0 0
#Docs
#Terms 92 93 94 95 96 97 98 99 100
#1 1 5 2 1 0 0 0 1 0
row.crews = grep('crews', row.names(tdm))
row.crews
#[1] 408
tdm[row.crews,] = count.cabin
rows.cabin = setdiff(rows.cabin, row.crews) # ok
tdm = tdm[-rows.cabin,] # ok
dtm = as.DocumentTermMatrix(tdm)
# Error in .TermDocumentMatrix(t(x), weighting) :
# argument "weighting" is missing, with no default
Maybe this is not the right approach to combining rows in a TermDocumentMatrix.
Please fix this code or suggest a better approach to solve this problem.
Thanks in advance.
Hmm I wonder why you stick to your approach, which obviously does not work, instead of just copying+pasting+adjusting* my suggestion from here?
library(tm)
library(httr)
library(rvest)
library(slam)
# [...] # your code
inspect(tdm[grep("cabin|staff|crew", Terms(tdm), ignore.case=TRUE), 1:15])
# Docs
# Terms 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
# cabin 0 0 0 0 0 1 1 0 0 1 0 0 3 0 0
# crew 0 0 0 1 1 1 1 0 2 1 0 1 0 2 0
# crews 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
# staff 0 1 1 0 0 1 1 0 0 0 1 0 1 0 1
dict <- list(
  "CREW" = grep("cabin|staff|crew", Terms(tdm), ignore.case = TRUE, value = TRUE)
)
terms <- Terms(tdm)
for (x in seq_along(dict))
  terms[terms %in% dict[[x]]] <- names(dict)[x]
tdm <- slam::rollup(tdm, 1, terms, sum)
inspect(tdm[grep("cabin|staff|crew", Terms(tdm), ignore.case=TRUE), 1:15])
# Docs
# Terms 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
# CREW 0 1 1 1 1 3 3 0 2 2 1 1 4 2 1
*I only adjusted the line inside the dict definition...
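As for the original error: as.DocumentTermMatrix() has no default weighting, which is exactly what the message says. If you still want to convert the edited matrix, passing a weighting explicitly should work; a sketch assuming plain term frequencies:
dtm <- as.DocumentTermMatrix(tdm, weighting = weightTf)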

Calculate post error slowing in R

For my research, I would like to calculate the post-error slowing in the stop signal task to find out whether people become slower after they failed to inhibit their response. Here is some data and I would like to do the following:
1. For each subject, determine first whether a trial is a stop-trial (signal = 1).
2. For each correct stop-trial (signal = 1 & correct = 2), determine whether the next trial (thus the trial directly after the stop-trial) is a go-trial (signal = 0), and calculate the average reaction time over all these go-trials that directly follow a correct stop-trial and are answered correctly (signal = 0 & correct = 2).
3. For each incorrect stop-trial (signal = 1 & correct = 0), determine whether the next trial is a go-trial (signal = 0), and calculate the average reaction time over all these go-trials that directly follow a failed stop-trial and are answered correctly (correct = 2).
4. Calculate the difference between the RTs calculated in steps 2 and 3 (= post-error slowing).
I'm not experienced enough in R to achieve this. I hope someone can help me with this script.
subject trial signal correct RT
1 1 0 2 755
1 2 0 2 543
1 3 1 0 616
1 4 0 2 804
1 5 0 2 594
1 6 0 2 705
1 7 1 2 0
1 8 1 2 0
1 9 0 2 555
1 10 1 0 604
1 11 0 2 824
1 12 0 2 647
1 13 0 2 625
1 14 0 2 657
1 15 1 0 578
1 16 0 2 810
1 17 1 2 0
1 18 0 2 646
1 19 0 2 574
1 20 0 2 748
1 21 0 0 856
1 22 0 2 679
1 23 0 2 738
1 24 0 2 620
1 25 0 2 715
1 26 1 2 0
1 27 0 2 675
1 28 0 2 560
1 29 1 0 584
1 30 0 2 564
1 31 0 2 994
1 32 1 2 0
1 33 0 2 715
1 34 0 2 644
1 35 0 2 545
1 36 0 2 528
1 37 1 2 0
1 38 0 2 636
1 39 0 2 684
1 40 1 2 0
1 41 0 2 653
1 42 0 2 766
1 43 0 2 747
1 44 0 2 821
1 45 0 2 612
1 46 0 2 624
1 47 0 2 665
1 48 1 2 0
1 49 0 2 594
1 50 0 2 665
1 51 1 0 658
1 52 0 2 800
1 53 1 2 0
1 54 1 0 738
1 55 0 2 831
1 56 0 2 815
1 57 0 2 776
1 58 0 2 710
1 59 0 2 842
1 60 1 0 516
1 61 0 2 758
1 62 1 2 0
1 63 0 2 628
1 64 0 2 713
1 65 0 2 835
1 66 1 0 791
1 67 0 2 871
1 68 0 2 816
1 69 0 2 769
1 70 0 2 930
1 71 0 2 676
1 72 0 2 868
2 1 0 2 697
2 2 0 2 689
2 3 0 2 584
2 4 1 0 788
2 5 0 2 448
2 6 0 2 564
2 7 0 2 587
2 8 1 0 553
2 9 0 2 706
2 10 0 2 442
2 11 1 0 245
2 12 0 2 601
2 13 0 2 774
2 14 1 0 579
2 15 0 2 652
2 16 0 2 556
2 17 0 2 963
2 18 0 2 725
2 19 0 2 751
2 20 0 2 709
2 21 0 2 741
2 22 1 0 613
2 23 0 2 781
2 24 1 2 0
2 25 0 2 634
2 26 1 2 0
2 27 0 2 487
2 28 1 2 0
2 29 0 2 692
2 30 0 2 745
2 31 1 2 0
2 32 0 2 610
2 33 0 2 836
2 34 1 0 710
2 35 0 2 757
2 36 0 2 781
2 37 0 2 1029
2 38 0 2 832
2 39 1 0 626
2 40 1 2 0
2 41 0 2 844
2 42 0 2 837
2 43 0 2 792
2 44 0 2 789
2 45 0 2 783
2 46 0 0 0
2 47 0 0 468
2 48 0 2 686
This may be too late to be useful, but here's my solution: I first split the data frame by subject and then apply the same algorithm to each subject. The result is:
# 1 2
# -74.60317 23.39286
X <- read.table(
text=" subject trial signal correct RT
1 1 0 2 755
1 2 0 2 543
1 3 1 0 616
1 4 0 2 804
1 5 0 2 594
1 6 0 2 705
1 7 1 2 0
1 8 1 2 0
1 9 0 2 555
1 10 1 0 604
1 11 0 2 824
1 12 0 2 647
1 13 0 2 625
1 14 0 2 657
1 15 1 0 578
1 16 0 2 810
1 17 1 2 0
1 18 0 2 646
1 19 0 2 574
1 20 0 2 748
1 21 0 0 856
1 22 0 2 679
1 23 0 2 738
1 24 0 2 620
1 25 0 2 715
1 26 1 2 0
1 27 0 2 675
1 28 0 2 560
1 29 1 0 584
1 30 0 2 564
1 31 0 2 994
1 32 1 2 0
1 33 0 2 715
1 34 0 2 644
1 35 0 2 545
1 36 0 2 528
1 37 1 2 0
1 38 0 2 636
1 39 0 2 684
1 40 1 2 0
1 41 0 2 653
1 42 0 2 766
1 43 0 2 747
1 44 0 2 821
1 45 0 2 612
1 46 0 2 624
1 47 0 2 665
1 48 1 2 0
1 49 0 2 594
1 50 0 2 665
1 51 1 0 658
1 52 0 2 800
1 53 1 2 0
1 54 1 0 738
1 55 0 2 831
1 56 0 2 815
1 57 0 2 776
1 58 0 2 710
1 59 0 2 842
1 60 1 0 516
1 61 0 2 758
1 62 1 2 0
1 63 0 2 628
1 64 0 2 713
1 65 0 2 835
1 66 1 0 791
1 67 0 2 871
1 68 0 2 816
1 69 0 2 769
1 70 0 2 930
1 71 0 2 676
1 72 0 2 868
2 1 0 2 697
2 2 0 2 689
2 3 0 2 584
2 4 1 0 788
2 5 0 2 448
2 6 0 2 564
2 7 0 2 587
2 8 1 0 553
2 9 0 2 706
2 10 0 2 442
2 11 1 0 245
2 12 0 2 601
2 13 0 2 774
2 14 1 0 579
2 15 0 2 652
2 16 0 2 556
2 17 0 2 963
2 18 0 2 725
2 19 0 2 751
2 20 0 2 709
2 21 0 2 741
2 22 1 0 613
2 23 0 2 781
2 24 1 2 0
2 25 0 2 634
2 26 1 2 0
2 27 0 2 487
2 28 1 2 0
2 29 0 2 692
2 30 0 2 745
2 31 1 2 0
2 32 0 2 610
2 33 0 2 836
2 34 1 0 710
2 35 0 2 757
2 36 0 2 781
2 37 0 2 1029
2 38 0 2 832
2 39 1 0 626
2 40 1 2 0
2 41 0 2 844
2 42 0 2 837
2 43 0 2 792
2 44 0 2 789
2 45 0 2 783
2 46 0 0 0
2 47 0 0 468
2 48 0 2 686", header=TRUE)
sapply(split(X, X["subject"]), function(D) {
  PCRT <- with(D, RT[which(c(signal[-1], NA) == 1 & c(correct[-1], NA) == 2 & signal == 0)])
  PERT <- with(D, RT[which(c(signal[-1], NA) == 1 & c(correct[-1], NA) == 0 & signal == 0)])
  mean(PERT) - mean(PCRT)
})
This is ok if you can be sure that every respondent has at least 1 correct and 1 incorrect "stop" trial followed by a "go" trial. A more general case would be (giving NA if they are either always correct or always mistaken):
sapply(split(X, X["subject"]), function(D) {
  PCRT <- with(D, RT[which(c(signal[-1], NA) == 1 & c(correct[-1], NA) == 2 & signal == 0)])
  PERT <- with(D, RT[which(c(signal[-1], NA) == 1 & c(correct[-1], NA) == 0 & signal == 0)])
  if (length(PCRT) > 0 & length(PERT) > 0) mean(PERT) - mean(PCRT) else NA
})
Does that help you? A little bit redundant maybe, but I tried to follow your steps as best as possible (not sure whether I mixed something up, please check for yourself looking at the table). The idea is to put the data in a csv file first and treat it as a data frame. Find the csv raw file here: http://pastebin.com/X5b2ysmQ
data <- read.csv("datatable.csv", header = TRUE)
data[, "condition1"] <- data[, "signal"] == 1
data[, "condition2"] <- data[, "condition1"] & data[, "correct"] == 2
data[, "RT1"] <- NA
for (i in which(data[, "condition2"])) {
  # the next trial is a go trial answered correctly
  if (nrow(data) > i && !data[i + 1, "condition1"] && data[i + 1, "correct"] == 2)
    data[i + 1, "RT1"] <- data[i + 1, "RT"]
}
averageRT1 <- mean(data[!is.na(data[, "RT1"]), "RT1"])
data[, "RT2"] <- NA
for (i in which(data[, "condition1"] & data[, "correct"] == 0)) {
  # the next trial is a go trial answered correctly
  if (nrow(data) > i && !data[i + 1, "condition1"] && data[i + 1, "correct"] == 2)
    data[i + 1, "RT2"] <- data[i + 1, "RT"]
}
averageRT2 <- mean(data[!is.na(data[, "RT2"]), "RT2"])
postErrorSlowing <- abs(averageRT2 - averageRT1)
@Nilsole I just tried it and it is almost perfect. How could the code be improved so that postErrorSlowing is calculated for each subject and placed in a data frame, i.e. a new data frame consisting of the subject number (1, 2, 3, etc.) and the postErrorSlowing variable? Something like this (the postErrorSlowing values are made up):
subject postErrorSlowing
1 50
2 75
....
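One way to get that kind of per-subject data frame is a dplyr sketch along the lines below. It is an untested outline that follows the question's numbered steps literally (comparing correct go-trials directly after failed vs. correct stop-trials); X is the data frame read in above.
library(dplyr)

pes <- X %>%
  group_by(subject) %>%
  mutate(prev_signal  = lag(signal),      # signal of the preceding trial
         prev_correct = lag(correct)) %>% # correctness of the preceding trial
  summarise(
    post_correct = mean(RT[prev_signal == 1 & prev_correct == 2 &
                             signal == 0 & correct == 2], na.rm = TRUE),
    post_error   = mean(RT[prev_signal == 1 & prev_correct == 0 &
                             signal == 0 & correct == 2], na.rm = TRUE),
    postErrorSlowing = post_error - post_correct
  )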

How to replace values from a matrix with another matrix based on column/row names?

I have a small matrix:
SMALL<-matrix(c(1:9),3, 3)
colnames(SMALL)<-c("25","36","48")
rownames(SMALL)<-c("18","25","48")
looks like:
25 36 48
18 1 4 7
25 2 5 8
48 3 6 9
And a large matrix:
LARGE<-matrix(0,4, 4)
colnames(LARGE)<-c("12","25","36","48")
rownames(LARGE)<-c("18","25","38","48")
looks like:
12 25 36 48
18 0 0 0 0
25 0 0 0 0
38 0 0 0 0
48 0 0 0 0
I would like to replace values from the large matrix by those from the small one based on the column/row names.
Looking for this result:
12 25 36 48
18 0 1 4 7
25 0 2 5 8
38 0 0 0 0
48 0 3 6 9
Any ideas?
Assuming there is a match for each col and row name of SMALL in LARGE:
i <- match(rownames(SMALL), rownames(LARGE))
j <- match(colnames(SMALL), colnames(LARGE))
LARGE[i,j] <- SMALL
# 12 25 36 48
#18 0 1 4 7
#25 0 2 5 8
#38 0 0 0 0
#48 0 3 6 9
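Since both matrices carry dimnames, character indexing achieves the same in one line: LARGE[rownames(SMALL), colnames(SMALL)] <- SMALL. And if, beyond the question's setup, some row or column names of SMALL might be absent from LARGE, a sketch that restricts the assignment to the shared names:
i <- intersect(rownames(SMALL), rownames(LARGE))
j <- intersect(colnames(SMALL), colnames(LARGE))
LARGE[i, j] <- SMALL[i, j]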
