R data.table solution for a fast reshape

data1 = data.frame("StudentID" = c(1,2,3,4,5),
                   "a1cat" = c(9,10,2,0,10),
                   "a2cat" = c(0,2,8,6,7),
                   "a3cat" = c(4,2,1,6,5),
                   "a1dog" = c(8,4,4,5,8),
                   "a2dog" = c(1,9,10,5,7),
                   "a3dog" = c(9,3,2,7,7),
                   "q20fox" = c(2,8,6,1,9),
                   "q22fox" = c(8,10,9,6,6),
                   "q24fox" = c(5,0,2,9,7))
data2 = data.frame("StudentID" = sort(rep(1:5, each = 3)),
                   "timeX" = c(1,2,3,1,2,3,1,2,3,1,2,3,1,2,3),
                   "meow" = c(9,0,4,10,2,2,2,8,1,0,6,6,10,7,5),
                   "bark" = c(8,1,9,4,9,3,4,10,2,5,5,7,8,7,7),
                   "woof" = c(2,8,5,8,10,0,6,9,2,1,6,9,9,6,7))
I have 'data1' and wish to get 'data2' using data.table to reshape the data and give new names for each column.
data1x = data.frame("StudentID" = c(1,2,3,4,5),
                    "a1cat" = c(9,10,2,0,10),
                    "a2cat" = c(0,2,8,6,7),
                    "a3cat" = c(4,2,1,6,5),
                    "a1dog" = c(8,4,4,5,8),
                    "a2dog" = c(1,9,10,5,7),
                    "a3dog" = c(9,3,2,7,7),
                    "fox20" = c(2,8,6,1,9),
                    "fox22" = c(8,10,9,6,6),
                    "fox24" = c(5,0,2,9,7))

We can use melt() with measure = patterns():
library(data.table)
melt(setDT(data1), measure = patterns("cat$", "dog$", "fox\\d*$"),
     value.name = c("meow", "bark", "woof"),
     variable.name = "timeX")[order(StudentID)]
# StudentID timeX meow bark woof
# 1: 1 1 9 8 2
# 2: 1 2 0 1 8
# 3: 1 3 4 9 5
# 4: 2 1 10 4 8
# 5: 2 2 2 9 10
# 6: 2 3 2 3 0
# 7: 3 1 2 4 6
# 8: 3 2 8 10 9
# 9: 3 3 1 2 2
#10: 4 1 0 5 1
#11: 4 2 6 5 6
#12: 4 3 6 7 9
#13: 5 1 10 8 9
#14: 5 2 7 7 6
#15: 5 3 5 7 7
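For what it is worth, the same call should also work on the renamed data1x, since the regular expressions match both naming schemes ("q20fox" and "fox20" each end in a string matched by "fox\\d*$"). If timeX is needed as a number (as in data2) rather than a factor, it can be converted afterwards; a small sketch:
res <- melt(setDT(data1x), measure = patterns("cat$", "dog$", "fox\\d*$"),
            value.name = c("meow", "bark", "woof"),
            variable.name = "timeX")[order(StudentID)]
# melt() creates timeX as a factor with levels 1/2/3; convert if a numeric column is wanted
res[, timeX := as.integer(timeX)]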

Related

Convert a small dataset written in SPSS to CSV

I have a small dataset written in SPSS syntax which comes from Table 5.3 p. 189 of this book (type 210 in the page slot to see the table).
I was wondering if there might be a way to convert this data to .csv file? (I want to use the data in R afterwards)
# SPSS Code:
DATA LIST FREE/gpid anx socskls assert.
BEGIN DATA.
1 5 3 3 1 5 4 3 1 4 5 4 1 4 5 4
1 3 5 5 1 4 5 4 1 4 5 5 1 4 4 4
1 5 4 3 1 5 4 3 1 4 4 4
2 6 2 1 2 6 2 2 2 5 2 3 2 6 2 2
2 4 4 4 2 7 1 1 2 5 4 3 2 5 2 3
2 5 3 3 2 5 4 3 2 6 2 3
3 4 4 4 3 4 3 3 3 4 4 4 3 4 5 5
3 4 5 5 3 4 4 4 3 4 5 4 3 4 6 5
3 4 4 4 3 5 3 3 3 4 4 4
END DATA.
EDIT - in order to check answers, I am adding here how the data actually looks after reading it into SPSS:
gpid anx socskls assert
1 5 3 3
1 5 4 3
1 4 5 4
1 4 5 4
1 3 5 5
1 4 5 4
1 4 5 5
1 4 4 4
1 5 4 3
1 5 4 3
1 4 4 4
2 6 2 1
2 6 2 2
2 5 2 3
2 6 2 2
2 4 4 4
2 7 1 1
2 5 4 3
2 5 2 3
2 5 3 3
2 5 4 3
2 6 2 3
3 4 4 4
3 4 3 3
3 4 4 4
3 4 5 5
3 4 5 5
3 4 4 4
3 4 5 4
3 4 6 5
3 4 4 4
3 5 3 3
3 4 4 4
If I understand correctly, the 1st, 5th, 9th, and 13th column of the dataset belong to variable gpid, the 2nd, 6th, 10th, and 14th column belong to variable anx, and so on. So, we need to
reshape from wide to long format
with multiple measure variables
where each measure variable spans several columns
and where some values are missing.
Many roads lead to Rome.
This is what I would do using my favourite tools. In particular, this approach uses the ability of data.table::melt() to reshape multiple measure columns simultaneously. No manual cleanup of the data section in a text editor is required.
The resulting dataset result can be used directly in any subsequent R code, as requested by the OP. There is no need to take a detour via a .csv file (but feel free to save result as a .csv file).
library(data.table)
library(magrittr)
cols <- c("gpid", "anx", "socskls", "assert")
raw <- fread(text = "
1 5 3 3 1 5 4 3 1 4 5 4 1 4 5 4
1 3 5 5 1 4 5 4 1 4 5 5 1 4 4 4
1 5 4 3 1 5 4 3 1 4 4 4
2 6 2 1 2 6 2 2 2 5 2 3 2 6 2 2
2 4 4 4 2 7 1 1 2 5 4 3 2 5 2 3
2 5 3 3 2 5 4 3 2 6 2 3
3 4 4 4 3 4 3 3 3 4 4 4 3 4 5 5
3 4 5 5 3 4 4 4 3 4 5 4 3 4 6 5
3 4 4 4 3 5 3 3 3 4 4 4",
fill = TRUE)
mv <- colnames(raw) %>%
  matrix(ncol = 4L, byrow = TRUE) %>%
  as.data.table() %>%
  setnames(new = cols)
result <- melt(raw, measure.vars = mv, na.rm = TRUE)[
  order(rowid(variable))][
  , variable := NULL]
result
gpid anx socskls assert
1: 1 5 3 3
2: 1 5 4 3
3: 1 4 5 4
4: 1 4 5 4
5: 1 3 5 5
6: 1 4 5 4
7: 1 4 5 5
8: 1 4 4 4
9: 1 5 4 3
10: 1 5 4 3
11: 1 4 4 4
12: 2 6 2 1
13: 2 6 2 2
14: 2 5 2 3
15: 2 6 2 2
16: 2 4 4 4
17: 2 7 1 1
18: 2 5 4 3
19: 2 5 2 3
20: 2 5 3 3
21: 2 5 4 3
22: 2 6 2 3
23: 3 4 4 4
24: 3 4 3 3
25: 3 4 4 4
26: 3 4 5 5
27: 3 4 5 5
28: 3 4 4 4
29: 3 4 5 4
30: 3 4 6 5
31: 3 4 4 4
32: 3 5 3 3
33: 3 4 4 4
gpid anx socskls assert
Some explanations
fread() returns a data.table raw with default column names V1, V2, ... V16 and with missing values filled with NA
mv is a data.table which indicates which columns of raw belong to each target variable:
mv
gpid anx socskls assert
1: V1 V2 V3 V4
2: V5 V6 V7 V8
3: V9 V10 V11 V12
4: V13 V14 V15 V16
This information is used by melt(). Because na.rm = TRUE, melt() also removes rows with missing values from the resulting long format.
After reshaping, the rows are ordered by variable number; ordering by rowid(variable) restores the original row order. Finally, the variable column is removed.
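For reference, rowid() numbers the rows within each group of its argument, which is what makes it suitable for restoring the original row order; a tiny illustration:
rowid(c("a", "a", "b", "b", "b"))
# [1] 1 2 1 2 3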
EDIT: Improved version
On second thought, here is a streamlined version of the code which skips the creation of mv and uses data.table chaining:
library(data.table)
cols <- c("gpid", "anx", "socskls", "assert")
result <- fread(
text = "
1 5 3 3 1 5 4 3 1 4 5 4 1 4 5 4
1 3 5 5 1 4 5 4 1 4 5 5 1 4 4 4
1 5 4 3 1 5 4 3 1 4 4 4
2 6 2 1 2 6 2 2 2 5 2 3 2 6 2 2
2 4 4 4 2 7 1 1 2 5 4 3 2 5 2 3
2 5 3 3 2 5 4 3 2 6 2 3
3 4 4 4 3 4 3 3 3 4 4 4 3 4 5 5
3 4 5 5 3 4 4 4 3 4 5 4 3 4 6 5
3 4 4 4 3 5 3 3 3 4 4 4",
fill = TRUE, col.names = rep(cols, 4L))[
  , melt(.SD, measure.vars = patterns(cols), value.name = cols, na.rm = TRUE)][
  order(rowid(variable))][
  , variable := NULL][]
result
Here, the columns are renamed within the call to fread(). In this case, duplicated column names are desirable (as opposed to the usual use case), because patterns() in the subsequent call to melt() uses the duplicated column names to combine the columns which belong to one measure variable.
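Should a .csv file still be wanted on disk (the OP mainly wants the data in R, so this is optional), result can be written out directly; the file name below is just an example:
fwrite(result, "assertiveness.csv")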
This requires some manual clean-up in Notepad or similar to place the data in the right format. But essentially, it could be imported using the following:
df <- data.frame(
  gpid = c(1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,
           2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3,3),
  anx = c(5,5,4,4,3,4,4,4,5,5,4,6,6,5,6,
          4,7,5,5,5,5,6,4,4,4,4,4,4,4,4,4,5,4),
  socskls = c(3,4,5,5,5,5,5,4,4,4,4,2,2,2,2,
              4,1,4,2,3,4,2,4,3,4,5,5,4,5,6,4,3,4),
  assert = c(3,3,4,4,5,4,5,4,3,3,4,1,2,3,2,
             4,1,3,3,3,3,3,4,3,4,5,5,4,4,5,4,3,4)
)
write.csv(df, "df.csv", row.names = FALSE)
Note that the first 4 values (1, 5, 3, 3) are the gpid, anx, socskls, and assert values for row 1, whereas the values 1, 5, 4, 3, which appear to be in the next column of the pasted data in SPSS syntax (i.e. the next 4 values reading the syntax left to right), are actually the values for participant 10.
Note: I'm assuming you don't have SPSS installed. If you did, the easiest option would be to use the SPSS syntax to create the dataset in SPSS and then export it to R.
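If SPSS were available, the exported .sav file could afterwards be read into R with, for example, the haven package (the file name here is purely hypothetical):
library(haven)
dat <- read_sav("table_5_3.sav")  # hypothetical file name for the SPSS export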
Using readLines() and some string-manipulation tools.
tmp <- readLines("spss1.txt")                  ## read the SPSS syntax saved as a .txt file
tmp <- trimws(gsub("[A-Z/.]", "", tmp))        ## remove capital letters, "/" and "."
nm <- strsplit(tmp[[1]], " ")[[1]]             ## extract the variable names
tmp <- unlist(strsplit(tmp[3:11], "\\s{2,}"))  ## split the data lines into 4-value blocks
                                               ## (assumes 2+ spaces between blocks in the file)
Finally, splitting at the spaces gives the result.
dat <- setNames(
  type.convert(do.call(rbind.data.frame, strsplit(tmp, "\\s")),
               as.is = TRUE),  # as.is avoids the type.convert() warning in R >= 4.0
  nm)
Result
dat
# gpid anx socskls assert
# 1 1 5 3 3
# 2 1 5 4 3
# 3 1 4 5 4
# 4 1 4 5 4
# 5 1 3 5 5
# 6 1 4 5 4
# 7 1 4 5 5
# 8 1 4 4 4
# 9 1 5 4 3
# 10 1 5 4 3
# 11 1 4 4 4
# 12 2 6 2 1
# 13 2 6 2 2
# 14 2 5 2 3
# 15 2 6 2 2
# 16 2 4 4 4
# 17 2 7 1 1
# 18 2 5 4 3
# 19 2 5 2 3
# 20 2 5 3 3
# 21 2 5 4 3
# 22 2 6 2 3
# 23 3 4 4 4
# 24 3 4 3 3
# 25 3 4 4 4
# 26 3 4 5 5
# 27 3 4 5 5
# 28 3 4 4 4
# 29 3 4 5 4
# 30 3 4 6 5
# 31 3 4 4 4
# 32 3 5 3 3
# 33 3 4 4 4
Note: this results in the same Wilks' lambda as @emily-kothe's method. Maybe the authors used different data, or your manova method is flawed?

Spread data table by id

I have the following data.table:
> df
month student A B C D
1: 1 Amy 9 6 1 11
2: 1 Bob 8 5 5 2
3: 2 Amy 7 7 2 4
4: 2 Bob 6 6 6 6
5: 3 Amy 6 8 10 7
6: 3 Bob 9 7 11 3
I want to transform this data.table into this format:
> df1
month cols Amy Bob
1: 1 A 9 8
2: 1 B 6 5
3: 1 C 1 5
4: 1 D 11 2
5: 2 A 7 6
6: 2 B 7 6
7: 2 C 2 6
8: 2 D 4 6
9: 3 A 6 9
10: 3 B 8 7
11: 3 C 10 11
12: 3 D 7 3
I tried multiple ways using dcast etc. but I couldn't transform the data. Help please!
You have to melt the data.table and then dcast it:
tmp = melt(df, id.vars = c("month", "student"), variable.name = "cols")
df1 = dcast(tmp, month + cols ~ student, value.var = "value")
Both functions are from the data.table package.
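The two steps can also be written as a single expression (a sketch that should give the same df1):
df1 <- dcast(melt(df, id.vars = c("month", "student"), variable.name = "cols"),
             month + cols ~ student, value.var = "value")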
A tidyr approach.
> library(tidyr)
> df %>%
    gather(cols, values, A:D) %>%
    spread(student, values)
month cols Amy Bob
1 1 A 9 8
2 1 B 6 5
3 1 C 1 5
4 1 D 11 2
5 2 A 7 6
6 2 B 7 6
7 2 C 2 6
8 2 D 4 6
9 3 A 6 9
10 3 B 8 7
11 3 C 10 11
12 3 D 7 3
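gather() and spread() still work but are superseded in current tidyr; the equivalent with pivot_longer()/pivot_wider() would look roughly like this (assuming tidyr >= 1.0.0):
library(tidyr)
df %>%
  pivot_longer(A:D, names_to = "cols", values_to = "values") %>%
  pivot_wider(names_from = student, values_from = values)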

data.table manipulation and merging

I have data
dat1 <- data.table(id = 1:8,
                   group = c(1, 1, 2, 2, 2, 3, 3, 3),
                   value = c(5, 6, 10, 11, 12, 20, 21, 22))
dat2 <- data.table(group = c(1, 2, 3),
                   value = c(3, 6, 13))
and I would like to subtract dat2$value from each of the dat1$value, based on group.
Is this possible using data.table or does it require additional packages?
With data.table, you could do:
library(data.table)
dat1[dat2, on = "group"][, new.value := value - i.value, by = "group"][]
Which returns:
id group value i.value new.value
1: 1 1 5 3 2
2: 2 1 6 3 3
3: 3 2 10 6 4
4: 4 2 11 6 5
5: 5 2 12 6 6
6: 6 3 20 13 7
7: 7 3 21 13 8
8: 8 3 22 13 9
Alternatively, you can do this in one step as akrun mentions:
dat1[dat2, newvalue := value - i.value, on = .(group)]
id group value newvalue
1: 1 1 5 2
2: 2 1 6 3
3: 3 2 10 4
4: 4 2 11 5
5: 5 2 12 6
6: 6 3 20 7
7: 7 3 21 8
8: 8 3 22 9
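Worth noting: the one-step form is an update join, i.e. := adds newvalue to dat1 by reference. If the original dat1 should stay untouched, work on a copy first (a small sketch):
dat1_new <- copy(dat1)   # explicit copy, so dat1 itself is not modified
dat1_new[dat2, newvalue := value - i.value, on = .(group)]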

Shifting row values by lag value in another column

I have a rather large dataset and I am interested in "marching" values forward through time based on values from another column. For example, if I have a Value = 3 at Time = 0 and a DesiredShift = 2, I want the 3 to shift down two rows to be at Time = 2. Here is a reproducible example.
Build reproducible fake data
library(data.table)
set.seed(1)
rowsPerID <- 8
dat <- CJ(1:2, 1:rowsPerID)
setnames(dat, c("ID","Time"))
dat[, Value := rpois(.N, 4)]
dat[, Shift := sample(0:2, size=.N, replace=TRUE)]
Fake Data
# ID Time Value Shift
# 1: 1 1 3 2
# 2: 1 2 3 2
# 3: 1 3 4 1
# 4: 1 4 7 2
# 5: 1 5 2 2
# 6: 1 6 7 0
# 7: 1 7 7 1
# 8: 1 8 5 0
# 9: 2 1 5 0
# 10: 2 2 1 1
# 11: 2 3 2 0
# 12: 2 4 2 1
# 13: 2 5 5 2
# 14: 2 6 3 1
# 15: 2 7 5 1
# 16: 2 8 4 1
I want each Value to shift forward according to the Shift column. So the
DesiredOutput column for row 3 will be equal to 3 since the value at Time=1 is
Value = 3 and Shift = 2.
Row 4 shows 3+4=7 since 3 shifts down 2 and 4 shifts down 1.
I would like to be able to do this by ID group and hopefully take advantage
of data.table since speed is of interest for this problem.
Desired Result
# ID Time Value Shift DesiredOutput
# 1: 1 1 3 2 NA
# 2: 1 2 3 2 NA
# 3: 1 3 4 1 3
# 4: 1 4 7 2 3+4 = 7
# 5: 1 5 2 2 NA
# 6: 1 6 7 0 7+7 = 14
# 7: 1 7 7 1 2
# 8: 1 8 5 0 7+5 = 12
# 9: 2 1 5 0 5
# 10: 2 2 1 1 NA
# 11: 2 3 2 0 1+2 = 3
# 12: 2 4 2 1 NA
# 13: 2 5 5 2 2
# 14: 2 6 3 1 NA
# 15: 2 7 5 1 3+5=8
# 16: 2 8 4 1 5
I was hoping to get this working using the data.table::shift function, but I am unsure how to make this work using multiple lag parameters.
Try this:
dat[, TargetIndex:= .I + Shift]
toMerge = dat[, list(Out = sum(Value)), by='TargetIndex']
dat[, TargetIndex:= .I]
# dat = merge(dat, toMerge, by='TargetIndex', all=TRUE)
dat[toMerge, on='TargetIndex', DesiredOutput:= i.Out]
> dat
# ID Time Value Shift TargetIndex DesiredOutput
# 1: 1 1 3 2 1 NA
# 2: 1 2 3 2 2 NA
# 3: 1 3 4 1 3 3
# 4: 1 4 7 2 4 7
# 5: 1 5 2 2 5 NA
# 6: 1 6 7 0 6 14
# 7: 1 7 7 1 7 2
# 8: 1 8 5 0 8 12
# 9: 2 1 5 0 9 5
# 10: 2 2 1 1 10 NA
# 11: 2 3 2 0 11 3
# 12: 2 4 2 1 12 NA
# 13: 2 5 5 2 13 2
# 14: 2 6 3 1 14 NA
# 15: 2 7 5 1 15 8
# 16: 2 8 4 1 16 5
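Since the question asks for the shift to be done by ID group, a variant that computes the target within each ID (so a shift near the end of one ID cannot land in the next ID) could look like the sketch below; it reproduces the desired output on the example data, but treat it as a sketch rather than the answer's method:
dat[, TargetTime := Time + Shift]                             # where each Value should land, per ID
agg <- dat[, .(Out = sum(Value)), by = .(ID, TargetTime)]     # sum the values landing on each time
dat[agg, on = .(ID, Time = TargetTime), DesiredOutput2 := i.Out]
dat[, TargetTime := NULL]                                     # drop the helper column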

Number of copies (duplicates) in R data.table

I want to add a column to a data.table which shows how many copies of each row exist. Take the following example:
library(data.table)
DT <- data.table(id = 1:10, colA = c(1,1,2,3,4,5,6,7,7,7), colB = c(1,1,2,3,4,5,6,7,8,8))
setkey(DT, colA, colB)
DT[, copies := length(colA), by = .(colA, colB)]
The output it gives is
id colA colB copies
1: 1 1 1 1
2: 2 1 1 1
3: 3 2 2 1
4: 4 3 3 1
5: 5 4 4 1
6: 6 5 5 1
7: 7 6 6 1
8: 8 7 7 1
9: 9 7 8 1
10: 10 7 8 1
Desired output is:
id colA colB copies
1: 1 1 1 2
2: 2 1 1 2
3: 3 2 2 1
4: 4 3 3 1
5: 5 4 4 1
6: 6 5 5 1
7: 7 6 6 1
8: 8 7 7 1
9: 9 7 8 2
10: 10 7 8 2
How should I do it?
I also want to know why my approach doesn't work. Isn't it true that when you group by colA and colB, the first group should contain two rows of data? I understand if "length" is not the function to use, but I cannot think of any other function. I thought of "nrow", but what would I pass to it?
DT[, copies := .N, by=.(colA,colB)]
# id colA colB copies
# 1: 1 1 1 2
# 2: 2 1 1 2
# 3: 3 2 2 1
# 4: 4 3 3 1
# 5: 5 4 4 1
# 6: 6 5 5 1
# 7: 7 6 6 1
# 8: 8 7 7 1
# 9: 9 7 8 2
# 10: 10 7 8 2
As mentioned in the comments, .N holds the number of rows in each group defined by the by argument. The original attempt returns 1 because columns used in by are available inside j only as length-1 vectors holding the current group's value (see .BY in ?data.table), so length(colA) is always 1.
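For comparison, a column that is not part of by keeps its full per-group length inside j, so the following also yields 2 for the duplicated rows (shown only to illustrate the point; .N is the idiomatic choice):
DT[, copies2 := length(id), by = .(colA, colB)]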
