From table to data.frame in R

I have a table that looks like:
dat = data.frame(expand.grid(x = 1:10, y = 1:10),
                 z = sample(LETTERS[1:3], size = 100, replace = TRUE))
tabl <- with(dat, table(z, y))
tabl
y
z 1 2 3 4 5 6 7 8 9 10
A 5 3 1 1 3 6 3 7 2 4
B 4 5 3 6 5 1 3 1 4 4
C 1 2 6 3 2 3 4 2 4 2
Now how do I transform it into a data.frame that looks like
1 2 3 4 5 6 7 8 9 10
A 5 3 1 1 3 6 3 7 2 4
B 4 5 3 6 5 1 3 1 4 4
C 1 2 6 3 2 3 4 2 4 2

Here are a couple of options.
The reason as.data.frame(tabl) doesn't work is that it dispatches to the S3 method as.data.frame.table() which does something useful but different from what you want.
as.data.frame.matrix(tabl)
# 1 2 3 4 5 6 7 8 9 10
# A 5 4 3 1 1 3 3 2 6 2
# B 1 4 3 4 5 3 4 4 3 3
# C 4 2 4 5 4 4 3 4 1 5
## This will also work
as.data.frame(unclass(tabl))
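To see the difference, the dispatched method stacks the table into long format with one row per table cell: the two classifying factors plus a Freq column of counts. A quick way to compare the two shapes (no seed was set above, so the actual counts will vary):
str(as.data.frame(tabl))         # long format: factors z and y plus an integer Freq column
str(as.data.frame.matrix(tabl))  # wide format: one column per level of y, row names A, B, C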

Related

filtering scores from one variable and placing them in a new variable

## So I have this test scores variable, coded on a scale from 1-9.
I have to take those who score 1-3 as "low", 4-6 as "good", and 7-9 as "high" in new variables.
Then I have to make a new variable that compares low and high, and a variable that compares low and good.
test_scores <- sample(1:10, 122, replace = TRUE)
test_scores<-as.data.frame(test_scores)
low<- filter(test_scores,test_scores1 > 3)
high<- filter(test_scores, test_scores< 7)
good<-filter(test_scores,test_scores== 4:6)
## But the Ns in the new variables do not add up to 122.
## I then thought of using ifelse():
low<- ifelse(test_scores$test_scores == 1:3 , 1:3 , 0)
mods<- ifelse(test_scores$test_scores == 4:6, 4:6, 0)
high<- ifelse(test_scores$test_scores == 7:9, 7:9, 0)
## But some scores are not getting filtered; instead they become 0 even though the score matches. Any ideas?
You can use "cut" to generate the new bins:
set.seed(123)
test_scores <- sample(1:9, 122, T)
test_scores
#> [1] 3 3 2 6 5 4 6 9 5 3 9 9 9 3 8 7 9 3 4 1 7 5 7 9 9 7 5 7 5 6 9 2 5 8 2 1 9
#> [38] 9 6 5 9 4 6 8 6 6 7 1 6 2 1 2 4 5 6 3 9 4 6 9 9 7 3 8 9 3 7 3 7 6 5 5 8 3
#> [75] 2 2 6 4 1 6 3 8 3 8 1 7 7 7 6 7 5 6 8 5 7 4 3 9 7 6 9 7 2 3 8 4 7 4 1 8 4
#> [112] 9 8 6 4 8 3 4 4 6 1 4
cuts <- cut(test_scores, c(0,3,6,9), labels = F)
cuts
#> [1] 1 1 1 2 2 2 2 3 2 1 3 3 3 1 3 3 3 1 2 1 3 2 3 3 3 3 2 3 2 2 3 1 2 3 1 1 3
#> [38] 3 2 2 3 2 2 3 2 2 3 1 2 1 1 1 2 2 2 1 3 2 2 3 3 3 1 3 3 1 3 1 3 2 2 2 3 1
#> [75] 1 1 2 2 1 2 1 3 1 3 1 3 3 3 2 3 2 2 3 2 3 2 1 3 3 2 3 3 1 1 3 2 3 2 1 3 2
#> [112] 3 3 2 2 3 1 2 2 2 1 2
If you want a variable for each bin, with zero otherwise, you must use %in%, not ==:
low<- ifelse(test_scores$test_scores %in% 1:3 , test_scores$test_scores , 0)
mods<- ifelse(test_scores$test_scores %in% 4:6, test_scores$test_scores, 0)
high<- ifelse(test_scores$test_scores %in% 7:9, test_scores$test_scores, 0)
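If labelled categories are wanted rather than zero-filled copies, cut() can also return a factor directly; a minimal sketch (the object and label names are just illustrative):
score_band <- cut(test_scores, breaks = c(0, 3, 6, 9),   # the vector sampled above
                  labels = c("low", "good", "high"))
table(score_band)   # counts per band, which together sum to 122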

Convert a small dataset written in SPSS to CSV

I have a small dataset written in SPSS syntax which comes from Table 5.3 p. 189 of this book (type 210 in the page slot to see the table).
I was wondering if there might be a way to convert this data to .csv file? (I want to use the data in R afterwards)
# SPSS Code:
DATA LIST FREE/gpid anx socskls assert.
BEGIN DATA.
1 5 3 3 1 5 4 3 1 4 5 4 1 4 5 4
1 3 5 5 1 4 5 4 1 4 5 5 1 4 4 4
1 5 4 3 1 5 4 3 1 4 4 4
2 6 2 1 2 6 2 2 2 5 2 3 2 6 2 2
2 4 4 4 2 7 1 1 2 5 4 3 2 5 2 3
2 5 3 3 2 5 4 3 2 6 2 3
3 4 4 4 3 4 3 3 3 4 4 4 3 4 5 5
3 4 5 5 3 4 4 4 3 4 5 4 3 4 6 5
3 4 4 4 3 5 3 3 3 4 4 4
END DATA.
EDIT - in order to check answers, I am adding here how the data actually looks after reading it into SPSS:
gpid anx socskls assert
1 5 3 3
1 5 4 3
1 4 5 4
1 4 5 4
1 3 5 5
1 4 5 4
1 4 5 5
1 4 4 4
1 5 4 3
1 5 4 3
1 4 4 4
2 6 2 1
2 6 2 2
2 5 2 3
2 6 2 2
2 4 4 4
2 7 1 1
2 5 4 3
2 5 2 3
2 5 3 3
2 5 4 3
2 6 2 3
3 4 4 4
3 4 3 3
3 4 4 4
3 4 5 5
3 4 5 5
3 4 4 4
3 4 5 4
3 4 6 5
3 4 4 4
3 5 3 3
3 4 4 4
If I understand correctly, the 1st, 5th, 9th, and 13th columns of the dataset belong to variable gpid, the 2nd, 6th, 10th, and 14th columns belong to variable anx, and so on. So, we need to
reshape from wide to long format
with multiple measure variables
where each measure variable spans several columns
and where some values are missing.
Many roads lead to Rome.
This is what I would do using my favourite tools. In particular, this approach uses the feature of data.table::melt() to reshape multiple measure columns simultaneously. There is no manual cleanup of the data section in a text editor required.
The resulting dataset result can be used directly afterwards in any subsequent R code as requested by the OP. There is no need to take a detour using a .csv file (However, feel free to save result as a .csv file).
library(data.table)
library(magrittr)
cols <- c("gpid", "anx", "socskls", "assert")
raw <- fread(text = "
1 5 3 3 1 5 4 3 1 4 5 4 1 4 5 4
1 3 5 5 1 4 5 4 1 4 5 5 1 4 4 4
1 5 4 3 1 5 4 3 1 4 4 4
2 6 2 1 2 6 2 2 2 5 2 3 2 6 2 2
2 4 4 4 2 7 1 1 2 5 4 3 2 5 2 3
2 5 3 3 2 5 4 3 2 6 2 3
3 4 4 4 3 4 3 3 3 4 4 4 3 4 5 5
3 4 5 5 3 4 4 4 3 4 5 4 3 4 6 5
3 4 4 4 3 5 3 3 3 4 4 4",
fill = TRUE)
mv <- colnames(raw) %>%
  matrix(ncol = 4L, byrow = TRUE) %>%
  as.data.table() %>%
  setnames(new = cols)
result <- melt(raw, measure.vars = mv, na.rm = TRUE)[
  order(rowid(variable))][
  , variable := NULL]
result
gpid anx socskls assert
1: 1 5 3 3
2: 1 5 4 3
3: 1 4 5 4
4: 1 4 5 4
5: 1 3 5 5
6: 1 4 5 4
7: 1 4 5 5
8: 1 4 4 4
9: 1 5 4 3
10: 1 5 4 3
11: 1 4 4 4
12: 2 6 2 1
13: 2 6 2 2
14: 2 5 2 3
15: 2 6 2 2
16: 2 4 4 4
17: 2 7 1 1
18: 2 5 4 3
19: 2 5 2 3
20: 2 5 3 3
21: 2 5 4 3
22: 2 6 2 3
23: 3 4 4 4
24: 3 4 3 3
25: 3 4 4 4
26: 3 4 5 5
27: 3 4 5 5
28: 3 4 4 4
29: 3 4 5 4
30: 3 4 6 5
31: 3 4 4 4
32: 3 5 3 3
33: 3 4 4 4
gpid anx socskls assert
Some explanations
fread() returns a data.table raw with default column names V1, V2, ... V16 and with missing values filled with NA
mv is a data.table which indicates which columns of raw belong to each target variable:
mv
gpid anx socskls assert
1: V1 V2 V3 V4
2: V5 V6 V7 V8
3: V9 V10 V11 V12
4: V13 V14 V15 V16
This information is used by melt(), which also removes rows with missing values from the resulting long format.
After reshaping, the rows are ordered by the variable number but need to be reordered in the original row order by using rowid(variable). Finally, the variable column is removed.
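For intuition on that reordering step: rowid() (from data.table, already loaded above) numbers the occurrences within each value of its argument, so ordering by rowid(variable) picks row 1 of each measure block, then row 2 of each block, and so on. A small illustration, shortened to 3 rows per block:
v <- rep(1:4, each = 3)  # like the variable column, but with only 3 rows per block
rowid(v)                 # 1 2 3 1 2 3 1 2 3 1 2 3
order(rowid(v))          # 1 4 7 10 2 5 8 11 3 6 9 12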
EDIT: Improved version
On second thought, here is a streamlined version of the code which skips the creation of mv and uses data.table chaining:
library(data.table)
cols <- c("gpid", "anx", "socskls", "assert")
result <- fread(
  text = "
1 5 3 3 1 5 4 3 1 4 5 4 1 4 5 4
1 3 5 5 1 4 5 4 1 4 5 5 1 4 4 4
1 5 4 3 1 5 4 3 1 4 4 4
2 6 2 1 2 6 2 2 2 5 2 3 2 6 2 2
2 4 4 4 2 7 1 1 2 5 4 3 2 5 2 3
2 5 3 3 2 5 4 3 2 6 2 3
3 4 4 4 3 4 3 3 3 4 4 4 3 4 5 5
3 4 5 5 3 4 4 4 3 4 5 4 3 4 6 5
3 4 4 4 3 5 3 3 3 4 4 4",
  fill = TRUE, col.names = rep(cols, 4L))[
  , melt(.SD, measure.vars = patterns(cols), value.name = cols, na.rm = TRUE)][
  order(rowid(variable))][
  , variable := NULL][]
result
Here, the columns are renamed within the call to fread(). In this case, duplicated column names are desirable (as opposed to the usual use case) because the patterns() function in the subsequent call to melt() uses the duplicated column names to combine the columns which belong to one measure variable.
This approach requires some manual clean-up in Notepad or similar to place the data in the right format, but essentially the data could be entered directly as follows:
df <- data.frame(
  gpid = c(1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,
           2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3,3),
  anx = c(5,5,4,4,3,4,4,4,5,5,4,6,6,5,6,
          4,7,5,5,5,5,6,4,4,4,4,4,4,4,4,4,5,4),
  socskls = c(3,4,5,5,5,5,5,4,4,4,4,2,2,2,2,
              4,1,4,2,3,4,2,4,3,4,5,5,4,5,6,4,3,4),
  assert = c(3,3,4,4,5,4,5,4,3,3,4,1,2,3,2,
             4,1,3,3,3,3,3,4,3,4,5,5,4,4,5,4,3,4)
)
write.csv(df, "df.csv", row.names = FALSE)
Note that SPSS free-format data is read left to right, four values per case: the first 4 values (1, 5, 3, 3) are the gpid, anx, socskls, and assert values for participant 1, and the next 4 values (1, 5, 4, 3) are the values for participant 2, which matches the SPSS output the OP added above.
Note: I'm assuming you don't have SPSS installed. If you did, the easiest option would be to run the syntax in SPSS to create the dataset and then export it to R.
Another option uses readLines() and some string-manipulation tools.
tmp <- readLines("spss1.txt") ## read from .txt
tmp <- trimws(gsub("[A-Z/.]", "", tmp)) ## remove caps and specials
nm <- strsplit(tmp[[1]], " ")[[1]] ## split names
tmp <- unlist(strsplit(tmp[3:11], "\\s{2,}") ) ## split data blocks
Finally, splitting at the spaces gives the result.
dat <- setNames(
  type.convert(do.call(rbind.data.frame, strsplit(tmp, "\\s"))),
  nm)
Result
dat
# gpid anx socskls assert
# 1 1 5 3 3
# 2 1 5 4 3
# 3 1 4 5 4
# 4 1 4 5 4
# 5 1 3 5 5
# 6 1 4 5 4
# 7 1 4 5 5
# 8 1 4 4 4
# 9 1 5 4 3
# 10 1 5 4 3
# 11 1 4 4 4
# 12 2 6 2 1
# 13 2 6 2 2
# 14 2 5 2 3
# 15 2 6 2 2
# 16 2 4 4 4
# 17 2 7 1 1
# 18 2 5 4 3
# 19 2 5 2 3
# 20 2 5 3 3
# 21 2 5 4 3
# 22 2 6 2 3
# 23 3 4 4 4
# 24 3 4 3 3
# 25 3 4 4 4
# 26 3 4 5 5
# 27 3 4 5 5
# 28 3 4 4 4
# 29 3 4 5 4
# 30 3 4 6 5
# 31 3 4 4 4
# 32 3 5 3 3
# 33 3 4 4 4
Note: This gives the same Wilks' lambda as #emily-kothe's method. Maybe the authors used different data, or your manova() call is flawed?
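Since the OP ultimately wants a .csv file, any of the data frames produced above can be written out directly; a minimal sketch (the file name is just illustrative):
write.csv(dat, "table_5_3.csv", row.names = FALSE)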

Imputing based on specific columns

I'm about to impute missing values, and I use the mice package. I need the imputation to be based on specific columns only. Basically, I have 24 columns that are used to measure 4 latent variables (using the plspm package). I wish to impute the NAs based on specific columns: for cols 1-6, the NAs in those columns should be imputed using only the content of those 6 columns (and likewise for cols 7-12, 13-18 and 19-24).
I hope that makes sense.
My data structure is:
p1 p2 p3 p4 p5 p6 l1 l2 l3 l4 l5 l6
4 3 5 4 5 N/A 2 1 4 5 1 N/A
4 4 1 3 1 2 1 1 1 1 1 1
5 4 5 4 4 4 4 4 5 5 4 4
5 4 5 5 4 5 4 4 N/A 5 4 4
5 5 5 5 5 5 3 2 5 5 2 2
4 3 4 3 3 3 3 2 3 4 3 2
5 4 5 5 3 4 4 1 5 5 5 4
5 5 5 5 5 5 5 3 4 5 3 4
4 4 4 4 3 N/A 4 4 5 4 3 3
5 4 4 4 3 2 1 3 2 5 1 1
4 4 4 4 5 5 3 4 5 5 3 3
4 3 2 N/A 1 2 N/A 1 2 N/A 1 N/A
3 3 4 4 3 2 1 3 3 3 1 3
5 3 4 4 4 2 3 4 4 4 3 3
4 4 4 5 2 2 2 2 2 2 3 3
5 4 4 4 4 4 4 4 5 5 4 3
4 3 3 3 5 2 2 2 4 4 1 1
5 4 5 4 5 3 1 1 5 5 2 3
4 3 1 3 4 4 2 1 4 3 2 3
4 3 1 4 3 1 2 1 4 4 3 2
3 3 5 4 5 1 2 2 4 5 3 2
4 4 5 3 5 5 2 2 3 4 2 3
4 4 2 3 2 3 2 2 3 4 2 2
5 5 5 5 5 5 4 3 3 3 3 3
5 5 5 5 5 4 4 N/A 5 5 N/A N/A
So I guess it's essentially splitting the data into 4 blocks and then imputing. I read about blocks() in help(mice), but I'm not sure I can actually use it for this specific task.
The code I've been using so far is:
temp_pmm <- mice(data_predict,
                 m = 3,
                 maxit = 10,
                 method = "pmm",
                 seed = 2374)
But the way I understand the package, it imputes based on entire row content (so my latent variable constructs overlap, which I am trying to mitigate).
Hope you can help me out and I appreciate any help.
Thanks in advance!
Tobias
So Dominix' suggestion of simply running separate imputations seems to be the right way to go. Thanks a lot!
For any future reference, this is how I worked it out:
test_pmm_firstv <- mice(data_predict[, c(1:6)],
                        m = 10,
                        maxit = 20,
                        method = "pmm",
                        seed = 127493)
test_pmm_secondv <- mice(data_predict[, c(7:12)],
                         m = 10,
                         maxit = 20,
                         method = "pmm",
                         seed = 1239754111)
test_pmm_thirdv <- mice(data_predict[, c(13:18)],
                        m = 10,
                        maxit = 20,
                        method = "pmm",
                        seed = 1238603)
test_pmm_fourthv <- mice(data_predict[, c(19:24)],
                         m = 10,
                         maxit = 20,
                         method = "pmm",
                         seed = 356811)

data_pmm_firstv  <- mice::complete(test_pmm_firstv, 1)
data_pmm_secondv <- mice::complete(test_pmm_secondv, 1)
data_pmm_thirdv  <- mice::complete(test_pmm_thirdv, 1)
data_pmm_fourthv <- mice::complete(test_pmm_fourthv, 1)

data_fixed <- as.data.frame(cbind(data_pmm_firstv, data_pmm_secondv, data_pmm_thirdv, data_pmm_fourthv))
anyNA(data_fixed)
[1] FALSE
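For reference, a hedged alternative that keeps everything in a single mice() call is to restrict the predictors via the predictorMatrix argument, so that each block of 6 columns is imputed only from columns in the same block. A sketch, assuming the 24 columns are ordered as described; m, maxit, the seed, and the object names are just illustrative:
library(mice)
pred <- matrix(0, nrow = 24, ncol = 24,
               dimnames = list(names(data_predict), names(data_predict)))
block_cols <- list(1:6, 7:12, 13:18, 19:24)
for (b in block_cols) pred[b, b] <- 1  # rows = imputation targets, columns = allowed predictors
diag(pred) <- 0                        # a variable never predicts itself
temp_pmm_blocks <- mice(data_predict, m = 10, maxit = 20,
                        method = "pmm", predictorMatrix = pred, seed = 2374)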

Comparing each element in subsets of a large data

I have a large data with raw responses and wanted to compare each element for subject 1 in group 1 with its corresponding element for subject 1 in group 2. Of course, the comparison needs to be kept between subject 2 in group 1 and subject 2 in group 2, and between subject 3 in group 1 and subject 3 in group 2, and so on. What makes the problem even complex is that there are 100 groups, which in turn are 50 paired groups.
The output needs to keep the original raw response if they are the same. If they are different, the raw response needs to be replaced with '9'.
I'm pretty sure I could do it with a for loop, but I am wondering if there is anything better than a for loop in R, such as ifelse or apply?
To keep my data simple, it would look like the example below.
df<-as.data.frame(matrix(sample(c(1:5),60,replace=T),nrow=12))
df$subject<-rep(1:3)
df$group<-rep(1:4, each=3)
Thanks for any help.
#Initialization of data
df<-as.data.frame(matrix(sample(c(1:5),60,replace=T),nrow=12))
df$subject<-rep(1:3)
df$group<-rep(1:4, each=3)
>df
V1 V2 V3 V4 V5 subject group
1 3 3 3 4 5 1 1
2 4 4 3 1 3 2 1
3 3 2 2 4 2 3 1
4 4 4 3 5 3 1 2
5 3 2 1 5 1 2 2
6 2 5 4 4 1 3 2
7 3 2 3 2 2 1 3
8 1 2 3 3 3 2 3
9 2 2 2 2 5 3 3
10 3 3 3 5 4 1 4
11 5 3 5 4 2 2 4
12 5 3 1 1 3 3 4
Processing without for loop
#processing without for loop
# assumption: initial data is sorted by group (can be easily done)
columns <- !dimnames(df)[[2]] %in% c('group', 'subject')  # data columns only
subjects <- df[, 'subject']
tabl <- table(subjects)        # number of rows (groups) per subject
rows <- order(subjects)        # row indices of df, grouped by subject
rows2 <- cumsum(tabl)          # position of each subject's last row within 'rows'
rows1 <- rows2 - tabl + 1      # position of each subject's first row within 'rows'
# compare each row with the same subject's row in the previous group and overwrite mismatches with 9
df[rows[-rows1], columns][df[rows[-rows1], columns] != df[rows[-rows2], columns]] <- 9
>df
V1 V2 V3 V4 V5 subject group
1 3 3 3 4 5 1 1
2 4 4 3 1 3 2 1
3 3 2 2 4 2 3 1
4 9 9 3 9 9 1 2
5 9 9 9 9 9 2 2
6 9 9 9 4 9 3 2
7 9 9 3 9 9 1 3
8 9 2 9 9 9 2 3
9 2 9 9 9 9 3 3
10 3 9 3 9 9 1 4
11 9 9 9 9 9 2 4
12 9 9 9 9 9 3 4
Below is what I did to get the output. Again, thanks to Stanislav
df<-as.data.frame(matrix(sample(c(1:5),60,replace=T),nrow=12))
df$subject<-rep(1:3)
df$group<-rep(1:4, each=3)
> df
V1 V2 V3 V4 V5 subject group
1 1 4 3 1 5 1 1
2 2 1 4 1 5 2 1
3 1 2 5 4 5 3 1
4 5 4 1 4 3 1 2
5 5 1 3 2 2 2 2
6 1 2 2 4 5 3 2
7 5 4 2 3 1 1 3
8 2 3 4 3 5 2 3
9 2 5 3 5 3 3 3
10 4 2 1 4 1 1 4
11 2 3 3 5 5 2 4
12 5 3 3 4 5 3 4
col <- !dimnames(df)[[2]] %in% c('subject', 'group')  # data columns only
n <- length(df[, 1])                                  # total number of rows
temp <- table(df$group)
n.sub <- temp[1]                                      # subjects per group
temp <- seq(1, n, by = 2 * n.sub)
s1 <- c(sapply(temp, function(x) seq.int(x, length.out = n.sub)))  # rows of odd groups (1, 3, ...)
temp <- seq(n.sub + 1, n, by = 2 * n.sub)
s2 <- c(sapply(temp, function(x) seq.int(x, length.out = n.sub)))  # rows of even groups (2, 4, ...)
df[s2, col][df[s1, col] != df[s2, col]] <- 9          # overwrite mismatches in the even group with 9
> df
V1 V2 V3 V4 V5 subject group
1 1 4 3 1 5 1 1
2 2 1 4 1 5 2 1
3 1 2 5 4 5 3 1
4 9 4 9 9 9 1 2
5 9 1 9 9 9 2 2
6 1 2 9 4 5 3 2
7 5 4 2 3 1 1 3
8 2 3 4 3 5 2 3
9 2 5 3 5 3 3 3
10 9 9 9 9 1 1 4
11 2 3 9 9 5 2 4
12 9 9 3 9 9 3 4
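For comparison, a hedged sketch of the same pairwise comparison without index arithmetic, using split() and Map(); it assumes the rows are sorted by group and that subjects appear in the same order within every group:
vals <- !names(df) %in% c("subject", "group")        # data columns only
by_group <- split(df[vals], df$group)                # one data frame per group
odd  <- by_group[seq(1, length(by_group), by = 2)]   # groups 1, 3, ...
even <- by_group[seq(2, length(by_group), by = 2)]   # groups 2, 4, ...
# within each pair, replace values in the even group that differ from the odd group with 9
even_fixed <- Map(function(a, b) { b[a != b] <- 9; b }, odd, even)
df[df$group %% 2 == 0, vals] <- do.call(rbind, even_fixed)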

using Reduce/do.call with ifelse

This is purely a curiosity (learning more about Reduce). There are way better methods to achieve what I'm doing and I am not interested in them.
Some people use a series of nested ifelse commands to recode/look up something. Maybe it looks like this:
set.seed(10); x <- sample(letters[1:10], 300, T)
ifelse(x=="a", 1,
ifelse(x=="b", 2,
ifelse(x=="c", 3,
ifelse(x=="d", 4, 5))))
Is there a way to use either do.call or Reduce with the ifelse to get the job done a little more eloquently?
Try this:
> library(gsubfn)
> strapply(x, ".", list(a = 1, b = 2, c = 3, d = 4, 5), simplify = TRUE)
[1] 5 4 5 5 1 3 3 3 5 5 5 5 2 5 4 5 1 3 4 5 5 5 5 4 5 5 5 3 5 4 5 1 2 5 5 5 5
[38] 5 5 5 3 3 1 5 3 2 1 5 2 5 4 5 3 5 2 5 5 5 4 5 1 2 5 4 5 5 5 5 1 3 1 5 5 5
[75] 1 5 4 5 3 3 5 5 3 5 3 1 5 3 2 2 5 5 5 5 4 5 3 5 5 1 4 1 4 5 5 5 5 5 5 5 5
[112] 5 2 5 5 5 3 5 5 5 2 4 4 5 3 3 5 4 5 5 5 1 5 3 4 3 5 5 2 5 5 3 1 5 2 5 5 5
[149] 1 5 5 2 1 2 4 2 2 3 5 2 5 5 5 5 5 3 5 5 5 5 5 5 5 5 5 5 2 3 5 4 4 2 5 5 5
[186] 5 5 5 5 2 1 1 1 5 5 5 5 3 5 5 3 5 5 5 2 5 5 5 3 5 5 5 5 5 1 5 5 5 5 2 2 5
[223] 5 5 4 3 4 5 5 4 5 5 5 3 5 3 5 5 5 5 4 5 5 1 5 5 2 5 5 5 2 5 5 3 2 5 4 5 2
[260] 5 5 3 5 5 1 4 3 5 4 5 2 5 5 3 5 5 5 5 5 1 1 5 2 5 1 5 5 5 5 5 5 5 5 5 5 5
[297] 5 1 5 2
Here is an attempt. It is neither beautiful nor does it use ifelse:
f <- function(w, s) {
  if (is.null(s$old))
    w$output[is.na(w$output)] <- s$new
  else
    w$output[w$input == s$old] <- s$new
  return(w)
}
set.seed(10); x <- sample(letters[1:10], 300, T)
subst <- list(
  list(old = "a", new = 1),
  list(old = "b", new = 2),
  list(old = "c", new = 3),
  list(old = "d", new = 4),
  list(old = NULL, new = 5)
)
workplace <- list(
  input = x,
  output = rep(NA, length(x))
)
Reduce(f, subst, workplace)
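Since the question explicitly asks about Reduce() together with ifelse(), here is a hedged sketch that folds ifelse() over the lookup pairs, starting from the default value 5 (the map object is just an illustration):
map <- list(c("a", 1), c("b", 2), c("c", 3), c("d", 4))
Reduce(function(out, m) ifelse(x == m[1], as.numeric(m[2]), out),
       map, init = rep(5, length(x)))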
