In R, when you call table() on two variables, you get a two-way frequency table:
> table(data$Var1, data$Var2)
1 2 3 4 5
0 0 1 5 6 12
1 1 10 6 7 0
2 2 6 7 6 3
3 2 9 8 3 2
4 4 9 5 3 3
5 3 4 9 4 4
6 2 7 7 4 4
7 2 7 7 6 2
8 5 7 5 5 2
9 5 4 5 6 4
Is there a way to include the mean and SD of each row, something like this?
1 2 3 4 5 mean SD
0 0 1 5 6 12 4.20833 0.93153
1 1 10 6 7 0 .. ..
2 2 6 7 6 3
3 2 9 8 3 2
4 4 9 5 3 3
5 3 4 9 4 4
6 2 7 7 4 4
7 2 7 7 6 2
8 5 7 5 5 2
9 5 4 5 6 4
Save the table in something called T.
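For a reproducible setup (hypothetical data; the output shown below came from a similar small sample, so the exact numbers will differ):
set.seed(42)
data <- data.frame(Var1 = sample(0:3, 40, replace = TRUE),
                   Var2 = sample(1:5, 40, replace = TRUE))
T <- table(data$Var1, data$Var2)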
For the mean and sd:
> cbind(T,
        mean = apply(T, 1, function(x) sum(x * (1:5)) / sum(x)),
        sd = apply(T, 1, function(x) sd(rep(1:5, x))))
1 2 3 4 5 mean sd
0 4 3 1 1 1 2.200000 1.3984118
1 1 2 3 3 3 3.416667 1.3113722
2 2 2 1 2 1 2.750000 1.4880476
3 0 1 2 4 1 3.625000 0.9161254
So 2.2 and 1.3984 are the mean and sd of c(1,1,1,1,2,2,2,3,4,5).
It's probably inefficient to compute the sd by reconstructing the original vector with rep(), but it's late, and working out all the sums of squares and squares of sums for the sd is not something my brain can do at 1 am.
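If you do want to skip the rep() trick, here is a minimal sketch that computes the sd directly from the counts, assuming the column scores are 1:5 and using the same n - 1 denominator as sd():
# weighted sd from a row of counts, without reconstructing the vector
wsd <- function(x, scores = 1:5) {
  n <- sum(x)
  m <- sum(x * scores) / n
  sqrt(sum(x * (scores - m)^2) / (n - 1))
}
cbind(T,
      mean = apply(T, 1, function(x) sum(x * (1:5)) / sum(x)),
      sd = apply(T, 1, wsd))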
I would like to create a frequency table of all categorical variables as a data frame in R: the frequency and percentage of each survey response, grouped by condition, as well as the total frequency. Here is an example of the desired frequency count for just ONE variable ("q1"); I want a similar count for most of the variables in my data.
I have data such as this (the actual data has many more categorical variables):
library(readr)
data_in <- read_table2("treatment_cur q13_3 q14_1 q14_2 q14_3 q14_4 q14_5 q14_6 q14_7 q14_8 q14_9 q14_10 q14_11 q14_12 q14_13 q14_14 q14_15
Control 3 2 3 6 5 6 6 6 4 5 5 5 4 6 6 5
Control 2 4 5 6 5 6 5 5 6 4 5 5 6 5 4 6
Treatment 3 1 2 6 4 6 5 4 6 4 6 1 5 6 4 6
Control 3 2 3 6 4 6 6 6 6 6 6 6 6 5 5 6
Control NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
Control 4 6 5 6 5 6 5 6 6 5 1 1 6 5 5 6
Control 3 3 2 2 3 3 6 6 4 6 5 5 3 6 6 2
Treatment 2 3 2 3 1 3 1 1 1 3 3 3 3 3 3 1
Control 3 5 5 6 3 6 3 3 3 2 2 1 4 2 3 4
Control 2 1 1 1 1 1 4 4 1 1 1 1 1 4 4 2
Control 4 3 4 6 6 6 6 6 6 6 6 6 6 6 6 6
Control 4 2 6 6 4 6 5 6 6 5 6 5 6 6 6 6
Control 2 2 3 3 2 3 5 6 5 3 3 3 3 5 3 2
Control 3 2 4 3 4 5 4 4 5 3 3 5 4 5 5 4
Treatment 2 2 2 2 2 3 1 1 2 2 3 2 3 3 2 3
Control 4 3 3 3 5 6 6 6 6 6 6 6 6 6 6 6
Treatment 2 1 3 3 2 1 3 4 2 2 3 3 2 3 3 3
Treatment 4 2 6 4 4 2 3 5 4 5 1 1 5 4 4 5
Control 3 3 3 4 4 4 4 5 3 2 5 4 5 5 4 4
Control 4 6 6 6 6 6 6 6 6 6 6 6 5 6 6 5
Control 2 2 3 6 2 5 1 2 4 4 1 1 6 4 4 6
Treatment 4 3 3 6 6 6 6 6 6 6 6 6 6 6 6 6
Treatment 4 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
Treatment 1 1 2 4 4 4 1 1 1 1 1 1 6 1 1 6
Treatment 3 2 3 3 2 6 6 6 6 3 3 2 4 5 5 6
Control 2 1 1 1 1 1 1 2 1 1 1 1 1 2 2 1
Control 1 3 3 3 1 1 5 5 2 4 5 5 4 1 2 5
Treatment 3 4 4 5 5 4 4 4 3 5 3 4 4 6 6 5
Control NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
Control 2 2 4 6 2 4 2 2 3 5 4 4 4 3 3 5
Treatment 1 1 2 1 1 1 1 1 6 1 1 1 6 2 3 6
Treatment 2 6 1 4 4 1 1 2 2 2 1 2 1 2 2 2
Treatment 3 3 4 4 4 6 6 5 4 6 3 5 5 6 6 4
Treatment 2 1 3 3 3 3 3 3 3 3 3 3 3 3 3 3
Control 4 3 4 6 4 6 4 5 6 3 4 4 6 6 4 6
Control 4 4 3 6 2 5 2 2 4 3 1 6 5 5 5 5
Control NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
Treatment 2 3 3 6 5 6 1 2 6 5 4 4 5 5 5 6
Control 4 6 6 6 6 6 5 5 5 5 5 6 5 5 5 5
Treatment 2 1 1 3 1 3 4 4 4 4 1 4 3 4 4 4
Treatment 2 1 3 3 3 3 4 6 5 4 5 5 4 6 6 5
Control 4 6 6 6 6 6 5 5 5 6 6 5 5 5 6 6
Control NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
Control 4 2 2 4 2 4 6 6 6 6 4 6 5 6 6 5
Control 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Treatment 3 4 2 5 5 5 6 5 5 5 5 5 5 6 6 6
Control NA 2 4 4 4 4 4 3 4 6 4 5 4 6 4 4
Control 2 2 2 3 1 3 4 1 1 1 2 1 3 3 3 3
Treatment 2 2 2 3 2 2 3 3 2 2 2 2 2 2 2 2
Control 3 3 3 6 6 6 6 6 6 6 5 6 6 6 6 6
Treatment 2 1 2 2 2 1 2 2 1 1 2 1 2 2 1 3
Treatment 4 5 5 6 6 5 5 6 5 5 4 5 5 4 4 5
Control 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
Treatment 3 3 4 4 4 6 3 2 5 3 2 2 5 6 5 6
Control 4 4 3 3 6 3 6 6 3 2 4 4 4 4 4 4
Treatment 4 1 3 4 4 4 5 6 6 6 6 6 6 6 6 6
Control 4 4 5 6 5 5 4 6 6 6 6 5 6 6 6 6
Treatment 3 3 4 6 6 6 6 6 5 6 6 5 4 6 6 4
Control 4 4 6 6 4 6 6 6 6 4 4 3 5 6 6 6
Control 4 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
Treatment 4 5 5 6 6 6 6 6 5 5 6 6 5 5 6 6
Treatment 4 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
Control 2 1 2 1 1 1 1 3 1 4 4 1 1 1 1 1
Treatment 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Treatment 4 6 5 5 5 5 5 6 5 4 5 4 4 5 5 4
Treatment 4 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
Control 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
Treatment 4 5 6 6 6 5 6 6 6 5 6 6 6 6 6 6
Control 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
Treatment 3 3 2 5 4 4 5 6 6 4 5 5 4 5 4 6
Treatment 4 5 4 4 4 5 5 6 4 5 4 3 6 6 6 6
Control 1 2 3 2 1 4 1 1 3 1 3 3 3 3 4 4
Control 3 6 6 6 6 6 5 1 5 6 5 6 6 6 6 6
Control 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Control 4 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
")
My current solution is too complicated. If I want to know the frequencies of the variables q13_3:q14_9, I know I can do something like this:
library(tables)
varList <- 2:11
data_in[varList] <- lapply(data_in[varList], factor, exclude = NULL)
lapply(varList, function(x, df, byVar) {
  tabular((Factor(df[[x]], paste(colnames(df)[x])) + 1) ~
            ((Factor(df[[byVar]], paste(byVar))) * ((n = 1) + Percent("col"))),
          data = df)
}, data_in, "treatment_cur")
Below is a snippet of what my current output looks like. The problem is that the output is a list of lists, which cannot be exported into a single Excel sheet, so I have to copy everything from the console into Excel by hand.
treatment_cur
Control Treatment
q14_8 n Percent n Percent
1 6 13.953 4 12.50
2 4 9.302 4 12.50
3 5 11.628 2 6.25
4 6 13.953 4 12.50
5 5 11.628 7 21.88
6 13 30.233 11 34.38
NA 4 9.302 0 0.00
All 43 100.000 32 100.00
[[10]]
treatment_cur
Control Treatment
q14_9 n Percent n Percent
1 6 13.953 4 12.50
2 6 13.953 4 12.50
3 4 9.302 4 12.50
4 6 13.953 5 15.62
5 5 11.628 8 25.00
6 12 27.907 7 21.88
NA 4 9.302 0 0.00
All 43 100.000 32 100.00
This works alright, but I want to:
Find the total frequency of each variable value (Treatment + Control) as an additional column (as seen in the image above);
Export the output to an Excel file. I do not like the function I am using: its output is a list of lists, which cannot be exported to Excel, and copying and pasting the values from the console into Excel is cumbersome. I would like an easier way of finding these frequencies! Surely R has a better way of doing this...
Any help is MUCH appreciated!!
One way to do this would be to explore the gtsummary package.
Using your code above, you can quite easily produce a table with counts and percentages:
library(gtsummary)
library(readr)
library(flextable)
tbl_summary(data_in, by = "treatment_cur") %>%
add_overall() %>%
as_flex_table() %>%
flextable::save_as_docx(., path = "G:/test.docx")
If you just run:
tbl_summary(data_in, by = "treatment_cur") %>%
add_overall()
you will see the table it generates for you. The extra code after that makes the table exportable to a docx file, from which you can copy it into Excel. This generates the counts you requested, so you can judge whether it is a simpler implementation.
Another alternative is to write directly to a CSV file (note that newer readr versions call write_csv()'s second argument file rather than path):
tbl_summary(data_in, by = "treatment_cur") %>%
add_overall() %>%
as_tibble() %>%
readr::write_csv(., path = "G:/test.csv")
Or, if you really need everything in separate columns, you can separate the n's and percents into two tables, merge them, and then write to CSV.
# keep counts only
ncount <- tbl_summary(data_in, by = "treatment_cur",
                      statistic = all_categorical() ~ "{n}") %>%
  add_overall()
# keep percentages only
pctdata <- tbl_summary(data_in, by = "treatment_cur",
                       statistic = all_categorical() ~ "{p}%") %>%
  add_overall()
# combine and output
tbl_merge(list(ncount, pctdata)) %>%
  as_tibble() %>%
  readr::write_csv(., "G:/test2.csv")
Edit:
Another way to approach this is with the janitor package. You can adorn counts and percentages pretty easily and merge the datasets together; after that it is easy to export to a CSV/Excel file. One downside is that you have to loop through your variables to get a table for each and then combine them (see the sketch after the code below), but the code below is a good start:
library(janitor)
library(dplyr)
datatry <- data_in %>%
  janitor::tabyl(q13_3, treatment_cur) %>%
  adorn_totals("col") %>%
  adorn_totals("row")
datatry2 <- data_in %>%
  janitor::tabyl(q13_3, treatment_cur) %>%
  janitor::adorn_percentages(denominator = 'col') %>%
  adorn_totals("row") %>%
  adorn_totals("col") %>%
  mutate(Total = ifelse(is.na(q13_3), Total, ifelse(q13_3 == 'Total', 1, Total)))
datatry3 <- inner_join(datatry, datatry2, by = 'q13_3') %>%
  mutate(variable = 'q13_3')
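A sketch of that loop, using purrr and the column names from the data above (the Percent columns can be added the same way via adorn_percentages()):
library(purrr)
vars <- c("q13_3", "q14_1", "q14_2")  # extend to whichever variables you need
all_tabs <- map_dfr(vars, function(v) {
  data_in %>%
    select(level = all_of(v), treatment_cur) %>%
    janitor::tabyl(level, treatment_cur) %>%
    adorn_totals(c("row", "col")) %>%
    mutate(level = as.character(level), variable = v)
})
readr::write_csv(all_tabs, "all_freqs.csv")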
Assuming that you constructed data_in as above:
library(dplyr)
library(purrr)
# pull out the grouping column, then tabulate each question separately
tt <- data_in$treatment_cur
data_in$treatment_cur <- NULL
data_in %>% map(function(a) {
  ret <- data.frame(Treatment.n = rep(0, 6), Control.n = rep(0, 6))
  b <- table(a[tt == "Treatment"])
  ret[names(b), "Treatment.n"] <- b
  b <- table(a[tt == "Control"])
  ret[names(b), "Control.n"] <- b
  ret$Treatment.percent <- ret$Treatment.n / sum(ret$Treatment.n)
  ret$Control.percent <- ret$Control.n / sum(ret$Control.n)
  ret
}) %>% do.call(what = cbind)
It assumes the answer values are in 1..6 and that NAs are ignored.
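To also get the combined (Treatment + Control) column the question asks for, a minimal extension of the same sketch (same 1..6 assumption) is:
data_in %>% map(function(a) {
  ret <- data.frame(Treatment.n = rep(0, 6), Control.n = rep(0, 6))
  b <- table(a[tt == "Treatment"])
  ret[names(b), "Treatment.n"] <- b
  b <- table(a[tt == "Control"])
  ret[names(b), "Control.n"] <- b
  # overall counts and percentages across both arms
  ret$Total.n <- ret$Treatment.n + ret$Control.n
  ret$Treatment.percent <- ret$Treatment.n / sum(ret$Treatment.n)
  ret$Control.percent <- ret$Control.n / sum(ret$Control.n)
  ret$Total.percent <- ret$Total.n / sum(ret$Total.n)
  ret
}) %>% do.call(what = cbind)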
I wish to load a CHC or NEF file from the SHAPE software in order to read the contours in Momocs and run the related analyses (plot, PCA, ...).
I am having difficulties both with R and with the package.
How should I proceed?
Here is an extract of an outline in each format: the first from a CHC file and the second from a NEF file.
Thank you in advance, any help is welcome.
CHC
IMG_20200710_0001_1 1045 531 1,428639E-02 46736 5 5 4 4 4 5 4 4 4 5 5 4 5 5 5 5 5 5 4 5 6 5 4 5 5 5 6 5 5 6 5 5 5 5 6 5 6 5 5 6 5 5 6 5 6 5 5 6 6 5 6 5 6 5 6 5 6 5 6 6 5 6 5 6 6 5 6 6 5 6 6 5 6 6 5 6 6 5 6 6 5 6 6 5 6 6 5 6 5 6 5 6 6 5 6 6 6 5 5 6 6 5 6 6 5 6 6 5 6 6 5 5 6 6 6 6 5 6 5 6 6 6 5 6 5 6 6 6 6 5 6 6 6 5 6 5 6 6 6 6 6 5 6 6 6 5 6 6 5 6 6 6 5 6 6 5 6 6 5 6 6 5 6 6 6 5 6 6 6 6 5 6 6 5 6 6 5 6 6 6 6 5 6 6 5 6 6 6 5 6 6 6 6 6 6 6 5 6 6 6 5 6 5 6 6 6 6 6 6 6 5 6 6 5 6 6 6 5 6 6 6 5 6 6 5 6 6 6 6 5 6 6 6 6 6 5 6 6 6 6 6 6 6 6 6 6 6 6 6 5 6 6 6 6 6 6 6 6 6 6 5 6 6 6 6 6 6 6 6 5 6 6 6 6 6 6 6 6 5 6 6 6 5 6 6 6 6 6 6 6 5 6 6 6 6 6 6 6 6 6 6 6 5 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 5 6 6 6 6 7 6 6 5 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 7 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 7 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 7 6 6 5 6 6 7 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 7 6 6 6 6 6 6 6 6 6 7 6 6 6 6 6 6 6 7 6 6 6 6 6 6 6 6 6 7 6 6 6 6 6 6 6 6 6 7 6 6 6 6 6 7 6 6 6 6 6 6 7 6 6 6 6 6 6 6 6 6 6 7 6 6 6 6 6 7 6 6 6 6 7 6 6 6 6 6 6 7 6 6 6 6 6 6 7 6 6 6 7 6 7 6 6 6 6 6 7 6 6 6 6 6 6 7 6 6 7 6 6 7 6 6 6 6 6 7 6 7 6 6 6 6 7 6 6 6 6 6 6 7 6 6 6 6 7 6 6 6 6 6 7 6 6 7 6 6 6 6 7 6 6 6 6 6 6 7 6 6 6 6 7 6 6 7 6 6 6 6 6 6 7 6 6 6 6 7 6 6 6 6 7 6 6 6 6 6 7 6 6 6 7 6 6 6 7 6 6 6 7 6 6 6 7 5 6 6 7 6 6 6 7 6 7 6 6 6 6 7 6 6 6 6 7 6 6 6 6 6 7 6 6 7 6 6 6 6 7 6 6 6 6 6 6 6 7 6 6 7 6 6 6 6 7 6 6 6 7 6 6 7 6 6 6 6 6 7 6 6 6 7 6 6 6 6 6 7 6 7 6 6 6 6 6 7 6 6 6 6 7 6 6 7 6 6 7 6 6 6 6 7 6 6 7 7 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2 1 0 2 1 1 0 0 0 0 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 4 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 3 3 1 2 2 4 3 1 0 2 2 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 3 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 1 2 2 2 3 1 2 2 3 2 2 2 2 2 2 2 3 1 2 2 2 2 3 1 2 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 3 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 1 2 2 2 1 2 2 2 2 2 2 1 2 2 2 2 1 2 2 2 1 2 2 2 1 2 2 2 2 1 2 2 2 2 2 1 2 2 2 2 2 1 2 2 2 2 1 2 2 2 2 2 2 1 2 2 1 2 2 2 2 1 2 2 2 1 2 2 2 2 1 2 2 2 1 2 2 1 2 2 2 2 2 2 1 2 2 2 2 2 2 2 1 2 2 1 2 2 1 2 2 1 2 2 2 1 2 1 1 2 2 2 2 1 2 2 1 2 2 1 2 2 2 1 2 2 1 2 2 2 1 2 2 1 2 2 2 1 2 2 1 2 2 2 1 2 2 2 1 2 2 2 2 1 0 2 2 2 2 1 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 3 1 0 2 1 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 3 1 2 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 1 2 2 3 2 2 2 2 3 2 2 2 2 2 2 2 2 2 3 2 2 2 2 3 2 2 2 3 2 2 3 2 2 3 2 2 3 2 3 2 3 2 2 3 3 2 4 3 2 3 3 4 4 -1
NEF
#CONST 1 1 1 0
#HARMO 15
IMG_20200710_0001_1
1.0000000e+00 3.5723303e-18 -2.2782437e-17 1.4062862e-01
-3.5345700e-03 -2.9677476e-04 -7.7529255e-02 2.8661995e-02
1.0149772e-01 3.3103762e-03 1.0055048e-03 3.3982778e-02
-5.6327330e-04 -3.2074328e-03 -5.6057424e-03 4.8512662e-03
3.8131276e-02 -1.5129959e-05 2.2552160e-03 1.4017338e-02
1.1789915e-04 -2.9600305e-03 -8.4295737e-04 -3.4299796e-04
1.7271413e-02 -7.2651006e-04 -1.2037379e-04 7.6870442e-03
-2.0549746e-04 -1.3433972e-03 -1.8426509e-03 -5.7843862e-04
1.0074513e-02 -9.3238318e-04 4.8671011e-05 5.6731905e-03
-7.5994187e-04 -1.0418345e-03 -7.7574542e-04 -2.8944747e-04
6.4933570e-03 -4.0572074e-04 -7.6458810e-04 4.0939891e-03
-5.4921053e-04 -8.4817676e-04 1.6206266e-04 -4.6108536e-04
4.5195469e-03 -5.4106090e-04 1.6321172e-04 3.1721989e-03
-1.8313986e-04 -3.5979483e-04 4.8276592e-04 -9.0690768e-04
3.0959844e-03 -6.1876139e-04 -6.3948596e-04 2.2105213e-03
In Momocs version 1.3.2, installed from GitHub (https://github.com/MomX/Momocs/), I see the function chc2pix(), which should import SHAPE's chain codes into R. In the documentation for an earlier Momocs version (https://www.rdocumentation.org/packages/Momocs/versions/0.2-03/topics/nef2Coe) you can find, in the Examples, the function nef2Coe(). I do not see this function in the current Momocs 1.3.2, but you can copy the whole function into your R script, or save it to a .txt file and call it from R with source('path/to/your/NEF2COE.txt'). When Momocs is fully resurrected as MomX (https://momx.github.io/MomX/), perhaps the function will again be part of the package.
Sorry for the delay.
If you want to use CHC, then chc2pix() is indeed your friend: you end up with a traditional outline as defined by its (x, y) coordinates.
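A hedged sketch of that route, assuming chc2pix() takes the bare numeric chain-code vector, i.e. one CHC line minus the leading id/metadata fields and the trailing -1 terminator (check ?chc2pix for the exact interface):
library(Momocs)
# read one CHC line as strings
ln <- scan("~/Desktop/chc.txt", what = "character", nlines = 1, quiet = TRUE)
chain <- as.numeric(ln[-(1:5)])  # drop id + 4 metadata fields (assumption, based on the extract above)
chain <- chain[-length(chain)]   # drop the -1 end marker
xy <- chc2pix(chain)             # should give the (x, y) outline coordinates
coo_plot(xy)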
For the NEF, it's "just" a simple string-manipulation task. The clue pointed out by @ordynets would likely work (or at least be very useful). Another option would be to work around it:
library(Momocs)  # also provides the %>% pipe
# read the nef, skipping the three header lines
m <- read.table("~/Desktop/nef.txt", skip = 3)
# transpose to get the coefficients in the "right" order
coe <- m %>% t %>% c %>% matrix(nrow = 1)
# name columns to make things clear
colnames(coe) <- paste0(rep(LETTERS[1:4], times = nrow(m)),
                        rep(1:nrow(m), each = 4))
# from now on you can do whatever you want in Momocs,
# e.g. rbind many of these to get an OutCoe, or calculate the inverse EFT:
coe %>% coeff_rearrange("name") %>% coeff_split(nb.h = 10) %>% efourier_i() %>% coo_plot()
(By the way, I'm still working on MomX, and such a wrapper would fit in Momit, but I struggle to find time to finish it.)
Do not hesitate to contact me directly!
I have a dataset of 10 variables and 150 observations.
ebi anago maguro ika uni sake tamago toro tekka.maki kappa.maki
7 4 5 1 0 2 8 3 9 6
1 4 5 7 2 0 8 6 9 3
7 2 5 4 8 1 0 3 9 6
4 7 5 1 2 0 3 8 6 9
4 5 7 2 0 3 8 1 6 9
4 5 7 2 0 3 1 8 6 9
5 7 4 1 0 2 3 8 9 6
5 4 1 6 7 2 0 8 3 9
5 7 2 3 8 4 9 0 6 1
1 7 2 0 8 3 5 4 6 9
4 7 5 1 8 2 3 9 6 0
7 5 0 4 2 3 8 6 1 9
4 7 0 5 2 1 8 3 6 9
4 5 7 0 3 1 2 6 8 9
7 4 0 2 5 3 1 8 9 6
7 5 4 0 2 3 8 1 6 9
2 7 0 8 6 3 1 9 5 4
7 2 5 4 3 0 8 1 6 9
7 5 0 2 1 6 8 9 3 4
7 4 5 0 3 1 2 8 6 9
Each variable is the agent's preference rank for that sushi type, and I'd like to create multiple plots in the same image, like the one in the photo.
Any help?
Something like this, maybe?
library(ggplot2)
library(reshape2)
rank <- read.csv('rank.csv')
molten <- melt(rank, id.vars = NULL)
molten$value <- factor(molten$value, levels = 9:0)
ggplot(molten, aes(value)) + geom_bar() + facet_wrap(~ variable, nrow = 5, ncol = 5)
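reshape2 is retired these days, so if you prefer tidyr's reshaping verbs, an equivalent sketch is:
library(ggplot2)
library(tidyr)
library(dplyr)
rank <- read.csv('rank.csv')
long <- rank %>%
  pivot_longer(everything(), names_to = "variable", values_to = "value") %>%
  mutate(value = factor(value, levels = 9:0))
ggplot(long, aes(value)) + geom_bar() + facet_wrap(~ variable, nrow = 5, ncol = 5)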
I have a large dataset of raw responses and want to compare each element for subject 1 in group 1 with the corresponding element for subject 1 in group 2. Likewise, subject 2 in group 1 is compared with subject 2 in group 2, subject 3 in group 1 with subject 3 in group 2, and so on. What makes the problem even more complex is that there are 100 groups, which form 50 paired groups.
The output needs to keep the original raw response where the two are the same; where they differ, the raw response needs to be replaced with 9.
I'm pretty sure I could do it with a for-loop, but I am wondering if there is anything better in R, such as ifelse or apply.
To keep things simple, my data would look like the below.
df<-as.data.frame(matrix(sample(c(1:5),60,replace=T),nrow=12))
df$subject<-rep(1:3)
df$group<-rep(1:4, each=3)
Thanks for any help.
#Initialization of data
df<-as.data.frame(matrix(sample(c(1:5),60,replace=T),nrow=12))
df$subject<-rep(1:3)
df$group<-rep(1:4, each=3)
> df
V1 V2 V3 V4 V5 subject group
1 3 3 3 4 5 1 1
2 4 4 3 1 3 2 1
3 3 2 2 4 2 3 1
4 4 4 3 5 3 1 2
5 3 2 1 5 1 2 2
6 2 5 4 4 1 3 2
7 3 2 3 2 2 1 3
8 1 2 3 3 3 2 3
9 2 2 2 2 5 3 3
10 3 3 3 5 4 1 4
11 5 3 5 4 2 2 4
12 5 3 1 1 3 3 4
Processing without a for-loop:
# processing without a for-loop
# assumption: the initial data is sorted by group (this can easily be done)
columns <- !dimnames(df)[[2]] %in% c('group', 'subject')  # value columns only
subjects <- df[, 'subject']
tabl <- table(subjects)
rows <- order(subjects)    # row indices grouped by subject
rows2 <- cumsum(tabl)      # last occurrence of each subject
rows1 <- rows2 - tabl + 1  # first occurrence of each subject
# compare each group's rows with the previous group's rows for the same
# subject, and overwrite differing values with 9
df[rows[-rows1], columns][df[rows[-rows1], columns] != df[rows[-rows2], columns]] <- 9
> df
V1 V2 V3 V4 V5 subject group
1 3 3 3 4 5 1 1
2 4 4 3 1 3 2 1
3 3 2 2 4 2 3 1
4 9 9 3 9 9 1 2
5 9 9 9 9 9 2 2
6 9 9 9 4 9 3 2
7 9 9 3 9 9 1 3
8 9 2 9 9 9 2 3
9 2 9 9 9 9 3 3
10 3 9 3 9 9 1 4
11 9 9 9 9 9 2 4
12 9 9 9 9 9 3 4
Below is what I did to get the output I wanted. Again, thanks to Stanislav.
df<-as.data.frame(matrix(sample(c(1:5),60,replace=T),nrow=12))
df$subject<-rep(1:3)
df$group<-rep(1:4, each=3)
> df
V1 V2 V3 V4 V5 subject group
1 1 4 3 1 5 1 1
2 2 1 4 1 5 2 1
3 1 2 5 4 5 3 1
4 5 4 1 4 3 1 2
5 5 1 3 2 2 2 2
6 1 2 2 4 5 3 2
7 5 4 2 3 1 1 3
8 2 3 4 3 5 2 3
9 2 5 3 5 3 3 3
10 4 2 1 4 1 1 4
11 2 3 3 5 5 2 4
12 5 3 3 4 5 3 4
col <- !dimnames(df)[[2]] %in% c('subject', 'group')  # value columns only
n <- length(df[, 1])  # number of rows
temp <- table(df$group)
n.sub <- temp[1]      # number of subjects per group
# row indices of the odd-numbered groups (1, 3, ...) ...
temp <- seq(1, n, by = 2 * n.sub)
s1 <- c(sapply(temp, function(x) seq.int(x, length.out = n.sub)))
# ... and of the even-numbered groups (2, 4, ...)
temp <- seq(n.sub + 1, n, by = 2 * n.sub)
s2 <- c(sapply(temp, function(x) seq.int(x, length.out = n.sub)))
# within each pair, overwrite the second group's differing values with 9
df[s2, col][df[s1, col] != df[s2, col]] <- 9
> df
V1 V2 V3 V4 V5 subject group
1 1 4 3 1 5 1 1
2 2 1 4 1 5 2 1
3 1 2 5 4 5 3 1
4 9 4 9 9 9 1 2
5 9 1 9 9 9 2 2
6 1 2 9 4 5 3 2
7 5 4 2 3 1 1 3
8 2 3 4 3 5 2 3
9 2 5 3 5 3 3 3
10 9 9 9 9 1 1 4
11 2 3 9 9 5 2 4
12 9 9 3 9 9 3 4
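For reference, a dplyr sketch of the same paired comparison (assuming, as above, equal-sized groups sorted by group and then subject, so that groups 2k - 1 and 2k form a pair):
library(dplyr)
odd <- df %>% filter(group %% 2 == 1)   # first group of each pair
even <- df %>% filter(group %% 2 == 0)  # second group of each pair
vals <- !(names(df) %in% c("subject", "group"))
# element-wise comparison; write 9 into the second group where they differ
even[vals][odd[vals] != even[vals]] <- 9
df[df$group %% 2 == 0, ] <- even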
I am looking for the MATLAB way of doing the following:
> merge(2:4,3:7)
x y
1 2 3
2 3 3
3 4 3
4 2 4
5 3 4
6 4 4
7 2 5
8 3 5
9 4 5
10 2 6
11 3 6
12 4 6
13 2 7
14 3 7
15 4 7
> expand.grid(2:4,3:7)
Var1 Var2
1 2 3
2 3 3
3 4 3
4 2 4
5 3 4
6 4 4
7 2 5
8 3 5
9 4 5
10 2 6
11 3 6
12 4 6
13 2 7
14 3 7
15 4 7
I usually do it with meshgrid:
>> [x y] = meshgrid(2:4, 3:7);
>> [x(:) y(:)]
ans =
2 3
2 4
2 5
2 6
2 7
3 3
3 4
3 5
3 6
3 7
4 3
4 4
4 5
4 6
4 7
Use ndgrid for n variables (2 or more). For example, in 4-D space:
[X,Y,Z,T] = ndgrid(2:4, 3:7, 1:2, 1:10);
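For what it's worth, R's expand.grid generalizes the same way: it takes any number of vectors, and its linearized order (first argument varying fastest) matches ndgrid's rather than meshgrid's:
g <- expand.grid(2:4, 3:7, 1:2, 1:10)  # the same 4-D grid, as a data frame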