I was looking at this question: Find how many times duplicated rows repeat in R data frame, which provides the following code:
library(plyr)
ddply(df,.(a,b),nrow)
However, I have a dataset with many variables, so I can't type them out like a,b in this case. I tried building the column list with names(data) and paste, but it doesn't seem to work:
var_names=paste(names(data),collapse=",")
ddply(data,.(paste(a)),nrow)
It instead gives this output:
However, if I manually type them out, I get the proper output:
What do I need to do differently here?
Instead of pasting names together and evaluating, make use of count from dplyr, which can take multiple columns via across and the select-helper everything():
library(dplyr)
df %>%
  count(across(everything()))
A reproducible example with the mtcars dataset:
data(mtcars)
df <- mtcars %>%
  select(vs:carb)
count(df, across(everything()))
vs am gear carb n
1 0 0 3 2 4
2 0 0 3 3 3
3 0 0 3 4 5
4 0 1 4 4 2
5 0 1 5 2 1
6 0 1 5 4 1
7 0 1 5 6 1
8 0 1 5 8 1
9 1 0 3 1 3
10 1 0 4 2 2
11 1 0 4 4 2
12 1 1 4 1 4
13 1 1 4 2 2
14 1 1 5 2 1
Also, ddply can simply take a vector of column names, i.e. there is no need to collapse them into a single string:
library(plyr)
ddply(df, names(df), nrow)
vs am gear carb V1
1 0 0 3 2 4
2 0 0 3 3 3
3 0 0 3 4 5
4 0 1 4 4 2
5 0 1 5 2 1
6 0 1 5 4 1
7 0 1 5 6 1
8 0 1 5 8 1
9 1 0 3 1 3
10 1 0 4 2 2
11 1 0 4 4 2
12 1 1 4 1 4
13 1 1 4 2 2
14 1 1 5 2 1
Or, if we are creating a single string from the names, paste the whole expression and then evaluate it (not recommended, as there are standard ways of dealing with this):
eval(parse(text = paste('ddply(df, .(', toString(names(df)), '), nrow)')))
vs am gear carb V1
1 0 0 3 2 4
2 0 0 3 3 3
3 0 0 3 4 5
4 0 1 4 4 2
5 0 1 5 2 1
6 0 1 5 4 1
7 0 1 5 6 1
8 0 1 5 8 1
9 1 0 3 1 3
10 1 0 4 2 2
11 1 0 4 4 2
12 1 1 4 1 4
13 1 1 4 2 2
14 1 1 5 2 1
You can use aggregate, grouping by all the columns and taking the length (row count) of each group.
aggregate(1:nrow(df)~., df, length)
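A quick sketch of this on the mtcars-based df used in the earlier answers (the left-hand side of the formula just supplies something to take the length of per group; renaming the count column to n is my own addition):

```r
# group by every column (~ .) and count the rows in each group
df <- mtcars[, c("vs", "am", "gear", "carb")]
res <- aggregate(1:nrow(df) ~ ., df, length)
names(res)[ncol(res)] <- "n"   # the count column otherwise inherits an awkward name
head(res)
```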
This question already has an answer here:
How to use Pivot_longer to reshape from wide-type data to long-type data with multiple variables
(1 answer)
Closed 2 years ago.
I have a dataset of adolescents over 3 waves. I need to reshape the data from wide to long, but I haven't been able to figure out how to use pivot_longer (I've checked other questions, but maybe I missed one?). Below is sample data:
HAVE DATA:
id c1sports c2sports c3sports c1smoker c2smoker c3smoker c1drinker c2drinker c3drinker
1 1 1 1 1 1 4 1 5 2
2 1 1 1 5 1 3 4 1 4
3 1 0 0 1 1 5 2 3 2
4 0 0 0 1 3 3 4 2 3
5 0 0 0 2 1 2 1 5 3
6 0 0 0 4 1 4 4 3 1
7 1 0 1 2 2 3 1 4 1
8 0 1 1 4 4 1 4 5 4
9 1 1 1 3 2 2 3 4 2
10 0 1 0 2 5 5 4 2 3
WANT DATA:
id wave sports smoker drinker
1 1 1 1 1
1 2 1 1 5
1 3 1 4 2
2 1 1 5 4
2 2 1 1 1
2 3 1 3 4
3 1 1 1 2
3 2 0 1 3
3 3 0 5 2
4 1 0 1 4
4 2 0 3 2
4 3 0 3 3
5 1 0 2 1
5 2 0 1 5
5 3 0 2 3
6 1 0 4 4
6 2 0 1 3
6 3 0 4 1
7 1 1 2 1
7 2 0 2 4
7 3 1 3 1
8 1 0 4 4
8 2 1 4 5
8 3 1 1 4
9 1 1 3 3
9 2 1 2 4
9 3 1 2 2
10 1 0 2 4
10 2 1 2 2
10 3 0 5 3
So far the only thing that I've been able to run is:
long_dat <- wide_dat %>%
  pivot_longer(., cols = c1sports:c3drinker)
But this doesn't get me separate columns for sports, smoker, drinker.
You could use the names_to and names_pattern arguments of pivot_longer. The pattern 'c(\\d+)(.*)' captures the wave number and the variable name, and the '.value' sentinel tells pivot_longer to spread the second capture into separate output columns:
tidyr::pivot_longer(df,
                    cols = -id,
                    names_to = c('wave', '.value'),
                    names_pattern = 'c(\\d+)(.*)')
# id wave sports smoker drinker
# <int> <chr> <int> <int> <int>
# 1 1 1 1 1 1
# 2 1 2 1 1 5
# 3 1 3 1 4 2
# 4 2 1 1 5 4
# 5 2 2 1 1 1
# 6 2 3 1 3 4
# 7 3 1 1 1 2
# 8 3 2 0 1 3
# 9 3 3 0 5 2
#10 4 1 0 1 4
# … with 20 more rows
So far I have done the discriminant analysis. I generated the posterior probabilities, structure loadings, and group centroids.
I have 1 grouping variable : history
I have 3 discriminant variables : mhpg, exercise, and control
here is the code so far
library(MASS)  # lda() lives in MASS
td <- read.delim("H:/Desktop/TAB DATA.txt")
td$history <- factor(td$history)
fit <- lda(history ~ mhpg + exercise + control, data = td)
git <- predict(fit)                                # classes, posteriors, scores
xx <- subset(td, select = c(mhpg, control, exercise))
cor(xx, git$x)                                     # structure loadings
aggregate(git$x ~ history, data = td, FUN = mean)  # group centroids
tst <- lm(cbind(mhpg, control, exercise) ~ history, data = td)
Basically, the above code is for discriminant analysis.
Now I want to generate frequency and percent classification tables for the classification analysis.
My attempted code (adapted from someone else, to no avail) is:
td[6] <- git$class
td$V6<-factor(td$V6)
ftab<-table(td$history,dt$V6)
prop.table(ftab,1)
Where column 6 is my grouping variable history.
I get the following error when trying to make td$V6 a categorical variable with factor
Error in `$<-.data.frame`(`*tmp*`, "V6", value = integer(0)) :
replacement has 0 rows, data has 50
Can anyone steer me in the right direction? I really don't know why the sample code used a capital V out of nowhere. Below is the data. Column 6 is the grouping variable, history; column 5 is the discriminant variable, control; column 7 is the discriminant variable, exercise; and column 8 is the discriminant variable, mhpg.
1 3 6 0 2 0 4 2 4 3 0 6 0
1 4 5 0 0 1 2 5 4 6 1 4 1
1 4 4 0 2 1 1 8 6 7 1 2 1
2 4 9 0 2 1 0 6 7 8 1 4 1
2 4 3 1 4 1 2 6 6 6 1 4 1
2 5 7 0 1 1 3 6 7 7 1 1 1
2 5 8 0 1 1 1 6 6 7 1 5 1
2 6 7 0 1 1 0 9 8 8 1 3 1
2 6 4 1 2 1 2 5 7 6 1 5 1
3 4 10 0 1 1 1 8 5 7 1 4 1
3 4 4 0 1 1 1 8 9 8 1 3 1
3 4 7 0 1 0 1 6 3 4 0 8 0
3 5 4 1 4 1 2 5 4 5 0 5 1
3 5 7 0 2 1 1 7 5 7 1 4 1
3 5 6 0 0 1 0 10 9 10 1 3 1
3 5 6 0 2 1 1 9 10 9 1 2 1
3 5 5 1 2 1 2 5 4 4 0 9 1
3 6 2 1 4 1 3 6 4 4 0 7 1
3 6 3 1 2 1 2 7 5 5 0 6 1
3 6 5 1 2 1 2 6 7 6 1 6 1
3 6 7 1 3 1 3 5 4 4 0 8 1
3 6 5 1 2 1 2 5 3 3 0 10 1
3 7 8 0 0 1 1 7 6 7 1 5 1
3 7 5 1 2 1 1 5 5 5 0 6 1
3 7 6 1 2 0 4 3 1 2 0 9 0
3 8 6 1 2 1 1 6 5 5 0 7 1
3 8 9 0 0 1 0 7 5 6 1 3 1
4 5 5 1 2 1 1 5 6 5 0 6 1
4 5 5 1 2 0 2 3 3 4 0 8 0
4 6 8 0 0 1 2 8 7 7 1 4 1
4 6 6 1 3 1 2 5 4 4 0 7 0
4 6 5 1 3 1 2 4 3 2 0 8 0
4 7 2 0 3 0 4 3 6 6 1 4 1
4 7 4 1 3 0 3 4 2 1 0 7 0
4 7 7 1 3 0 4 4 5 5 0 7 0
4 7 6 1 3 0 3 3 6 5 0 4 0
5 7 5 1 1 0 4 1 7 4 0 7 1
5 8 1 1 3 0 3 4 8 7 1 5 0
5 8 3 1 3 0 3 4 5 6 1 5 1
5 9 4 1 4 0 3 2 7 5 0 5 1
5 9 6 1 4 0 3 4 6 6 1 7 0
5 10 4 1 3 0 3 4 2 3 0 6 0
1 1 8 0 1 0 2 5 6 5 0 6 1
1 2 7 0 1 1 1 7 8 9 1 5 0
1 2 7 0 1 1 0 7 5 6 1 5 1
1 3 5 0 1 1 2 7 8 8 1 5 0
2 3 3 1 2 1 2 6 7 6 1 6 0
2 3 6 1 1 1 2 7 6 4 0 7 0
2 4 6 1 3 1 3 6 5 5 0 6 0
2 5 4 1 3 1 3 4 4 3 0 6 0
Try:
tbl <- table(td$history,git$class)
tbl
# 0 1
# 0 13 2
# 1 1 34
prop.table(tbl)
# 0 1
# 0 0.26 0.04
# 1 0.02 0.68
These are the classification tables.
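If you want row percentages rather than overall proportions (matching the prop.table(ftab, 1) call in your attempt), pass the margin argument to prop.table; this reuses tbl from above:

```r
# margin = 1: proportions within each row, i.e. within each level of history
prop.table(tbl, margin = 1)
```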
Regarding why your "borrowed" code does not run, there are several possibilities.
First, if you import the data set you provided without column names, R will assign the names Vn, where n is 1, 2, 3, etc. But if that were the case, none of your code would run, since you refer to columns history, control, etc. So at least those must be named properly.
Second, in the line:
ftab<-table(td$history,dt$V6)
you refer to dt$V6, but as far as I can tell no object named dt exists (is this a typo for td?).
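As for the failed assignment itself: td[6] <- git$class replaces the contents of column 6 but keeps that column's original name, so no column named V6 is ever created; td$V6 is therefore NULL, factor(NULL) has length zero, and you get "replacement has 0 rows". A sketch that avoids positional indexing entirely (reusing git from predict(fit) above; the column name pred is just an example):

```r
td$pred <- factor(git$class)        # predicted class as a named column
ftab <- table(td$history, td$pred)  # note: td, not dt
prop.table(ftab, 1)                 # row percentages
```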
I want to accumulate the values of a column until the end of its group, starting the addition only once a specific value occurs in another column. Only the first instance of that value within a group matters: if it occurs again in the same group, the running sum simply continues. I know this sounds like a rather strange problem, so hopefully the example table makes sense.
The following data frame is what I have now:
> df = data.frame(group = c(1,1,1,1,2,2,2,2,2,3,3,3,4,4,4),numToAdd = c(1,1,3,2,4,2,1,3,2,1,2,1,2,3,2),occurs = c(0,0,1,0,0,1,0,0,0,0,1,1,0,0,0))
> df
group numToAdd occurs
1 1 1 0
2 1 1 0
3 1 3 1
4 1 2 0
5 2 4 0
6 2 2 1
7 2 1 0
8 2 3 0
9 2 2 0
10 3 1 0
11 3 2 1
12 3 1 1
13 4 2 0
14 4 3 0
15 4 2 0
Thus, whenever a 1 occurs within a group, I want a cumulative sum of the values from the column numToAdd, until a new group starts. This would look like the following:
> finalDF = data.frame(group = c(1,1,1,1,2,2,2,2,2,3,3,3,4,4,4),numToAdd = c(1,1,3,2,4,2,1,3,2,1,2,1,2,3,2),occurs = c(0,0,1,0,0,1,0,0,0,0,1,1,0,0,0),added = c(0,0,3,5,0,2,3,6,8,0,2,3,0,0,0))
> finalDF
group numToAdd occurs added
1 1 1 0 0
2 1 1 0 0
3 1 3 1 3
4 1 2 0 5
5 2 4 0 0
6 2 2 1 2
7 2 1 0 3
8 2 3 0 6
9 2 2 0 8
10 3 1 0 0
11 3 2 1 2
12 3 1 1 3
13 4 2 0 0
14 4 3 0 0
15 4 2 0 0
Thus, the added column is 0 until a 1 occurs within the group, then accumulates the values from numToAdd until a new group starts, at which point added resets to 0. In group 3, a value of 1 is found a second time, yet the cumulative sum continues. In group 4, a value of 1 never occurs, so the added column stays 0 throughout.
I've played around with dplyr, but can't get it to work. The following attempt only outputs the total sum, not the increasing cumulative value at each row.
library(dplyr)
df =
  df %>%
  mutate(added = ifelse(occurs == 1, cumsum(numToAdd), 0)) %>%
  group_by(group)
Try the following: cummax(occurs) is 0 before the first 1 in a group and 1 from then on, so the multiplication zeroes out every value before the trigger.
df %>%
  group_by(group) %>%
  mutate(added = cumsum(numToAdd * cummax(occurs)))
# group numToAdd occurs added
# 1 1 1 0 0
# 2 1 1 0 0
# 3 1 3 1 3
# 4 1 2 0 5
# 5 2 4 0 0
# 6 2 2 1 2
# 7 2 1 0 3
# 8 2 3 0 6
# 9 2 2 0 8
# 10 3 1 0 0
# 11 3 2 1 2
# 12 3 1 1 3
# 13 4 2 0 0
# 14 4 3 0 0
# 15 4 2 0 0
Or using data.table:
library(data.table)  # v1.9.5+
i1 <- setDT(df)[, .I[(rleid(occurs) + (occurs > 0)) > 1], group]$V1
df[, added := 0][i1, added := cumsum(numToAdd), by = group]
Or an option similar to the dplyr one:
setDT(df)[, added := cumsum(numToAdd * cummax(occurs)), by = group]
You can use split-apply-combine in base R with something like:
df$added <- unlist(lapply(split(df, df$group), function(x) {
  y <- rep(0, nrow(x))
  pos <- cumsum(x$occurs) > 0
  y[pos] <- cumsum(x$numToAdd[pos])
  y
}))
df
# group numToAdd occurs added
# 1 1 1 0 0
# 2 1 1 0 0
# 3 1 3 1 3
# 4 1 2 0 5
# 5 2 4 0 0
# 6 2 2 1 2
# 7 2 1 0 3
# 8 2 3 0 6
# 9 2 2 0 8
# 10 3 1 0 0
# 11 3 2 1 2
# 12 3 1 1 3
# 13 4 2 0 0
# 14 4 3 0 0
# 15 4 2 0 0
To add another base R approach:
df$added <- unlist(lapply(split(df, df$group), function(x) {
  c(x[, 'occurs'][cumsum(x[, 'occurs']) == 0L],
    cumsum(x[, 'numToAdd'][cumsum(x[, 'occurs']) != 0L]))
}))
# group numToAdd occurs added
# 1 1 1 0 0
# 2 1 1 0 0
# 3 1 3 1 3
# 4 1 2 0 5
# 5 2 4 0 0
# 6 2 2 1 2
# 7 2 1 0 3
# 8 2 3 0 6
# 9 2 2 0 8
# 10 3 1 0 0
# 11 3 2 1 2
# 12 3 1 1 3
# 13 4 2 0 0
# 14 4 3 0 0
# 15 4 2 0 0
Another base R approach:
df$added <- unlist(lapply(split(df, df$group), function(x) {
  cumsum((cumsum(x$occurs) > 0) * x$numToAdd)
}))
Starting with a sample two-way frequency table:
a <- c(1,2,3,4,4,3,4,2,2,2)
b <- c(1,2,3,4,1,2,4,3,2,2)
tab <- table(a,b)
> tab
b
a 1 2 3 4
1 1 0 0 0
2 0 3 1 0
3 0 1 1 0
4 1 0 0 2
I need to transform the table into the following format:
goal <- data.frame(a=c(1,2,3,4),b=c(1,2,3,4),count=c(1,3,1,2))
> goal
a b count
1 1 1 1
2 2 2 3
3 3 3 1
4 4 4 2
. . . .
How can I form all pairwise combinations from the two-way table and add the frequency counts in the third column?
Intuition tells me there should be a simple kind of 'reverse' function for table, but I could not find anything on SO or Google.
Naturally, after posting the question I found the right search query for Google...
> as.data.frame(tab)
a b Freq
1 1 1 1
2 2 1 0
3 3 1 0
4 4 1 1
5 1 2 0
6 2 2 3
7 3 2 1
8 4 2 0
9 1 3 0
10 2 3 1
11 3 3 1
12 4 3 0
13 1 4 0
14 2 4 0
15 3 4 0
16 4 4 2
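If, as in the goal data frame, you only want the combinations that actually occur, drop the zero-frequency rows:

```r
a <- c(1, 2, 3, 4, 4, 3, 4, 2, 2, 2)
b <- c(1, 2, 3, 4, 1, 2, 4, 3, 2, 2)
tab <- table(a, b)
res <- subset(as.data.frame(tab), Freq > 0)  # keep only observed pairs
res
```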