Either it's late, or I've found a bug, or cast doesn't like colnames with "." in them. This all happens inside a function, but it fails just the same outside of a function as it does inside one.
x <- structure(list(df.q6 = structure(c(1L, 1L, 1L, 11L, 11L, 9L,
4L, 11L, 1L, 1L, 2L, 2L, 11L, 5L, 4L, 9L, 4L, 4L, 1L, 9L, 4L,
10L, 1L, 11L, 9L), .Label = c("a", "b", "c", "d", "e", "f", "g",
"h", "i", "j", "k"), class = "factor"), df.s5 = structure(c(4L,
4L, 1L, 2L, 4L, 4L, 4L, 3L, 4L, 1L, 2L, 1L, 2L, 4L, 1L, 3L, 4L,
2L, 2L, 4L, 4L, 4L, 2L, 2L, 1L), .Label = c("a", "b", "c", "d",
"e"), class = "factor")), .Names = c("df.q6", "df.s5"), row.names = c(NA,
25L), class = "data.frame")
cast(x, df.q6 + df.s5 ~., length)
No worky.
However, if:
colnames(x) <- c("variable", "value")
cast(x, variable + value ~., length)
Works like a charm.
I use a solution similar to what Spacedman points out.
#take your data.frame x with its two columns
#add a column
x$value <- 1
#apply your cast verbatim
cast(x, df.q6 + df.s5 ~., length)
df.q6 df.s5 (all)
1 a a 2
2 a b 2
3 a d 3
4 b a 1
5 b b 1
6 d a 1
7 d b 1
8 d d 3
9 e d 1
10 i a 1
11 i c 1
12 i d 2
13 j d 1
14 k b 3
15 k c 1
16 k d 1
Hopefully that helps!
Jay
Nothing to do with the dots in the colnames (easily shown!).
If your dataframe doesn't have a column called 'value' then cast() guesses which column is the value - in this case it guesses 'df.s5' as it is the last column. This is what you get when you melt() data. It then renames that column to 'value' before calling reshape1. Now the column 'df.s5' is no more, yet it's there on the left of your formula. Uh oh.
You are using the value in the formula, which is an odd thing to do. None of the cast examples do that. What are you trying to do here?
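For context, a minimal sketch of the molten layout that cast() expects - an id column plus 'variable' and 'value' columns, which is what melt() produces (the names and numbers here are purely illustrative):
library(reshape)
# A hand-built "molten" data frame, i.e. what melt() would normally give you.
molten <- data.frame(
  id       = c(1, 1, 2, 2),
  variable = c("q6", "s5", "q6", "s5"),
  value    = c(10, 20, 30, 40)
)
cast(molten, id ~ variable)  # spreads 'variable' into columns, filled with 'value'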
You could add an ad-hoc column as a dummy value:
> cast(cbind(x,1), df.q6+df.s5~., length)
Using 1 as value column. Use the value argument to cast to override this choice
df.q6 df.s5 (all)
1 a a 2
2 a b 2
3 a d 3
4 b a 1
5 b b 1
[etc]
But I suspect there's a better way to get the number of repeated observations (rows) in a data frame - which is your real question!
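If counting repeated rows really is the goal, here is a base-R sketch that needs no reshape at all (using the question's data frame x):
# Count how often each (df.q6, df.s5) combination actually occurs.
counts <- aggregate(rep(1, nrow(x)), by = x[c("df.q6", "df.s5")], FUN = sum)
names(counts)[3] <- "n"
counts
# Or with table(); note this also lists combinations that never occur (Freq 0).
as.data.frame(table(df.q6 = x$df.q6, df.s5 = x$df.s5))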
If you are looking for an easy solution, dcast in the reshape2 package can help you:
library(reshape2)
dcast(x, df.q6 + df.s5 ~., length)
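If you would rather not rely on dcast() guessing the value column, a hedged sketch adds a dummy count column and names it explicitly via value.var (the column name 'one' is purely illustrative):
x2 <- transform(x, one = 1)   # add an illustrative dummy value column
dcast(x2, df.q6 + df.s5 ~ ., value.var = "one", fun.aggregate = length)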
I am working on a data frame like:
groups values
a      1
a      1
a      2
b      2
b      3
b      3
c      4
c      5
c      6
d      6
d      7
d      2
The problem is to turn it into something like:
groups values
a      1
a      1
b      3
b      3
c      4
c      5
d      7
I want to keep rows whose values occur in only ONE group. For example, value 2 is deleted because it occurs in three different groups, but value 1 is kept although it occurs twice, since both occurrences are in ONLY ONE group.
Are there any functions from the dplyr package that can handle this problem, or do I have to write my own function?
As you asked for a dplyr solution:
df %>% group_by(values) %>% filter(n_distinct(groups) == 1)
# # A tibble: 7 x 2
# # Groups: values [5]
# groups values
# <chr> <int>
#1 a 1
#2 a 1
#3 b 3
#4 b 3
#5 c 4
#6 c 5
#7 d 7
with
df <- structure(list(groups = c("a", "a", "a", "b", "b", "b", "c", "c", "c", "d", "d", "d"),
values = c(1L, 1L, 2L, 2L, 3L, 3L, 4L, 5L, 6L, 6L, 7L, 2L)),
row.names = c(NA, -12L), class = "data.frame")
Group by values and check whether the column groups has only one unique element. This can be done with ave.
i <- as.logical(with(df1, ave(as.numeric(groups), values, FUN = function(x) length(unique(x)) == 1)))
df1[i, ]
# groups values
#1 a 1
#2 a 1
#5 b 3
#6 b 3
#7 c 4
#8 c 5
#11 d 7
Data in dput format.
df1 <-
structure(list(groups = structure(c(1L, 1L, 1L, 2L,
2L, 2L, 3L, 3L, 3L, 4L, 4L, 4L), .Label = c("a", "b",
"c", "d"), class = "factor"), values = c(1L, 1L, 2L,
2L, 3L, 3L, 4L, 5L, 6L, 6L, 7L, 2L)),
class = "data.frame", row.names = c(NA, -12L))
# table(x) is a groups-by-values count table; colSums(table(x) > 0) counts,
# for each value, how many groups it appears in -- keep values where that is 1
x[x$values %in% names(which(colSums(table(x)>0)==1)),]
where
x = structure(list(groups = c("a", "a", "a", "b", "b", "b", "c",
"c", "c", "d", "d", "d"), values = c(1L, 1L, 2L, 2L, 3L, 3L,
4L, 5L, 6L, 6L, 7L, 2L)), row.names = c(NA, -12L), class = "data.frame")
Or, a data.table solution:
library(data.table)
setDT(x)[, .SD[uniqueN(groups)==1], values]
Using the sqldf package for your original data frame df:
library(sqldf)
result <- sqldf("SELECT * FROM df
WHERE `values` IN (
SELECT `values` from (
SELECT `values`, groups, count(*) as num from df
GROUP BY `values`, groups) t
GROUP BY `values`
HAVING COUNT(1) = 1
)")
CustomerID MarkrtungChannel OrderID
1 A 1
2 B 2
3 A 3
4 B 4
5 C 5
1 C 6
1 A 7
2 C 8
3 B 9
3 B 10
Hi, I want to know which combinations of marketing channels are used by how many customers.
How can I calculate this with R?
E.g., the combination of marketing channels A and C is used by 1 customer (ID 1),
the combination of marketing channels C and B is also used by 1 customer (ID 2),
and so on...
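A minimal base-R sketch of one way to get there (assuming the data frame is called df, with the CustomerID and MarkrtungChannel columns shown above):
# One sorted combination string per customer, then count customers per combination.
combos <- tapply(as.character(df$MarkrtungChannel), df$CustomerID,
                 function(ch) paste(sort(unique(ch)), collapse = "+"))
table(combos)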
And here's a tidyverse way.
library(tidyverse)
data.df %>%
  group_by(CustomerID) %>%
  summarize(combo = paste0(sort(unique(MarkrtungChannel)), collapse = "")) %>%
  ungroup() %>%
  group_by(combo) %>%
  summarize(n.users = n())
The last two steps count the number of people using each combo.
You can do it multiple ways. Here is a data.table way:
# Here is your data
df<-structure(list(CustomerID = c(1L, 2L, 3L, 4L, 5L, 1L, 1L, 2L,
3L, 3L), MarkrtungChannel = structure(c(1L, 2L, 1L, 2L, 3L, 3L,
1L, 3L, 2L, 2L), .Label = c("A", "B", "C"), class = "factor"),
OrderID = 1:10), .Names = c("CustomerID", "MarkrtungChannel",
"OrderID"), class = "data.frame", row.names = c(NA, -10L))
df[]<-lapply(df[],as.character)
# Here is the combination field
library(data.table)
setDT(df)
df[,Combo:=.(list(unique(MarkrtungChannel))), by=CustomerID]
# Or (to get the combination counts)
df[,list(combo=(list(unique(MarkrtungChannel)))), by=CustomerID][,uniqueN(CustomerID),by=combo]
My data looks like this:
Group Feature_A Feature_B Feature_C Feature_D
1 1 0 3 2 4
2 1 5 2 2 8
3 1 9 8 6 5
4 2 5 7 8 8
5 2 2 6 8 1
6 2 3 8 6 4
7 3 1 5 3 5
8 3 1 4 3 4
df <- structure(list(Group = c(1L, 1L, 1L, 2L, 2L, 2L, 3L, 3L), Feature_A = c(0L,
5L, 9L, 5L, 2L, 3L, 1L, 1L), Feature_B = c(3L, 2L, 8L, 7L, 6L,
8L, 5L, 4L), Feature_C = c(2L, 2L, 6L, 8L, 8L, 6L, 3L, 3L), Feature_D = c(4L,
8L, 5L, 8L, 1L, 4L, 5L, 4L)), .Names = c("Group", "Feature_A",
"Feature_B", "Feature_C", "Feature_D"), class = "data.frame", row.names = c(NA,
-8L))
For every Feature I want to generate a plot (e.g., a boxplot) that would highlight differences between Groups.
# Get unique Feature and Group
Features<-unique(colnames(df[,-1]))
Group<-unique(df$Group)
But how can I do the rest?
Pseudo-code might look like this:
Select Feature from Data
Split Data according Group
Boxplot
for (i in seq_along(Features)){
for (o in seq_along(Group)){
}}
How can I achieve this? Hope someone can help me.
I would put my data in the long format. Then, using ggplot2, you can do some nice things.
library(reshape2)
library(ggplot2)
library(gridExtra)
## long format using Group as id
dat.m <- melt(df, id = 'Group')
## bar plot
p1 <- ggplot(dat.m) +
geom_bar(aes(x=Group,y=value,fill=variable),stat='identity')
## box plot
p2 <- ggplot(dat.m) +
geom_boxplot(aes(x=factor(Group),y=value,fill=variable))
## aggregate the 2 plots
grid.arrange(p1,p2)
This is easy to do. I do this all the time.
The code below will generate the charts using ggplot and save them as ch_Feature_A, ch_Feature_B, and so on.
You can wrap the loop in a pdf() statement to send them all to a PDF as well (a sketch of this follows the loop below).
library(ggplot2)
df$Group <- as.factor(df$Group)
for (i in 2:dim(df)[2]) {
ch <- ggplot(df,aes_string(x="Group",y=names(df)[i],fill="Group"))+geom_boxplot()
assign(paste0("ch_",names(df)[i]),ch)
}
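As mentioned above, here is a sketch of wrapping the same loop in pdf()/dev.off() so all charts land in one file (the filename is just illustrative):
pdf("feature_boxplots.pdf")  # open a PDF graphics device
for (i in 2:dim(df)[2]) {
  # print() is required to actually draw a ggplot inside a loop
  print(ggplot(df, aes_string(x = "Group", y = names(df)[i], fill = "Group")) +
          geom_boxplot())
}
dev.off()  # close the device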
Or even simpler, if you do not want separate charts:
library(reshape2)
df1 <- melt(df)
ggplot(df1,aes(x=Group,y=value,fill=Group))+geom_boxplot()+facet_grid(.~variable)
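If the features sit on very different scales, a hedged variant swaps in facet_wrap() with free y axes (scales = "free_y" is a standard ggplot2 faceting argument):
ggplot(df1, aes(x = Group, y = value, fill = Group)) +
  geom_boxplot() +
  facet_wrap(~ variable, scales = "free_y")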
I am trying to import some data (below) and checking to see if I have the appropriate number of rows for later analysis.
repexample <- structure(list(QueueName = structure(c(1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 3L, 3L, 3L, 3L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L
), .Label = c(" Overall", "CCM4.usci_retention_eng", "usci_helpdesk"
), class = "factor"), X8Tile = structure(c(1L, 2L, 3L, 4L, 5L,
6L, 7L, 8L, 9L, 1L, 2L, 3L, 4L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L,
9L), .Label = c(" Average", "1", "2", "3", "4", "5", "6", "7",
"8"), class = "factor"), Actual = c(508.1821504, 334.6994838,
404.9048759, 469.4068667, 489.2800416, 516.5744106, 551.7966176,
601.5103783, 720.9810622, 262.4622533, 250.2777778, 264.8281938,
272.2807882, 535.2466968, 278.25, 409.9285714, 511.6635101, 553,
641, 676.1111111, 778.5517241, 886.3666667), Calls = c(54948L,
6896L, 8831L, 7825L, 5768L, 7943L, 5796L, 8698L, 3191L, 1220L,
360L, 454L, 406L, 248L, 11L, 9L, 94L, 1L, 65L, 9L, 29L, 30L),
Pop = c(41L, 6L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 3L, 1L, 1L,
1L, 11L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L)), .Names = c("QueueName",
"X8Tile", "Actual", "Calls", "Pop"), class = "data.frame", row.names = c(NA,
-22L))
The data gives 5 columns and is one example of some data that I would typically import (via a .csv file). As you can see there are three unique values in the column "QueueName". For each unique value in "QueueName" I want to check that it has 9 rows, or the corresponding values in the column "X8Tile" ( Average, 1, 2, 3, 4, 5, 6, 7, 8). As an example the "QueueName" Overall has all of the necessary rows, but usci_helpdesk does not.
So my first priority is to at least identify if one of the unique values in "QueueName" does not have all of the necessary rows.
My second priority would be to remove all of the rows corresponding to a unique "QueueName" that does not meet the requirements.
Both these priorities are easily addressed using the Split-Apply-Combine paradigm, implemented in the plyr package.
Priority 1: Identify values of QueueName which don't have enough rows
require(plyr)
# Make a short table of the number of rows for each unique value of QueueName
rowSummary <- ddply(repexample, .(QueueName), summarise, numRows=length(QueueName))
print(rowSummary)
If you have lots of unique values of QueueName, you'll want to identify the values which are not equal to 9:
rowSummary[rowSummary$numRows !=9, ]
Priority 2: Eliminate rows for which QueueName does not have enough rows
repexample2 <- ddply(repexample, .(QueueName), transform, numRows=length(QueueName))
repexampleEdit <- repexample2[repexample2$numRows ==9, ]
print(repexampleEdit)
(I don't quite understand the meaning of 'check that it has 9 rows, or the corresponding values in the column "X8Tile"'.) You could edit the repexampleEdit line based on your needs.
This is an approach that makes some assumptions about how your data are ordered. It can be modified (or your data can be reordered) if the assumption doesn't fit:
## Paste together the values from your "X8tile" column
## If all is in order, you should have "Average12345678"
## If anything is missing, you won't....
myMatch <- names(which(
  with(repexample,
       tapply(X8Tile, QueueName, FUN = function(x)
         gsub("^\\s+|\\s+$", "", paste(x, collapse = "")))) == "Average12345678"
))
## Use that to subset...
repexample[repexample$QueueName %in% myMatch, ]
# QueueName X8Tile Actual Calls Pop
# 1 Overall Average 508.1822 54948 41
# 2 Overall 1 334.6995 6896 6
# 3 Overall 2 404.9049 8831 5
# 4 Overall 3 469.4069 7825 5
# 5 Overall 4 489.2800 5768 5
# 6 Overall 5 516.5744 7943 5
# 7 Overall 6 551.7966 5796 5
# 8 Overall 7 601.5104 8698 5
# 9 Overall 8 720.9811 3191 5
# 14 CCM4.usci_retention_eng Average 535.2467 248 11
# 15 CCM4.usci_retention_eng 1 278.2500 11 2
# 16 CCM4.usci_retention_eng 2 409.9286 9 2
# 17 CCM4.usci_retention_eng 3 511.6635 94 2
# 18 CCM4.usci_retention_eng 4 553.0000 1 1
# 19 CCM4.usci_retention_eng 5 641.0000 65 1
# 20 CCM4.usci_retention_eng 6 676.1111 9 1
# 21 CCM4.usci_retention_eng 7 778.5517 29 1
# 22 CCM4.usci_retention_eng 8 886.3667 30 1
Similar approaches can be taken with aggregate+merge and similar tools.
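For instance, a sketch of the aggregate + merge route in base R, using the same repexample data:
# Count rows per QueueName, merge the counts back in, keep only complete queues.
rowCounts <- aggregate(X8Tile ~ QueueName, data = repexample, FUN = length)
names(rowCounts)[2] <- "numRows"
merged <- merge(repexample, rowCounts, by = "QueueName")
merged[merged$numRows == 9, names(repexample)]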
I'm having an issue, but I'm sure it's super easy for someone who is very familiar with R.
I have a matrix that is 3008 x 3008. What I want is to sum every 8 columns in each row. So essentially you'd end up with a new matrix that is 3008 x 376.
Here's a small example:
C.1 C.2 C.3 C.4 C.5 C.6
row1 1 2 1 2 5 6
row1 1 2 3 4 5 6
row1 2 6 3 4 5 6
row1 1 2 3 4 10 6
So say I wanted to sum these for every 3 columns in each row, I'd want to end up with:
C.1 C.2
row1 4 13
row1 6 15
row1 11 15
row1 6 20
# m is your matrix
n <- 8
grp <- seq(1, ncol(m), by=n)
sapply(grp, function(x) rowSums(m[, x:(x+n-1)]))
Some explanation if you're new to R: grp is a sequence of numbers that gives the starting points for each group of columns: 1, 9, 17, etc. if you want to sum every 8 columns.
The sapply call can be understood as follows. For each number in grp, it calls the rowSums function, passing it the matrix columns for that group. Thus when the group start is 1, it gets the row sums for columns 1-8; when it is 9, the row sums for columns 9-16, and so on. These are vectors, which sapply then binds together into a matrix.
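To make this concrete, here is the same recipe run on the small example from the question (a 4 x 6 matrix with n = 3):
m <- matrix(c(1, 2, 1, 2,  5, 6,
              1, 2, 3, 4,  5, 6,
              2, 6, 3, 4,  5, 6,
              1, 2, 3, 4, 10, 6),
            nrow = 4, byrow = TRUE)
n <- 3
grp <- seq(1, ncol(m), by = n)
sapply(grp, function(x) rowSums(m[, x:(x + n - 1)]))
#      [,1] [,2]
# [1,]    4   13
# [2,]    6   15
# [3,]   11   15
# [4,]    6   20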
Transform your matrix to an array, then use apply and rowSums.
mat <- structure(c(1L, 1L, 2L, 1L, 2L, 2L, 6L, 2L, 1L, 3L, 3L, 3L, 2L, 4L, 4L, 4L, 5L, 5L, 5L, 10L, 6L, 6L, 6L, 6L),
.Dim = c(4L, 6L),
.Dimnames = list(c("row1", "row2", "row3", "row4"), c("C.1", "C.2", "C.3", "C.4", "C.5", "C.6")))
n <- 3 #this needs to be a factor of the number of columns
a <- array(mat,dim=c(nrow(mat),n,ncol(mat)/n))
apply(a,3,rowSums)
# [,1] [,2]
# [1,] 4 13
# [2,] 6 15
# [3,] 11 15
# [4,] 6 20
#Create sample data:
df <- matrix(rexp(200, rate=.1), ncol=20)
#Choose the number of columns you'd like to sum up (e.g., 3 or 8)
number_of_columns_to_sum <- 3
df2 <- NULL #Set to null so that you can use cbind on the first value below
for (i in seq(1,ncol(df), by = number_of_columns_to_sum)) {
df2 <- cbind(df2, rowSums(df[,i:(i+number_of_columns_to_sum-1)]))
}
Another option, though it may not be as elegant:
mat <- structure(c(1L, 1L, 2L, 1L, 2L, 2L, 6L, 2L, 1L, 3L, 3L, 3L, 2L, 4L, 4L, 4L, 5L, 5L, 5L, 10L, 6L, 6L, 6L, 6L),
.Dim = c(4L, 6L),
.Dimnames = list(c("row1", "row1", "row1", "row1"), c("C.1", "C.2", "C.3", "C.4", "C.5", "C.6")))
new<- data.frame((mat[,1]+mat[,2]+mat[,3]),(mat[,4]+mat[,5]+mat[,6]))
names(new)<- c("C.1","C.2")
new