I would like to create a treemap based on the count of "names". However, I am not sure how to do so, and am seeking your help on this matter.
names <- c("A", "B", "B", "C", "D", "A", "A", "A", "A", "G", "B", "F", "F", "H")
names <- names %>% as.factor()
ggplot(names, aes(area= names, fill= names)) + geom_treemap()
Many thanks
names <- c("A", "B", "B", "C", "D", "A", "A", "A", "A", "G", "B", "F", "F", "H")
names <- data.frame(names)
names <- names %>%
count(names)
ggplot(names, aes(area= n, fill= names)) + geom_treemap()
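If you also want the names drawn on the tiles, treemapify ships a geom_treemap_text() layer. Here is a minimal sketch building on the counted data above; the colour and placement arguments are just illustrative choices, not something from the original answer:
ggplot(names, aes(area = n, fill = names, label = names)) +
  geom_treemap() +
  geom_treemap_text(colour = "white", place = "centre")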
I am trying to create a graph where the x axis (a factor) is reordered in descending order of the y axis (numerical values), but only for one of the two levels of another factor.
Originally, I tried using the code below:
reorder(factor1, desc(value1))
However, this code reorders the graph (in descending order) by the combined values across both levels of factor2 (I presume), whereas I only want the ordering to reflect one level of factor2 (i.e. "A").
Here is some sample data to illustrate better.
sampledata <- data.frame(factor1 = c("A", "A", "B", "B", "C", "C", "D", "D", "E", "E",
"F", "F", "G", "G", "H", "H", "I", "I", "J", "J"),
factor2 = c("A", "H", "A", "H", "A", "H", "A", "H", "A", "H",
"A", "H", "A", "H", "A", "H", "A", "H", "A", "H"),
value1 = c(1, 5, 6, 2, 6, 8, 10, 21, 30, 5,
3, 5, 4, 50, 4, 7, 15, 48, 20, 21))
Here is what I used previously:
sampledata %>%
ggplot(aes(x=reorder(factor1, desc(value1)), y=value1, group=factor2, color=factor2)) +
geom_point()
The reason I would like to reorder by a specific level (say factor2 == "A") is so that I can see how far the values for factor2 == "H" deviate from the "A" points.
I would appreciate a tidyverse or dplyr solution to this problem.
library(ggplot2)
library(dplyr)
sampledata %>%
  # value2 equals value1 on the factor2 == "A" rows and 0 otherwise, so the
  # ordering below is driven by the "A" values; the small value1/max(value1)
  # term only breaks ties
  mutate(value2 = +(factor2 == "A") * value1) %>%
  ggplot(aes(x = reorder(factor1, desc(value2 + value1/max(value1))), y = value1,
             group = factor2, color = factor2)) +
  geom_point() +
  xlab("factor1")
I have a graph. One can see that the complete subgraph pattern occurs twice in it, once as A<->B<->C and once as E<->D<->F. I found the motifs and took the 1st and 7th motifs from the resulting list of igraph objects.
library(igraph)
el <- matrix( c("A", "B",
"A", "C",
"B", "A",
"B", "C",
"C", "A",
"C", "B",
"C", "E",
"E", "D",
"E", "F",
"D", "E",
"D", "F",
"F", "E",
"F", "D"),
nc = 2, byrow = TRUE)
graph <- graph_from_edgelist(el)
# record the original vertex ids (and a base colour) before extracting
# subgraphs, so that the extracted motifs carry the ids along
V(graph)$id <- seq_len(vcount(graph))
V(graph)$color <- "white"
pattern <- graph.isocreate(size = 3, number = 15, directed = TRUE)
iso <- subgraph_isomorphisms(pattern, graph)
motifs <- lapply(iso, function(x) induced_subgraph(graph, x))
par(mfrow=c(1,2))
plot(graph, edge.curved=TRUE, main="Original graph")
m1 <- V(motifs[[1]])$id; m2 <- V(motifs[[7]])$id
V(graph)[m1]$color="red"; V(graph)[m2]$color="green"
plot(graph, edge.curved=TRUE, main="Highlight graph")
I have a solution that selects motifs[[1]] and motifs[[7]] by hand.
Question.
How can I find the vertex lists of the pattern subgraph (for example, the complete subgraph) automatically?
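One possible sketch, not taken from the original post: subgraph_isomorphisms() returns every isomorphic mapping (several per occurrence of the pattern), so the distinct occurrences can be picked out automatically by collapsing the mappings down to their unique vertex sets:
iso <- subgraph_isomorphisms(pattern, graph)
# each mapping is a vertex sequence; keep only the distinct vertex sets
vertex_sets <- unique(lapply(iso, function(x) sort(as.integer(x))))
# translate the ids back to vertex names for readability
lapply(vertex_sets, function(ids) V(graph)$name[ids])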
I'm working on a project where I need to repeatedly subset a data.frame based on different combinations of attributes. Right now I'm subsetting the data.frame using the merge function as I don't know what the attributes input will be at run time, and this works. However, I'm wondering if there is a faster way to create the subsets.
require(data.table)
df <- structure(list(att1 = c("e", "a", "c", "a", "d", "e", "a", "d", "b", "a", "c", "a", "b", "e", "e", "c", "d", "d", "a", "e", "b"),
att2 = c("b", "d", "c", "a", "e", "c", "e", "d", "e", "b", "e", "e", "c", "e", "a", "a", "e", "c", "b", "b", "d"),
att3 = c("c", "b", "e", "b", "d", "d", "d", "c", "c", "d", "e", "a", "d", "c", "e", "a", "d", "e", "d", "a", "e"),
att4 = c("c", "a", "b", "a", "e", "c", "a", "a", "b", "a", "a", "e", "c", "d", "b", "e", "b", "d", "d", "b", "e")),
.Names = c("att1", "att2", "att3", "att4"), class = "data.frame", row.names = c(NA, -21L))
#create combinations of attributes
#attributes to search through
cnames <- colnames(df)
att_combos <- data.table()
for(i in 2:length(cnames)){
combos <- combn(cnames, i)
for(x in 1:ncol(combos)){
df_sub <- unique(df[,combos[1:nrow(combos), x]])
att_combos <- rbind(att_combos, df_sub, fill = T)
}
}
rm(df_sub, i, x, combos, cnames)
for(i in 1:nrow(att_combos)){
att_sub <- att_combos[i, ]
att_sub <- att_sub[, is.na(att_sub)==F, with = F]
#need to subset data.frame here - very slow on large data.frames
#anyway to speed this up?
df_subset_for_analysis <- merge(df, att_sub)
}
I would set data.table keys on the columns you want to subset on, generate a data.table at run time with the combinations you are interested in, and then merge the two.
Here is an example with a single combination of attributes (simple_combinations) and one with multiple combinations of attributes (multiple_combinations):
require(data.table)
df <- structure(list(att1 = c("e", "a", "c", "a", "d", "e", "a", "d", "b", "a", "c", "a", "b", "e", "e", "c", "d", "d", "a", "e", "b"),
att2 = c("b", "d", "c", "a", "e", "c", "e", "d", "e", "b", "e", "e", "c", "e", "a", "a", "e", "c", "b", "b", "d"),
att3 = c("c", "b", "e", "b", "d", "d", "d", "c", "c", "d", "e", "a", "d", "c", "e", "a", "d", "e", "d", "a", "e"),
att4 = c("c", "a", "b", "a", "e", "c", "a", "a", "b", "a", "a", "e", "c", "d", "b", "e", "b", "d", "d", "b", "e")),
.Names = c("att1", "att2", "att3", "att4"), class = "data.frame", row.names = c(NA, -21L))
# Convert to data.table
dt <- data.table(df)
# Set key on the columns used for "subsetting"
setkey(dt, att1, att2, att3, att4)
# Simple subset on a single set of attributes
simple_combinations <- data.table(att1 = "d", att2 = "e", att3 = "d", att4 = "e")
setkey(simple_combinations, att1, att2, att3, att4)
# Merge to generate simple output subset (simple_combinations of att present in dt)
simple_subset <- merge(dt, simple_combinations)
# Complex (multiple) sets of attributes
multiple_combinations <- data.table(expand.grid(att1=c("d"), att2=c("c", "d", "e"),
att3 = c("d"), att4 = c("b", "e")))
setkey(multiple_combinations, att1, att2, att3, att4)
# Merge to generate output subset (multiple_combinations of att present in dt)
multiple_subset <- merge(dt, multiple_combinations)
The output is in simple_subset and multiple_subset.
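A small follow-on that is my own addition rather than part of the original answer: once the key is set, the same subset can also be obtained with a keyed join, which skips the merge() call entirely:
# inner join on the key; nomatch = 0 drops requested combinations absent from dt
multiple_subset2 <- dt[multiple_combinations, nomatch = 0]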
I'm using R and need some help.
Background:
I video-recorded participants in a behavioural study. I then coded different aspects of their behaviour from the videos so that I now have one data frame per participant. The df has many unordered factors, each representing the discrete temporal sequence of the participant's states for one specific behavioural dimension (e.g. gaze direction). Each row holds the value for one second for that dimension. To simplify, let's assume one such vector might look like this:
p01.gaze = factor(x = c("a", "b", "b", "a", "a", "a", "a", "a", "a", "a", "a", "a", "a", "b", "b", "a", "d", "d", "d", "a", "a", "a", "e", "e", "d", "e", "e", "a","a", "e", "a", "a", "a", "e", "e", "e", "e", "e", "e", "e", "e", "e", "e", "d", "b", "b", "b", "d", "d", "d", "d", "d", "d", "d", "b", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "d", "a", "b", "a", "d", "d", "a", "c", "e", "e", "e", "c", "c", "a", "e", "e", "a", "a", "a"))
Problem:
For each vector I want to construct a 'state transition matrix' by calculating the frequency of transitions (using counts or alternatively proportion) between all possible pairs of states. So the matrix would be:
p01.gaze.m = matrix(nrow=5, ncol=5, dimnames = list(c("a", "b", "c", "d", "e"), c("a", "b", "c", "d", "e")))
NOTES:
1) I'm new to programming and, despite searching thoroughly, couldn't find the right functions, so any help would be welcome.
2) The function markovchainFit (package markovchain) sounded tempting but I don't think I want/need to fit a Markov Model to my data (because of implications and commitments I don't want to make).
3) The function count.transitions (package RDS) also sounded tempting but I couldn't figure out how to coerce my data into rds.data object.
Many thanks =]
moe
Use the markovchain package to address your notes 1) and 3).
Here is some sample code for your data that shows counting state transitions, and then graphing the transition probability matrix:
library(markovchain)
p01.gaze = factor(x = c("a", "b", "b", "a", "a", "a",
"a", "a", "a", "a", "a", "a",
"a", "b", "b", "a", "d", "d",
"d", "a", "a", "a", "e", "e",
"d", "e", "e", "a","a", "e",
"a", "a", "a", "e", "e", "e",
"e", "e", "e", "e", "e", "e",
"e", "d", "b", "b", "b", "d",
"d", "d", "d", "d", "d", "d",
"b", "d", "d", "d", "d", "d",
"d", "d", "d", "d", "d", "d",
"d", "d", "d", "d", "d", "d",
"d", "d", "d", "d", "d", "d",
"d", "d", "d", "d", "d", "a",
"b", "a", "d", "d", "a", "c",
"e", "e", "e", "c", "c", "a",
"e", "e", "a", "a", "a"))
p01_gaze_tpm = createSequenceMatrix(p01.gaze, toRowProbs = TRUE)
p01_gaze_mc = as(p01_gaze_tpm, "markovchain")
plot(p01_gaze_mc, edge.arrow.size = 0.2)
This gives the following graph:
Once you upload sample data for your second problem, I will update my answer to address that as well.
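One aside that is my own addition rather than part of the answer: since the question asks for a transition count matrix rather than a fitted model, the raw counts are available from the same helper by leaving row normalisation off (toRowProbs = FALSE, which is also its default):
# raw state-to-state transition counts instead of row probabilities
p01_gaze_counts = createSequenceMatrix(p01.gaze, toRowProbs = FALSE)
p01_gaze_counts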