R - [DESeq2] - How to use TMM-normalized counts (from edgeR) as input for DESeq2?

I have several RNA-seq samples from different experimental conditions. After sequencing and alignment to the reference genome, I merged the raw counts to get a data frame that looks like this:
> df_merge
T0 DJ21 DJ24 DJ29 DJ32 Rec2 Rec6 Rec9
G10 421 200 350 288 284 198 314 165
G1000 17208 10608 11720 11421 10142 10768 10331 6121
G10000 37 16 19 21 28 12 9 4
G10002 45 13 44 27 12 35 74 14
G10003 136 79 162 429 184 112 192 162
G10004 54 162 73 169 102 300 429 180
G10006 1 0 1 0 0 0 0 0
G10007 3 4 7 2 1 1 1 0
G1001 9030 8366 10608 13604 9808 10654 11663 7985
... ... ... ... ... ... ... ... ...
I use edgeR to perform TMM normalization, which is the normalization method I want to use and which is not available in DESeq2. For that I use the following script:
## Normalisation by the TMM method (Trimmed Mean of M-values)
library(edgeR)
dge <- DGEList(df_merge) # DGEList object created from the count data
dge2 <- calcNormFactors(dge, method = "TMM") # TMM normalization computes the norm factors
I then obtain the following normalization factors:
> dge2$samples
group lib.size norm.factors
T0 1 129884277 1.1108130
DJ21 1 110429304 0.9453988
DJ24 1 126410256 1.0297216
DJ29 1 123008035 1.0553169
DJ32 1 118968544 0.9927826
Rec2 1 119000510 0.9465131
Rec6 1 114775318 1.0053686
Rec9 1 90693946 0.9275454
I normalize the raw counts with the normalization factors:
# Normalized pseudo-counts are obtained with the cpm() function and stored in a data frame:
library(reshape2) # for melt()
pseudo_TMM <- log2(cpm(dge2) + 1)
df_TMM <- melt(pseudo_TMM, id = rownames(df_merge))
names(df_TMM)[1:2] <- c("id", "sample")
df_TMM$method <- rep("TMM", nrow(df_TMM))
And I get TMM normalized counts, in a new dataframe:
> pseudo_TMM
T0 DJ21 DJ24 DJ29 DJ32 Rec2 Rec6 Rec9
G10 1.970115581 1.54384913 1.88316953 1.68642670 1.76745996 1.46356074 1.89575666 1.56628879
G1000 6.910138402 6.68101996 6.50839579 6.47542172 6.44077248 6.59395683 6.50032388 6.20481983
G10000 0.329354263 0.20571418 0.19656414 0.21632677 0.30692404 0.14605339 0.10835095 0.06701850
G10002 0.391657436 0.16931112 0.42010652 0.27261134 0.13960084 0.39037793 0.71483462 0.22209164
G10003 0.958011321 0.81287356 1.16642722 2.10593537 1.35494357 0.99592405 1.41354030 1.54881003
G10004 0.458675608 1.35147467 0.64230087 1.20281148 0.89809414 1.87320592 2.23810756 1.65064058
G10006 0.009964976 0.00000000 0.01104103 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
G10007 0.029690785 0.05424318 0.07556948 0.02205789 0.01216343 0.01275200 0.01244875 0.00000000
G1001 5.990679797 6.34224022 6.36623615 6.72515956 6.39302663 6.57876150 6.67346174 6.58377191
... ... ... ... ... ... ... ... ...
And this is where it gets complicated. Usually I do my DGE analysis with DESeq2 using the DESeqDataSetFromHTSeqCount() and DESeq() functions, and DESeq() itself runs an RLE normalization. Now I would like to use DESeq2 directly to do the DGE analysis on my already normalized data. I saw that the DESeqDataSet object can be created from a matrix with the DESeqDataSetFromMatrix() function.
If someone has already succeeded in using DESeq2 with data from TMM normalization, I would appreciate some advice.

I remembered seeing something about how the norm factors must be converted to appropriate size factors for DESeq2, and I found this thread on Bioconductor:
https://support.bioconductor.org/p/p133964/
It was suggested to read the following to get a better understanding of the necessary conversion:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0157022
Essentially in the supplementary info, they give the following code snippet for the conversion:
tmm <- calcNormFactors(geneCount, method = "TMM") # TMM norm factors from the raw count matrix
N <- colSums(geneCount)                           # vector of library sizes
tmm.counts <- N * tmm / exp(mean(log(N * tmm)))   # size factors on the DESeq2 scale
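For completeness, here is a minimal sketch of how those converted factors could then be supplied to DESeq2 as size factors. The condition grouping below is a hypothetical placeholder to be replaced with your actual experimental design:
library(DESeq2)
# hypothetical grouping of the eight samples; replace with your real design
condition <- factor(c("T0", "DJ", "DJ", "DJ", "DJ", "Rec", "Rec", "Rec"))
coldata <- data.frame(condition, row.names = colnames(df_merge))
dds <- DESeqDataSetFromMatrix(countData = df_merge, colData = coldata, design = ~ condition)
# convert the edgeR TMM norm factors into DESeq2-style size factors (as in the snippet above)
N <- colSums(df_merge)
tmm <- dge2$samples$norm.factors
sizeFactors(dds) <- N * tmm / exp(mean(log(N * tmm)))
dds <- DESeq(dds) # uses the supplied size factors instead of re-estimating them with RLE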
Cheers

Related

How to find correlation coefficients in a loop?

I have a dataset like this:
Account_tenure_years = c(982,983,984,985,986,987,988)
N=c(12328,18990,21255,27996,32014,15487,4347)
Y=c(76,64,61,76,94,55,11)
df_table_account_tenure_vs_PPC = data.frame(Account_tenure_years,N,Y)
The dataset looks like this:
Account_tenure_years N Y
982 12328 76
983 18990 64
984 21255 61
985 27996 76
986 32014 94
987 15487 55
988 4347 11
What I want to do is this:
I want to take any two of the Account_tenure_years values, for example 982 and 983, and find the correlation coefficient with the N and Y columns, i.e. I want to find the correlation coefficient of the table below:
Account_tenure_years N Y
982 12328 76
983 18990 64
Now I want to repeat this for every pair of rows, i.e. 7C2 = 21 times, taking different rows and finding the correlation coefficient in each case.
For example, in the next iteration I would want:
Account_tenure_years N Y
983 18990 64
984 21255 61
And find its correlation coefficient. After I have collected all of those 21 correlation coefficients, I average them to get a mean correlation coefficient for the entire dataset.
How do I do this in R?
OK, let's get this straight: suppose I find the correlation coefficient between the columns Account_tenure_years and N, and also the correlation coefficient between the columns Account_tenure_years and Y. If I find a negative correlation coefficient in each case, can we infer anything from that?
It is not ideal to calculate a correlation coefficient for each pair of rows; it should be calculated over the entire dataset:
Account_tenure_years = c(982,983,984,985,986,987,988)
N=c(12328,18990,21255,27996,32014,15487,4347)
Y=c(76,64,61,76,94,55,11)
df = data.frame(Account_tenure_years,N,Y)
cor(df$Account_tenure_years,df$N)
cor(df$Account_tenure_years,df$Y)
Output is as shown below:
> cor(df$Account_tenure_years,df$N)
[1] -0.1662244
> cor(df$Account_tenure_years,df$Y)
[1] -0.5332263
You can infer that the data are negatively correlated: an increase in Account_tenure_years tends to correspond to a decrease in N and Y, and vice versa.
Please feel free to correct me!
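As a small side note, cor() on the whole data frame gives all the pairwise coefficients in one call:
# correlation matrix for all three columns at once
round(cor(df), 4)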
It is easier to do this if you transpose your data, and the best part is that you don't even need to write a loop.
try this:
dt <- data.table::fread("
Account_tenure_years N Y
982 12328 76
983 18990 64
984 21255 61
985 27996 76
986 32014 94
987 15487 55
988 4347 11
")
dt.t <- as.data.frame(t(dt[, 2:3]))
colnames(dt.t) = dt$Account_tenure_years
# transpose
dt.t
#> 982 983 984 985 986 987 988
#> N 12328 18990 21255 27996 32014 15487 4347
#> Y 76 64 61 76 94 55 11
# calculate correlation matrix, read more help(cor)
cor(dt.t)
#> 982 983 984 985 986 987 988
#> 982 1 1 1 1 1 1 1
#> 983 1 1 1 1 1 1 1
#> 984 1 1 1 1 1 1 1
#> 985 1 1 1 1 1 1 1
#> 986 1 1 1 1 1 1 1
#> 987 1 1 1 1 1 1 1
#> 988 1 1 1 1 1 1 1
Created on 2018-07-20 by the reprex package (v0.2.0.9000).
I do not understand how you want to compute correlation coefficients between two variables with only one observation for each. Therefore, I assume you have more rows than provided here.
First define all combinations:
combinations <- combn(df_table_account_tenure_vs_PPC$Account_tenure_years, 2)
For each combination, you want to extract the corresponding rows and compute the correlation coefficients for each variable:
coefficients <- apply(combinations, 2, function(x, dat) {
  coef <- sapply(c("N", "Y"), function(v, x, dat) {
    cor(dat[dat$Account_tenure_years == x[1], v],
        dat[dat$Account_tenure_years == x[2], v])
  }, x, dat)
  return(c(x, coef))
}, df_table_account_tenure_vs_PPC)
Then, you can aggregate your results in a data.frame:
df <- as.data.frame(t(coefficients))
colnames(df) <- c("Year1", "Year2", "N_cor", "Y_cor")
This should work. Please tell me if you have any problems.
Again, make sure you have more than one observation in each condition if you want a meaningful correlation coefficient.
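If you still want the single averaged value described in the question, here is a minimal follow-up sketch (with only one observation per year the individual coefficients will be NA, as noted above):
# average the pairwise coefficients, ignoring NAs
mean(df$N_cor, na.rm = TRUE)
mean(df$Y_cor, na.rm = TRUE)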

Creating a data set with paired data and converting it into a matrix

So, I'm using R to try and do a phylogenetic PCA on a dataset that I have using the phyl.pca function from the phytools package. However, I'm having issues organising my data in a way that the function will accept! And that's not all: I did a bit of experimenting and I know that there are more issues further down the line, which I will get into...
Getting straight to the issue, here's the data frame (with dummy data) that I'm using:
>all
Taxa Tibia Feather
1 Microraptor 138 101
2 Microraptor 139 114
3 Microraptor 145 141
4 Anchiornis 160 81
5 Anchiornis 14 NA
6 Archaeopteryx 134 82
7 Archaeopteryx 136 71
8 Archaeopteryx 132 NA
9 Archaeopteryx 14 NA
10 Scansoriopterygidae 120 85
11 Scansoriopterygidae 116 NA
12 Scansoriopterygidae 123 NA
13 Sapeornis 108 NA
14 Sapeornis 112 86
15 Sapeornis 118 NA
16 Sapeornis 103 NA
17 Confuciusornis 96 NA
18 Confuciusornis 107 30
19 Confuciusornis 148 33
20 Confuciusornis 128 61
The taxa are arranged into a tree (called "tree") with Microraptor being the most basal and then progressing in order through to Confuciusornis:
>summary(tree)
Phylogenetic tree: tree
Number of tips: 6
Number of nodes: 5
Branch lengths:
mean: 1
variance: 0
distribution summary:
Min. 1st Qu. Median 3rd Qu. Max.
1 1 1 1 1
No root edge.
Tip labels: Confuciusornis
Sapeornis
Scansoriopterygidae
Archaeopteryx
Anchiornis
Microraptor
No node labels.
And the function:
>phyl.pca(tree, all, method="BM", mode="corr")
And this is the error that is coming up:
Error in phyl.pca(tree, all, method = "BM", mode = "corr") :
number of rows in Y cannot be greater than number of taxa in your tree
Y being the "all" data frame. So I have 6 taxa in my tree (matching the 6 taxa in the data frame) but there are 20 rows in my data frame. So I used this function:
> all_agg <- aggregate(all[,-1],by=list(all$Taxa),mean,na.rm=TRUE)
And got this:
Group.1 Tibia Feather
1 Anchiornis 153 81
2 Archaeopteryx 136 77
3 Confuciusornis 120 41
4 Microraptor 141 119
5 Sapeornis 110 86
6 Scansoriopterygidae 120 85
It's a bit odd that the order of the taxa has changed... Is this ok?
In any case, I converted it into a matrix:
> all_agg_matrix <- as.matrix(all_agg)
> all_agg_matrix
Group.1 Tibia Feather
[1,] "Anchiornis" "153" "81"
[2,] "Archaeopteryx" "136" "77"
[3,] "Confuciusornis" "120" "41"
[4,] "Microraptor" "141" "119"
[5,] "Sapeornis" "110" "86"
[6,] "Scansoriopterygidae" "120" "85"
And then used the phyl.pca function:
> phyl.pca(tree, all_agg_matrix, method = "BM", mode = "corr")
[1] "Y has no names. function will assume that the row order of Y matches tree$tip.label"
Error in invC %*% X : requires numeric/complex matrix/vector arguments
So, now the order that the function is considering taxa in is all wrong (but I can fix that relatively easily). The issue is that phyl.pca doesn't seem to believe that my matrix is actually a matrix. Any ideas why?
I think you may have bigger problems. Most phylogenetic methods, I suspect including phyl.pca, assume that traits are fixed at the species level (i.e., they don't account for within-species variation). Thus, if you want to use phyl.pca, you probably need to collapse your data to a single value per species, e.g. via
dd_agg <- aggregate(dd[,-1],by=list(dd$Taxa),mean,na.rm=TRUE)
Extract the numeric columns and label the rows properly so that phyl.pca can match them up with the tips correctly:
dd_mat <- dd_agg[,-1]
rownames(dd_mat) <- dd_agg[,1]
Using these aggregated data, I can make up a tree (since you didn't give us one) and run phyl.pca ...
library(phytools)
tt <- rcoal(nrow(dd_agg),tip.label=dd_agg[,1])
phyl.pca(tt,dd_mat)
If you do need to do an analysis that takes within-species variation into account you might need to ask somewhere more specialized, e.g. the r-sig-phylo@r-project.org mailing list ...
The answer posted by Ben Bolker seems to work: the data (called "all") are collapsed into a single value per species before creating a matrix and running the function, like so:
> all_agg <- aggregate(all[,-1],by=list(all$Taxa),mean,na.rm=TRUE)
> all_mat <- all_agg[,-1]
> rownames(all_mat) <- all_agg[,1]
> phyl.pca(tree,all_mat, method= "lambda", mode = "corr")
Thanks to everyone who contributed an answer and especially Ben! :)

Creating data continuously using rnorm until an outlier occurs in R

Sorry for the confusing title, but I wasn't sure how to describe what I am trying to do. My objective is to create a dataset of 1000 observations, each of which is the length of a run.
I have created a phase1 dataset, from which a set of control limits is produced. What I am trying to do now is create a phase2 dataset, most likely using rnorm, with a repeat loop that keeps generating values until one of them falls outside the control limits produced from the phase1 dataset. For example, if the control limits were 3.0 and -3.0, the phase2 dataset would keep generating observations until, say, observation 398 happens to be 3.45, which stops the creation of data; my objective is then to record the number 398.
Furthermore, I then want to loop back to the phase1 dataset/control limits portion, create a new set of control limits, and run another phase2, until I have 1000 run lengths recorded. The code I have for the phase1/control limits works fine and looks like this:
nphase1=50
nphase2=1000
varcount=1
meanshift= 0
sigmashift= 1
##### phase1 dataset/ control limits #####
phase1 <- matrix(rnorm(nphase1*varcount, 0, 1), nrow = nphase1, ncol=varcount)
mean_var <- apply(phase1, 2, mean)
std_var <- apply(phase1, 2, sd)
df_var <- data.frame(mean_var, std_var)
Upper_SPC_Limit_Method1 <- with(df_var, mean_var + 3 * std_var)
Lower_SPC_Limit_Method1 <- with(df_var, mean_var - 3 * std_var)
df_control_limits<- data.frame(Upper_SPC_Limit_Method1, Lower_SPC_Limit_Method1)
I have previously created this in SAS, and the code looks like this; it might be a better reference for what I am trying to achieve than my explanation.
%macro phase2_dataset (n=,varcount=, meanshift=, sigmashift=, nphase1=,simID=,);
%do z=1 %to &n;
%phase1_dataset (n=&nphase1, varcount=&varcount);
data phase2; set control_limits n=lastobs;
call streaminit(0);
do until (phase2_var1<Lower_SPC_limit_method1_var1 or
phase2_var1>Upper_SPC_limit_method1_var1);
phase2_var1 = rand("normal", &meanshift, &sigmashift);
output;
end;
run;
ods exclude all;
proc means data=phase2;
var phase2_var1;
ods output summary=x;
run;
ods select all;
data run_length; set x;
keep Phase2_var1_n;
run;
proc append base= QA.Phase2_dataset&simID data=Run_length force; run;
%end;
%mend;
I have also been researching using a while loop in place of the repeat loop.
I'm new to R, so any ideas you are able to throw my way are greatly appreciated. Thanks!
Using a while loop indeed seems to be the way to go. Here's what I think you're looking for:
set.seed(10) # Making results reproducible
replicate(100, { # 100 is easier to display here
  phase1 <- matrix(rnorm(nphase1 * varcount, 0, 1), nrow = nphase1, ncol = varcount)
  mean_var <- colMeans(phase1) # Slightly better than apply
  std_var <- apply(phase1, 2, sd)
  df_var <- data.frame(mean_var, std_var)
  Upper_SPC_Limit_Method1 <- with(df_var, mean_var + 3 * std_var)
  Lower_SPC_Limit_Method1 <- with(df_var, mean_var - 3 * std_var)
  df_control_limits <- data.frame(Upper_SPC_Limit_Method1, Lower_SPC_Limit_Method1)
  # Phase 2
  x <- 0
  count <- 0
  while (x > Lower_SPC_Limit_Method1 && x < Upper_SPC_Limit_Method1) {
    x <- rnorm(1)
    count <- count + 1
  }
  count
})
The result is:
[1] 225 91 97 118 304 275 550 58 115 6 218 63 176 100 308 844 90 2758
[19] 161 311 1462 717 2446 74 175 91 331 210 118 1517 420 32 39 201 350 89
[37] 64 385 212 4 72 730 151 7 1159 65 36 333 97 306 531 1502 26 18
[55] 67 329 75 532 64 427 39 352 283 483 19 9 2 1018 137 160 223 98
[73] 15 182 98 41 25 1136 405 474 1025 1331 159 70 84 129 233 2 41 66
[91] 1 23 8 325 10 455 363 351 108 3
If performance becomes a problem, it might be worth exploring some improvements, like generating more numbers with rnorm() at a time and then counting how many are needed before one exceeds the limits, repeating if necessary.
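For instance, here is a rough sketch of that batched idea (the function name and batch size are just placeholders):
# generate values in chunks and return the index of the first out-of-control point
run_length_batched <- function(lower, upper, batch = 1000) {
  count <- 0
  repeat {
    x <- rnorm(batch)
    hit <- which(x < lower | x > upper)[1] # first exceedance in this chunk, NA if none
    if (!is.na(hit)) return(count + hit)
    count <- count + batch
  }
}
# usage with one set of control limits:
# run_length_batched(Lower_SPC_Limit_Method1, Upper_SPC_Limit_Method1)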

Retrieving adjacency values in a nng igraph object in R

edited to improve the quality of the question as a result of the (wholly appropriate) spanking received from Spacedman!
I have a k-nearest neighbors object (an igraph) which I created as follows, using the file I have uploaded here:
I performed the following operations on the data, in order to create an adjacency matrix of distances between observations:
W <- read.csv("/path/sim_matrix.csv")
W <- W[, -c(1,3)]
W <- scale(W)
sim_matrix <- dist(W, method = "euclidean", upper=TRUE)
sim_matrix <- as.matrix(sim_matrix)
mygraph <- nng(sim_matrix, k=10)
This gives me a nice list of vertices and their ten closest neighbors; a small sample follows:
1 -> 25 26 28 30 32 144 146 151 177 183
2 -> 4 8 32 33 145 146 154 156 186 199
3 -> 1 25 28 51 54 106 144 151 177 234
4 -> 7 8 89 95 97 158 160 170 186 204
5 -> 9 11 17 19 21 112 119 138 145 158
6 -> 10 12 14 18 20 22 147 148 157 194
7 -> 4 13 123 132 135 142 160 170 173 174
8 -> 4 7 89 90 95 97 158 160 186 204
So far so good.
What I'm struggling with, however, is how to get access to the values of the weights between the vertices so that I can do meaningful calculations on them. Shouldn't be so hard; this is a common thing to want from graphs, no?
Looking at the documentation, I tried:
degree(mygraph)
which gives me the sum of the weights for each node. But I don't want the sum, I want the raw data, so I can do my own calculations.
I tried
get.data.frame(mygraph,"E")[1:10,]
but this has none of the distances between nodes:
from to
1 1 25
2 1 26
3 1 28
4 1 30
5 1 32
6 1 144
7 1 146
8 1 151
9 1 177
10 1 183
I have attempted to get values for the weights between vertices out of the graph object that I can work with, but no luck.
If anyone has any ideas on how to go about approaching this, I'd be grateful. Thanks.
It's not clear from your question whether you are starting with a dataset, or with a distance matrix, e.g. nng(x=mydata,...) or nng(dx=mydistancematrix,...), so here are solutions with both.
library(cccd)
df <- mtcars[,c("mpg","hp")] # extract from mtcars dataset
# knn using dataset only
g <- nng(x=as.matrix(df),k=5) # for each car, 5 other most similar mpg and hp
V(g)$name <- rownames(df) # meaningful names for the vertices
dm <- as.matrix(dist(df)) # full distance matrix
E(g)$weight <- apply(get.edges(g,1:ecount(g)),1,function(x)dm[x[1],x[2]])
# knn using distance matrix (assumes you have dm already)
h <- nng(dx=dm,k=5)
V(h)$name <- rownames(df)
E(h)$weight <- apply(get.edges(h,1:ecount(h)),1,function(x)dm[x[1],x[2]])
# same result either way
identical(get.data.frame(g),get.data.frame(h))
# [1] TRUE
So these approaches identify the distances from each vertex to its five nearest neighbors, and set the edge weight attribute to those values. Interestingly, plot(g) works fine, but plot(h) fails. I think this might be a bug in the plot method for cccd.
If all you want to know is the distances from each vertex to the nearest neighbors, the code below does not require package cccd.
knn <- t(apply(dm,1,function(x)sort(x)[2:6]))
rownames(knn) <- rownames(df)
Here, the matrix knn has a row for each vertex and columns giving the distance from that vertex to its 5 nearest neighbors. It does not tell you which neighbors those are, though.
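If you also need to know which neighbors those are, a small companion sketch using the same distance matrix (order() on each row, skipping the vertex itself):
knn_idx <- t(apply(dm, 1, function(x) order(x)[2:6])) # indices of the 5 nearest neighbors
rownames(knn_idx) <- rownames(df)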
Okay, I've found an nng function in the cccd package. Is that it? If so, then mygraph is just an igraph object and you can just do E(mygraph)$whatever to get at the edge attributes.
Following one of the cccd examples to create G1 here, you can get a data frame of all the edges and attributes thus:
get.data.frame(G1,"E")[1:10,]
You can get/set individual edge attributes with E(g)$whatever:
> E(G1)$weight=1:250
> E(G1)$whatever=runif(250)
> get.data.frame(G1,"E")[1:10,]
from to weight whatever
1 1 3 1 0.11861240
2 1 7 2 0.06935047
3 1 22 3 0.32040316
4 1 29 4 0.86991432
5 1 31 5 0.47728632
Is that what you are after? Any igraph package tutorial will tell you more!

Find the non-zero values and frequency of those values in R

I have a dataset with two parameters: date/time and flow. The flow is intermittent: at times there is zero flow, then the flow suddenly starts and there are non-zero values for some time before the flow drops to zero again. I want to understand when the non-zero values occur and how long each non-zero flow lasts. I have attached the sample dataset at this location: https://www.dropbox.com/s/ef1411dq4gyg0cm/sampledataflow.csv
The data are at 1-minute intervals.
I was able to import the data into R as follows:
flow <- read.csv("sampledataflow.csv")
summary(flow)
names(flow) <- c("Date","discharge")
flow$Date <- strptime(flow$Date, format="%m/%d/%Y %H:%M")
sapply(flow,class)
plot(flow$Date, flow$discharge,type="l")
I made a plot to see the distribution but couldn't figure out where to start to get the frequency of the non-zero values. I would like to see an output table as follows:
Date Duration in Minutes
Please let me know if I am not clear here. Thanks.
Additional Info:
I think we need to detect the first non-zero value and then find how many non-zero values occur continuously before the flow reaches zero again. What I want to understand is the flow release durations. For example, in one day there might be multiple releases, and I want to note at what time each release started and how long it continued before returning to zero. I hope this explains the problem a little better.
The first point is that you have a lot of NAs in your data, in case you want to look into that.
If I understand correctly, you require the count of continuous 0's followed by continuous non-zeros, then zeros, non-zeros, etc., for each date.
This can be achieved with rle of course, as also mentioned by @mnel in the comments. But there are quite a few catches.
First, I'll set up the data with non-NA entries:
flow <- read.csv("~/Downloads/sampledataflow.csv")
names(flow) <- c("Date","discharge")
flow <- flow[1:33119, ] # remove NA entries
# format Date to POSIXct to play nice with data.table
flow$Date <- as.POSIXct(flow$Date, format="%m/%d/%Y %H:%M")
Next, I'll create a Date column:
flow$g1 <- as.Date(flow$Date)
Finally, I prefer using data.table. So here's a solution using it.
# load package, get data as data.table and set key
require(data.table)
flow.dt <- data.table(flow)
# set key to both "Date" and "g1" (even though, just we'll use just g1)
# to make sure that the order of rows are not changed (during sort)
setkey(flow.dt, "Date", "g1")
# group by g1 and set data to TRUE/FALSE by equating to 0 and get rle lengths
out <- flow.dt[, list(duration = rle(discharge == 0)$lengths,
val = rle(discharge == 0)$values + 1), by=g1][val == 2, val := 0]
> out # just to show a few first and last entries
# g1 duration val
# 1: 2010-05-31 120 0
# 2: 2010-06-01 722 0
# 3: 2010-06-01 138 1
# 4: 2010-06-01 32 0
# 5: 2010-06-01 79 1
# ---
# 98: 2010-06-22 291 1
# 99: 2010-06-22 423 0
# 100: 2010-06-23 664 0
# 101: 2010-06-23 278 1
# 102: 2010-06-23 379 0
So, for example, for 2010-06-01, there are 722 0's followed by 138 non-zeros, followed by 32 0's followed by 79 non-zeros and so on...
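If, as in the original question, you also want the start time of each release alongside its duration, here is a rough sketch built on the same rle() idea, using the NA-trimmed flow data from above:
r <- rle(flow$discharge > 0)
ends <- cumsum(r$lengths)      # row index where each run ends
starts <- ends - r$lengths + 1 # row index where each run starts
releases <- data.frame(start = flow$Date[starts[r$values]],   # when each positive-flow run begins
                       duration = r$lengths[r$values])        # run length in minutes (1-minute data)
head(releases)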
I looked at a small sample of the first two days:
> do.call( cbind, tapply(flow$discharge, as.Date(flow$Date), function(x) table(x > 0) ) )
2010-06-01 2010-06-02
FALSE 1223 911
TRUE 217 529 # these are the cumulative daily durations of positive flow.
You may want this transposed, in which case the t() function should work. Or you could use rbind.
If you just want the number of flow-positive minutes, this would also work:
tapply(flow$discharge, as.Date(flow$Date), function(x) sum(x > 0, na.rm=TRUE) )
#--------
2010-06-01 2010-06-02 2010-06-03 2010-06-04 2010-06-05 2010-06-06 2010-06-07 2010-06-08
217 529 417 463 0 0 263 220
2010-06-09 2010-06-10 2010-06-11 2010-06-12 2010-06-13 2010-06-14 2010-06-15 2010-06-16
244 219 287 234 31 245 311 324
2010-06-17 2010-06-18 2010-06-19 2010-06-20 2010-06-21 2010-06-22 2010-06-23 2010-06-24
299 305 124 129 295 296 278 0
To get the lengths of intervals with discharge values greater than zero:
tapply(flow$discharge, as.Date(flow$Date), function(x) rle(x>0)$lengths[rle(x>0)$values] )
#--------
$`2010-06-01`
[1] 138 79
$`2010-06-02`
[1] 95 195 239
$`2010-06-03`
[1] 57 360
$`2010-06-04`
[1] 6 457
$`2010-06-05`
integer(0)
$`2010-06-06`
integer(0)
... Snipped output
If you want to look at the distribution of these durations you will need to unlist that result. (And remember that the durations which were split at midnight may have influenced the counts and durations.) If you just wanted durations without dates, then use this:
flowrle <- rle(flow$discharge>0)
flowrle$lengths[!is.na(flowrle$values) & flowrle$values]
#----------
[1] 138 79 95 195 296 360 6 457 263 17 203 79 80 85 30 189 17 270 127 107 31 1
[23] 2 1 241 311 229 13 82 299 305 3 121 129 295 3 2 291 278
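To look at the pooled distribution of those durations mentioned above, one option (a sketch) is:
durations <- flowrle$lengths[!is.na(flowrle$values) & flowrle$values]
hist(durations, breaks = 30, main = "Positive-flow run lengths", xlab = "minutes")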
