How to generate random numbers in a data.frame with range - r

I have a data.frame for which I want to generate random numbers from a sequence, separately for each list component. I used the sample() function to create the random numbers, but even though I created random numbers for list [[1]], the same numbers were produced again for set [[2]]. So, how can I create different random numbers for set [[2]]?
Here is the simple code:
data.list <- lapply(1:2, function(x) {
  nrep <- 1
  time <- rep(seq(90,54000,by=90),times=nrep)
  Mx <- rep(sort(sample(seq(0.012,-0.014,length.out = 600),replace=TRUE)), times=nrep)
  My <- rep(sort(sample(seq(0.02,-0.02,length.out = 600),replace=TRUE)), times=nrep)
  Mz <- rep(sort(sample(seq(-1,1,length.out=600),replace=TRUE)), times=nrep)
  data.frame(time,Mx,My,Mz,set_nbr=x)
})
This gives the first 5 lines of each dataset:
[[1]]
time Mx My Mz set_nbr
1 90 -1.391319e-02 -2.000000e-02 -1.000000000 1
2 180 -1.386978e-02 -1.986644e-02 -1.000000000 1
3 270 -1.386978e-02 -1.973289e-02 -0.996661102 1
4 360 -1.382638e-02 -1.973289e-02 -0.993322204 1
5 450 -1.382638e-02 -1.973289e-02 -0.979966611 1
.. .. .... .... .... ...
[[2]]
time Mx My Mz set_nbr
1 90 -1.395659e-02 -0.0200000000 -1.000000000 2
2 180 -1.391319e-02 -0.0199332220 -0.993322204 2
3 270 -1.386978e-02 -0.0199332220 -0.993322204 2
4 360 -1.386978e-02 -0.0199332220 -0.993322204 2
5 450 -1.382638e-02 -0.0199332220 -0.986644407 2
.. .. .... .... .... ...
EDIT 1:
Following @bgoldst's answer, I can now produce different numbers:
set.seed(1);
data.list <- lapply(1:2, function(x) {
  nrep <- 1;
  time <- rep(seq(90,54000,by=90),times=nrep);
  Mx <- rep(sort(runif(600,-0.014,0.012)),times=nrep);
  My <- rep(sort(runif(600,-0.02,0.02)),times=nrep);
  Mz <- rep(sort(runif(600,-1,1)),times=nrep);
  data.frame(time,Mx,My,Mz,set_nbr=x);
});
On the other hand, when I change nrep <- 3;, the same numbers are created for each repetition. This is what I wanted to avoid from the beginning.
EDIT 2:
@bgoldst showed that replicate() does the job!

I think you may have some confusion about how sample() works.
First, let's examine sample()'s behavior with respect to this simple vector:
1:5;
## [1] 1 2 3 4 5
When you pass a multi-element vector to sample() it basically just randomizes the order. This means you'll get a different result every time, or rather, to state it more precisely, the longer the vector is, the less likely you are to get the same result twice:
set.seed(1); sample(1:5); sample(1:5); sample(1:5);
## [1] 2 5 4 3 1
## [1] 5 4 2 3 1
## [1] 2 1 3 4 5
This means if you sort it immediately after sampling, then you'll get the same result every time. And if the original vector was itself sorted, then the result will also be equal to that original vector. This will be true regardless of how sample() randomized the order, because the order is always restored by sort():
set.seed(1); sort(sample(1:5)); sort(sample(1:5)); sort(sample(1:5));
## [1] 1 2 3 4 5
## [1] 1 2 3 4 5
## [1] 1 2 3 4 5
Now if you add replace=T (or just rep=T if you like to take advantage of partial matching for concision, which I do), then you're not just randomizing the order, you're selecting size elements with replacement, where size is the vector length if you didn't provide size explicitly. This means you can get repeated elements in the result:
set.seed(1); sample(1:5,rep=T); sample(1:5,rep=T); sample(1:5,rep=T);
## [1] 2 2 3 5 2
## [1] 5 5 4 4 1
## [1] 2 1 4 2 4
And so, if you sort the result, you (likely) won't get back the original vector, because some elements will have been repeated, and some elements will have been omitted:
set.seed(1); sort(sample(1:5,rep=T)); sort(sample(1:5,rep=T)); sort(sample(1:5,rep=T));
## [1] 2 2 2 3 5
## [1] 1 4 4 5 5
## [1] 1 2 2 4 4
That's exactly what is happening with your code. Your output vectors are different between the two list components, because you're sampling with replacement before sorting, which means different repetitions and omissions of the elements will occur for each list component. But since you're sampling from the same sequence and you're sorting the result, you're bound to get similar-looking results for each list component, even though they're not identical.
I think what you might be looking for is random deviates from a uniform distribution. You can get these from runif():
set.seed(1); runif(5,-0.014,0.012);
## [1] -0.0070967748 -0.0043247786 0.0008941874 0.0096134025 -0.0087562698
set.seed(1); runif(5,-0.02,0.02);
## [1] -0.009379653 -0.005115044 0.002914135 0.016328312 -0.011932723
set.seed(1); runif(5,-1,1);
## [1] -0.4689827 -0.2557522 0.1457067 0.8164156 -0.5966361
Thus, your code would become:
set.seed(1);
data.list <- lapply(1:2, function(x) {
nrep <- 1;
time <- rep(seq(90,54000,by=90),times=nrep);
Mx <- rep(sort(runif(600,-0.014,0.012)),times=nrep);
My <- rep(sort(runif(600,-0.02,0.02)),times=nrep);
Mz <- rep(sort(runif(600,-1,1)),times=nrep);
data.frame(time,Mx,My,Mz,set_nbr=x);
});
Which gives:
lapply(data.list,head);
## [[1]]
## time Mx My Mz set_nbr
## 1 90 -0.01395224 -0.01994741 -0.9967155 1
## 2 180 -0.01394975 -0.01991923 -0.9933909 1
## 3 270 -0.01378866 -0.01980934 -0.9905714 1
## 4 360 -0.01371306 -0.01977090 -0.9854065 1
## 5 450 -0.01371011 -0.01961713 -0.9850108 1
## 6 540 -0.01365998 -0.01960718 -0.9846628 1
##
## [[2]]
## time Mx My Mz set_nbr
## 1 90 -0.01398426 -0.01997718 -0.9970438 2
## 2 180 -0.01398293 -0.01989651 -0.9931286 2
## 3 270 -0.01397330 -0.01988715 -0.9923425 2
## 4 360 -0.01396455 -0.01957807 -0.9913645 2
## 5 450 -0.01384501 -0.01939597 -0.9892001 2
## 6 540 -0.01382531 -0.01931913 -0.9889356 2
Edit: It looked from your question like you wanted the random numbers to be different between list components, that is to say, between the components generated from the 1:2 passed as the first argument to lapply(). The repetition of each random vector nrep times within each list component didn't appear to be relevant, partly because you set nrep to 1, so there wasn't any actual repetition.
But that's OK; we can achieve this requirement by using replicate() instead of rep(), because replicate() actually runs its expression argument once for every repetition. We also have to flatten the result, because replicate() by default returns a matrix, and we want a straight vector:
set.seed(1);
data.list <- lapply(1:2, function(x) {
  nrep <- 2;
  time <- rep(seq(90,54000,by=90),times=nrep);
  Mx <- c(replicate(nrep,sort(runif(600,-0.014,0.012))));
  My <- c(replicate(nrep,sort(runif(600,-0.02,0.02))));
  Mz <- c(replicate(nrep,sort(runif(600,-1,1))));
  data.frame(time,Mx,My,Mz,set_nbr=x);
});
lapply(data.list,function(x) x[c(1:6,601:606),]);
## [[1]]
## time Mx My Mz set_nbr
## 1 90 -0.01395224 -0.01993431 -0.9988590 1
## 2 180 -0.01394975 -0.01986782 -0.9948254 1
## 3 270 -0.01378866 -0.01981143 -0.9943576 1
## 4 360 -0.01371306 -0.01970813 -0.9789037 1
## 5 450 -0.01371011 -0.01970022 -0.9697986 1
## 6 540 -0.01365998 -0.01969326 -0.9659567 1
## 601 90 -0.01396582 -0.01997579 -0.9970438 1
## 602 180 -0.01394750 -0.01997375 -0.9931286 1
## 603 270 -0.01387607 -0.01995893 -0.9923425 1
## 604 360 -0.01385108 -0.01994546 -0.9913645 1
## 605 450 -0.01375113 -0.01976155 -0.9892001 1
## 606 540 -0.01374467 -0.01973125 -0.9889356 1
##
## [[2]]
## time Mx My Mz set_nbr
## 1 90 -0.01396979 -0.01999198 -0.9960861 2
## 2 180 -0.01390373 -0.01995219 -0.9945237 2
## 3 270 -0.01390252 -0.01991559 -0.9925640 2
## 4 360 -0.01388905 -0.01978123 -0.9890171 2
## 5 450 -0.01386718 -0.01967644 -0.9835435 2
## 6 540 -0.01384351 -0.01958008 -0.9822988 2
## 601 90 -0.01396739 -0.01989328 -0.9971255 2
## 602 180 -0.01396433 -0.01985785 -0.9954987 2
## 603 270 -0.01390700 -0.01984074 -0.9903196 2
## 604 360 -0.01376890 -0.01982715 -0.9902251 2
## 605 450 -0.01366110 -0.01979802 -0.9829480 2
## 606 540 -0.01364868 -0.01977278 -0.9812671 2
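As a side note on the c(replicate(...)) idiom used above: replicate() returns a matrix with one column per repetition, and c() flattens it by laying the columns end to end. A minimal sketch with toy sizes:
set.seed(1); m <- replicate(2,sort(runif(3)));
dim(m);
## [1] 3 2
c(m);
## [1] 0.2655087 0.3721239 0.5728534 0.2016819 0.8983897 0.9082078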

Related

R ranges: 1:0 - illogical behavior

I have an array X of length N, and I'd like to compute sum(X[(i+1):N]) - sum(X[1:(i-1)]). This works fine if my index i is within 2..(N-1). If it's equal to 1, the second term will return the first element of the array rather than exclude it. If it's equal to N, the first term will return the last element of the array rather than exclude it. seq_len is the only function I'm aware of that does the job, but only for the 2nd term (it indexes 1:n). What I need is a range function that will return NULL (rather than throw an exception like seq) when its 2nd argument is below its first. The sum function will do the rest. Is anyone aware of one, or do I have to write one myself?
I suggest an alternate path for generating indexing sequences: seq_len, which behaves intuitively at the extremes.
Bottom Line Up Front: use sum(X[-seq_len(i)]) - sum(X[seq_len(i-1)]) instead.
First, some sample data:
X <- 1:10
N <- length(X)
Your approach, at the two extremes:
i <- 1
X[(i+1):N]
# [1] 2 3 4 5 6 7 8 9 10
X[1:(i-1)] # oops
# [1] 1
That should return "nothing", I believe. (More to the point, sum(...) should return 0. For the record, sum(integer(0)) is 0.)
i <- 10
X[(i+1):N] # oops
# [1] NA 10
X[1:(i-1)]
# [1] 1 2 3 4 5 6 7 8 9
There's your other error, where you'd expect "nothing" in the first subset.
Instead, I suggest you use seq_len:
i <- 1
X[-seq_len(i)]
# [1] 2 3 4 5 6 7 8 9 10
X[seq_len(i-1)]
# integer(0)
i <- 10
X[-seq_len(i)]
# integer(0)
X[seq_len(i-1)]
# [1] 1 2 3 4 5 6 7 8 9
Both seem fine, and something in the middle makes sense.
i <- 5
X[-seq_len(i)]
# [1] 6 7 8 9 10
X[seq_len(i-1)]
# [1] 1 2 3 4
In this contrived example, what we're looking for at any value of i:
1: sum(2:10) - 0 = 54 - 0 = 54
2: sum(3:10) - sum(1:1) = 52 - 1 = 51
3: sum(4:10) - sum(1:2) = 49 - 3 = 46
...
10: 0 - sum(1:9) = 0 - 45 = -45
And we now get that:
func <- function(i, x) sum(x[-seq_len(i)]) - sum(x[seq_len(i-1)])
sapply(c(1,2,3,10), func, X)
# [1] 54 51 46 -45
Edit:
李哲源's answer got me thinking that you don't need to re-sum the numbers before and after every time. Just do it once and reuse it. This method could easily be a bit faster if your vector is large.
Xb <- c(0, cumsum(X)[-N])
Xb
# [1] 0 1 3 6 10 15 21 28 36 45
Xa <- c(rev(cumsum(rev(X)))[-1], 0)
Xa
# [1] 54 52 49 45 40 34 27 19 10 0
sapply(c(1,2,3,10), function(i) Xa[i] - Xb[i])
# [1] 54 51 46 -45
So this suggests that your summed value at any value of i is
Xs <- Xa - Xb
Xs
# [1] 54 51 46 39 30 19 6 -9 -26 -45
where you can find the specific value with Xs[i]. No repeated summing required.
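If it helps, here is the same idea wrapped into a function (a minimal sketch; the name diff_around is just illustrative):
diff_around <- function(X) {
  N  <- length(X)
  Xb <- c(0, cumsum(X)[-N])            # sums of everything before each i
  Xa <- c(rev(cumsum(rev(X)))[-1], 0)  # sums of everything after each i
  Xa - Xb
}
diff_around(1:10)
# [1] 54 51 46 39 30 19 6 -9 -26 -45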

Calculate mean of a proportion of the data.frame

I'm working with data that looks similar to this:
cat value n
1 100 18
2 0 19
3 -100 15
4 100 13
5 0 17
6 -100 18
In the real data, there are many cats and value can be any number between -100 and 100 (no NA).
What I want to do is to calculate the mean of value based on terciles defined by n
So, for example, since sum(n)=100, what I want to do is to get n's as close as possible to 33 and calculate the mean of value. For the first tercile, 18 isn't quite 33, so I need to take 15 values from cat=2. The mean for the first tercile should then be (100*18 + 0*15)/(18+15). The second tercile would be the remaining n's from cat=2, then as many as are needed to get to 33: (0*4 + (-100)*15 + 100*13 + 0*1)/(4+15+13+1). Similarly for the last tercile.
I got started writing this, but ended up with lots of nasty for loops and if statements. I'm hoping that you see an easier way to deal with this than I do. Thanks in advance!
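For concreteness, the weighted means described in the question can be checked directly in R (plain arithmetic, no real data needed):
(100*18 + 0*15) / (18 + 15)                    # first tercile
# [1] 54.54545
(0*4 + (-100)*15 + 100*13 + 0*1) / (4+15+13+1) # second tercile
# [1] -6.060606
(0*16 + (-100)*18) / (16 + 18)                 # last tercile
# [1] -52.94118
These match the tercile means computed by the answers below.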
A solution with data.table:
setDT(df)[rep(1:.N,n)
][,indx:=c(rep("a",33),rep("b",33),rep("c",34))
][,.(mean_val_indx=mean(value)),by=indx]
this gives:
indx mean_val_indx
1: a 54.545455
2: b -6.060606
3: c -52.941176
Which are the means of value for the three parts of the data.
Broken down in the intermediate steps:
1: replicate the rows according to n
setDT(df)[rep(1:.N,n)]
this gives (shortened):
cat value n
1: 1 100 18
2: 1 100 18
....
17: 1 100 18
18: 1 100 18
19: 2 0 19
20: 2 0 19
....
36: 2 0 19
37: 2 0 19
38: 3 -100 15
....
99: 6 -100 18
100: 6 -100 18
2: create an index with [,indx:=c(rep("a",33),rep("b",33),rep("c",34))]
setDT(df)[rep(1:.N,n)
][,indx:=c(rep("a",33),rep("b",33),rep("c",34))]
this gives:
> dt
cat value n indx
1: 1 100 18 a
2: 1 100 18 a
....
17: 1 100 18 a
18: 1 100 18 a
19: 2 0 19 a
20: 2 0 19 a
....
32: 2 0 19 a
33: 2 0 19 a
34: 2 0 19 b
35: 2 0 19 b
....
99: 6 -100 18 c
100: 6 -100 18 c
3: summarise value by indx with [,.(mean_val_indx=mean(value)),by=indx]
You could try something like this, data being your example dataframe:
longData<-unlist(apply(data[,c("value","n")],1,function(x){
  rep(x["value"],x["n"])
}))
aggregate(longData,list(cut(seq_along(longData),breaks=3,right=FALSE)),mean)
longData will be a vector of length 100 with, using your example, 18 repetitions of 100, 19 repetitions of 0, etc.
The cut in the aggregate will divide longData into three groups, and the mean of each group will be calculated.
If the data is already very long, replicating the rows according to "n" is perhaps
unwanted. The following solution doesn't do this. Moreover, 1/3 of the sum of the
"n"-values is not rounded to the nearest integer.
"i" is the vector of row numbers where terciles end. Since it is possible
that several terciles end at the same row, those row numbers are replicated.
The result is the vector "k".
For each index "j" the cumulative sum of "data$value"*"data$n" up to "k[j]"
covers "ms[k[j]]" terciles, so "ms[k[j]]-j" terciles have to be subtracted
to get the cumulative sum up to the "j"th tercile.
m <- 3                        # number of groups (terciles)
sn <- sum(data$n)             # total weight
ms <- m * cumsum(data$n) / sn # fractional number of terciles covered up to each row
d <- diff(c(0,floor(ms)))     # how many tercile boundaries end at each row
i <- which(d>0)               # rows where at least one tercile ends
k <- rep(i,d[i])              # one entry per tercile boundary
vn <- data$value * data$n     # weighted values
sums <- cumsum(vn)[k] - (ms[k]-(1:m))*data$value[k]*sn/m  # weighted sums up to each tercile boundary
means <- m*diff(c(0,sums))/sn # mean of each tercile
The means of the terciles are:
> means
[1] 54 -6 -54
In this example "i" is equal to "k". But if terciles are replaced by deciles,
i.e. "m" is not 3 but 10, they are distinct:
> m
[1] 10
> i
[1] 1 2 3 4 5 6
> k
[1] 1 2 2 3 3 4 5 5 6 6
> means
[1] 100 80 0 -30 -100 60 50 0 -80 -100
I compared the speed of the 4 answers, using our small example with 8 rows:
> ##### "longData"-Answer #####
>
> system.time( for ( i in 1:1000 ) { A1 <- f1(data) } )
user system elapsed
3.48 0.00 3.49
> ##### "sapply"-Answer #####
>
> system.time( for ( i in 1:1000 ) { A2 <- f2(data) } )
user system elapsed
1.00 0.00 0.99
> ##### "data.table"Answer #####
>
> system.time( for ( i in 1:1000 ) { A3 <- f3(data) } )
user system elapsed
4.73 0.00 4.79
> ##### this Answer #####
>
> system.time( for ( i in 1:1000 ) { A4 <- f4(data) } )
user system elapsed
0.43 0.00 0.44
The "sapply"-Answer is even false:
> A1
Group.1 x
1 [0.901,34) 54.545455
2 [34,67) -6.060606
3 [67,100) -52.941176
> A2
(0,33] (33,67] (67,100]
-100.00000 0.00000 93.93939
> A3
indx mean_val_indx
1: a 54.545455
2: b -6.060606
3: c -52.941176
> A4
[1] 54 -6 -54
>
This is basically the same as NicE's answer, although perhaps useful as a different way of assembling the rep and cutting operations:
sapply(split( sort(unlist( mapply(rep, res$value, res$n) )),
cut(seq(sum(res$n)), breaks=c(0,33,67,100) )),
mean)
(0,33] (33,67] (67,100]
-100.00000 0.00000 93.93939

How to generalize this algorithm (sign pattern match counter)?

I have this code in R:
corr = function(x, y) {
  sx = sign(x)
  sy = sign(y)
  cond_a = sx == sy && sx > 0 && sy > 0
  cond_b = sx < sy && sx < 0 && sy > 0
  cond_c = sx > sy && sx > 0 && sy < 0
  cond_d = sx == sy && sx < 0 && sy < 0
  cond_e = sx == 0 || sy == 0
  if(cond_a) return('a')
  else if(cond_b) return('b')
  else if(cond_c) return('c')
  else if(cond_d) return('d')
  else if(cond_e) return('e')
}
Its role is to be used in conjunction with the mapply function in R in order to count all the possible sign patterns present in a time series. In this case the pattern has a length of 2 and all the possible tuples are: (+,+), (+,-), (-,+), (-,-).
I use the corr function this way :
> with(dt['AAPL'], table(mapply(corr, Return[-1], Return[-length(Return)])) /length(Return)*100)
a b c d e
24.6129416 25.4466058 25.4863041 24.0174672 0.3969829
> dt["AAPL",list(date, Return)]
symbol date Return
1: AAPL 2014-08-29 -0.3499903
2: AAPL 2014-08-28 0.6496702
3: AAPL 2014-08-27 1.0987923
4: AAPL 2014-08-26 -0.5235654
5: AAPL 2014-08-25 -0.2456037
I would like to generalize the corr function to n arguments. This means that for every n I would have to write down all the conditions corresponding to all the possible n-tuples. Currently the best thing I can think of is a Python script that writes the code string using loops, but there must be a way to do this properly. Do you have an idea how I could generalize the tedious condition writing? Maybe I could try to use expand.grid, but how would I do the matching then?
I think you're better off using rollapply(...) in the zoo package for this. Since you seem to be using quantmod anyway (which loads xts and zoo), here is a solution that does not use all those nested if(...) statements.
library(quantmod)
AAPL <- getSymbols("AAPL",auto.assign=FALSE)
AAPL <- AAPL["2007-08::2009-03"] # AAPL during the crash...
Returns <- dailyReturn(AAPL)
get.patterns <- function(ret,n) {
  f <- function(x) { # identifies which row of `patterns` matches sign(x)
    which(apply(patterns,1,function(row)all(row==sign(x))))
  }
  returns <- na.omit(ret)
  patterns <- expand.grid(rep(list(c(-1,1)),n))
  labels <- apply(patterns,1,function(row) paste0("(",paste(row,collapse=","),")"))
  result <- rollapply(returns,width=n,f,align="left")
  data.frame(100*table(labels[result])/(length(returns)-(n-1)))
}
get.patterns(Returns,n=2)
# Var1 Freq
# 1 (-1,-1) 22.67303
# 2 (-1,1) 26.49165
# 3 (1,-1) 26.73031
# 4 (1,1) 23.15036
get.patterns(Returns,n=3)
# Var1 Freq
# 1 (-1,-1,-1) 9.090909
# 2 (-1,-1,1) 13.397129
# 3 (-1,1,-1) 14.593301
# 4 (-1,1,1) 11.722488
# 5 (1,-1,-1) 13.636364
# 6 (1,-1,1) 13.157895
# 7 (1,1,-1) 12.200957
# 8 (1,1,1) 10.765550
The basic idea is to create a patterns matrix with 2^n rows and n columns, where each row represents one of the possible patterns (e.g., (1,1), (-1,1), etc.). Then pass the daily returns to this function n-wise using rollapply(...) and identify which row in patterns matches sign(x) exactly. Then use this vector of row numbers as an index into labels, which contains a character representation of the patterns, and finally use table(...) as you did.
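For instance, here is what the patterns structure looks like for n = 2 (the first column varies fastest, which is just how expand.grid() fills its rows):
expand.grid(rep(list(c(-1,1)),2))
#   Var1 Var2
# 1   -1   -1
# 2    1   -1
# 3   -1    1
# 4    1    1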
This is general for an n-day pattern, but it ignores situations where any return is exactly zero, so the $Freq columns do not add up to 100. As you can see, this doesn't happen very often.
It's interesting that even during the crash it was (very slightly) more likely to have two up days in succession, than two down days. If you look at plot(Cl(AAPL)) during this period, you can see that it was a pretty wild ride.
This is a little different approach, but it may give you what you're looking for and allows you to use any size of n-tuple. The basic approach is to find the signs of the adjacent changes for each sequential set of n returns, convert the n-length sign changes into n-tuples of 1's and 0's where 0 = negative return and 1 = positive return, and then calculate the decimal value of each n-tuple taken as a binary number. These numbers will clearly be different for each distinct n-tuple. Using a zoo time series for these calculations provides several useful functions, including get.hist.quote() to retrieve stock prices, diff() to calculate returns, and rollapply() to use in calculating the n-tuples and their sums. The code below does these calculations, converts the sum of the sign changes back to n-tuples of binary digits, and collects the results in a data frame.
library(zoo)
library(tseries)
n <- 3 # set size of n-tuple
#
# get stock prices and compute % returns
#
dtz <- get.hist.quote("AAPL","2014-01-01","2014-10-01", quote="Close")
dtz <- merge(dtz, (diff(dtz, arithmetic=FALSE ) - 1)*100)
names(dtz) <- c("prices","returns")
#
# calculate the sum of the sign changes
#
dtz <- merge(dtz, rollapply( data=(sign(dtz$returns)+1)/2, width=n,
FUN=function(x, y) sum(x*y), y = 2^(0:(n-1)), align="right" ))
dtz <- fortify.zoo(dtz)
names(dtz) <- c("date","prices","returns", "sum_sgn_chg")
#
# convert the sum of the sign changes back to an n-tuple of binary digits
#
for( i in 1:nrow(dtz) )
  dtz$sign_chg[i] <- paste(((as.numeric(dtz$sum_sgn_chg[i]) %/% (2^(0:(n-1)))) %% 2), collapse="")
#
# report first part of result
#
head(dtz, 10)
#
# report count of changes by month and type
#
table(format(dtz$date,"%Y %m"), dtz$sign_chg)
An example of possible output is a table showing the count of changes by type for each month.
000 001 010 011 100 101 110 111 NANANA
2014 01 1 3 3 2 3 2 2 2 3
2014 02 1 2 4 2 2 3 2 3 0
2014 03 2 3 0 4 4 1 4 3 0
2014 04 2 3 2 3 3 2 3 3 0
2014 05 2 2 1 3 1 2 3 7 0
2014 06 3 4 3 2 4 1 1 3 0
2014 07 2 1 2 4 2 5 5 1 0
2014 08 2 2 1 3 1 2 2 8 0
2014 09 0 4 2 3 4 2 4 2 0
2014 10 0 0 1 0 0 0 0 0 0
This would show that in month 1, January of 2014, there was one set of three days with 000, indicating 3 down returns; 3 days with the 001 pattern, indicating two down returns followed by one positive return; and so forth. Most months seem to have a fairly random distribution, but May and August show 7 and 8 sets of 3 days of positive returns, reflecting the fact that these were strong months for AAPL.

Is there a way to stop table from sorting in R?

Problem setup: I'm creating a function that takes multiple CSV files, selected by an ID column, combines them into one CSV, and then outputs the number of observations by ID.
Expected:
complete("specdata", 30:25) ##notice descending order of IDs requested
## id nobs
## 1 30 932
## 2 29 711
## 3 28 475
## 4 27 338
## 5 26 586
## 6 25 463
I get:
> complete("specdata", 30:25)
id nobs
1 25 463
2 26 586
3 27 338
4 28 475
5 29 711
6 30 932
Which is "wrong" because it has been sorted by id.
The CSV file I read from does have the data in descending order. My snippet:
dfTable<-read.csv("~/progAssign1/specdata/tmpdata.csv")
ccTab<-complete.cases(dfTable)
xTab3<-as.data.frame(table(dfTable$ID[ccTab]),)
colnames(xTab3)<-c("id","nobs")
And as near as I can tell, the third line is where sorting occurs. I broke out the expression and it happens in the table() call. I've not found any option or parameter I can pass to make something like sort=FALSE. You'd think...
Anyway. Any help appreciated!
So, the problem is in the output of table, which is sorted by default. For example:
> r = sample(5,15,replace = T)
> r
[1] 1 4 1 1 3 5 3 2 1 4 2 4 2 4 4
> table(r)
r
1 2 3 4 5
4 3 2 5 1
If you want to take the order of first appearance, you are going to get your hands a little bit dirty by recoding the table function:
unique_r = unique(r)
table_r = rbind(label=unique_r, count=sapply(unique_r,function(x)sum(r==x)))
table_r
[,1] [,2] [,3] [,4] [,5]
label 1 4 3 5 2
count 4 5 2 1 3
One way to get around this is...don't use table. Here's an example where I create three one-line data sets from your data. Then I read them in with a descending sequence, with read.table and it seems to be okay.
The real big thing here is that multiple data sets should be placed in a list upon being read into R. You'll get the exact order of data sets you want that way, among other benefits.
Once you've read them into R the way you want them, it's much easier to order them at the very end. Ordering of rows (for me) is usually the very last step.
> dat <- read.table(h=T, text = "id nobs
1 25 463
2 26 586
3 27 338
4 28 475
5 29 711
6 30 932")
Write three one-line files:
> write.table(dat[3,], "dat3.csv", row.names = FALSE)
> write.table(dat[2,], "dat2.csv", row.names = FALSE)
> write.table(dat[1,], "dat1.csv", row.names = FALSE)
Read them in using a 3:1 order:
> do.call(rbind, lapply(3:1, function(x){
read.table(paste0("dat", x, ".csv"), header = TRUE)
}))
# id nobs
# 1 27 338
# 2 26 586
# 3 25 463
Then, if we change 3:1 to 1:3 the rows "comply" with our request
> do.call(rbind, lapply(1:3, function(x){
read.table(paste0("dat", x, ".csv"), header = TRUE)
}))
# id nobs
# 1 25 463
# 2 26 586
# 3 27 338
And just for fun
> fun <- function(z){
do.call(rbind, lapply(z, function(x){
read.table(paste0("dat", x, ".csv"), header = TRUE) }))
}
> fun(c(2, 3, 1))
# id nobs
# 1 26 586
# 2 27 338
# 3 25 463
You may try something like this:
t1 <- c(5,3,1,3,5,5,5)
as.data.frame(table(t1)) ##result in ascending order
# t1 Freq
#1 1 1
#2 3 2
#3 5 4
t1 <- factor(t1)
as.data.frame(table(reorder(t1, rep(-1, length(t1)),sum)))
# Var1 Freq
#1 5 4
#2 3 2
#3 1 1
In your case you are complaining about the actions of the table function with a single argument, which returns the items with the names in ascending order, while you want them in descending order. You could have simply used the rev() function around the table call.
xTab3<-as.data.frame( rev( table( dfTable$ID[ccTab] ) ),)
(I'm not sure what that last comma is doing in there.) The sort order in the original would not be expected to determine the order of a table operation. Generally R will return results with discrete labels sorted in alpha (ascending) order unless the levels of a factor item have been specified differently. That's one of those R-specific rules that may be difficult to intuit. The other R-specific rule that may be difficult to grasp (although not really a problem here) is that arguments are often expected to be in the form of R-lists.
It's probably wise to think about R table objects at this point (and what happens with the as.data.frame call). table objects are actually R arrays, so the feature that you wanted to sort by was actually the rownames of that table object, which are of class character:
r = sample(5,15,replace = T)
table(r)
#r
#2 3 4 5
#5 3 2 5
rownames(table(r))
#[1] "2" "3" "4" "5"
str(as.data.frame(table(r)))
#-------
'data.frame': 4 obs. of 2 variables:
$ r : Factor w/ 4 levels "2","3","4","5": 1 2 3 4
$ Freq: int 5 3 2 5
I just want to share this homework I've done:
complete <- function(directory, id=1:332){
  setwd("E:/Coursera")                      # folder that contains the data directory
  files <- dir(directory, full.names = TRUE)
  data <- lapply(files, read.csv)           # read all CSV files
  specdata <- do.call(rbind, data)          # combine them into one data.frame
  cleandata <- specdata[!is.na(specdata$sulfate) & !is.na(specdata$nitrate),]
  result <- data.frame(id=numeric(0), nobs=numeric(0))
  for(i in id){                             # loop in the requested ID order
    targetdata <- cleandata[cleandata$ID == i, ]
    result <- rbind(result, data.frame(table(targetdata$ID)))
  }
  names(result) <- c("id","nobs")
  result
}
A simple solution that no one has proposed yet is combining table() with the unique() function. unique() has the behaviour that you are looking for (listing unique IDs in order of appearance).
In your case it would be something like this:
dfTable<-read.csv("~/progAssign1/specdata/tmpdata.csv")
ccTab<-complete.cases(dfTable)
x<-dfTable$ID[ccTab] #IDs of the complete cases
xTab3<-as.data.frame(table(x)[as.character(unique(x))]) #reorder the table() result into order of appearance (index by name, not position)
colnames(xTab3)<-c("id","nobs")
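A quick self-contained check of the idea on toy data (illustrative values, not the assignment files):
x <- c(30, 30, 29, 25, 25, 25)
xTab <- as.data.frame(table(x)[as.character(unique(x))])
colnames(xTab) <- c("id","nobs")
xTab
#   id nobs
# 1 30    2
# 2 29    1
# 3 25    3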

R: Finding the beginning of an (exponential?) decay?

How do I find the index indicated by the red vline in the following example:
# Get the data as "tmpData"
source("http://pastie.org/pastes/9350691/download")
# Plot
plot(tmpData,type="l")
abline(v=49,col="red")
The following approach is promising, but how do I find the peak maximum?
library(RcppRoll)
n <- 10
smoothedTmpData <- roll_mean(tmpData,n)
plot(-diff(smoothedTmpData),type="l")
abline(v=49,col="red")
which.max(-diff(smoothedTmpData)) gives you the index of the maximum.
http://www.inside-r.org/r-doc/base/which.max
I'm unsure if this is your actual question...
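Put together, a minimal sketch of this suggestion, reusing tmpData and the smoothing code from the question:
library(RcppRoll)
source("http://pastie.org/pastes/9350691/download") # loads tmpData
smoothedTmpData <- roll_mean(tmpData, 10)
which.max(-diff(smoothedTmpData)) # index of the steepest drop in the smoothed series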
Where there is a single peak in the gradient, as in your example dataset, then gwieshammer is correct: you can just use which.max to find it.
For the case where there are multiple possible peaks, you need a more sophisticated approach. R has lots of peak finding functions (of varying quality). One that works for this data is wavCWTPeaks in wmtsa.
library(RcppRoll)
library(wmtsa)
source("http://pastie.org/pastes/9350691/download")
n <- 10
smoothedTmpData <- roll_mean(tmpData, n)
gradient <- -diff(smoothedTmpData)
cwt <- wavCWT(gradient)
tree <- wavCWTTree(cwt)
(peaks <- wavCWTPeaks(tree))
## $x
## [1] 4 52
##
## $y
## [1] 302.6718 5844.3172
##
## attr(,"peaks")
## branch itime iscale time scale extrema iendtime
## 1 1 5 2 5 2 16620.58 4
## 2 2 57 26 57 30 20064.64 52
## attr(,"snr.min")
## [1] 3
## attr(,"scale.range")
## [1] 1 28
## attr(,"length.min")
## [1] 10
## attr(,"noise.span")
## [1] 5
## attr(,"noise.fun")
## [1] "quantile"
## attr(,"noise.min")
## 5%
## 4.121621
So the main peak close to 50 is correctly found, and the routine picks up another smaller peak at the start.
