Obtaining different results from sum() and '+' in R

Below is my experiment:
> xx = 293.62882204364098
> yy = 0.086783439604999998
> print(xx + yy, 20)
[1] 293.71560548324595175
> print(sum(c(xx,yy)), 20)
[1] 293.71560548324600859
It is strange to me that sum() and + give different results when both are applied to the same numbers.
Is this result expected?
How can I get the same result?
Which one is most efficient?

There is an r-devel thread here that includes some detailed description of the implementation. In particular, from Tomas Kalibera:
R uses long double type for the accumulator (on platforms where it is
available). This is also mentioned in ?sum:
"Where possible extended-precision accumulators are used, typically well
supported with C99 and newer, but possibly platform-dependent."
This would imply that sum() is more accurate, although this comes with a giant flashing warning sign that if this level of accuracy is important to you, you should be very worried about the implementation of your calculations [in terms both of algorithms and underlying numerical implementations].
I answered a question here where I eventually figured out (after some false starts) that the difference between + and sum() is due to the use of extended precision for sum().
This code shows that the sums of individual elements (as in sum(xx, yy)) are added together with + (in C), whereas this code is used to sum the individual components; line 154 (LDOUBLE s = 0.0) shows that the accumulator is stored in extended precision (if available).
I believe that @JonSpring's timing results are probably explained (but I would be happy to be corrected) by (1) sum(xx, yy) having more processing, type-checking etc. than +, and (2) sum(c(xx, yy)) being slightly slower than sum(xx, yy) because it works in extended precision.
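To see how small the discrepancy actually is, here is a quick check against the spacing of adjacent doubles near 293.7 (the exact behaviour is platform-dependent, as the quote above notes; the values are copied from the question):
xx <- 293.62882204364098
yy <- 0.086783439604999998
(xx + yy) - sum(c(xx, yy))   # the difference between the two results
2^-44                        # spacing of adjacent doubles in [256, 512), for comparison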

Looks like addition is 3x as fast as summing, but unless you're doing high-frequency trading I can't see a situation where this would be your timing bottleneck.
xx = 293.62882204364098
yy = 0.086783439604999998
microbenchmark::microbenchmark(xx + yy, sum(xx,yy), sum(c(xx, yy)))
Unit: nanoseconds
expr min lq mean median uq max neval
xx + yy 88 102.5 111.90 107.0 110.0 352 100
sum(xx, yy) 201 211.0 256.57 218.5 232.5 2886 100
sum(c(xx, yy)) 283 297.5 330.42 304.0 311.5 1944 100

Related

How to round a whole number in R [duplicate]

This is a really simple question but how do I round a number in R such that I only show 2 significant figures?
E.g. 326 rounds to 330 and 4999 rounds to 5000
Thanks
Use the digits argument; a negative value rounds to the left of the decimal point:
round(326, digits=-1)
[1] 330
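Note that with round() the digits value you need depends on the magnitude of the number; for the 4-digit example from the question it would be digits = -2:
round(4999, digits=-2)
[1] 5000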
Here is the difference between signif() and round(). Taken directly from documentation:
x2 <- pi * 100^(-1:3)
round(x2, 3)
[1] 0.031 3.142 314.159 31415.927 3141592.654
signif(x2, 3)
[1] 3.14e-02 3.14e+00 3.14e+02 3.14e+04 3.14e+06
Use the one that works for you.
Maybe this could help:
signif(4999,2)
5000
signif(326,2)
330
signif(326232,2)
330000
And as Jim O. pointed out, there is a difference between signif() and round(). Their performance also differs; as Gregor pointed out, this may not be very useful to know, but it is perhaps interesting:
library(microbenchmark)
k <- sample(1:100000,1000000,replace=T)
microbenchmark(
round_ ={round(k, digits=-1)},
signif_ ={signif(k,2)}
)
Unit: milliseconds
expr min lq mean median uq max neval
round_ 68.56366 70.22595 74.02643 71.99918 75.32761 109.5727 100
signif_ 109.57957 111.86501 121.17458 114.13232 118.88837 495.0321 100
try this
round(3333,-1)
round(3333,-2)
and see what you get
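For reference, those calls give:
round(3333,-1)
[1] 3330
round(3333,-2)
[1] 3300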

For-loop vs foreach vs apply and the fastest objects for data manipulation?

I'm hoping someone more knowledgeable than myself can help optimize this code. I've tried a number of methods, including foreach with doParallel (and snow) and compiler, but I think there may be simpler ways to improve the code, such as changing data frames to data.tables/matrices, and perhaps pre-allocating blank objects instead of concatenating vectors in a loop.
Most of the variables listed below must be allowed to change in length depending on previous steps in the pipeline. Dimensions listed are taken from 1 example to show relative magnitude.
s.ids = a factor with length 66510. Haven't noticed a difference in speed when changed to a character vector.
g.list = a character vector with length 978.
l_signatures = a 978x66511 matrix.
d_g_up and d_g_down = small data frames (n x 10, with n ranging from 5 to 200) with metadata related to g.list
c_score_new() computes a score. It's complex enough that it's essentially unchangeable in this scenario. It expects e_signature to have 2 columns, 1 made of g.list ("ids"), and the other as corresponding "rank"s generated by: rank(-1 * l_signatures[,as.character(id)], ties.method="random")
for (id in s.ids) {
e_signature <- data.frame(g.list,
rank(-1 * l_signatures[, as.character(id)],
ties.method="random"))
colnames(e_signature) <- c("ids","rank")
d_scores <- c(d_scores, c_score_new(d_g_up$Symbol, d_g_down$Symbol, e_signature))
}
In total, this takes 5-10 minutes to compute, with 3-5 minutes attributable to the generation of e_signature, which is not computationally complex. That's where I suspect optimization might be of the most benefit.
If we did a loop to generate e_signature in a more efficient way and combined results into 1 object (978x66510) before doing c_score_new(), it might be faster?
I'm having trouble working out the details, and I'm not confident this is the best method anyhow. So before I chased this wild goose, I thought the community might be able to steer me in the best direction.
Most of the time is spent in rank. It is possible to reduce computation time by more than 50% by replacing the base::rank for loop with Rfast::colRanks; please see below:
library(microbenchmark)
library(Rfast)
n <- 978
m <- 40000 #66510
x <- matrix(rnorm(n * m), ncol = m)
microbenchmark(
Initial = {
for (i in 1:ncol(x)) {
base::rank(x[, i], ties.method = "random")
}
},
Optimized = {
colRanks(x, method = "min")
},
times = 1
)
Output:
Unit: seconds
expr min lq mean median uq max neval
Initial 8.092186 8.092186 8.092186 8.092186 8.092186 8.092186 1
Optimized 3.397526 3.397526 3.397526 3.397526 3.397526 3.397526 1
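Applied back to the loop in the question, the idea might look roughly like this (a sketch only: it assumes the objects l_signatures, s.ids, g.list, d_g_up, d_g_down and c_score_new() from the question, pre-allocates d_scores instead of growing it with c(), builds the full rank matrix up front at the cost of extra memory, and note that method = "min" breaks ties differently than ties.method = "random"):
library(Rfast)
neg_sub <- -1 * l_signatures[, as.character(s.ids), drop = FALSE]
rk <- colRanks(neg_sub, method = "min")          # rank every column at once
if (!all(dim(rk) == dim(neg_sub))) rk <- t(rk)   # guard in case the ranks come back transposed
d_scores <- numeric(length(s.ids))               # pre-allocate instead of concatenating
for (j in seq_along(s.ids)) {
  e_signature <- data.frame(ids = g.list, rank = rk[, j])
  d_scores[j] <- c_score_new(d_g_up$Symbol, d_g_down$Symbol, e_signature)
}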

Speed of string matching comparison operators

I got curious about the speed of string comparison in R: when is the right time to use != vs ==, and how much short-circuiting do they do?
Suppose I have a vector with two values, one which occurs frequently and another which is rare (trying to amplify the effect I expect):
x <- sample(c('ALICE', 'HAL90000000000'), replace = TRUE, 1000, prob = c(0.05,0.95))
I would assume (if there is shortcutting) that the operation
x != 'ALICE'
would be considerably faster than:
x == 'HAL90000000000'
since to check equality in the latter case, I would assume I need to check every character, while the former would be invalidated by either the first or last character (depending on which side the algorithm checks)
but when I benchmark, either it does not seem to be the case (it was inconclusive in repeated trials, though with a very slight bias toward the == operation being faster?!), or this isn't a fair test:
> microbenchmark(x != 'ALICE', x == 'HAL90000000000')
Unit: microseconds
expr min lq mean median uq max neval
x != "ALICE" 4.520 4.5505 4.61831 4.5775 4.6525 4.970 100
x == "HAL90000000000" 3.766 3.8015 4.00386 3.8425 3.9200 13.766 100
Why is this?
EDIT:
I'm assuming it's because R is doing full string matching, but if so, is there a way to get R to optimize these comparisons? I don't gain anything from obscuring how long it takes to match long or short strings; I'm not worried about passwords here.
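One way to probe the short-circuiting question directly (a sketch, not an authoritative answer) is to compare a mismatch that fails at the first character against an exact match of a much longer string; if comparison time scaled with string length, the second case should be noticeably slower:
library(microbenchmark)
long <- strrep("HAL9", 1000)                 # a ~4000-character string
x_long <- rep(long, 1000)
microbenchmark(
  mismatch_first_char = x_long != "ALICE",   # differs at the first character
  full_match          = x_long == long,      # must, in principle, inspect every character
  times = 100
)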

how to find consecutive composite numbers in R

I want the first n consecutive composite numbers.
I searched for a command for finding consecutive composite numbers, but I only got results proving a theorem about them; I didn't find any command for this. Please help me to solve this problem in R.
Here is another option:
n_composite <- function(n) {
s <- 4L
i <- 1L
vec <- numeric(n)
while(i <= n) {
if(any(s %% 2:(s-1) == 0L)) {
vec[i] <- s
i <- i + 1L
}
s <- s + 1L
}
vec
}
It uses basic control flow to cycle through the positive integers, collecting composites as it finds them.
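For example:
n_composite(10)
[1]  4  6  8  9 10 12 14 15 16 18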
benchmark
all.equal(find_N_composites(1e4), n_composite(1e4))
[1] TRUE
library(microbenchmark)
microbenchmark(
Mak = find_N_composites(1e4),
plafort = n_composite(1e4),
times=5
)
Unit: milliseconds
expr min lq mean median uq max neval cld
Mak 2304.8671 2347.9768 2397.0620 2376.4306 2475.2368 2480.7988 5 b
plafort 508.8132 509.3055 522.1436 509.3608 530.4311 552.8076 5 a
The code of @Pierre Lafortune is neat and not too slow, but I'd like to propose another approach which is substantially faster.
Tackling the problem from another perspective, finding the first n composite numbers in R can be translated to "get the first n+k integers and remove the primes". This is fast because generating the sequence 1:(n+k) takes almost no time and there are very sophisticated algorithms to find primes available, one implementation being numbers::Primes().
The sequence needs to end with n+k because within the first n integers there will be some (k1) primes that need to be replaced. Note that the range (n+1):(n+k1) might also contain k2 primes, which need to be replaced as well. And on, and on, and on, … This will require a recursive structure.
Pierre's answer basically does something similar: He iteratively checks if an integer is a composite number (non-prime) and continues until enough composites are found. However, this has one drawback: The algorithm to find (non-)primes is rather naive (as compared to other algorithms to find primes; no offense intended). On the other hand, that solution doesn't involve the recursive problem of possible primes in any range of integers mentioned above.
The recursive solution I'd like to suggest is the following:
library(numbers)
n_composite2 <- function(n, from = 2) {
endRange <- from + n - 1
numbers <- seq(from = from, to = endRange)
primes <- Primes(n1 = from, n2 = endRange)
composites <- numbers[!(numbers %in% primes)]
nPrimes <- length(primes)
if (nPrimes >= 1) return(c(composites, n_composite2(nPrimes, from = endRange + 1)))
return(composites)
}
This generates a sequence of integers (potential composites), then uses numbers::Primes() to find the primes in that range and removes them from the sequence. If some numbers have been removed, the function calls itself again, this time computing [number of primes in previous step] composites and starting the sequence from where the previous step stopped.
If there are doubts whether this actually works, here the check against Pierre's solution (n_composite()):
> all(n_composite(1e4) == n_composite2(1e4))
[1] TRUE
Comparing both functions, n_composite2() is approximately 19 times faster:
library(microbenchmark)
microbenchmark(
"n_composite2" = n_composite2(1e4),
"n_composite" = n_composite(1e4),
times=5
)
Unit: milliseconds
expr min lq mean median uq max neval
n_composite2 34.44039 34.51352 35.10659 34.71281 35.21145 36.65476 5
n_composite 642.34106 661.15725 666.02819 662.99657 671.52093 692.12512 5
As a final remark: There are many solutions "between" Pierre's approach and the solution presented here. One could use numbers::Primes() in a while loop, very similar to what's happening in n_composite(). One could also start with a "sufficiently long" sequence of integers, remove the primes and then take the first n remaining numbers. To be efficient, this approach requires a good approximation of the number of primes in a given range, which is also not trivial (for low numbers).
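As an illustration of that last idea, here is a rough sketch; the 1.3 * n / log(n + 1) overshoot is only a crude overestimate of the prime count (an assumption, not a proven bound), hence the stopifnot() safety check:
library(numbers)
n_composite3 <- function(n) {
  upper <- n + ceiling(1.3 * n / log(n + 1)) + 10    # overshoot to leave room for the primes
  candidates <- 2:upper
  composites <- candidates[!(candidates %in% Primes(2, upper))]
  stopifnot(length(composites) >= n)                 # fail loudly if the overshoot was too small
  composites[1:n]
}
all(n_composite3(1e4) == n_composite2(1e4))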
That is indeed a lazy way of asking a question, but nevertheless, this should do it:
is_composite<-function(x){
sapply(x,function(y) if(y<3){FALSE}else{any(y%%(2:(y-1))==0)})
}
which(is_composite(1:100))
find_N_composites<-function(N){
which(is_composite(1:(2*N+2)))[1:N]
}
find_N_composites(10)
system.time({
x<-find_N_composites(1e+04)
})
The idea is to check each number in turn for any divisors other than 1 and itself. The function I provided finds the first 10,000 composite numbers in about 2 seconds. If you want greater speed on large numbers, it would be better to optimize it, for example by looking for divisors only among prime numbers.
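A sketch of that last optimization, reusing numbers::Primes() from the answer above (dividing only by primes up to sqrt(y) rather than by every integer below y):
library(numbers)
is_composite_fast <- function(x) {
  ps <- Primes(2, max(3, floor(sqrt(max(x)))))   # primes up to the square root of the largest candidate
  sapply(x, function(y) {
    if (y < 4) return(FALSE)                     # 1, 2 and 3 are not composite
    any(y %% ps[ps <= sqrt(y)] == 0)
  })
}
head(which(is_composite_fast(1:100)), 10)        # 4 6 8 9 10 12 14 15 16 18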

Efficiently extract frequency of signal from FFT

I am using R and attempting to recover frequencies (really, just a number close to the actual frequency) from a large number of sound waves (1000s of audio files) by applying Fast Fourier transforms to each of them and identifying the frequency with the highest magnitude for each file. I'd like to be able to recover these peak frequencies as quickly as possible. The FFT method is one method that I've learned about recently and I think it should work for this task, but I am open to answers that do not rely on FFTs. I have tried a few ways of applying the FFT and getting the frequency of highest magnitude, and I have seen significant performance gains since my first method, but I'd like to speed up the execution time much more if possible.
Here is sample data:
s.rate<-44100 # sampling frequency
t <- 2 # seconds, for my situation, I've got 1000s of 1 - 5 minute files to go through
ind <- seq(s.rate*t)/s.rate # time indices for each step
# let's add two sin waves together to make the sound wave
f1 <- 600 # Hz: freq of sound wave 1
y <- 100*sin(2*pi*f1*ind) # sine wave 1
f2 <- 1500 # Hz: freq of sound wave 2
z <- 500*sin(2*pi*f2*ind+1) # sine wave 2
s <- y+z # the sound wave: my data isn't this nice, but I think this is an OK example
The first method I tried was using the fpeaks and spec functions from the seewave package, and it seems to work. However, it is prohibitively slow.
library(seewave)
fpeaks(spec(s, f=s.rate), nmax=1, plot=F) * 1000 # *1000 in order to recover freq in Hz
[1] 1494
# pretty close, quite slow
After doing a bit more reading, I tried this next approach:
spec(s, f=s.rate, plot=F)[which(spec(s, f=s.rate, plot=F)[,2]==max(spec(s, f=s.rate, plot=F)[,2])),1] * 1000 # again need to *1000 to get Hz
x
1494
# pretty close, definitely faster
After a bit more looking around, I found this approach to work reasonably well.
which(Mod(fft(s)) == max(abs(Mod(fft(s))))) * s.rate / length(s)
[1] 1500
# recovered the exact frequency, and quickly!
Here is some performance data:
library(microbenchmark)
microbenchmark(
WHICH.MOD = which(Mod(fft(s))==max(abs(Mod(fft(s))))) * s.rate / length(s),
SPEC.WHICH = spec(s,f=s.rate,plot=F)[which(spec(s,f=s.rate,plot=F)[,2] == max(spec(s,f=s.rate,plot=F)[,2])),1] * 1000, # this is spec from the seewave package
# to recover a number around 1500, you have to multiply by 1000
FPEAKS.SPEC = fpeaks(spec(s,f=s.rate),nmax=1,plot=F)[,1] * 1000, # fpeaks is from the seewave package... again, need to multiply by 1000
times=10)
Unit: milliseconds
expr min lq median uq max neval
WHICH.MOD 10.78 10.81 11.07 11.43 12.33 10
SPEC.WHICH 64.68 65.83 66.66 67.18 78.74 10
FPEAKS.SPEC 100297.52 100648.50 101056.05 101737.56 102927.06 10
Good solutions will be the ones that recover a frequency close (± 10 Hz) to the real frequency the fastest.
More Context
I've got many files (several GBs), each containing a tone that gets modulated several times a second, and sometimes the signal actually disappears altogether so that there is just silence. I want to identify the frequency of the unmodulated tone. I know they should all be somewhere less than 6000 Hz, but I don't know more precisely than that. If (big if) I understand correctly, I've got an OK approach here, it's just a matter of making it faster. Just fyi, I have no previous experience in digital signal processing, so I appreciate any tips and pointers related to the mathematics / methods in addition to advice on how better to approach this programmatically.
After coming to a better understanding of this task and some of the terminology involved, I came across some additional approaches, which I'll present here. These additional approaches allow for window functions and a lot more, really, whereas the fastest approach in my question does not. I also sped things up a bit by assigning the result of some of the functions to an object and indexing the object instead of running the function again:
#i.e.
(ms<-meanspec(s,f=s.rate,wl=1024,plot=F))[which.max(ms[,2]),1]*1000
# instead of
meanspec(s,f=s.rate,wl=1024,plot=F)[which.max(meanspec(s,f=s.rate,wl=1024,plot=F)[,2]),1]*1000
I have my favorite approach, but I welcome constructive warnings, feedback, and opinions.
microbenchmark(
WHICH.MOD = which((mfft<-Mod(fft(s)))[1:(length(s)/2)] == max(abs(mfft[1:(length(s)/2)]))) * s.rate / length(s),
MEANSPEC = (ms<-meanspec(s,f=s.rate,wl=1024,plot=F))[which.max(ms[,2]),1]*1000,
DFREQ.HIST = (h<-hist(dfreq(s,f=s.rate,wl=1024,plot=F)[,2],200,plot=F))$mids[which.max(h$density)]*1000,
DFREQ.DENS = (dens <- density(dfreq(s,f=s.rate,wl=1024,plot=F)[,2],na.rm=T))$x[which.max(dens$y)]*1000,
FPEAKS.MSPEC = fpeaks(meanspec(s,f=s.rate,wl=1024,plot=F),nmax=1,plot=F)[,1]*1000 ,
times=100)
Unit: milliseconds
expr min lq median uq max neval
WHICH.MOD 8.119499 8.394254 8.513992 8.631377 10.81916 100
MEANSPEC 7.748739 7.985650 8.069466 8.211654 10.03744 100
DFREQ.HIST 9.720990 10.186257 10.299152 10.492016 12.07640 100
DFREQ.DENS 10.086190 10.413116 10.555305 10.721014 12.48137 100
FPEAKS.MSPEC 33.848135 35.441716 36.302971 37.089605 76.45978 100
DFREQ.DENS returns a frequency value farthest from the real value. The other approaches return values close to the real value.
With one of my audio files (i.e. real data) the performance results are a bit different (see below). One potentially relevant difference between the data being used above and the real data used for the performance data below is that above the data is just a vector of numerics and my real data is stored in a Wave object, an S4 object from the tuneR package.
library(Rmpfr) # to avoid an integer overflow problem in `WHICH.MOD`
microbenchmark(
WHICH.MOD = which((mfft<-Mod(fft(d@left)))[1:(length(d@left)/2)] == max(abs(mfft[1:(length(d@left)/2)]))) * mpfr(s.rate,100) / length(d@left),
MEANSPEC = (ms<-meanspec(d,f=s.rate,wl=1024,plot=F))[which.max(ms[,2]),1]*1000,
DFREQ.HIST = (h<-hist(dfreq(d,f=s.rate,wl=1024,plot=F)[,2],200,plot=F))$mids[which.max(h$density)]*1000,
DFREQ.DENS = (dens <- density(dfreq(d,f=s.rate,wl=1024,plot=F)[,2],na.rm=T))$x[which.max(dens$y)]*1000,
FPEAKS.MSPEC = fpeaks(meanspec(d,f=s.rate,wl=1024,plot=F),nmax=1,plot=F)[,1]*1000 ,
times=25)
Unit: seconds
expr min lq median uq max neval
WHICH.MOD 3.249395 3.320995 3.361160 3.421977 3.768885 25
MEANSPEC 1.180119 1.234359 1.263213 1.286397 1.315912 25
DFREQ.HIST 1.468117 1.519957 1.534353 1.563132 1.726012 25
DFREQ.DENS 1.432193 1.489323 1.514968 1.553121 1.713296 25
FPEAKS.MSPEC 1.207205 1.260006 1.277846 1.308961 1.390722 25
WHICH.MOD actually has to run twice to account for the left and right audio channels (i.e. my data is stereo), so it takes longer than the output indicates. Note: I needed to use the Rmpfr library in order for the WHICH.MOD approach to work with my real data, as I was having problems with integer overflow.
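For reference, the two-channel run could be wrapped up roughly like this (a sketch; it assumes d is a stereo tuneR Wave object and s.rate its sampling rate, with an as.numeric() as a cheap guard against the integer overflow mentioned above):
peak_freq <- function(ch, s.rate) {
  mfft <- Mod(fft(ch))[1:(length(ch) %/% 2)]          # keep only frequencies below Nyquist
  as.numeric(which.max(mfft)) * s.rate / length(ch)   # convert the peak bin to Hz
}
c(left = peak_freq(d@left, s.rate), right = peak_freq(d@right, s.rate))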
Interestingly, FPEAKS.MSPEC performed really well with my data, and it seems to return a pretty accurate frequency (based on my visual inspection of a spectrogram). DFREQ.HIST and DFREQ.DENS are quick, but the output frequency isn't as close to what I judge is the real value, and both are relatively ugly solutions. My favorite solution so far, MEANSPEC, uses meanspec() and which.max(). I'll mark this as the answer since I haven't had any other answers, but feel free to provide another answer; I'll vote for it and maybe select it as the answer if it provides a better solution.
