I have a standard periodogram produced by the spectrum function in the R "stats" package. It puts spectral density on the Y axis, but I want to inspect the amplitude of the key frequency signals.
How do I convert the spectral density to an amplitude? Is there a periodogram plot/analysis in R that produces a frequency vs amplitude plot automatically? I appreciate any advice.
Maybe you use different terminology than I do. The help page says that the value returned from the spectrum function is a list whose first two elements are:
freq
vector of frequencies at which the spectral density is estimated. (Possibly approximate Fourier frequencies.) The units are the reciprocal of cycles per unit time (and not per observation spacing): see ‘Details’.
spec
Vector (for univariate series) or matrix (for multivariate series) of
estimates of the spectral density at frequencies corresponding to freq.
So is the $spec element what you are calling the vector of "amplitudes"? (You haven't said what the "key frequency signals" are, so I just picked the fourth frequency from the example in ?spectrum.)
lh.spec <- spectrum(lh)
lh.spec$freq[4]
#[1] 0.08333333
lh.spec$spec[4]
#[1] 1.167519
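If what you are really after is a frequency vs amplitude plot, one option (not something spectrum itself provides, as far as I know) is to work from the raw FFT instead. A minimal sketch, assuming a series sampled at 1 observation per unit time and reusing lh only because it is the example series above:

y <- as.numeric(lh)           # example series from ?spectrum
N <- length(y)
ft <- fft(y)
amp <- 2 * Mod(ft) / N        # peak amplitude attributable to each frequency bin
amp[1] <- Mod(ft[1]) / N      # the DC (mean) term is not doubled
freq <- (seq_len(N) - 1) / N  # cycles per unit time
plot(freq[1:(N %/% 2)], amp[1:(N %/% 2)], type = "h",
     xlab = "frequency", ylab = "amplitude")

The 2/N scaling recovers the coefficient of a pure sinusoid exactly only when the record contains a whole number of its cycles; otherwise spectral leakage spreads some of the amplitude into neighbouring bins. Roughly speaking, spectrum()'s $spec is a smoothed density estimate on a squared-amplitude (variance per unit frequency) scale rather than an amplitude scale.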
I am attempting to reproduce the above function in R. The numerator is the product of the probability density function (pdf) of y at time t. The omega_t is simply the weight (which, for now, let's ignore). The index i stands for each forecast of y (along with its density) derived from model_i at time t.
The denominator is the integral of the above product. My question is: how do I estimate the densities? To get the density of the variable one needs some data points. So far I have this:
y<-c(-0.00604,-0.00180,0.00292,-0.0148)
forecasty_model1<-c(-0.0183,0.00685) # forecasts for time t=1 and t=2, respectively
forecasty_model2<-c(-0.0163,0.00931) # similarly
all.y.1<-c(y,forecasty_model1) #together in one vector
all.y.2<-c(y,forecasty_model2) #same
However, I am not sure how to extract the density value of all.y.1 at a given point (e.g. for time t=1, or t=6) in order to do the products. I have tried to find the estimated density using this:
dy1<-density(all.y.1)
which(dy1$x==0.00685)
# integer(0)   # length(dy1$x) is 512
According to the documentation, dy1$x contains the n coordinates of the points where the density is estimated. Shouldn't n be 6, or shouldn't dy1$x at least contain the points of y that I supplied? What is the correct way to extract the density (pdf) of y?
There is an n argument in density which defaults to 512. density returns estimated density values on a relatively dense, evenly spaced grid so that you can plot the density curve. The grid points are determined by the range of your data (plus some extension) and the n value; your original data points will generally not lie exactly on this grid.
You can use linear interpolation to get a density value anywhere covered by this grid:
Find the probability density of a new data point using "density" function in R
Exact kernel density value for any point in R
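For the data in the question, a minimal sketch of that interpolation (the points 0.00685 and -0.00604 are just example locations):

dy1 <- density(all.y.1)

# Linear interpolation of the (x, y) density grid at arbitrary points
approx(dy1$x, dy1$y, xout = c(0.00685, -0.00604))$y

# Or build an interpolating function once and reuse it
f1 <- approxfun(dy1$x, dy1$y)
f1(0.00685)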
First off, I'm not entirely sure if this is the correct place to be posting this, as perhaps it should go in a more statistics-focussed forum. However, as I'm planning to implement this with R, I figured it would be best to post it here. My apologies if I'm wrong.
So, what I'm trying to do is the following. I want to simulate data for a total of 250,000 observations, assigning each a continuous (non-integer) value in line with a kernel density estimate derived from discrete empirical data, with original values ranging from -5 to +5. Here's a plot of the distribution I want to use.
It's essential to me that I don't simulate the new data based on the discrete probabilities, but rather on the continuous ones, as it's really important that a value can be, say, 2.89 rather than 3 or 2. So new values would be assigned based on the probabilities depicted in the plot: the most frequent value in the simulated data would be somewhere around +2, whereas values around -4 and +5 would be rather rare.
I have done quite a bit of reading on simulating data in R and about how kernel density estimates work, but I'm really not moving forward at all. So my question basically entails two steps: (1) how do I even simulate the data, and (2) how do I simulate it using this particular probability distribution?
Thanks in advance, I hope you guys can help me out with this.
With your underlying discrete data, create a kernel density estimate on as fine a grid as you wish (i.e., as "close to continuous" as needed for your application, within the limits of machine precision and computing time, of course). Then sample from that kernel density, using the density values to ensure that more probable values of your distribution are more likely to be sampled. For example:
Fake data, just to have something to work with in this example:
set.seed(4396)
dat = round(rnorm(1000,100,10))
Create kernel density estimate. Increase n if you want the density estimated on a finer grid of points:
dens = density(dat, n=2^14)
In this case, the density is estimated on a grid of 2^14 points, with distance mean(diff(dens$x))=0.0045 between each point.
Now, sample from the kernel density estimate: We sample the x-values of the density estimate, and set prob equal to the y-values (densities) of the density estimate, so that more probable x-values will be more likely to be sampled:
kern.samp = sample(dens$x, 250000, replace=TRUE, prob=dens$y)
Compare dens (the density estimate of our original data) (black line), with the density of kern.samp (red):
plot(dens, lwd=2)
lines(density(kern.samp), col="red",lwd=2)
With the method above, you can create a finer and finer grid for the density estimate, but you'll still be limited to density values at the grid points used for the estimate (i.e., the values of dens$x). However, if you really need to be able to get the density for any data value, you can create an approximation function. In that case, you would still create the density estimate (at whatever bandwidth and grid size are necessary to capture the structure of the data) and then create a function that interpolates the density between the grid points. For example:
dens = density(dat, n=2^14)
dens.func = approxfun(dens)
x = c(72.4588, 86.94, 101.1058301)
dens.func(x)
[1] 0.001689885 0.017292405 0.040875436
You can use this to obtain the density at any x value (rather than just at the grid points used by the density function), and then use the output of dens.func as the prob argument to sample:
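A minimal sketch of that last step (the size of the candidate grid is an arbitrary choice, not something required by dens.func):

# Candidate x values on an arbitrarily fine grid covering the data range
x.cand <- seq(min(dat), max(dat), length.out = 1e5)

# Weight each candidate by its interpolated density and sample
samp <- sample(x.cand, 250000, replace = TRUE, prob = dens.func(x.cand))

# Sanity check against the original kernel density estimate
plot(dens, lwd = 2)
lines(density(samp), col = "blue", lwd = 2)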
How do you get N pairs of values that represent the joint probability (2D density) of a much larger set of pairs?
I do MCMC sampling on the parameters of a function, and I want to visualize the posterior density of that function by plotting, say, 20 semi-transparent lines that visualize the density. I know that I can just take a sufficiently large sample to approximate the density (like this), but that could clutter the graph. Rather, I'd imagine something like percentiles would work, e.g. one pair for each 2% step in percentile should accurately portray the density using 50 pairs.
Specifically, I'm sampling from a Bayesian model with a two-parameter geometric series. The density is not necessarily unimodal (there can be several peaks). This is a bit complicated, but just to get started, a multivariate normal is a more minimal example to work with:
library(MASS)
pairs = mvrnorm(10000, c(1,3), rbind(c(0.2, 0.1), c(0.1, 0.2)))
# Easy to just sample the rows to get an approximate representation:
pairs[sample(nrow(pairs), size=50, replace=FALSE), ]
# But how to get more certainty that the samples are really representative?
# This would probably start with the density
dens = kde2d(pairs[,1], pairs[,2], n=100)
Here's an image(dens) visualization of the result. Here, dens$z is a 100 x 100 matrix containing a density estimate for each combination of dens$x and dens$y, i.e. for the (binned) pairs in pairs.
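One possible direction, sketched purely as an illustration (the nearest-grid-cell lookup and the evenly spaced density percentiles are assumptions on my part, not an established recipe): look up the estimated density for each sampled pair from the kde2d grid, then keep pairs at evenly spaced percentiles of that density.

# Approximate density of each pair: index of the nearest (left) grid cell
ix <- findInterval(pairs[, 1], dens$x, all.inside = TRUE)
iy <- findInterval(pairs[, 2], dens$y, all.inside = TRUE)
d  <- dens$z[cbind(ix, iy)]

# Keep one pair at roughly every 2nd percentile of that density,
# i.e. 50 pairs spread evenly from low- to high-density regions
ord  <- order(d)
keep <- ord[round(seq(1, length(ord), length.out = 50))]
reps <- pairs[keep, ]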
I am trying to plot the CDF of a uniform distribution in Octave, but I am not getting the CDF; I am simply getting the original distribution. Also, the original distribution, which is meant to be uniform, does not look uniform at all!
Here is my Octave code:
x = unifrnd(0,1,100,1);
hist(x)
cdfPlot = unifcdf(x)
hist(cdfPlot)
The histogram for the first one (hist(x)):
and the second one (hist(cdfPlot)):
I also tried to use cdfplot(x) in Octave, but it said:
warning: the 'cdfplot' function belongs to the statistics package from
Octave Forge but has not yet been implemented.
Please read http://www.octave.org/missing.html to learn how you can
contribute missing functionality.
please help!
Judging by the submitted code, what you are trying to do is obtain a sample from a uniform distribution and then show both a (mostly) flat histogram corresponding to that uniform distribution and a line corresponding to its cumulative distribution.
For the first part:
Of course, with 100 samples (and no averaging), you are not going to observe a flat distribution, but if you try:
x=unifrnd(0,1,100000,1);
hist(x);
Then you are more likely to get a flat-looking histogram.
For the second part:
unifcdf(x,A,B) will return the value of a uniform distribution's CDF at some value x, on the interval set by the parameters A and B. That is, it returns the value of the CDF model itself, NOT the cumulative sum of the sample's histogram. To obtain the latter, you need to:
x=unifrnd(0,1,100000,1);
[counts, intervals] = hist(x);
xCDF = cumsum(counts);
bar(xCDF);
Finally, if you are looking for the model values, that is, the values that would be returned by a formula describing the distribution, then for the uniform distribution on your A,B interval (in this case 0,1) each histogram bin carries a probability of 1/nBins and therefore an expected count of (1/nBins)*NSamples, while the cumulative counts rise linearly, reaching binNum*((1/nBins)*NSamples) at bin binNum. In the example above, using hist's default of nBins = 10, x is decomposed into 10 intervals, each containing approximately 10000 of the items of x, and the last value of the cumulative sum is 100000, which is of course the total number of samples in x.
For more information please see this link.
Hope this helps.
So I am trying to find three things about a given function in the time domain once it is transformed into the spectral domain:
The amplitude
The frequency
The phase
In R (statistical software) I have coded the following function:
y=7*cos(2*pi*(seq(-50,50,by=.01)*(1/9))+32)
fty=fft(y,inverse=F)
angle=atan2(Im(fty), Re(fty))
x=which(abs(fty)[1:(length(fty)/2)]==max(abs(fty)[1:(length(fty)/2)]))
par(mfcol=c(2,1))
plot(seq(-50,50,by=.01),y,type="l",ylab = "Cosine Function")
plot(abs(fty),xlim=c(x-30,x+30),type="l",ylab="Spectral Density in hz")
I know I can compute the frequency manually by taking the bin value and dividing it by the size of the interval (the total time of the domain). Since the bins start at 1 when they should start at zero, it would thus be frequency = (BinValue - 1)/MaxTime, which does get me the 1/9 I have in the function above (see the sketch below).
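A minimal sketch of that manual computation (MaxTime = 100 here because the series runs from -50 to 50):

# Peak bin of the magnitude spectrum (first half only)
bin <- which.max(abs(fty)[1:(length(fty) / 2)])

# Manual frequency estimate: (BinValue - 1) / MaxTime
freq.est <- (bin - 1) / 100
freq.est  # roughly 1/9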
I have two quick questions:
First) I am having trouble computing the phase. Is there a prebuilt R function that can give me the phase? From a manual calculation, the spectrum peaks at bin 12 (see bottom graph), so shouldn't the phase be 2*pi + angle[12]? But I am getting a value of
angle[12]
#[1] -2.558724
which puts the phase at 2*pi + angle[12] = 3.724462. But that's wrong; the phase should be 32 radians. What am I doing wrong?
Second) Is there a function that can automatically convert abs(fty)[12] = 34351.41 to the amplitude I have in front of the cosine, which is 7?
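A rough check, based on the approximation that for a pure cosine of amplitude A over N samples the peak of abs(fft(y)) is about A*N/2 (a little less when spectral leakage is present, i.e. when the window does not hold a whole number of cycles):

# Approximate amplitude recovered from the FFT peak: 2 * |X[k]| / N
N <- length(y)
amp.est <- 2 * abs(fty)[12] / N
amp.est  # close to 7, slightly low because of spectral leakage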