I have posted this question in the R tag but I am open to solutions in other languages.
Let's say you have some waveforms. The first is just a bar: it is completely horizontal, so it has no deviation. The other waveforms look like these:
Now I am able to get these waveforms separated into uniform boxes so that they are all the same pixel size and resolution. My first idea was to quantify how much of the whitespace within one of these uniform boxes the waveform used up, using the code found here:
Measuring whitespace in a jpeg
Now however I want to measure the deviation between waveforms. That is, how could I quantify how "jumpy" a waveform is? Looking at the picture above, the second waveform seems the most homogeneous, and the third waveform seems to display the most variation, but I am unsure about how to quantify this. Any suggestions would be greatly appreciated.
I would recommend starting by familiarizing yourself with the packages tuneR and seewave; you can import sounds and extract a lot of parameters with these two packages. In particular, you could use the function acoustat from seewave. This is a worked example with data from the package:
library(seewave)

data(tico)  # example recording shipped with seewave
note <- cutw(tico, from = 0.5, to = 0.9, output = "Wave")  # cut out one note
a <- acoustat(note)
a will give you 10 acoustic parameters from the sound. You could also use other packages like soundecology, which extracts some other variables; in particular, its function acoustic_diversity measures sound complexity.
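For instance, a minimal sketch with soundecology, reusing the note object from above (the max_freq value is just an illustrative choice, not a recommendation):

library(soundecology)
# Acoustic Diversity Index of the cut; returns one value per channel
adi <- acoustic_diversity(note, max_freq = 10000)
adi$adi_left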
Could you please help me add a zooming option for a wordcloud?
Please find a reproducible example here:
http://shiny.rstudio.com/gallery/word-cloud.html
I tried to incorporate rbokeh and plotly but couldn't find a wordcloud-equivalent render function.
Additionally, I found ECharts on GitHub:
https://github.com/XD-DENG/ECharts2Shiny/tree/8ac690a8039abc2334ec06f394ba97498b518e81
But incorporating ECharts is also not convenient for real zooming.
Thanks in advance,
Abi
Normalization is required only if the predictors are not comparable on their original scales. There's no rule that says you must normalize.
PCA is a statistical method that gives you a new linear transformation of the data. By itself, it loses nothing: all it does is express the data in terms of new principal components.
You lose information only if you choose a subset of those principal components.
Usually PCA includes centering the data as a preprocessing step.
PCA only re-expresses the data in its own axis system, given by the eigenvectors of the covariance matrix.
If you use all the axes, you lose no information.
Yet usually we want to apply dimensionality reduction: intuitively, keeping fewer coordinates for the data.
This process means projecting the data onto a subspace which is spanned by only some of the eigenvectors of the data.
If one chooses the number of vectors wisely, one might end up with a significant reduction in the number of dimensions of the data with negligible loss of information.
The way to do so is to choose the eigenvectors whose eigenvalues sum to most of the data's power (variance).
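As a minimal sketch of that selection rule (using the built-in USArrests data purely for illustration):

pca <- prcomp(USArrests, center = TRUE, scale. = TRUE)
# cumulative share of total variance carried by the first k components
var_explained <- cumsum(pca$sdev^2) / sum(pca$sdev^2)
k <- which(var_explained >= 0.95)[1]  # smallest k keeping 95% of the variance
scores <- pca$x[, 1:k, drop = FALSE]  # the reduced-dimension data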
PCA itself is invertible, so lossless.
But:
It is common to drop some components, which will cause a loss of information.
Numerical issues may cause a loss in precision.
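A quick sketch illustrating the lossless part (again with USArrests as stand-in data): keeping all components, the transformation inverts exactly, up to floating-point error.

pca <- prcomp(USArrests, center = TRUE, scale. = TRUE)
# invert the transformation: scores times transposed rotation, then undo scaling and centering
recon <- pca$x %*% t(pca$rotation)
recon <- sweep(recon, 2, pca$scale, `*`)
recon <- sweep(recon, 2, pca$center, `+`)
max(abs(recon - as.matrix(USArrests)))  # ~1e-15: only floating-point noise is lost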
I am trying to use a regression model to establish a relationship between two parameters, A and B (more specifically, runtime and workload, so that I can recommend what an optimal workload could be, or say how strongly one affects the other, etc.). I am using rlm (robust linear model) for this purpose, since it saves me the trouble of dealing with outliers beforehand.
However, rather than output one single regression model, I would like to determine a band that can confidently explain most of the points. Here is an image I took from the web; those additional red lines are what I want to determine.
This is what I had in mind:
1. Find the mean of the residuals of all the points lying above the line, then shift the original regression line upwards by some multiple of mean + k*sigma. The same can be done for the points below the line (a rough sketch of this idea appears below).
2. In SVM, in order to find the support vectors, we draw parallel lines (essentially shifting the middle line until we find support vectors on either side). I had something like that in mind: play around with the intercepts a little and count the number of points that can be explained by the band, with a threshold so you can stop somewhere.
The problem is, I am unable to implement this in R. For that matter, I am not sure these approaches even work. I would like to know what you would suggest. Also, is there a classic way to do this using one of the many R packages?
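For illustration, a rough sketch of the first idea with rlm from the MASS package — df, A and B are placeholder names for the data, and the 5%/95% residual quantiles stand in for the mean + k*sigma shift:

library(MASS)  # provides rlm()

fit <- rlm(B ~ A, data = df)   # robust fit, as described above
res <- resid(fit)

# shift the fitted line so that ~90% of the points fall between the red lines
lo <- quantile(res, 0.05)
hi <- quantile(res, 0.95)

plot(df$A, df$B)
abline(fit)
abline(coef(fit)[1] + lo, coef(fit)[2], col = "red")  # lower band
abline(coef(fit)[1] + hi, coef(fit)[2], col = "red")  # upper band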
Thanks a lot for helping. Appreciate it.
Do you have an idea how to approach the problem of finding artefacts/outliers in a blood pressure curve? My goal is to write a program that finds the start and end of each artefact. Here are some examples of different artefacts; the green area is the correct blood pressure curve and the red one is the artefact that needs to be detected:
And this is an example of a whole blood pressure curve:
My first idea was to compare the mean of the whole curve with means computed over short intervals of the curve and find out where they differ. But the blood pressure varies so much that I don't think this can work: it would find too many nonexistent "artefacts".
Thanks for your input!
EDIT: Here is some data for two example artefacts:
Artefact1
Artefact2
Without any data, there is just the option of pointing you towards different methods.
First (without knowing your data, which is always a huge drawback), I would point you towards Markov switching models, which can be analysed using the HiddenMarkov package or the HMM package. (Unfortunately the RHmm package that the first link describes is no longer maintained.)
You might find it worthwhile to look into Twitter's outlier detection.
Furthermore, there are many blog posts that look into change point detection or regime changes. I find this R-bloggers post very helpful for a start. It refers to the cpm package ("Sequential and Batch Change Detection Using Parametric and Nonparametric Methods"), the bcp package ("Bayesian Analysis of Change Point Problems"), and the ecp package ("Non-Parametric Multiple Change-Point Analysis of Multivariate Data"). You probably want to look into the first two, as you don't have multivariate data.
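As a minimal sketch of the bcp approach, assuming bp is a numeric vector holding your blood pressure samples (the 0.8 cut-off is an arbitrary starting point):

library(bcp)
fit <- bcp(bp)                   # Bayesian change point analysis
plot(fit)                        # posterior means and change probabilities
which(fit$posterior.prob > 0.8)  # candidate artefact boundaries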
Does that help you get started?
I can offer a graphical answer that does not use any statistical algorithm. From your data I observe that the "abnormal" sequences seem to present constant portions or, inversely, very high variations. Working on the derivative and setting limits on it could work. Here is a workaround:
library(forecast)  # provides ma(), the moving-average smoother

test <- df2$BP                       # raw blood pressure values
test <- ma(test, order = 50)         # smooth away the micro-variations
test <- test[complete.cases(test)]   # drop the NAs the moving average adds at the ends

# flag points where the smoothed derivative is large, then smooth the flags
# so the abnormal zones form contiguous blocks without tiny gaps
flagged <- ma(0 + (abs(diff(test)) > 1), order = 10) > 0.1
flagged <- c(FALSE, flagged)         # realign: diff() shortens the vector by one
flagged[is.na(flagged)] <- FALSE     # treat the smoothing edges as normal
abnormal <- test
abnormal[!flagged] <- NA
plot(x = 1:NROW(test), y = test, type = 'l')
lines(x = 1:NROW(test), y = abnormal, col = 'red')
What it does: it first smooths the data with a moving average so that micro-variations are not detected. Then it applies the diff function (derivative) and tests whether it is greater than 1 (this value is to be adjusted manually depending on the smoothing amplitude). Then, in order to get a whole block of abnormal sequence without tiny gaps, we apply a smoothing to the boolean flags again and test whether the result is above 0.1, to better grasp the boundaries of the zone. Eventually, I overplot the spotted portions in red.
This works for one type of abnormality. For the other type, you could instead set a low threshold on the derivative and play with the tuning parameters of the smoothing, as sketched below.
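A sketch of that second case, reusing test from the code above (the 0.01 limit and the 0.9 cut-off are tuning parameters to adjust by eye):

# constant portions show up as a near-zero derivative
flat <- ma(0 + (abs(diff(test)) < 0.01), order = 10) > 0.9
flat <- c(FALSE, flat)                 # realign after diff()
flat[is.na(flat)] <- FALSE
constant <- test
constant[!flat] <- NA
lines(x = 1:NROW(test), y = constant, col = 'blue')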
I am trying to assess how many audio drop-outs there are in a given sound file of an ecological soundscape.
Format: Wave
Samplingrate (Hertz): 192000
Channels (Mono/Stereo): Stereo
PCM (integer format): TRUE
Bit (8/16/24/32/64): 16
My project had a two-element hydrophone. The elements were different brands/models, and we are trying to determine which element performed better in our specific experiment. One analysis we would like to conduct is measuring how often each element had drop-outs, or loss of signal. These drop-outs are not amplitude related; in other words, they are not caused by maxing out the amplitude. The element or the associated electronics just failed.
I've been trying to do this in R, as that is the program I am most familiar with. I have very limited experience with Matlab and regex, but am open to trying those programs/languages. I'm a biologist, so please excuse any ignorance.
In R I've been playing around with the package seewave; so far I've only managed to produce some very pretty spectrograms (which, to be fair, is the only context in which I've previously used that package). I attempted to use the envelope and automatic temporal measurements function within seewave (timer). I got some interesting, but opposite, results.
library(tuneR)
library(seewave)
foo <- readWave("Documents/DASBR/DASBR2_20131119$032011.wav", from = 53, to = 60, units = "seconds")
timer(foo, f = 96000, threshold = 6.5, msmooth = c(30, 5), colval = "blue")
I've altered the values of msmooth and threshold countless times, but that's just fine tinkering. What this function does is measure the duration between amplitude peaks above the given threshold. What I need it to do is either (a) find samples in the signal without amplitude or (b) measure the duration between areas without amplitude; I can work with either of those outputs. Basically, I want to reverse the direction in which the threshold is measured: any sample below the threshold should trigger a measurement, rather than any sample above it.
I'm still playing with seewave to see how to produce the data I need, but I'm looking for a bit of guidance. Perhaps there is a function in seewave that will accomplish what I'm trying to do more efficiently. Or, if there is any way to output the numerical data generated by timer, I could use the findValleys function from the quantmod package to get a list of all the data gaps.
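One way to sketch the "inverted threshold" idea is to work on the amplitude envelope directly, using env from seewave (the 6.5 value mirrors the timer call above and is expressed here as % of the envelope maximum, which is an assumption to verify against your data):

e <- env(foo, msmooth = c(30, 5), plot = FALSE)  # smoothed amplitude envelope
e <- 100 * e / max(e)                            # rescale to % of maximum
silent <- as.vector(e) < 6.5                     # TRUE where signal is (nearly) absent
runs <- rle(silent)
runs$lengths[runs$values]                        # lengths of the below-threshold gaps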
So yeah, guidance is what I'm requesting, oh data crunching gods.
Cheers.
This problem sounds reminiscent of power-transfer problems often seen in electrical engineering. One way to solve it is to take the RMS (the root of the mean of the square) of the samples in the signal over time, averaged over short durations (perhaps a few seconds or even shorter). The durations where you see low RMS are where the dropouts are. It's analogous to the VU meters that you sometimes see on audio amplifiers, which indicate the power being transferred from the amplifier to the speakers.
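A bare-bones sketch of that idea, assuming x is a numeric vector of samples and using the 192 kHz rate from the question (the window length and the low-power cut-off are arbitrary starting points):

window_rms <- function(x, fs, win_sec = 0.5) {
  n <- floor(win_sec * fs)                       # samples per window
  starts <- seq(1, length(x) - n + 1, by = n)    # non-overlapping windows
  sapply(starts, function(i) sqrt(mean(x[i:(i + n - 1)]^2)))
}

r <- window_rms(x, 192000)
which(r < 0.05 * median(r))  # windows with unusually low power = candidate dropouts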
I just wanted to summarize what I ended up doing so other people will be aware. Unfortunately, the RMS measurement is not what I was looking for. Though RMS could technically give a basic idea of where drop-outs occur, I'm working with ecological recordings, and there are too many other factors at play.
Background: The sound streams I am working with are from a two-element hydrophone, separated vertically by 2 meters and recording at 100 m below sea level. We are finding that the element sitting at ~100 meters is experiencing heavy drop-outs, while the element at ~102 meters is mostly fine. We are currently attributing this to a to-be-identified electrical issue. If both elements were poised to receive audio exactly the same way, RMS would work for detecting drop-outs, but because sound is received independently, the RMS calculation is too heavily impacted by other factors. Two meters can make a larger difference than you'd think when it comes to source levels and signal reception; it's enough for us to localize vocalizing animals (with left/right ambiguity) based on the delay between signal arrivals.
All the same, here's what I did:
library(seewave)
library(tuneR)

foo <- readWave("Sound_file_Path")
L <- foo@left    # left-channel samples (Wave objects use S4 slots, hence "@")
R <- foo@right   # right-channel samples
rms(L)           # root mean square of each channel
rms(R)
I then looped this process through a directory, which I detail here: for.loop with WAV files
So far, this issue is still unresolved, but thank you for the discussion!
~etg
I have a problem with clustering time series in R.
I googled a lot and found nothing that fits my problem.
I have made an STL decomposition of the time series.
The trend component is in a matrix with 64 columns, one for every series.
Now I want to cluster these series into similar groups, taking into account both the curve shapes and the time shift. I found some functions that cover one of these aspects, but not both.
First I tried to calculate a distance matrix with the DTW distance; the resulting clusters were based on the values and accounted for the time shift, but not for the shape of the time series. After this I tried some correlation-based clustering, but then the time shift was not recognized and the result did not satisfy my requirements.
Is there a function that covers my problem, or do I have to build something of my own? I am thankful for every kind of help; after two days of tutorials and examples I am totally uninspired. I hope I have explained the problem well enough.
I attached a picture with some example time series.
There you can see the problem: the two series in the middle are put into one cluster, although the upper series and the one at the bottom have the same shape as one of the middle ones.
Have you tried the R package dtwclust?
https://cran.r-project.org/web/packages/dtwclust/index.html
(I'm just starting to explore this package, but it seems like it covers a lot of aspects of time series clustering and it has lots of good references.)
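A minimal sketch, assuming trend is the 64-column matrix from the question (tsclust expects one series per row, hence the transpose; k = 4 is an arbitrary starting point):

library(dtwclust)
clust <- tsclust(t(trend), type = "partitional", k = 4L,
                 distance = "dtw_basic", centroid = "pam")
plot(clust)      # the clusters with their member series
clust@cluster    # cluster assignment per series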
You can use the kml package. It is designed specifically for longitudinal data. You can consult its help; it contains the following example:
library(kml)

### Generation of some data
cld1 <- generateArtificialLongData(25)

### We suspect 3, 4 or 6 clusters; we want 3 redrawings.
### We want to "see" what happens (so toPlot is 'both').
kml(cld1, c(3, 4, 6), 3, toPlot = 'both')

### 4 seems to be the best. We want more redrawings, but we don't want to
### watch again; we want the result as fast as possible.
kml(cld1, 4, 10)
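After the runs, the partition itself can be pulled out (a sketch; getClusters returns the cluster assigned to each trajectory for the chosen number of clusters):

getClusters(cld1, 4)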
Example cluster