Google Analytics Acquisition Reporting Discrepancies - google-analytics

We're seeing different numbers when comparing last year's data. Manually setting last year's acquisition date range shows different numbers than using the "compare to" tool. This example is minor, but it's causing major issues when comparing over a longer period.

I see a yellow shield at the top of the third picture. Try clicking on it to see whether you have exceeded the sampling threshold (in which case the calculation is performed on a subset of sessions rather than on 100% of them, and it is no longer exact).
Sampling is widely used in statistical analysis because analyzing a subset of data gives similar results to an analysis of the complete data set, but can produce these results with a smaller computational burden and reduced processing time.
https://support.google.com/analytics/answer/2637192

Related

Results different after latest JAGS update?

I am running Bayesian Hierarchical Modeling in R using R2jags. When I open code I used a month ago and run it on a dataset I used a month ago (verified by "date modified" in windows explorer), I get different results than I got a month ago. The only difference I can think of is I got a new work computer in the last month and we installed JAGS 4.3.0. I was previously using 4.2.0.
Is it remotely possible to get different results just from updating my version of JAGS? I'm not posting code or results here because I don't need help troubleshooting it - everything is exactly the same.
Edit:
Convergence seems fine - Geweke diagnostics, autocorrelation plots, and trace plots look good. That hasn't changed.
I have a seed set both via set.seed() and jags.seed=. Is that enough? I've never had a problem replicating these types of results before.
As far as how different the results are, they are large enough to cause a meaningful difference in the inference. I am assessing relationships between 30 chemical exposures and a health outcome among 336 humans. Here are two examples. Chemical B troubles me the most because of the credible interval shift. Chemical A is another example.
I also doubled the number of iterations from 50k to 100k which resulted in very minor/inconsequential differences.
Edit 2:
I posted at source forge asking about the different default RNGs for versions: https://sourceforge.net/p/mcmc-jags/discussion/610037/thread/52bfef7d17/
There are at least 3 possible reasons for you seeing a difference between results from these models:
One or both of your attempts to fit this model did not converge, and/or your effective sample size is so small that random sampling error is having a large impact on your inference. If you have already checked to ensure convergence and sufficient effective sample size (for both models) then you can rule this out.
You are seeing small differences in the posteriors due to the random sampling inherent to MCMC in otherwise converged results. If these differences are big enough to cause a meaningful difference in inference then your effective sample size is not high enough - so just run the models for longer and the difference should reduce. You can also set the random seed in JAGS using initial values for .RNG.seed and .RNG.name so that successive model runs are numerically identical (see the sketch after this list). If you run the models for longer and this difference does not reduce (or if it is a large difference to begin with) then you can rule this out.
Your model contains a node for which the default sampling scheme changed between JAGS 4.2.0 and 4.3.0 - there were some changes to sampling schemes (and the order of precedence for assigning samplers to nodes) that could conceivably have changed your results (from memory I think this affected GLM particularly, but I can't remember exactly). However, although this may affect the probability of convergence, it should not substantially affect the posterior if the model does converge. It may be contributing to a numerical difference as explained for point (2) though.
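As an aside to point (2), here is a minimal sketch of fixing the RNG via initial values in plain rjags, so that repeated runs are numerically identical. The model and data below are made up purely for illustration:

library(rjags)

model_string <- "model {
  for (i in 1:N) { y[i] ~ dnorm(mu, tau) }
  mu ~ dnorm(0, 0.001)
  tau ~ dgamma(0.001, 0.001)
}"

set.seed(1)
dat <- list(y = rnorm(50, 5, 2), N = 50)

# One inits list per chain: fixing .RNG.name and .RNG.seed makes runs repeatable
inits <- list(
  list(.RNG.name = "base::Mersenne-Twister", .RNG.seed = 1),
  list(.RNG.name = "base::Wichmann-Hill",    .RNG.seed = 2)
)

jm <- jags.model(textConnection(model_string), data = dat,
                 inits = inits, n.chains = 2, quiet = TRUE)
update(jm, 1000)                              # burn-in
post <- coda.samples(jm, c("mu", "tau"), n.iter = 5000)
summary(post)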
I'd recommend first ensuring convergence of both models, and then (assuming they did both converge) looking at exactly how much of a difference you are seeing. If it looks like both models converged AND the difference is more than just random sampling variation, then please reply here and/or update your question (as that shouldn't happen ... i.e. we may need to look into the possibility of a bug in JAGS).
Thanks,
Matt
--------- Edit following additional information added to the question --------
Based on what you have written, it does seem that the difference in inference exceeds what might be expected due to random variation, so there may be some kind of underlying issue here. In order to diagnose this further we would need a minimal reproducible example (https://stackoverflow.com/help/minimal-reproducible-example). This means that you would need to provide not only the model (or preferably a simplified model that still exhibits the problem) but also some data to which we can fit the model. If your data are too sensitive to share then this could be a fictitious dataset for which you also see a difference between JAGS 4.2.0 and JAGS 4.3.0.
The official help forum for JAGS is at https://sourceforge.net/p/mcmc-jags/discussion/610037/ - so you can certainly post there, although we would still need a minimal reproducible example to be able to do anything. If you do so, then please update both posts with a link to the other so that anyone reading either post knows about the cross-posting. You should also note that R2jags is not officially supported on the JAGS forums, so please provide the minimal reproducible example using plain rjags code (or runjags if you prefer) rather than using the R2jags wrapper.
To answer your question in the comments: in order to obtain information on the samplers used you can use rjags::list.samplers(), e.g.:
library(rjags)
# LINE is just a small example model built into rjags:
data(LINE)
LINE$recompile()
list.samplers(LINE)
# $`bugs::ConjugateGamma`
# [1] "tau"
#
# $`bugs::ConjugateNormal`
# [1] "alpha"
#
# $`bugs::ConjugateNormal`
# [1] "beta"

R package for visualizing qualitative data -- coding events that occur over time

I'm fairly new to R, so I was hoping I could be pointed in the right direction.
I am looking for a package that allows me to construct graphs of some qualitative data.
The qualitative data consist of certain coded events that happen during 30-minute chunks of time. The original data are from recorded videos, and the different events that occur have different codes. Ideally, each graph would be a horizontal band divided into appropriately sized smaller chunks based on how long/when the different coded events occur during the 30-minute period. The idea is to provide a visual of what happens during various 30-minute chunks of time for different individuals. I am open to other types of visuals if they also seem appropriate! Thanks!
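One way to get started: here is a minimal sketch of such a band chart with ggplot2, assuming a data frame of coded events with start and end times in minutes. All column names, codes, and values below are made up for illustration:

library(ggplot2)

events <- data.frame(
  person = rep(c("P1", "P2"), each = 3),
  code   = c("talk", "play", "rest", "play", "talk", "rest"),
  start  = c(0, 8, 20, 0, 12, 25),   # minutes into the session
  end    = c(8, 20, 30, 12, 25, 30)
)

# One horizontal band per individual, segmented by event code
ggplot(events) +
  geom_rect(aes(xmin = start, xmax = end, ymin = 0, ymax = 1, fill = code)) +
  facet_wrap(~ person, ncol = 1) +
  scale_x_continuous("Minutes") +
  theme_minimal() +
  theme(axis.text.y = element_blank(), axis.ticks.y = element_blank(),
        axis.title.y = element_blank())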

How to construct dataframe for time series data using ensemble learning methods

I am trying to predict the Bitcoin price at t+5, i.e. 5 minutes ahead, using 11 technical indicators up to time t which can all be calculated from the open, high, low, close and volume values from the Bitcoin time series (see my full data set here). As far as I know, it is not necessary to manipulate the data frame when using algorithms like regression trees, support vector machines or artificial neural networks, but when using ensemble methods like random forests (RF) and Boosting, I heard that it is necessary to re-arrange the data frame in some way, because ensemble methods draw repeated RANDOM samples from the training data, in which case the sequence of the Bitcoin time series will be ruined. So, is there a way to re-arrange the data frame in some way such that the time series will still be in chronological order every time repeated samples are drawn from the training data?
I was provided with an explanation of how to construct the data frame here and possibly here, too, but unfortunately, I didn't really understand these explanations, because I didn't see a visual example of the to-be-constructed data frame and because I wasn't able to identify the relevant line of code. So, if someone could show me how to re-arrange the data frame using an example data frame, I would be very thankful. As an example data frame, you might consider using the built-in airquality data frame in R (I think it contains time series data), the data I provided above, or any other data frame you think is best.
Many thanks!
There is no problem with resampling for ML algorithms. To capture (auto)correlation, just add columns with lagged values of the time series. E.g., in the case of a univariate time series x[t], where t is time in minutes, you add x[t - 1], x[t - 2], ..., x[t - n] columns with lagged values. The more lags you add, the more history will be accounted for in model training (see the sketch below).
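As a minimal sketch (column names are made up), here is one way to build such lagged columns so that each row is self-contained and rows can then be resampled freely:

make_lags <- function(x, n_lags) {
  d <- data.frame(y = x)
  for (k in seq_len(n_lags)) {
    d[[paste0("lag", k)]] <- c(rep(NA, k), head(x, -k))
  }
  na.omit(d)   # drop the first rows, which lack a full lag history
}

set.seed(42)
x <- cumsum(rnorm(100))        # toy series standing in for the price
head(make_lags(x, n_lags = 3))
# Each row now carries y = x[t] together with x[t-1], x[t-2], x[t-3],
# so a random forest can resample rows without losing temporal context.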
You can find a very basic working example here: Prediction using neural networks
More advanced stuff with Keras is here: Time series prediction using RNN
However, just for your information, here is a special message from Mr Chollet and Mr Allaire in the above-mentioned article ;):
NOTE: Markets and machine learning
Some readers are bound to want to take the techniques we’ve introduced
here and try them on the problem of forecasting the future price of
securities on the stock market (or currency exchange rates, and so
on). Markets have very different statistical characteristics than
natural phenomena such as weather patterns. Trying to use machine
learning to beat markets, when you only have access to publicly
available data, is a difficult endeavor, and you’re likely to waste
your time and resources with nothing to show for it.
Always remember that when it comes to markets, past performance is not
a good predictor of future returns – looking in the rear-view mirror
is a bad way to drive. Machine learning, on the other hand, is
applicable to datasets where the past is a good predictor of the
future.

Finding and counting audio drop-outs in an ecological recording

I am trying to assess how many audio drop outs are in a given sound file of an ecological soundscape.
Format: Wave
Samplingrate (Hertz): 192000
Channels (Mono/Stereo): Stereo
PCM (integer format): TRUE
Bit (8/16/24/32/64): 16
My project had a two-element hydrophone. The elements were different brands/models, and we are trying to determine which element performed better in our specific experiment. One analysis we would like to conduct is measuring how often each element had drop-outs, or loss of signal. These drop-outs are not signal-amplitude related; in other words, the drop-outs are not caused by maxing out the amplitude. The element or the associated electronics just failed.
I've been trying to do this in R, as that is the program I am most familiar with. I have very limited experience with Matlab and regex, but am open to trying those programs/languages. I'm a biologist, so please excuse any ignorance.
In R I've been playing around with the package 'seewave'; I've been able to produce some very pretty spectrograms (which, to be fair, is the only context in which I've previously used that package). I attempted to use the envelope and automatic temporal measurements function within seewave (timer), and got some interesting, but opposite, results.
library(tuneR)    # readWave()
library(seewave)  # timer()

foo <- readWave("Documents/DASBR/DASBR2_20131119$032011.wav",
                from = 53, to = 60, units = "seconds")
timer(foo, f = 96000, threshold = 6.5, msmooth = c(30, 5), colval = "blue")
I've altered the values of msmooth and threshold countless times, but that's just fine tinkering. What this function performs is measuring the duration between amplitude peaks at the given threshold. What I need it to do is either a) find samples in the signal without amplitude or b) measure the duration between areas without amplitude. I can work with either of those outputs. Basically, I want to reverse the direction the threshold is measuring, does that make sense? So any sample that is below the threshold would trigger a measurement, rather than any sample that is above it.
I'm still playing with seewave to see how to produce the data I need, but I'm looking for a bit of guidance. Perhaps there is a function in seewave that will accomplish what I'm trying to do more efficiently. Or, if there is any way to output the numerical data generated by timer, I could use the 'findValleys' function from the 'quantmod' package to get a list of all the data gaps (a rough sketch of the below-threshold idea follows).
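For what it's worth, here is a minimal sketch of flagging runs of samples below a threshold using rle(); the file name and threshold are made-up placeholders to be tuned:

library(tuneR)

foo  <- readWave("my_file.wav")               # hypothetical file
samp <- abs(foo@left / 2^(foo@bit - 1))       # normalised absolute amplitude

quiet <- samp < 0.005                         # TRUE where signal is "absent"
runs  <- rle(quiet)
# Durations (in seconds) of each below-threshold stretch:
run_lengths <- runs$lengths[runs$values] / foo@samp.rate
sum(run_lengths > 0.01)                       # count gaps longer than 10 ms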
So yeah, guidance is what I'm requesting, oh data crunching gods.
Cheers.
This problem sounds reminiscent of power transfer problems often seen in electrical engineering. One way to solve the problem is to take the RMS (the root of the mean of the square) of the samples in the signal over time, averaged over short durations (perhaps a few seconds or even shorter). The durations where you see low RMS are where the dropouts are. It's analogous to the VU meters that you sometimes see on audio amplifiers - which indicate the power being transferred to the speakers from the amplifier.
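For illustration, a minimal sketch of this RMS idea in R; the file name, window length, and threshold are all assumptions to be tuned to the recording:

library(tuneR)

wav  <- readWave("my_recording.wav")          # hypothetical file
samp <- wav@left / 2^(wav@bit - 1)            # normalise samples to [-1, 1]
fs   <- wav@samp.rate

win   <- fs %/% 2                             # 0.5-second windows
n_win <- length(samp) %/% win
rms_per_win <- sapply(seq_len(n_win), function(i) {
  seg <- samp[((i - 1) * win + 1):(i * win)]
  sqrt(mean(seg^2))                           # root of the mean of the square
})

threshold <- 0.01                             # tune to the soundscape
dropouts  <- which(rms_per_win < threshold)   # low-RMS windows
length(dropouts)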
I just wanted to summarize what I ended up doing so other people will be aware. Unfortunately, the RMS measurement is not what I was looking for. Though RMS could technically give me a basic idea of where drop-outs could occur, because I'm working with ecological recordings there are too many other factors at play.
Background: The sound streams I am working with are from a two-element hydrophone, separated vertically by 2 meters and recording at 100 m below sea level. We are finding that the element sitting at ~100 meters is experiencing heavy drop-outs, while the element at ~102 meters is mostly fine. We are currently attributing this to a to-be-identified electrical issue. If both elements were positioned to receive audio in exactly the same way, RMS would work for detecting drop-outs, but because sound is received independently, the RMS calculation is too heavily impacted by other factors. Two meters can make a larger difference than you'd think when it comes to source levels and signal reception; it's enough for us to localize vocalizing animals (with left/right ambiguity) based on the delay between signal arrivals.
All the same, here's what I did:
library(seewave)
library(tuneR)

foo <- readWave("Sound_file_Path")
L <- foo@left    # left-channel samples
R <- foo@right   # right-channel samples
rms(L)
rms(R)
I then looped this process through a directory, which I detail here: for.loop with WAV files
So far, this issue is still unresolved, but thank you for the discussion!
~etg

How to normalize benchmark results to obtain distribution of ratios correctly?

To give a bit of context: I am measuring the performance of virtual machines (VMs), or systems software in general, and usually want to compare different optimizations for a performance problem. Performance is measured in absolute runtime for a number of benchmarks, and usually for a number of configurations of a VM, varying over the number of CPU cores used, different benchmark parameters, etc. To get reliable results, each configuration is measured about 100 times. Thus, I end up with quite a number of measurements over all kinds of different parameters, where I am usually interested in the speedup for all of them, comparing the VM with and without a certain optimization.
What I currently do is pick one specific series of measurements. Let's say the measurements for a VM with and without optimization (VM-norm/VM-opt) running benchmark A on 1 core.
Since I want to compare the results of the different benchmarks and numbers of cores, I cannot use absolute runtime, but need to normalize it somehow. Thus, I pair up the 100 measurements for benchmark A on 1 core for VM-norm with the corresponding 100 measurements of VM-opt to calculate the VM-opt/VM-norm ratios.
When I do that, taking the measurements just in the order I got them, I obviously have quite a high variation in my 100 resulting VM-opt/VM-norm ratios. So I thought: OK, let's assume the variation in my measurements comes from non-deterministic effects, and the same effects cause variation in the same way for VM-opt and VM-norm. So, naively, it should be OK to sort the measurements before pairing them up. And, as expected, that reduces the variation, of course.
However, my half-knowledge tells me that is not the best way and perhaps not even correct.
Since I am eventually interested in the distribution of those ratios, to visualize them with beanplots, a colleague suggested using the Cartesian product instead of pairing sorted measurements (see the sketch below). That sounds like it would account better for the random nature of two arbitrary measurements paired up for comparison. But I am still wondering what a statistician would suggest for such a problem.
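For concreteness, a minimal sketch contrasting the three pairing strategies; the runtimes are toy data, not real measurements:

set.seed(1)
vm_norm <- rnorm(100, mean = 10.0, sd = 0.5)   # hypothetical runtimes (s)
vm_opt  <- rnorm(100, mean =  9.0, sd = 0.5)

ratio_paired <- vm_opt / vm_norm                        # in measurement order
ratio_sorted <- sort(vm_opt) / sort(vm_norm)            # sorted pairing
ratio_cart   <- as.vector(outer(vm_opt, vm_norm, "/"))  # Cartesian product

sapply(list(paired = ratio_paired, sorted = ratio_sorted,
            cartesian = ratio_cart), sd)
# The Cartesian product yields 100 * 100 ratios and reflects the
# variability of comparing two arbitrary measurements.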
In the end, I am really interested in plotting the distribution of ratios with R, as bean or violin plots. Simple boxplots, or just mean + stddev, tell me too little about what is going on. These distributions usually point at artifacts produced by the complex interactions in these much too complex computers, and that's what I am interested in.
Any pointers to approaches for working with and producing such ratios in a correct way are very welcome.
PS: This is a repost, the original was posted at https://stats.stackexchange.com/questions/15947/how-to-normalize-benchmark-results-to-obtain-distribution-of-ratios-correctly
I found it puzzling that you got such a minimal response on Cross Validated. This does not seem like a specific R question, but rather a request for how to design an analysis. Perhaps the audience there thought you were asking too broad a question, but if that is the case then the [R] forum is even worse, since we generally tackle problems where data is actually provided; we deal with requests for constructing implementations in our language. I agree that violin plots are preferred to boxplots for the examination of distributions (when there is sufficient data, and I am not sure that 100 samples per group makes the grade in that instance), but in any case that means the "R answer" is that you just need to refer to the proper R help pages:
library(lattice)
?xyplot
?panel.violin
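For example, a minimal sketch of a violin plot with lattice; the ratios and configuration labels are toy data standing in for your measurements:

library(lattice)

set.seed(1)
d <- data.frame(ratio  = c(rnorm(100, 0.9, 0.05), rnorm(100, 1.0, 0.05)),
                config = rep(c("1 core", "8 cores"), each = 100))

# One violin per configuration, showing the shape of the ratio distribution
bwplot(config ~ ratio, data = d,
       panel = function(..., box.ratio) {
         panel.violin(..., col = "lightblue",
                      varwidth = FALSE, box.ratio = box.ratio)
       })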
Further comments would require more details and preferably some data examples constructed in R. You may want to refer to the page where "great question design is outlined".
One further graphical method: if you are interested in the ratios of two paired variates but do not want to "commit" to just x/y, then you can examine them by plotting one against the other and then drawing iso-ratio lines by repeatedly using abline(a=0, b= ) (a sketch follows). I think 100 samples is pretty "thin" for doing density estimates, but there are 2d density methods if you can gather more data.
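A minimal sketch of that idea; the data are toy values and the slopes are chosen arbitrarily:

set.seed(1)
x <- rnorm(100, 10, 0.5)   # e.g. VM-norm runtimes
y <- rnorm(100,  9, 0.5)   # e.g. VM-opt runtimes

plot(x, y, xlab = "VM-norm runtime", ylab = "VM-opt runtime")
for (r in c(0.85, 0.90, 0.95, 1.00)) {
  abline(a = 0, b = r, lty = 2)   # each dashed line marks y/x == r
}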
