I'm trying to come up with a better way of providing an instantaneous average when the input signal is very slow. This seems like a math-y kind of question, so if it should be over there let me know.
I have events that are measured as a pulse. Normally I can collect the pulses in a counter and then read the counter value at a fixed interval, say 1/4 second. I then take the count value and divide by the number of seconds, so n / 0.25, to get a rate. I then apply a low-pass filter to clean up the average, and that works great normally.
What do I do when the events happen once every 1-60 seconds? The obvious choice is to wait until I have a sufficient number of counts and divide by total time. However, I need to provide the user with a reading every few seconds so waiting is not an option. I need some way to estimate the value.
I've thought of one solution that's kind of hard to explain. I was wondering if there is a standard way of doing this. I'm pretty sure I have to use a different kind of "data": the lack of an event. The goal is to estimate until enough time/events have passed to really calculate a rate, and to transition from the estimate to the real rate seamlessly.
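For what it's worth, here is a minimal Python sketch of one way to use the "lack of an event" as data: keep the period measured between the last two pulses, but never report a rate faster than the current silence allows, so the display decays toward the truth while you wait. The class and names are made up for illustration, not a standard algorithm.

import time

class SlowRateEstimator:
    """Estimate events/second for a very slow pulse train.

    Between pulses, the time since the last pulse is itself information:
    the true period must be at least that long, so the reported rate is
    capped at 1 / (time since last pulse) and decays smoothly until the
    next pulse arrives.
    """
    def __init__(self):
        self.last_pulse = None      # timestamp of the most recent pulse
        self.last_period = None     # seconds between the last two pulses

    def pulse(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last_pulse is not None:
            self.last_period = now - self.last_pulse
        self.last_pulse = now

    def rate(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last_period is None:
            return 0.0              # need at least two pulses for a period
        gap = now - self.last_pulse
        # Use the measured period, but never claim a rate faster than the
        # current gap between pulses allows.
        return 1.0 / max(self.last_period, gap)

When pulses are fast, max(last_period, gap) is just the last measured period, so this gives roughly the same number as the count-and-divide method; when they are slow, the reading decays gracefully instead of freezing at a stale value, and you can still run it through your existing low-pass filter.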
I had a homework assignment for school and it dealt with calculating the propagation time of information sent from various places around the world.
The next question asked me to find the difference between the download time from the various places and the estimated propagation time, both in milliseconds. The next part is what confused me; it asked, "What time components must this include?"
I don't even know what time components are, and I have never heard that term in my 4 years of dealing with computers and networks.
Thanks for your help in advance! (The question is posted below.)
Compute the difference between the measured download time and estimated propagation time and put the result in the "difference" column. What time components must this value include?
I understand the question to mean "how do you explain the difference between the measured download time and the estimated propagation time?"
So, for example, if you decided that the difference was due to packets having to be re-sent, due to a poor connection, then the "time components" would be time delays due to re-sending the packets.
I am sending Graphite the time spent in garbage collection (getting this from the JVM via JMX). This is a counter that increases. Is there a way to have Graphite graph the change every minute, so I can see a graph that shows time spent in GC by minute?
You should be able to turn the counter into a rate with the derivative function, then use the summarize function to roll it up into the time frame that you're after.
&target=summarize(derivative(java.gc_time), "1min") # time spent per minute
derivative(seriesList)
This is the opposite of the integral function. This is useful for taking a
running total metric and showing how many requests per minute were handled.
&target=derivative(company.server.application01.ifconfig.TXPackets)
Each time you run ifconfig, the RX and TXPackets are higher (assuming there is network traffic.)
By applying the derivative function, you can get an idea of the packets per minute sent or received, even though you’re only recording the total.
summarize(seriesList, intervalString, func='sum', alignToFrom=False)
Summarize the data into interval buckets of a certain size.
By default, the contents of each interval bucket are summed together.
This is useful for counters where each increment represents a discrete event and
retrieving a “per X” value requires summing all the events in that interval.
Source: http://graphite.readthedocs.org/en/0.9.10/functions.html
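One caveat worth noting: if the JVM restarts, the raw counter drops back to zero and derivative() produces a large negative spike for that interval. Graphite also provides nonNegativeDerivative(), which ignores such drops, so a variant of the target above (same metric name as in the answer) would be:
&target=summarize(nonNegativeDerivative(java.gc_time), "1min", "sum") # GC time per minute, tolerant of counter resets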
In a previous question, I had asked: "Why can't I simply negate the source time-domain amplitude values to produce a destructive noise signal?"
One of the posters said that while simply producing an inverse-polarity (negated) signal would work in theory, in practice it is not possible.
So I am asking: what is the fundamental approach (in a sort of semi-technical way) to active noise cancellation?
Secondly, why is most of the literature on this topic in the frequency domain?
It's rather simple.
By the time you send your inverted signal, the noise has already been heard.
You need to look at what frequencies are being generated, and then produce the appropriate inverted signals of those to cancel them out.
Noise cancellation is prediction. Your algorithm has to predict what the sound of the noise will be at some time in the future (that time given by the system and audio time latencies), and then predict what signal will produce the opposite sound at that same point in the future (which your system will distort and delay, so you have to figure in the opposite distortion and delay).
You might be able to use several successive FFTs to determine which frequencies in the noise are not changing, and assume or calculate some probability that they will continue for a short time into the future.
If you know the frequency response curve of the speaker, you might be able to figure out the frequency amplitudes of a signal needed to match some predicted noise spectrum. The phase angle of a sinusoid will change with time. If you know the time delay of your output signal, you might be able to calculate the phase of a sinusoid at some point in the future. If you have a predicted phase of a particular frequency of noise at some time and location, you can add π to that phase angle to estimate the noise-cancelling signal.
If you don't know the frequency response and delay of your system, then you won't know what frequencies, amplitudes or phases of signal to create for cancellation. You might well end up amplifying the noise instead of cancelling it.
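As a toy illustration of that phase arithmetic (a sketch only, with made-up numbers for frequency, amplitude, phase, and latency, not a working canceller):

import numpy as np

freq = 120.0        # Hz: a noise component we predict will persist (made-up)
amp = 0.3           # its estimated amplitude (made-up)
phase_now = 1.1     # its estimated phase in radians at t = 0 (made-up)
latency = 0.005     # seconds between computing a sample and it leaving the speaker

def anti_noise_sample(t):
    """Cancelling sample to compute at time t; it is actually heard at t + latency.

    The noise is modelled as amp * sin(2*pi*freq*t + phase_now); we advance
    the phase by the output latency and add pi to invert it.
    """
    future_phase = 2 * np.pi * freq * (t + latency) + phase_now
    return amp * np.sin(future_phase + np.pi)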
It seems that what's missing is the propagation delay required to intercept and negate a signal. The KISS rule will eventually prove this true. The FFT is a complex calculation, and each N-point iteration introduces error because of the time required to process the signal. To cancel a sound wave, it must be intercepted in advance, processed, and inverted. Then the time constant of the transducer must be considered. In my experience, what works is a microphone near the source of the "noise", connected by wire to an amplifier and a transducer near the location where the noise is to be cancelled.
The basic idea of ANC is to find repetitive sound and play the opposite of it. If the repetitive sound continues to play, we'll be able to cancel it. That goes in direct contradiction to the other answers, but I'll clarify.
Playing the opposite sound means playing it again with a precise power and delay, possibly inverting the waveform. The delay itself varies for each frequency. For example, for a 20 Hz sound we have to replay the inverted sound at a precise multiple of 1/20 = 0.05 s; for 23 Hz, the delay has to be a multiple of 1/23 ≈ 0.0435 s.
Since any waveform can be produced as a sum of sinusoids, one way of doing it would be to only worry about the N biggest sinusoids, measured in power (the square of the amplitudes). To find the sinusoids' frequencies and power we use the Fourier transform, typically with the FFT algorithm.
If we take, for example, N = 8, it means we are trying to eliminate the 8 most powerful wave components. For each of them we store:
wave's amplitude
wave's offset, taking the computer's clock as a base.
Then we constantly play 8 sinusoids, each at the correct power and with the correct delay. The hard part is what happens next: we need to keep listening in order to adapt, but now we are listening to the environment sound plus our own sound. That algorithm is harder to implement, but it is conceptually simple, and one could figure out how to do it on their own.
So, contrary to what the other answers say, managing the time delay is critical. It is not possible to create an ANC system without doing it. If you only care about the frequency domain, the only thing you could possibly do is filter those frequencies; in an ANC system that makes no sense.
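A rough Python sketch of the analysis step this answer describes, using NumPy's FFT to pick the N strongest components and synthesize their inverse; it ignores the adaptation loop and the output latency discussed above, and the function names are made up for illustration:

import numpy as np

def top_components(samples, sample_rate, n=8):
    """Return (frequencies, amplitudes, phases) of the n most powerful sinusoids."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    power = np.abs(spectrum) ** 2
    power[0] = 0.0                            # ignore the DC bin
    idx = np.argsort(power)[-n:]              # indices of the n strongest bins
    amps = 2.0 * np.abs(spectrum[idx]) / len(samples)
    phases = np.angle(spectrum[idx])
    return freqs[idx], amps, phases

def anti_signal(freqs, amps, phases, t):
    """Sum of those sinusoids with the phase shifted by pi, sampled at times t."""
    out = np.zeros_like(t, dtype=float)
    for f, a, p in zip(freqs, amps, phases):
        out += a * np.cos(2 * np.pi * f * t + p + np.pi)   # +pi inverts the wave
    return out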
If I monitor the ASP.NET Requests/sec performance counter every N seconds, how should I interpret the values? Is it the number of requests processed during the sample interval divided by N, or is it the current requests/sec regardless of the sample interval?
It depends on your N-second interval, which incidentally is also why the performance counter will always read 0 on its first NextValue() call after being initialized: it makes all of its calculations relative to the last point at which you called NextValue().
You may also want to avoid calling the counter's NextValue() function at intervals of less than a second, as it can produce some very outlandish outliers. For my uses, calling the function every 5 seconds provides a good balance between up-to-date information and a smooth average.
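In other words, the value you read is the delta of the underlying request total since your previous read, divided by the elapsed time between the two reads. A rough Python illustration of that calculation (just the principle, not the actual .NET implementation):

import time

class RatePerSecond:
    """Mimics a rate-per-second counter: delta since the last read / elapsed seconds."""
    def __init__(self):
        self.last_total = None
        self.last_time = None

    def next_value(self, current_total):
        now = time.monotonic()
        if self.last_total is None:
            rate = 0.0              # first read has no baseline, hence the initial 0
        else:
            elapsed = now - self.last_time
            rate = (current_total - self.last_total) / elapsed if elapsed > 0 else 0.0
        self.last_total, self.last_time = current_total, now
        return rate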
What update rate should I run my fixed-rate game logic at?
I've used 60 updates per second in the past, but that's awkward because it doesn't come out to a whole number of milliseconds per update (16.666666). My current game uses 100, but that seems like overkill for most things.
None of the above. For the smoothest gameplay possible, your game should be time-based, not frame-locked. Frame-locking works for simple games where you can tweak the logic and lock down the framerate. It doesn't do so well with modern 3D titles where the framerate jumps all over the board and the screen may not be VSynced.
All you need to do is figure out how fast an object should be going (i.e. virtual units per second), compute the amount of time since the last frame, scale the number of virtual units to match the amount of time that has passed, then add those values to your object's position. Voila! Time-based movement.
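A minimal sketch of that idea in Python (the speed value and the fake 7 ms frame time are made up for illustration):

import time

speed = 300.0                  # virtual units per second (made-up value)
position = 0.0
last_time = time.monotonic()

for _ in range(100):           # stand-in for the real render loop
    time.sleep(0.007)          # pretend drawing a frame took about 7 ms
    now = time.monotonic()
    dt = now - last_time       # seconds since the last frame, however long that was
    last_time = now
    position += speed * dt     # movement scales with elapsed time, not frame count
    # draw(position) would go here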
I used to maintain a Quake3 mod and this was a constant source of user-questions.
Q3 uses 20 'ticks per second' by default; the graphics subsystem interpolates, so you get smooth motion on the screen. I initially thought this was way too low, but it turns out to be fine, and there really aren't many games at all with faster action than Q3.
I'd personally go with "good enough for John Carmack, good enough for me".
I like 50 for fixed-rate PC games. I can't really tell the difference between 50 and 60 (and if you are making a game that can, or that cares, you should probably be at 100).
You'll notice the question is about 'fixed-rate game logic' and not the 'draw loop'. For clarity, the code will look something like:
while(1)
{
    // Run as many fixed-length ticks as have elapsed since the last update...
    while(CurrentTime() >= lastUpdate + TICK_LENGTH)
    {
        UpdateGame();
        lastUpdate += TICK_LENGTH;
    }
    // ...then draw once with whatever state the ticks produced.
    Draw();
}
The question is what should TICK_LENGTH be?
Bear in mind that unless your code is measured down to the cycle, not every game loop iteration will take the same number of milliseconds to complete, so 16.6666 not being a whole number isn't really an issue; you will need to time and compensate anyway. Besides, 16.6666 isn't updates per second, it's the average number of milliseconds per update your game loop should be targeting.
Such variables are generally best found via the guess and check strategy.
Implement your game logic in such a way that it is refresh-agnostic (say, by exposing the ms/update as a variable and using it in any calculations), then play around with the refresh rate until it works, and keep it there.
As a short-term solution, if you want a whole-number tick length but don't care about a whole number of updates per second, 15 ms is close to 60 updates/sec (about 67, in fact). If you care about both, 20 ms, i.e. 50 updates/sec, is probably the closest you are going to get.
In either case, I would simply treat time as a double (or a long with high resolution) and provide the rate to your game as a variable rather than hard-coding it.
The ideal is to run at the same refresh-rate as the monitor. That way your visuals and the game updates don't go in and out of phase with each other. The fact that each frame doesn't last an integral number of milliseconds shouldn't matter to you; why is that a problem?
I usually use 30 or 33. It's often enough for the user to feel the flow and rare enough not to hog the CPU too much.
Normally I don't limit the FPS of the game; instead I change all my logic to take the time elapsed since the last frame as input.
As far as fixed rates go, unless you need a high rate for some reason, you should use something like 25/30. That should be a high enough rate, and it will make your game a little lighter on CPU usage.
Your engine should both "tick" (update) and draw at 60fps with vertical sync (vsync). This refresh rate is enough to provide:
low input lag for a feeling of responsiveness,
and smooth motion even when the player and scene are moving rapidly.
Both the game physics and the renderer should be able to drop frames if they need to, but optimize your game to run as close to this 60 Hz standard as possible. Also, some subsystems like AI can tick closer to 10-20 fps. Make sure your physics are interpolated on a frame-to-frame time delta, like this: http://gafferongames.com/game-physics/fix-your-timestep/
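For reference, a bare-bones Python sketch of the fixed-timestep-plus-interpolation pattern that article describes (made-up state and a single moving value, not a full engine):

import time

TICK = 1.0 / 60.0                    # fixed physics step in seconds
prev_pos, curr_pos = 0.0, 0.0        # physics state at the last two ticks
velocity = 120.0                     # units per second (made-up)
accumulator = 0.0
last_time = time.monotonic()

for _ in range(300):                 # stand-in for the render loop
    now = time.monotonic()
    accumulator += now - last_time
    last_time = now
    while accumulator >= TICK:       # run as many fixed ticks as we owe
        prev_pos = curr_pos
        curr_pos += velocity * TICK
        accumulator -= TICK
    alpha = accumulator / TICK       # fraction of the way to the next tick, 0..1
    render_pos = prev_pos * (1.0 - alpha) + curr_pos * alpha
    # draw(render_pos) would go here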