In a previous question, I asked "Why can't I simply negate the source time domain amplitude values to produce a destructive noise signal?"
One of the posters said that while simply producing an inverted-polarity (negated) signal will work in theory, in practice it is not possible.
So I am asking: what is the fundamental approach (in a semi-technical way) to active noise cancellation?
Secondly, why is most of the literature on this topic in the frequency domain?
It's rather simple.
By the time you send your inverted signal, the noise has already been heard.
You need to look at what frequencies are being generated, and then produce the appropriate inverted signals to cancel them out.
Noise cancellation is prediction. Your algorithm has to predict what the sound of the noise will be at some time in the future (that time given by the system and audio time latencies), and then predict what signal will produce the opposite sound at that same point in the future (which your system will distort and delay, so you have to figure in the opposite distortion and delay).
You might be able to use several successive FFTs to determine which frequencies in the noise are not changing, and assume or calculate some probability that they will continue for a short time into the future.
If you know the frequency response curve of the speaker, you might be able to figure out the frequency amplitudes of a signal needed to match some predicted noise spectrum. The phase angle of a sinusoid will change with time. If you know the time delay of your output signal, you might be able to calculate the phase of a sinusoid at some point in the future. If you have a predicted phase of a particular frequency of noise at some time and location, you can add π to that phase angle to estimate the noise-cancelling signal.
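As a concrete illustration of that phase arithmetic, here is a minimal C++ sketch (names are mine; it assumes the component's parameters and the total latency are known exactly):

#include <cmath>

// Minimal sketch of the phase-prediction step described above. Given a noise
// component with frequency f (Hz), amplitude a and phase phi measured at
// time t0, it returns the cancelling output for time t0 + latency, where
// latency is the total (known) system + audio delay.
double cancelling_sample(double f, double a, double phi, double t0, double latency)
{
    double t = t0 + latency;
    double future_phase = phi + 2.0 * M_PI * f * (t - t0);  // advance the sinusoid
    return a * std::sin(future_phase + M_PI);               // opposite polarity
}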
If you don't know the frequency response and delay of your system, then you won't know what frequencies, amplitudes or phases of signal to create for cancellation. You might well end up amplifying the noise instead of cancelling it.
It seems that what's missing is the propagation delay required to intercept and negate a signal. The KISS rule will eventually prove this true. The FFT is a complex calculation, and each N-point iteration introduces error because of the time required to process the signal. To cancel a sound wave, it must be intercepted in advance, processed and inverted. Then the time constant of the transducer must be considered. In my experience, a workable setup is a microphone near the source of the "noise", connected by wire to an amplifier and a transducer near the location where the noise is to be cancelled.
The basic idea of ANC is to find repetitive sound and play the opposite of it. If the repetitive sound continues to play, we'll be able to cancel it. That goes in direct contradiction to the other answers, but I'll clarify.
Playing the opposite sound means playing it again with a precise power and delay, possibly inverting the waveform. The delay itself varies for each frequency. For example, for a 20 Hz sound we have to replay the inverted sound at a precise multiple of 1/20 = 0.05 s. For 23 Hz, the delay has to be a multiple of 1/23 ≈ 0.04347 s.
Since any waveform can be produced by a sum of sinusoids, one way of doing it would be to worry only about the N biggest sinusoids, measured in power (the square of the amplitudes). To find the sinusoids' frequencies and powers we use the Fourier transform, typically via the FFT algorithm.
If we take, for example, N = 8, it means we are trying to eliminate the 8 most powerful wave components. For each of them we store:
the wave's frequency and amplitude
the wave's phase offset, taking the computer's clock as a base.
Then we constantly play the 8 sinusoids, each at the correct power and with the correct delay. The hard part is what happens next. We need to keep listening in order to adapt, but now we are listening to the environment sound plus our own sound. That algorithm is harder to implement, but conceptually it is easier, and one could figure out how to do it on their own.
So, contrary to what the other answers say, managing the time delay is critical. It is not possible to create an ANC system without doing it. If you only care about the frequency domain, the only thing you could possibly do is filter those frequencies; in an ANC system that makes no sense.
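A minimal C++ sketch of the "N strongest sinusoids" idea (a naive DFT stands in for a real FFT library, and the crucial latency compensation discussed above is left out, so this is illustrative only):

#include <algorithm>
#include <cmath>
#include <complex>
#include <cstdio>
#include <utility>
#include <vector>

struct Component { double freq_hz, amplitude, phase; };

// Analyse one block of samples and keep the strongest bins.
std::vector<Component> strongest_components(const std::vector<double>& x,
                                            double sample_rate, size_t n_keep)
{
    size_t n = x.size();
    std::vector<std::pair<double, Component>> bins;  // (power, component)
    for (size_t k = 1; k < n / 2; ++k) {             // naive DFT, skipping DC
        std::complex<double> acc(0, 0);
        for (size_t t = 0; t < n; ++t)
            acc += x[t] * std::polar(1.0, -2.0 * M_PI * k * t / n);
        double amp = 2.0 * std::abs(acc) / n;        // bin magnitude -> amplitude
        bins.push_back({amp * amp, {k * sample_rate / n, amp, std::arg(acc)}});
    }
    std::sort(bins.begin(), bins.end(),
              [](const auto& a, const auto& b) { return a.first > b.first; });
    std::vector<Component> out;
    for (size_t i = 0; i < n_keep && i < bins.size(); ++i)
        out.push_back(bins[i].second);
    return out;
}

// The cancelling sample at time t (seconds from the analysed block's start)
// is the sum of the components with each phase shifted by pi.
double anti_sample(const std::vector<Component>& cs, double t)
{
    double s = 0;
    for (const auto& c : cs)
        s += c.amplitude * std::cos(2.0 * M_PI * c.freq_hz * t + c.phase + M_PI);
    return s;
}

int main()
{
    const double fs = 8000.0;                        // made-up sample rate
    std::vector<double> block(512);
    for (size_t t = 0; t < block.size(); ++t)        // synthetic noise: two tones
        block[t] = std::cos(2.0 * M_PI * 500.0 * t / fs)
                 + 0.5 * std::cos(2.0 * M_PI * 1250.0 * t / fs);
    for (const auto& c : strongest_components(block, fs, 2))
        std::printf("%.1f Hz, amplitude %.2f\n", c.freq_hz, c.amplitude);
}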
I'm trying to come up with a better way of providing an instantaneous average when the input signal is very slow. This seems like a math-y kind of question, so if it should be over there let me know.
I have events that are measured as pulses. Normally I can collect the pulses in a counter and then read the counter value at a fixed interval, say every 1/4 second. I can then take the count value, divide by the interval (n/0.25) and get a rate. I then apply a low-pass filter to clean up the average, and that works great normally.
What do I do when the events happen once every 1-60 seconds? The obvious choice is to wait until I have a sufficient number of counts and divide by total time. However, I need to provide the user with a reading every few seconds so waiting is not an option. I need some way to estimate the value.
I've thought of one solution that's kind of hard to explain, so I was wondering if there is a standard way of doing this. I'm pretty sure I have to utilize a different kind of "data": the lack of an event. The goal is to estimate until enough time/events have passed to really calculate a rate, and to transition from estimate to real rate seamlessly.
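One standard approach is to measure the period between pulses instead of counting pulses per window, and to use the time since the last pulse as a bound while you wait. A minimal C++ sketch (all names are mine):

// Period-based rate estimator for sparse pulses. Between pulses, the time
// since the last pulse gives an upper bound on the current rate, so the
// displayed value decays smoothly instead of sticking at a stale reading.
struct RateEstimator {
    double smoothed_period = 0;  // filtered inter-pulse period, seconds
    double last_pulse_time = -1;
    double alpha = 0.3;          // low-pass coefficient, tune to taste

    void on_pulse(double now) {
        if (last_pulse_time >= 0) {
            double period = now - last_pulse_time;
            smoothed_period = (smoothed_period == 0)
                ? period
                : alpha * period + (1 - alpha) * smoothed_period;
        }
        last_pulse_time = now;
    }

    // Call whenever a reading must be shown, pulse or not.
    double rate_hz(double now) const {
        if (smoothed_period == 0) return 0;
        double gap = now - last_pulse_time;
        // If we've waited longer than the filtered period, the true period
        // is at least 'gap', so cap the estimate at 1/gap.
        double period = (gap > smoothed_period) ? gap : smoothed_period;
        return 1.0 / period;
    }
};

Call on_pulse from the pulse handler and rate_hz whenever the display refreshes; the transition from bounded estimate to measured rate is then automatic, since the same formula covers both.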
I wanted to know if there is any efficient way of finding the distance between two devices (a transmitter and a receiver) that is accurate to at least the order of a couple of inches.
Basically, I want to detect the movement of the transmitter from the receiver and how far it has moved from its original position.
I was thinking in terms of using a wireless hotspot/Bluetooth connection. I cannot use any form of audio or other medium that can be detected by humans.
Could anybody help me with this?
To my mind, assuming there is no common synchronisation signal between the devices, there are two different ways to do this (neither is really easy):
1. Measure received power: some receivers provide RSSI (Received Signal Strength Indication), a measure of how much power you received. If you know the transmitted power, you can estimate the transmission loss (from the transmission channel) by taking RSSI measurements at different distances. The loss really depends on the channel (environment, frequency, throughput, ...), so don't change it between measurements. Once you have enough points, fit a curve to them. You can then predict distance from RSSI (see the sketch after this list).
2. Measure round-trip time: this is called RADAR, and it is much more difficult, but it is the classic way to measure distance and speed. Broadband systems (like WiFi) are better for this kind of measurement. Incidentally, you can do the same with audio over short distances (SONAR) without being detected, if you use frequencies above 20 kHz.
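A minimal C++ sketch of the curve-fitting idea in point 1, using the common log-distance path-loss model RSSI(d) = RSSI(d0) - 10·n·log10(d/d0); the calibration numbers in main are made up:

#include <cmath>
#include <cstdio>

// Least-squares fit of the path-loss exponent n from calibration pairs
// (distance, RSSI), relative to a reference distance d0 with reading rssi0.
double estimate_exponent(const double* dist, const double* rssi, int count,
                         double d0, double rssi0)
{
    double num = 0, den = 0;
    for (int i = 0; i < count; ++i) {
        double x = -10.0 * std::log10(dist[i] / d0);  // model regressor
        double y = rssi[i] - rssi0;                   // observed drop
        num += x * y;
        den += x * x;
    }
    return num / den;   // least-squares slope = path-loss exponent n
}

// Invert the model to predict distance from a new RSSI reading.
double predict_distance(double rssi, double d0, double rssi0, double n)
{
    return d0 * std::pow(10.0, (rssi0 - rssi) / (10.0 * n));
}

int main()
{
    const double dist[] = {1, 2, 4, 8};          // metres (hypothetical)
    const double rssi[] = {-40, -49, -58, -67};  // dBm (hypothetical)
    double n = estimate_exponent(dist, rssi, 4, 1.0, -40.0);
    std::printf("exponent n = %.2f, -55 dBm -> %.1f m\n",
                n, predict_distance(-55.0, 1.0, -40.0, n));
}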
Ok, so a laser on earth hits a mirror on the moon and bounces back. On the ATmega128 microcontroller, we use TIMER1 to capture the clock ticks when the laser was fired and the clock ticks when it returned, subtract, and get a "distance" in clock ticks (16 MHz clock on the ATmega128).
We are supposed to determine how different this measured distance can be from the actual distance and what causes the difference, as well as compute the maximum error for each legal prescaler of TIMER1.
Looking at TIMER1's registers and the input-capture section of the ATmega128 datasheet, I cannot find any kind of percentage error for input capture. This seems like a conceptual question, yet we are supposed to pull values out of the air and calculate something?
My question is: for anyone who knows the ATmega128, what values are being referred to when determining the error in reading distance from timer ticks? My only guess is that the error grows with higher prescalers, because you lose precision as the prescaler gets larger. But again, this is a conceptual answer, and I don't understand how I would calculate anything.
The counters/prescalers can be assumed to be perfect and will not cause any loss of resolution.
Your original clock source will be the predominant source of error. If you are using an external crystal, these are usually good to 50 ppm (parts per million) or better. If you are using an internal oscillator, the error is much higher (1% is not unreasonable for some microcontrollers).
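To make the per-prescaler part concrete, here is a small C++ sketch of the quantization side of the problem (my own framing): the timer can only resolve whole ticks, so each tick corresponds to a fixed slice of one-way distance, and that slice grows with the prescaler. The prescaler values below are TIMER1's legal divisors from the datasheet:

#include <cstdio>

int main()
{
    const double c = 299792458.0;   // speed of light, m/s
    const double f_clk = 16e6;      // ATmega128 system clock, Hz
    const int prescalers[] = {1, 8, 64, 256, 1024};  // TIMER1's legal divisors
    for (int p : prescalers) {
        double tick_s = p / f_clk;            // seconds per timer tick
        double quantum_m = c * tick_s / 2.0;  // one-way metres per tick
        std::printf("prescaler %4d: tick %8.3f us -> +/- %.1f m per tick\n",
                    p, tick_s * 1e6, quantum_m);
    }
    // Separately, a 50 ppm crystal error scales the whole measurement:
    // 50e-6 * 384400 km (earth-moon distance) is roughly +/- 19 km.
    return 0;
}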
The whole thing gets tricky if you remember your general relativity (you do have a PhD in physics, right?). The earth's rotation and gravity come into play with respect to the speed of light and distance...
What would be the best way to triangulate a wireless network passively? Are there tools available? Algorithms? Libraries?
My goal would be to create a relative map of various objects that send or receive signals, using signal strength (dB), signal-to-noise ratio, signal phase, etc. from a few location points. With enough sampling, I'm guessing it would be possible to create a good 2D/3D map.
I'm searching for stuff in any language / platform.
Some keywords: wi-fi site survey, visualization, coverage, location, positioning
I'm thinking about using Kismet to gather the data and then processing it. Maybe use free-space path loss for RF in the 2.4 GHz range to calculate a relative distance, and optionally use RF obstacle-attenuation estimates (based on some user input) to improve it. Then use trilateration to generate possible relative coordinates.
You can't use the GPS technique because the timing is nothing like accurate enough.
The best you can do is trilateration based on the signal strength from each base station, assuming that range can be estimated from signal strength.
You will probably need to force a connection to each base station in turn in order to measure the signal strength.
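A minimal C++ sketch of one way to map signal strength to range, using the free-space path-loss model at 2.4 GHz (function names are mine; known transmit power is assumed and antenna gains are ignored, and real indoor propagation deviates badly from free space, so treat the result as a rough relative distance at best):

#include <cmath>
#include <cstdio>

// Estimate distance from free-space path loss.
// FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55   (d in metres, f in Hz)
double fspl_distance_m(double tx_power_dbm, double rssi_dbm, double freq_hz)
{
    double path_loss_db = tx_power_dbm - rssi_dbm;
    return std::pow(10.0, (path_loss_db - 20.0 * std::log10(freq_hz) + 147.55) / 20.0);
}

int main()
{
    // Hypothetical numbers: 20 dBm transmitter, -60 dBm received at 2.4 GHz.
    std::printf("estimated distance: %.1f m\n",
                fspl_distance_m(20.0, -60.0, 2.4e9));
}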
Interesting question. My initial thought was to use output from something like the WiSpy spectrum analyzer. I like the idea of using a directional antenna. It looks like some research may be underway.
Instead of trilateration you could use bilinear interpolation. This is said to be better for non-linear distance-vs-signal-strength data, like wifi in an urban environment. http://courses.cit.cornell.edu/ee476/FinalProjects/s2007/ayl26_ym82/ayl26_ym82/index.htm has the background math and what I assume is AVR C for doing it with magnetic-field sensors.
Using signal strength to judge distance could easily be thrown off by differences in materials blocking line-of-sight to each of the sampling points. It would probably be better to do the sampling with a directional antenna, and from each sampling point, find the bearing that maximizes signal strength to each device you want to locate. With this technique, you can use only two or three sampling locations, depending on the accuracy with which you can estimate the bearings.
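A minimal C++ sketch of turning two such bearings into a position fix (plain 2D ray intersection; all names are mine, and it assumes the bearings are measured in radians from the +x axis in a shared frame):

#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Intersect two bearing rays taken from two known sampling points.
// Returns false if the bearings are (near-)parallel.
bool intersect_bearings(Point p1, double th1, Point p2, double th2, Point* out)
{
    double d1x = std::cos(th1), d1y = std::sin(th1);
    double d2x = std::cos(th2), d2y = std::sin(th2);
    double det = d1x * d2y - d1y * d2x;   // cross product of directions
    if (std::fabs(det) < 1e-9) return false;  // parallel bearings: no fix
    // Solve p1 + t*d1 = p2 + s*d2 for t.
    double t = ((p2.x - p1.x) * d2y - (p2.y - p1.y) * d2x) / det;
    out->x = p1.x + t * d1x;
    out->y = p1.y + t * d1y;
    return true;
}

int main()
{
    Point fix;
    // Two sampling points 10 m apart, bearings of 45 and 135 degrees.
    if (intersect_bearings({0, 0}, M_PI / 4, {10, 0}, 3 * M_PI / 4, &fix))
        std::printf("estimated position: (%.2f, %.2f)\n", fix.x, fix.y);
}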
Ars Technica has an article about this, citing the Fraunhofer Institute and Skyhook Wireless. This technology is built into every iPhone and iPad.
Actually, I think you should try using an algorithm like the one GPS uses (see Wikipedia). Of course you can simplify it according to your needs, for example:
you need to install, on every item that should broadcast its position (the navigation signal), an application that actually does so
you should use a different channel for every single item, to be sure not to generate collisions (it also depends on how often you broadcast the signal)
So if you place at least 4 broadcasters, every client can triangulate to calculate its own position. Naturally, the broadcasters should be as similar as possible in response.
These are just ideas, though.
After working for a while developing games, I've been exposed to both variable frame rates (where you work out how much time has passed since the last tick and update actor movement accordingly) and fixed frame rates (where you work out how much time has passed and choose either to tick a fixed amount of time or sleep until the next window comes).
Which method works best for specific situations? Please consider:
Catering to different system specifications;
Ease of development/maintenance;
Ease of porting;
Final performance.
I lean towards a variable framerate model, but internally some systems are ticked on a fixed timestep. This is quite easy to do by using a time accumulator. Physics is one system which is best run on a fixed timestep, and ticked multiple times per frame if necessary to avoid a loss in stability and keep the simulation smooth.
A bit of code to demonstrate the use of an accumulator:
const float STEP = 1.f / 60.f;  // fixed 60 Hz timestep, in seconds
float accumulator = 0.f;

void Update(float delta)        // delta: elapsed real time since last frame, seconds
{
    accumulator += delta;
    while (accumulator >= STEP) // consume whole steps; leftover carries over
    {
        Simulate(STEP);         // physics always advances by the same amount
        accumulator -= STEP;
    }
}
This is not perfect by any means, but it presents the basic idea - there are many ways to improve on this model. Obviously there are issues to sort out when the input framerate is obscenely slow. However, the big advantage is that no matter how fast or slow the delta is, the simulation moves at a smooth rate in "player time" - which is where any problems will be perceived by the user.
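One common refinement (not part of the snippet above) is to render with the leftover fraction of a step as an interpolation factor, so motion looks smooth even though physics advances in fixed increments. A sketch, where State, Lerp and Render are hypothetical stand-ins and Simulate is the same as above:

struct State { float x; float v; };   // hypothetical minimal physics state
State previousState, currentState;

State Lerp(const State& a, const State& b, float t)
{
    return { a.x + (b.x - a.x) * t, a.v + (b.v - a.v) * t };
}

void UpdateAndRender(float delta)
{
    accumulator += delta;
    while (accumulator >= STEP)
    {
        previousState = currentState;   // snapshot before stepping
        Simulate(STEP);                 // advances currentState, as above
        accumulator -= STEP;
    }
    float alpha = accumulator / STEP;   // 0..1, progress into the next step
    Render(Lerp(previousState, currentState, alpha));  // hypothetical renderer
}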
Generally I don't get into the graphics & audio side of things, but I don't think they are affected as much as Physics, input and network code.
It seems that most 3D developers prefer variable FPS: the Quake, Doom and Unreal engines all scale up and down based on system performance.
At the very least you have to compensate for too-fast frame rates (unlike '80s games run on '90s hardware, which played way too fast).
Your main loop should be parameterized by the timestep anyhow, and as long as it's not too long, a decent integrator like RK4 should handle the physics smoothly. Some types of animation (keyframed sprites) could be a pain to parameterize. Network code will need to be smart as well, to prevent players with faster machines from shooting too many bullets, for example, but this kind of throttling needs to be done for latency compensation anyhow (and the animation parameterization would help hide network lag too).
The timing code will need to be modified for each platform, but it's a small, localized change (though some systems make extremely accurate timing difficult; Windows, Mac and Linux seem OK).
Variable frame rates allow for maximum performance. Fixed frame rates allow for consistent performance but will never reach the maximum on all systems (that seems to be a show-stopper for any serious game).
If you are writing a networked 3D game where performance matters I'd have to say, bite the bullet and implement variable frame rates.
If it's a 2D puzzle game you can probably get away with a fixed frame rate, maybe slightly parameterized for super-slow computers and next year's models.
One option that I, as a user, would like to see more often is dynamically changing the level of detail (in the broad sense, not just the technical sense) when framerates vary outside of a certain envelope. If you are rendering at 5 FPS, turn off bump-mapping. If you are rendering at 90 FPS, increase the bells and whistles a bit and give the user some prettier images to spend their CPU and GPU on.
If done right, the user should get the best experience out of the game without having to go into the settings screen and tweak things themselves, and you, as a level designer, would have to worry less about keeping the polygon count the same across different scenes.
Of course, I say this as a user of games, and not a serious one at that -- I've never attempted to write a nontrivial game.
The main problem I've encountered with variable length frame times is floating point precision, and variable frame times can surprise you in how they bite you.
If, for example, you're adding the frame time * velocity to a position, and frame time gets very small, and position is largish, your objects can slow down or stop moving because all your delta was lost due to precision. You can compensate for this using a separate error accumulator, but it's a pain.
Having fixed frame times (or at least a lower bound on frame length) allows you to control how much FP error you need to take into account.
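For what it's worth, the "separate error accumulator" mentioned above is essentially Kahan (compensated) summation. A minimal C++ sketch, with names of my own choosing:

// Carry the rounding error from each position update into the next one,
// so tiny per-frame deltas are not silently dropped.
struct CompensatedPosition {
    float position = 0.0f;
    float error = 0.0f;   // running compensation term

    void advance(float velocity, float dt)
    {
        float delta = velocity * dt - error;   // re-apply previously lost bits
        float next = position + delta;         // big + small: low bits may drop
        error = (next - position) - delta;     // measure what was actually lost
        position = next;
    }
};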
My experience is fairly limited to somewhat simple games (developed with SDL and C++), but I have found it quite easy just to implement a static frame rate. Are you working on 2D or 3D games? I would assume that more complex 3D environments benefit more from a variable frame rate, and that the difficulty is greater.