I would like to generate a digital (square) signal on my sound card. It works great at high frequencies. But since a sound card can't output DC, at lower frequencies the flat parts of the square wave slowly decay towards 0.
This is what the sound card's high-pass filter does to my square wave:
http://www.electronics-tutorials.ws/filter/fil39.gif
What is the mathematical function of a signal that, when passed through this high-pass filter, comes out square?
Ideally, the solution is demonstrated in gnuplot.
The sound card cuts out the low frequencies in the waveform, so you need to boost those by some amount in what you pass to it.
A square wave contains many frequencies (see the section on the Fourier series here). I suspect the easiest method of generating a corrected square wave is to sum a Fourier series, boosting the amplitudes of the low frequency components to compensate for the high-pass filter in the sound card.
To work out how much to boost each low-frequency component, you first need to measure the response of the high-pass filter in your sound card: output sine waves of various frequencies but constant amplitude, and for each frequency measure the ratio r(f) of the output amplitude to the input amplitude. Then an approximation to a square-wave output can be generated by multiplying the amplitude of each frequency component f in the square wave's Fourier series by 1/r(f) (the 'inverse filter').
It's possible that the high-pass filter in the sound card also adjusts the phase of the signal. In that case you might be better off modelling the high-pass as an RC filter (which is probably how the sound card does the filtering) and inverting both its amplitude and phase response.
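As a rough sketch of the idea (not using your measured r(f), but assuming a first-order RC high-pass with a guessed cutoff frequency fc), here is how the pre-distorted waveform could be built by summing the boosted Fourier series; the result is written to a two-column text file so it can be plotted with gnuplot, as the question asks:

```python
import numpy as np

fs = 44100          # sample rate (Hz)
f0 = 20             # square-wave fundamental (Hz)
fc = 20             # assumed high-pass cutoff of the sound card (Hz) -- measure yours
n_harmonics = 200   # number of odd harmonics to sum

t = np.arange(fs) / fs                       # one second of samples

def highpass_response(f, fc):
    """First-order RC high-pass: H(f) = j(f/fc) / (1 + j(f/fc))."""
    jw = 1j * f / fc
    return jw / (1 + jw)

# Sum the square wave's Fourier series, dividing each harmonic by the
# filter response so that the *filtered* output comes out square.
x = np.zeros_like(t, dtype=complex)
for k in range(1, 2 * n_harmonics, 2):       # odd harmonics only
    f = k * f0
    if f >= fs / 2:
        break
    amp = 4 / (np.pi * k)                    # square-wave Fourier coefficient
    x += amp / highpass_response(f, fc) * np.exp(2j * np.pi * f * t)

x = x.real
x /= np.max(np.abs(x))                       # normalise before sending to the DAC
np.savetxt("predistorted.dat", np.column_stack([t, x]))
```

Plotting predistorted.dat with lines in gnuplot shows the boosted waveform; it should come out approximately square after the (assumed) RC high-pass, with the real r(f) of your card substituted once you have measured it.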
Some of the previous answers have correctly noted that the high-pass filter (the AC coupling capacitor on the sound card's output) is what prevents the low-frequency square waves from "staying on", so they decay quickly.
There is no way to completely beat this filter from software, or it wouldn't be there, now would it? If you can live with lower-amplitude square waves at the lower frequencies, you can approximate them by sending out something like a triangle wave. From a transient-analysis perspective, the theory of operation is that while the coupling capacitor is discharging (blocking DC), you keep increasing its bias voltage to counteract that discharge, thus maintaining the square wave's plateau for a while. Of course you eventually run out of PCM headroom (you can't keep increasing the voltage indefinitely), so a 24-bit card is better in this respect than a 16-bit one, as it gives you more resolution.

Another, more abstract way to think of this is that the RC filter acts as a differentiator, so to get the flat peaks of the square wave at its output you need to feed it the constant slopes of a triangle wave at its input. But this is an idealized behaviour.
As a quick proof of concept, here is what a 60 Hz, ±1 V triangle signal becomes after passing through a 1 µF coupling cap into a 1 kΩ load; it approximates a ±200 mV square wave.
Note that the impedance/resistance of the load matters quite a bit here; if you lower it to, say, 100 Ω, the output amplitude decreases dramatically. This is how coupling caps block DC on speakers/headphones, because those devices have much lower impedance than 1 kΩ.
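If you want to reproduce this without a circuit simulator, here is a minimal numerical check of the same transient analysis (assuming an ideal first-order RC high-pass with the 1 µF / 1 kΩ values above):

```python
import numpy as np

fs = 1_000_000                 # simulation step rate (Hz)
f0 = 60                        # triangle fundamental (Hz)
R, C = 1e3, 1e-6               # 1 kOhm load, 1 uF coupling cap (values from above)

t = np.arange(int(5 * fs / f0)) / fs                          # five periods
tri = 2 * np.abs(2 * (t * f0 - np.floor(t * f0 + 0.5))) - 1   # +/-1 V triangle

# Discrete first-order RC high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])
a = R * C / (R * C + 1 / fs)
out = np.zeros_like(tri)
for n in range(1, len(tri)):
    out[n] = a * (out[n - 1] + tri[n] - tri[n - 1])

# The plateau settles near slope * R * C = 240 V/s * 1 ms = 0.24 V,
# i.e. roughly the few-hundred-mV square wave described above.
print("output peak:", out[len(out) // 2:].max())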
If I can find a bit more time later today, I'll add a better simulation, with a better shaped stimulus instead of the simple triangle wave, but I can't get that from your average web-based circuit simulator software...
Well, if you're lucky you can get one of those $0.99 USB sound cards where the manufacturer has cut corners so much that they didn't install coupling caps. https://www.youtube.com/watch?v=4GNRzwfP7RE
Unfortunately, you cannot get a good approximation of a square wave. Sound hardware is intentionally slew-rate limited and cannot produce a falling or rising edge faster than its intended frequency range allows.
You can approximate a badly deformed square wave by alternating a high and low PCM code (+max, -max) every N samples.
You can't actually produce a true square wave, because it has infinite bandwidth. You can produce a reasonable approximation of a square wave though, at frequencies between say 10 Hz and 1 kHz (below 10 Hz you may have problems with the analogue part of your sound card etc, and above around 1 kHz the approximation will become increasingly inaccurate, since you can only reproduce a relatively small number of harmonics).
To generate the waveform, the sample values just alternate between ± some value, e.g. full scale, which would be -32767 and +32767 for a 16-bit PCM stream. The frequency is determined by the period of these samples. E.g. for a 44.1 kHz sample rate, if you have, say, 100 samples of -32767 and then 100 samples of +32767, i.e. a period of 200 samples, the fundamental frequency of your square wave will be 44.1 kHz / 200 = 220.5 Hz.
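A minimal sketch of that sample generation (the playback API is up to you; numpy is only used here for convenience):

```python
import numpy as np

fs = 44100                     # sample rate (Hz)
half_period = 100              # samples per half period -> 44100 / 200 = 220.5 Hz
n_periods = 220                # about one second of audio

one_period = np.concatenate([
    np.full(half_period, -32767, dtype=np.int16),
    np.full(half_period, +32767, dtype=np.int16),
])
buffer = np.tile(one_period, n_periods)    # hand this int16 buffer to your audio API
```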
I found an application that I built on:
http://www.blogger.com/blogger.g?blogID=999906212197085612#editor/target=post;postID=7722571737880350755
You can generate the format you want and even the pattern you need.
The code uses SLIMDX.
Maybe somebody has some links about determining distance where the scanners are more than 15 meters apart?
All the articles I found describe how to determine the distance, but for example in a 5 m × 5 m room with 4 scanners, or even closer together. I want to know what accuracy I can expect when the scanners are further apart.
Unfortunately beacon distance estimates are wildly inaccurate at distances over a few meters. It is nearly impossible to discern between beacons 15 meters and 25 meters away.
Why? Because radio signals get weaker with distance, and at distances near the maximum range of Bluetooth beacons (typically 20-40 meters in home/school/office indoor environments with walls and obstructions) the signal level approaches the noise floor. There is very little absolute difference between the signal level of a beacon at 25 meters and one at 15 meters. By looking for such tiny differences in signal you are mostly seeing error and non-distance influences on the measurement.
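To put rough numbers on that, here is the idealised log-distance path-loss model; the reference level and path-loss exponent below are assumptions, not measurements:

```python
import math

def rssi_at(distance_m, rssi_at_1m=-60.0, path_loss_exponent=2.5):
    """Idealised log-distance path-loss model (no fading, no walls)."""
    return rssi_at_1m - 10 * path_loss_exponent * math.log10(distance_m)

print(rssi_at(15))   # about -89 dBm
print(rssi_at(25))   # about -95 dBm
# Only ~6 dB apart, which is easily swamped by multipath fading and body
# attenuation that routinely swing indoor measurements by 10 dB or more.
```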
The best you can do with a distance calculation is determine that the beacon is likely far away (e.g. > 10 meters). Once you know it is far away, determining how far is beyond the limits of the technology.
My BH1750 I2C light sensor is giving me a reading in lux, but I need a lumen value.
From what I read I just multiply the lux-reading by the surface area of the sensor to get my lumen-value.
But from the datasheet on page 6 I'm getting a very small surface area of 0.25mm by 0.3mm. That doesn't seem right. What am I doing wrong?
I'm getting a reading of about 8,000 lx on this cloudy afternoon which should be somewhere around 600 lumens.
You seem to have a wrong understanding of photometric quantities. Let me try to get this straight with an analogy: consider a water fountain that emits water. This fountain will stand for our light source.
The total amount of water that the fountain emits can be measured as m³/s (cubic meters per second). This is a characteristic of the fountain, which could be called the water power. Going back to photometry, this power is equivalent to the luminous flux, which is measured in lumen. Therefore, the luminous flux describes how much light a light source emits. You can restrict this definition to a given set of directions (e.g., measure the luminous flux of a light bulb only in a downward cone). This will give you the total amount of light that travels in that cone. For the fountain example, this can be done equivalently. Just measure the water emitted into a given cone. The unit is still m³/s or lumen.
Now, let us not only consider the fountain (as the light source) but also the surrounding surfaces. We can pick an arbitrary point on the surrounding surface (a point with no area) and measure how much water/light arrives at this point. This might be a bit hard to imagine because it is a differential quantity. You can approximate this value by measuring the amount of water that arrives in a small neighborhood of the point and dividing by the area of that neighborhood. This is actually what your sensor is doing. The resulting unit is m³/s/m² (cubic meters per second per square meter) or for the photometric case lm / m² (lumen per square meter), which is the definition of lux (unit of illuminance). Therefore, different points can have different illuminance. Especially, points far away from the light source usually have a smaller illuminance. You can calculate the total luminous flux by integrating the illuminance of the entire surface area. This is equivalent to measuring the amount of water at very many small surface pieces around the fountain (i.e. illuminance multiplied by area) and summing them up.
With this background knowledge, we see that it does not make sense to convert lux to lumen. They measure completely different things. Intuitively, illuminance tells you how much light shines at a given point, which is usually what you want. What you did (by multiplying the illuminance by the sensor area) is calculate the total luminous flux that arrives at the sensor (the total amount of water at a given surface patch). Naturally, this measure grows as your sensor gets bigger (more light arrives at a larger surface; or equivalently, as you consider bigger and bigger patches around the fountain, you collect more and more water). Therefore, it also does not make sense to state that 8 klx should be 600 lm.
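To put numbers on it for your case (using the 0.25 mm × 0.3 mm active-area figure you quoted from the datasheet):

```python
illuminance_lx = 8000                  # measured illuminance (lm per square metre)
sensor_area_m2 = 0.25e-3 * 0.3e-3      # 0.25 mm x 0.3 mm active area => 7.5e-8 m^2

flux_on_sensor_lm = illuminance_lx * sensor_area_m2
print(flux_on_sensor_lm)               # 0.0006 lm, i.e. about 0.6 millilumen
# This is only the flux intercepted by the tiny sensor itself; it says nothing
# about how many lumens the sky or a lamp is emitting.
```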
I'm building an app (in Qt) that includes a few graphs which are dynamic (meaning they refresh to new values rapidly) and get their values from a background thread.
I want the first graph, whose details are important, to refresh at one speed (100 Hz) and the 4 other graphs to refresh at a lower speed (10 Hz).
The problem is that when I refresh them all at the same rate (100 Hz), the app can't handle it and the computer freezes; but when the refresh rates are different, the first signal gets artifacts on it (compared to, for example, running them all at 10 Hz).
The artifacts are in the form of waves (instead of a straight line, for example, I get a "snake").
Any suggestions regarding why it has artifacts (rendering limits I guess) and what can be done about it?
I'm writing this as an answer even if this doesn't quite answer your question, because this is too long for a comment.
When the goal is to draw smooth moving graphics, the basic unit of time is the frame. At a 60 Hz drawing rate, a frame is 16.67 ms. The drawing rate needs to match the monitor's refresh rate; drawing faster than the monitor is totally unnecessary.
When drawing graphs, the movement speed of the graph must be kept constant. If you wonder why, walk fast for 1 second, then slow for 1 second, fast for 1 second and so on. That doesn't look smooth.
Let's say the data sample rate is 60 Hz and each sample is represented as one pixel. In each frame all new samples (in this case 1 sample) are drawn and the graph moves one pixel. The movement speed is one pixel per frame, every frame. The speed is constant, and the graph looks very smooth.
But if the data sample rate is 100 Hz, then during one second 2 pixels are drawn in 40 of the frames and 1 pixel is drawn in the other 20. Now the graph's movement speed is not constant anymore; it varies like 2, 2, 1, 2, 2, 1, ... pixels per frame. That looks bad. You might think the frame time is so small (16.67 ms) that you can't see this kind of variation, but it is very clearly visible. Even single frames at a different speed can be seen.
So how can data with a 100 Hz sample rate be drawn smoothly? By keeping the speed constant; in this case it would be 1.67 (100/60) pixels per frame. That of course requires subpixel drawing. So in every frame the graph moves by 1.67 pixels. If some samples are missing at the time of drawing, they are simply not drawn yet. In practice that will happen quite often; for example, USB data-acquisition cards can deliver their samples in bursts.
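As an illustration of the idea (render_graph is a placeholder for whatever plotting call you use; this is not Qt-specific code):

```python
# Conceptual sketch: advance the graph by a constant (fractional) number of
# pixels per frame, independent of how many samples happened to arrive.
# render_graph() is a placeholder for your actual plotting call.

sample_rate = 100.0    # Hz, incoming data
frame_rate = 60.0      # Hz, monitor refresh
px_per_sample = 1.0

speed = sample_rate / frame_rate * px_per_sample   # 1.67 px per frame, constant
offset = 0.0

def on_vsync(samples_so_far):
    """Call once per displayed frame."""
    global offset
    offset += speed                       # always the same advance -> smooth motion
    # Draw with an antialiased subpixel offset; samples that haven't arrived
    # yet are simply drawn on a later frame.
    render_graph(samples_so_far, x_offset=offset)
```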
What if the graph drawing is so slow that it cannot be done at 60 Hz? Then the next best option is to draw at 30 Hz. Then you are drawing one frame for every 2 images the monitor draws. The 3rd best option is 20 Hz (one frame for every 3 images the monitor draws), then 15 Hz (one frame for every 4 images) and so on. Drawing at 30 Hz does not look as smooth as drawing at 60 Hz, but the speed can still be kept constant and it looks better than drawing faster with varying speed.
In your case, the drawing rate of 20 Hz would probably be quite good. In each frame there would be 5 new data samples (if you can get the samples at a constant 100 Hz).
I want to estimate the location of a user from the surrounding cell towers. For each tower I have a location and a signal strength. Right now I use a simple mean of the coordinates, but it is not very accurate (the user is not necessarily between the towers).
I guess the solution is to draw a circle around each tower (the weaker the signal, the larger the circle) and then compute the intersection of the circles. I usually don't have more than 3 cell towers.
Any idea how? I found the Delaunay triangulation method, but I don't think it applies here.
Thank you
You need to convert each signal strength to an estimate of distance and then use each distance (as the radius of a circle around its tower) to trilaterate. You'll need at least three transmitters to resolve the ambiguity, and accuracy will not be great, since signal strength is only very approximately related to distance and is affected by numerous external factors in the real world. Note that in ideal conditions signal strength follows an inverse-square law with distance.
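Once you have distance estimates, the circle-intersection step can be done with a small least-squares solve. A sketch, assuming planar coordinates and made-up tower positions and distances:

```python
import numpy as np

def trilaterate(points, distances):
    """Least-squares position from 3+ (x, y) anchors and estimated distances.

    Subtracting the first circle's equation from the others linearises the
    system, which then reduces to a small least-squares solve."""
    (x0, y0), d0 = points[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(points[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A, dtype=float),
                              np.array(b, dtype=float), rcond=None)
    return pos

# Made-up tower positions (metres, local planar coordinates) and
# distance estimates derived from signal strength.
towers = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1500.0)]
dists = [600.0, 800.0, 1100.0]
print(trilaterate(towers, dists))
```

With only three towers and noisy distances the solution will be rough, but it is still better than averaging the tower coordinates.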
I'm a beginner with FFT concepts, so what I understand is that if I put in 1024 samples, I'll get 513 bins back, ranging from 0 Hz to 22050 Hz (in the case of a 44100 Hz sample rate). Using KISS FFT in Cinder, the getBinSize function returns the expected 513 values for an input of 1024 samples. What I don't understand is why duplicate peaks show up. Running a test audio sample that sweeps (in order) from 20 Hz to 22000 Hz, I see two peaks the entire time. It looks something like:
_____|________|_____
As the audio plays, the peaks move towards each other, so the second peak really does seem to be a mirrored duplicate of the first. Every example I've been through just goes ahead and plots all 513 values and doesn't seem to have this mirroring issue. I'm not sure what I'm missing.
OK, after reading up on this I found the explanation. The mirroring appears because my input is real-valued (a real FFT). The FFT in general operates on complex numbers; with the imaginary part effectively 0, the spectrum of a real signal is conjugate-symmetric, which shows up as a mirror image around the middle (technically, the symmetry is about bins 0 and N/2).
Here is a detailed discussion: http://www.edaboard.com/thread144315.html
(the page is no longer available, but there is a copy on archive.org)
Also read pages 238-242 of this book (Chapter 12). It's fantastic, so buy it. I think there is a free PDF version on the author's website: http://www.dspguide.com/
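A quick way to convince yourself of the symmetry (shown here with numpy rather than KISS FFT, but the behaviour for real input is the same):

```python
import numpy as np

n, fs = 1024, 44100
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000 * t)            # a real-valued 1 kHz test tone

full = np.abs(np.fft.fft(x))                # 1024 bins: upper half mirrors the lower
half = np.abs(np.fft.rfft(x))               # 513 bins: only the unique half

print(np.allclose(full[1:512], full[-1:-512:-1]))   # True -> conjugate symmetry
print(len(half))                                     # 513 == n/2 + 1
```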
You are possibly plotting the magnitude of all 1024 result bins of a 1024-point FFT, but the upper half is just a mirror image of the lower half (since real-only input to a complex FFT doesn't provide enough degrees of freedom to make the upper half unique).
The peaks will move towards each other when they are mirror images of each other about the center.
Another possibility is that your FFT was somehow only of length 512.