Amplitude of Audio Tracks - Qt

I want to develop an audio editor using Qt.
For this, I need to plot a waveform of the music track, which I think should be a plot of the peak amplitude of the sound versus time (please correct me if I am wrong).
Currently, I have been using Phonon::AudioOutput class object as an audio sink and connected it with my Phonon::MediaObject class object to play the audio file.
Now, to draw the waveform I need to know the amplitude of the audio track every second (or so) from this AudioOutput object, so that I can draw a line (using QPainter) whose length is proportional to the amplitude at different times and thereby obtain my waveform.
So, please help me with how to obtain the amplitude of an audio track at different times.
Secondly, is this the correct way to plot the waveform of an audio track: plotting the amplitude of the sound against time by drawing lines with a QPainter object on a widget at different times?
Thanks.

There is code which does both of the things you ask about (calculating peak amplitude and plotting audio waveforms) in the Spectrum Analyzer example which ships with Qt (in the demos/spectrum directory).
Screenshot of the Spectrum Analyzer demo running on Symbian: http://labs.trolltech.com/blogs/wp-content/uploads/2010/05/spectrum.png
This demo also calculates and displays a frequency spectrum. As another commenter points out, this is distinct from a waveform plot: the spectrum is a plot of amplitude against frequency, whereas the waveform plots amplitude against time.
The demo uses QtMultimedia rather than Phonon to capture and render audio. If you are only interested in playing audio and don't need to record it, Phonon may be sufficient, but be aware that streaming support (i.e. Phonon::MediaSource(QIODevice *)) is not available on all platforms. QAudioInput and QAudioOutput, on the other hand, are well supported, at least for PCM audio data, on all the main platforms targeted by Qt.
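For illustration, here is a minimal sketch (mine, not code from the demo) of extracting the peak amplitude from one buffer of audio, assuming 16-bit signed PCM samples in host byte order:

```cpp
#include <QByteArray>
#include <QtGlobal>

// Peak amplitude of a buffer of 16-bit signed PCM samples,
// normalized to the range [0.0, 1.0].
qreal peakAmplitude(const QByteArray &buffer)
{
    const qint16 *samples =
        reinterpret_cast<const qint16 *>(buffer.constData());
    const int count = buffer.size() / int(sizeof(qint16));

    qint32 peak = 0;
    for (int i = 0; i < count; ++i)
        peak = qMax(peak, qAbs(qint32(samples[i])));  // qint32 avoids overflow at -32768

    return qreal(peak) / 32768.0;
}
```

Your widget's paintEvent could then draw, for each time slice, a vertical line whose height is peakAmplitude(buffer) times the widget height.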

Related

Labview 13 - Waveform graph not accepting all data

I am having trouble plotting three different sets of data on a waveform graph. The waveform chart has no problem accepting and displaying all three sets of data. However, I need a history of the data that I can export to an Excel document and examine.
The circuit is setup as follows:
An NI DAQ 6001 takes a temperature reading from an LM35 that is measuring a brass block. Separate circuitry drives a current through a Peltier device to maintain a specific temperature on this brass block. It is fundamentally PID control, allowing an operator to choose a temperature at which the brass block is held. To tune the system properly I need to make a set of step changes, record the data, and be able to graph it at a later date to determine characteristics such as linearity/non-linearity, oscillation, and stability.
Unfortunately, I do not know how to upload my program, but I have attached a screenshot.
It looks like you're wiring a 1D array directly to the waveform graph. You should combine the waveforms with the Build Array function to form a 2D array, which will then display as separate plots.
A proper way of displaying waveforms in a graph includes the time component. You would need to build a waveform (Block Diagram --> Functions Palette --> Programming --> Waveform --> Build Waveform) with a start time and a delta-t for each 1D array. Then you can bundle these waveforms into a 1D array to draw multiple plots.
Sharing your VI can help solve this quickly. Just select your whole block diagram with Ctrl+A and save it as a VI snippet using Edit --> Create VI Snippet from Selection, then attach the generated image to this Stack Overflow question.

How to avoid strange structure artifacts in scaled images?

I create a big image stitched out of many single microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry, and they contain strange structural artifacts such as askew lines (not the rectangles; those are due to imperfect stitching).
If I open any particular tile at full size, it is not blurry and the artifacts are hardly observable. (Note that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile using QImage::scaled and copying all of them to the corresponding regions in the big image. I'm not using OpenCV's stitching.
I assume this happens because of the image content, because most of the overview images are fine.
The question is: how can I prevent such barely observable artifacts from becoming clearly visible after scaling? Is there some means of doing so in OpenCV or QImage?
Is there an algorithm to find out whether the image content could lead to such an effect for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clean? Do you have electrical components that interfere with the camera connection?
If you sum image frames taken of a uniform material (or of a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to filter out the noise, but considering that the image is rotated you will lose precision, and you'll be cutting off components of the real signal too. The following empirical method will reduce the noise in your particular case significantly:
1. ground_output: a composite image holding the per-pixel sum of >10 frames (more is better) taken over a uniform material (e.g. an excited slab of phosphorus).
2. ground_input: the average (or sqrt of the per-pixel sum of squares) of ground_output.
3. calib_image: ground_input divided (per pixel) by ground_output. Saved for the session, or persisted to a file (important: ensure no lossy compression such as JPEG!).
4. work_input: the images to work on.
5. work_output = work_input multiplied (per pixel) by calib_image: images calibrated for systematic noise.
If you can't produce a perfect ground target, such as by having a uniform material on hand, do not worry too much: if you move any material uniformly (or randomly) for enough time, it will act as a uniform material in this case (think of a blurred photo).
This method has the added advantage of calibrating out the solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun, you can always fit the calibration function to something more complex than a zero-intercept line (steps 3 and 5).
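As a sketch of steps 1-3 and 5 in C++ with OpenCV (function and variable names are mine; this assumes 8-bit grayscale frames already loaded as cv::Mat):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Steps 1-3: build the per-pixel calibration image from frames taken
// over a (roughly) uniform target.
cv::Mat buildCalibImage(const std::vector<cv::Mat> &groundFrames)
{
    // Step 1: per-pixel sum of the ground frames (float, to avoid clipping).
    cv::Mat groundOutput = cv::Mat::zeros(groundFrames[0].size(), CV_32F);
    for (const cv::Mat &f : groundFrames) {
        cv::Mat f32;
        f.convertTo(f32, CV_32F);
        groundOutput += f32;
    }

    // Step 2: a single reference level - here, the mean of the summed image.
    const double groundInput = cv::mean(groundOutput)[0];

    // Step 3: calib = reference / per-pixel response. Persist losslessly
    // (PNG/TIFF), never JPEG.
    cv::Mat calibImage;
    cv::divide(groundInput, groundOutput, calibImage);
    return calibImage;
}

// Step 5: the per-pixel multiply removes the systematic (fixed-pattern) noise.
cv::Mat applyCalibration(const cv::Mat &workInput, const cv::Mat &calibImage)
{
    cv::Mat work32, workOutput;
    workInput.convertTo(work32, CV_32F);
    cv::multiply(work32, calibImage, workOutput);
    workOutput.convertTo(workOutput, CV_8U);  // back to the displayable range
    return workOutput;
}
```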
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
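On the Qt side specifically, note that QImage::scaled defaults to Qt::FastTransformation (nearest-neighbour sampling), which can turn fine, barely visible structure into strong aliasing patterns when downscaling. A quick check (a sketch, with arbitrary file names) is to compare the two filter modes:

```cpp
#include <QImage>

// Scale one tile with both of Qt's filter modes and save the results
// so the outputs can be compared for aliasing/moire patterns.
void scaleBothWays(const QImage &tile, const QSize &target)
{
    QImage fast = tile.scaled(target, Qt::KeepAspectRatio,
                              Qt::FastTransformation);     // the default mode
    QImage smooth = tile.scaled(target, Qt::KeepAspectRatio,
                                Qt::SmoothTransformation); // filtered scaling

    fast.save("tile_fast.png");
    smooth.save("tile_smooth.png");
}
```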
The askew lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference then you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or fit something to try to filter the noise (baluns for example).

Implementing sliding realtime 2D plot in Qt

I am getting streaming measurement data from an ultrasonic device moving inside a pipeline, and I want to make a sliding/realtime plot of these measurements. The Y axis would represent a gradient of the 360 degrees around the pipe, and the X axis would represent the length-wise position in millimeters. In other words, the X axis will update and move at the same rate as the scanner while new data is arriving (approx 40Hz). The value at each (x,y) coordinate represents one measurement, which should be mapped to a color in a colormap.
I am new to graphics (systems & backend guy), and I have been looking at QImage, Qwt and QCustomPlot, but none of them seem to solve the problem straightforwardly without having to manually build a 2D matrix, draw it into a QImage, and update and shift the coordinates of each data point and redraw to move/scroll it. QCustomPlot does this very nicely for graphs, but I don't see how it can be applied to its colormaps.
Any hints to frameworks or packages that provide primitives (or widgets) for this kind of plot would be much welcomed.
This can be done with Qwt. The trick is creating a wrapper around the series data that triggers a replot every time you add a data point. If you want to get fancy you can add a timer that removes old data from the series and triggers another replot.
See the CPU, oscilloscope, and realtime examples that come with the Qwt source code. They implement this trick.
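As a minimal sketch of that trick (the class and member names are mine, not from the examples), a wrapper can own the sample window, append each new point, drop the oldest, and replot:

```cpp
#include <QObject>
#include <QVector>
#include <QPointF>
#include <qwt_plot.h>
#include <qwt_plot_curve.h>

// Sliding-window curve: every appended sample shifts the visible
// window and triggers a replot.
class SlidingCurve : public QObject
{
    Q_OBJECT
public:
    SlidingCurve(QwtPlot *plot, int windowSize)
        : m_plot(plot), m_windowSize(windowSize)
    {
        m_curve = new QwtPlotCurve("measurement");
        m_curve->attach(m_plot);
    }

public slots:
    void appendSample(double x, double y)
    {
        m_points.append(QPointF(x, y));
        if (m_points.size() > m_windowSize)
            m_points.removeFirst();      // drop the oldest point

        m_curve->setSamples(m_points);   // copy the window into the curve
        m_plot->setAxisScale(QwtPlot::xBottom,
                             m_points.first().x(), m_points.last().x());
        m_plot->replot();                // redraw with the shifted window
    }

private:
    QwtPlot *m_plot;
    QwtPlotCurve *m_curve;
    QVector<QPointF> m_points;
    int m_windowSize;
};
```

For the colormap variant asked about, the same pattern should apply with QwtPlotSpectrogram backed by a QwtRasterData subclass: update the raster data with each incoming column of measurements and call replot().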

Is there an easy (and not too slow) way to compare two images in Qt/QML to detect motion

I would like to implement a motion-detecting camera in Qt/QML for the Nokia N9. I hoped that there would be some built-in methods for computing image differences, but I can't find any in the Qt documentation.
My first thoughts were to downscale two consecutive images, convert to one bit per pixel, compute XOR, and then count the black and white pixels.
Or is there an easy way of using a library from somewhere else to achieve the same end?
Edit:
I've just found some example code on the Qt developer network that looks promising:
Image Composition Example.
To compare images, Qt has QImage::operator==(const QImage &). But I don't think it will work for motion detection.
But this may help: Python Motion Detection Library + Demo.
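Here is a minimal sketch of the downscale-and-diff approach from the question (the threshold and working size are arbitrary; QImage::Format_Grayscale8 requires Qt 5.5, so on older Qt you would call qGray() per pixel instead):

```cpp
#include <QImage>
#include <QtGlobal>

// Fraction of pixels whose gray level changed by more than `threshold`
// between two frames, after downscaling to damp sensor noise.
double motionFraction(const QImage &a, const QImage &b, int threshold = 32)
{
    const QSize workSize(64, 48);  // arbitrary working size
    QImage ga = a.scaled(workSize).convertToFormat(QImage::Format_Grayscale8);
    QImage gb = b.scaled(workSize).convertToFormat(QImage::Format_Grayscale8);

    int changed = 0;
    for (int y = 0; y < ga.height(); ++y) {
        const uchar *ra = ga.constScanLine(y);
        const uchar *rb = gb.constScanLine(y);
        for (int x = 0; x < ga.width(); ++x)
            if (qAbs(int(ra[x]) - int(rb[x])) > threshold)
                ++changed;
    }
    return double(changed) / (ga.width() * ga.height());
}
```

Comparing against a per-pixel threshold (rather than a strict XOR of 1-bit images) makes the detector much less sensitive to sensor noise and small lighting changes.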

Generating 3D TV stereoscopic output programmatically

Do you know what would be the best approach to generating 3D output from software for one of these new "3D ready" televisions? Our application has some nice 3D visualizations, and we want them to look good.
Also, how feasible is it to generate such output from a Flash (Flex) app?
I believe that the gaming and 3DTV industries have paved the way for you. As long as your app already outputs 3D visualizations, it may just be a matter of installing a driver. You can get started with this NVIDIA 3D Stereo User’s Guide, but I believe there's tons of other stuff out there if you look.
See also the answers to this question.
3D televisions can display 3D output only for material produced in 3D. This means frames intended for stereoscopic display, not just a two-dimensional projection of a 3D scene.
Stereoscopy is produced by generating two completely separate images per frame (one for each eye), in which the foreground objects are offset to simulate depth. You cannot take a 2D image and make it into a 3D image; the source frames must be produced as 3D frames from the beginning.
More information:
http://en.wikipedia.org/wiki/3D_television
http://en.wikipedia.org/wiki/Stereoscopy
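As a sketch of what "two separate images per frame" means for a renderer (plain C++; the names and the simple x-axis offset are illustrative, and production code would build full asymmetric projection matrices):

```cpp
struct Camera {
    float posX, posY, posZ;  // eye position in world space
    float frustumShift;      // horizontal shift of the projection window
};

// Derive the left/right eye cameras for one stereo frame: each eye is
// offset by half the eye separation, and the projection window is shifted
// the opposite way so both frustums converge on the focal plane.
void stereoCameras(const Camera &center, float eyeSeparation,
                   float focalDistance, float nearPlane,
                   Camera &left, Camera &right)
{
    const float halfSep = eyeSeparation * 0.5f;
    // Off-axis (asymmetric frustum) shift measured at the near plane.
    const float shift = halfSep * nearPlane / focalDistance;

    left = center;
    left.posX -= halfSep;
    left.frustumShift = +shift;

    right = center;
    right.posX += halfSep;
    right.frustumShift = -shift;
}
```

The resulting pair then has to be packed into whatever layout the TV accepts (for example side-by-side or top-bottom), either by your own code or by a stereo driver such as the NVIDIA one mentioned above.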
