Qt QAudioOutput only playing through left ear while in stereo

What I currently have is an attempt to make a signal generator play in stereo. The format is accepted, but the audio only plays through the left ear. When I switch to mono it plays fine through both ears. What I want is control over which ear I am listening from: if I only want the left ear, I should hear audio only on the left, but I also want the ability to switch ears or use both. The method I am currently using is as follows.
QAudioFormat format;
format.setFrequency(44100);   // Qt 4 name; setSampleRate() in Qt 5
format.setChannels(2);        // Qt 4 name; setChannelCount() in Qt 5
format.setSampleSize(16);
format.setCodec("audio/pcm");
format.setByteOrder(QAudioFormat::LittleEndian);
format.setSampleType(QAudioFormat::SignedInt);
// This is just the format setup.
// I am using push mode for this program.
audio_outputStream = new QAudioOutput(format, this);
audio_outputDevice = audio_outputStream->start(); // returns a QIODevice*
// Now when I write the numbers from the signal generator I use this method:
QByteArray array;
array.append(integerValueThatIsConvertedToConstChar, SizeOfInt);
// And I write the data from the QByteArray to the QIODevice as such
// (start() returned a pointer, so use ->, and write the same number of
// bytes that was appended):
audio_outputDevice->write(array.data(), SizeOfInt);
// After which I remove the front of the QByteArray so the next number
// can be appended and written.
array.remove(0, SizeOfInt);
As stated above, these steps do work in mono with one channel, but with two channels the sound only plays on the left side. My goal is to control which channel I am writing to in push mode. I do not see why only one of the two channels is used; am I simply not writing to the second channel?
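For what it's worth, the usual cause of this symptom is frame interleaving: with setChannels(2), the device expects every frame to contain two 16-bit samples, left then right, so writing only one sample per frame puts everything on the left channel. A minimal sketch (hypothetical helper, assuming the 16-bit signed little-endian format above) of writing one interleaved frame with per-ear control:

// Write one stereo frame; a zero sample silences that ear.
// Assumes a little-endian host to match QAudioFormat::LittleEndian.
void writeStereoSample(QIODevice *out, qint16 sample, bool leftOn, bool rightOn)
{
    qint16 frame[2];
    frame[0] = leftOn  ? sample : 0;   // channel 0 = left ear
    frame[1] = rightOn ? sample : 0;   // channel 1 = right ear
    out->write(reinterpret_cast<const char *>(frame), sizeof(frame));
}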

Related

Why does Samsung TV IR sensor ignore repeat codes?

Hey there trusty SO community,
I'm working on a project for my new Samsung "The Frame" TV using an IR remote controller. I am trying to send a "repeating" command (sent while a button is held down). I have no problem sending single commands; I am able to control almost everything I need at this point.
Unfortunately, sending the "power" code on the Samsung Frame TV simply selects the "art" mode. Using the original RF remote, the power button has the same action: "art" mode vs TV mode. In order to turn off the TV with the original RF remote, one has to hold down the power button for a second or two.
So I assume that the IR remote interface would be the same.
I have tried sending all sorts of commands (left arrow, right arrow, volume+, etc...) in repeat mode, but the TV only ever responds to the first command and ignores the "repeat" signals. To be clear, the repeat code blocks are being sent, but for some reason ignored. I was able to validate the repeats are sent correctly using a separate sender and receiver.
I have a working prototype of a custom remote that uses this arduino library
I have also read through this SO thread which is incredibly helpful in understanding the protocol.
I wasn't able to find exact information on the protocol of a repeat command, perhaps there is a bug in the library?
Line of code that works for single commands:
IrSender.sendSamsung(0x0707, 0x02, 0); //address, command=pwr, number of repeats = 0
Line of code that sends the repeat code:
IrSender.sendSamsung(0x0707, 0x02, 10); //address, command=pwr, number of repeats = 10
Thanks in advance,
It turns out that versions 3.7.1 and earlier of the Arduino-IRremote library implement a "special" repeat-handling message. After comparing the repeat protocol against a Samsung-brand remote control, it was found that the repeat messages are actually identical to the single message.
An update was made to Arduino-IRremote in this ticket to fix the Samsung repeat messages.
A release has not yet been created as of this writing, but version 3.7.2 or greater should support Samsung correctly.
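Until a fixed release lands, a workaround consistent with that finding (a sketch only, untested) is to skip the library's repeat parameter and resend the full frame yourself:

// Send the same full Samsung frame N times, as a held button would.
// The inter-frame delay is an assumption; verify the spacing with a
// receiver against the original remote.
void sendSamsungHeld(uint16_t address, uint8_t command, uint8_t frames) {
    for (uint8_t i = 0; i < frames; i++) {
        IrSender.sendSamsung(address, command, 0); // one frame, no library repeats
        delay(110);
    }
}
// e.g. sendSamsungHeld(0x0707, 0x02, 10); // hold "power" for about a second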

Qt: API to write raw QAudioInput data to file just like QAudioRecorder

I'm trying to monitor an audio input and record the audio to a file, but only when a level threshold is exceeded. There seem to be two main options for recording in Qt: QAudioRecorder and QAudioInput.
Long story short: I'm trying to find the API that can take raw audio sample data read from QAudioInput and record it to a file just like QAudioRecorder does, but strangely it doesn't seem to exist. To give an example, the setup of the QAudioRecorder would be something like the following (except that instead of specifying an input device with setAudioInput() you would pass it sampled bytes):
QAudioEncoderSettings audioSettings;
QAudioRecorder *recorder = new QAudioRecorder;
audioSettings.setCodec("audio/PCM");
audioSettings.setQuality(QMultimedia::HighQuality);
recorder->setEncodingSettings(audioSettings);
recorder->setContainerFormat("wav");
recorder->setOutputLocation(QUrl::fromLocalFile("/tmp/test.wav"));
recorder->setAudioInput("default");
recorder->record();
I'm using QAudioInput because I need access to the raw samples. The problem with QAudioInput is that Qt does not seem to provide an easy way to take the raw samples I get out of it and pipe them into a file, encoding them along the way. QAudioRecorder does this nicely, but you can't feed raw samples into QAudioRecorder; you just tell it which device to record from and how you want it stored.
Note I tried using QAudioInput and QAudioRecorder together - QAudioInput for the raw access and QAudioRecorder whenever I need to record - but there are two main issues with that: A) only one of them can be reading a given device at a time; B) I want to record the data at and just before the threshold is exceeded, not just after it is exceeded.
I ended up using QAudioRecorder + QAudioProbe. There are a couple of limitations though:
Firstly, the attached QAudioProbe only works if the QAudioRecorder is actually recording, so I had to write a wrapper over QAudioRecorder that switches recording on and off by switching the output location between the actual file and /dev/null.
Second, as I stated, "I want to record the data at and just before the threshold is exceeded and not just after the threshold is exceeded". Well, I had to compromise on that. The probe is used to detect the recording condition, but there is no way to stuff the data from the probe back into the recorder. I suppose you could record to a buffer file in the idle state and somehow prepend part of that data, but the complexity wasn't worth it for me.
As an aside, there was another issue with QAudioRecorder that motivated the wrapper: I found QAudioRecorder::stop() sometimes hangs indefinitely. To get around this, I had to heap-allocate the recorder, and delete it and create a new one for each recording.
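A minimal sketch of the probe setup described above (assuming Qt 5 Multimedia; the Monitor class and its slot are hypothetical names):

// Setup (e.g. in a constructor). The probe only delivers buffers while
// the recorder is recording, hence the /dev/null "idle" output.
QAudioRecorder *recorder = new QAudioRecorder(this);
QAudioProbe *probe = new QAudioProbe(this);
if (probe->setSource(recorder)) {   // returns false if the backend can't probe
    connect(probe, &QAudioProbe::audioBufferProbed,
            this, &Monitor::onBufferProbed);
}
recorder->setOutputLocation(QUrl::fromLocalFile("/dev/null"));
recorder->record();

// Slot: inspect each buffer's level (assumes 16-bit signed samples).
void Monitor::onBufferProbed(const QAudioBuffer &buffer)
{
    const qint16 *data = buffer.constData<qint16>();
    int peak = 0;
    for (int i = 0; i < buffer.sampleCount(); ++i)
        peak = qMax(peak, qAbs(int(data[i])));
    // When peak crosses the threshold, restart the recorder pointed at
    // the real output file (the wrapper described above).
}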

Is there any way in Qt to get raw audio data from the mic?

I am trying to implement a pitch shifter in Qt. I need to get raw data from the mic, transform it somehow, and play it back. But I can't figure out how to get the raw data into a buffer and then play it back once it has been transformed. At the moment I'm doing something like the following, but this works without a buffer and I can't change the data before it is played.
QAudioFormat format;
format.setSampleRate(96000);
format.setChannelCount(1);
format.setSampleSize(32);
format.setCodec("audio/pcm");
format.setByteOrder(QAudioFormat::LittleEndian);
format.setSampleType(QAudioFormat::UnSignedInt);

audio = new QAudioInput(format, this);
connect(audio, SIGNAL(stateChanged(QAudio::State)), this, SLOT(handleStateChanged(QAudio::State)));
QIODevice* device = audio->start();
connect(device, SIGNAL(readyRead()), this, SLOT(process()));

QAudioOutput* output = new QAudioOutput(format, this);
output->start(device); // the output pulls straight from the input device, bypassing process()
Is there any way to do what I'm trying to do?
There is a QAudio class worth looking into.
Other options that come to mind are QIODevice with QDataStream.
This post may get you headed in the right direction.
I think the classes you're looking for are QAudioInput and QAudioOutput, both part of the Qt Multimedia module.
QAudioInput offers the ability to get a byte stream from the microphone, which you can use for whatever you need. Once you apply whatever transformations you need, you can push the result to the speakers using QAudioOutput. Both offer a few settings relating to buffer size, sample rate, etc., that you can tune to fit your specific needs.
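A minimal push-mode sketch of that flow (transformInPlace() is a hypothetical stub for the pitch-shift step; assumes a format both devices accept): read raw bytes from the input device, alter them, then push them to the output device.

// Open the input for reading and the output for writing (push mode),
// so the samples can be modified in between.
QAudioInput *input = new QAudioInput(format, this);
QAudioOutput *output = new QAudioOutput(format, this);
QIODevice *inDev = input->start();    // read captured PCM from here
QIODevice *outDev = output->start();  // write processed PCM here
connect(inDev, &QIODevice::readyRead, this, [=]() {
    QByteArray chunk = inDev->readAll();  // raw samples from the mic
    transformInPlace(chunk);              // hypothetical pitch-shift step
    outDev->write(chunk);                 // play the modified samples
});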
Here are some links you may find useful:
Qt Audio Overview
QAudioInput documentation
QAudioOutput documentation

How to use streaming audio data from microphone for ASR in Qt

I'm working on a speech recognition project and my program can recognize words from audio files. Now I need to work with the audio stream coming from the microphone. I'm using QAudio to get sound data from the mic, and QAudioInput has a start(QBuffer*) function to begin the process, which writes the data into a QBuffer (a QIODevice that wraps a QByteArray). When I'm not dealing with a continuous stream, I can stop recording from the mic any time I want, copy the whole data from the QBuffer into a QByteArray, and do whatever I want with it. But with a continuous stream the QBuffer's size grows over time and reaches about 100 MB in 15 minutes.
So I need some kind of circular buffer, but I can't figure out how to build one, especially with this start(QBuffer*) function. I also want to avoid cutting the streaming sound at a point where the speech continues.
What is the basic way to handle streaming audio data for speech recognition?
Is it possible to change the start(QBuffer*) function into start(QByteArray*) and make the function overwrite that QByteArray to build a circular buffer?
Thanks in advance
Boost offers a circular buffer:
http://www.boost.org/doc/libs/1_37_0/libs/circular_buffer/doc/circular_buffer.html#briefexample
It should meet what you need.
Alain
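A minimal sketch of that idea in push mode (assuming boost::circular_buffer; names and sizes are hypothetical): the ring keeps only the most recent audio, so memory stays bounded however long the stream runs.

#include <boost/circular_buffer.hpp>

// Keep roughly the last 30 s of 16-bit mono 16 kHz audio (~960 KB);
// size it to whatever window your recognizer needs.
boost::circular_buffer<char> ring(16000 * 2 * 30);

// Slot connected to readyRead() of the QIODevice returned by
// audioInput->start() (push mode, instead of start(&buffer)):
void Capture::onReadyRead()
{
    const QByteArray chunk = m_inputDevice->readAll();
    for (char b : chunk)
        ring.push_back(b);  // the oldest bytes fall off automatically
}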

Change recording file programmatically in DirectShow

I made a console application, using DirectShow, that records from a live source (currently a webcam, later a TV capture card), adds the current date and time as an overlay, and saves the audio and video as .asf.
Now I want the output file to change every 60 minutes without stopping the graph. I must not lose a single second of the live stream.
The graph is something like this one:
http://imageshack.us/photo/my-images/543/graphp.jpg/
I took a look at GMFBridge but I have some compiling problems with its examples.
I am wondering if there is a way to split what comes out of the overlay filter and the audio source, connect them to another ASF writer (paused), and then switch between the writers every 60 minutes.
The paused ASF writer's file name must change (pp.asf, pp2.asf, pp4.asf ...). Something like this:
http://imageshack.us/photo/my-images/546/graph1f.jpg/
with pp1 paused. I have found people on the internet saying that the ASF writer deletes the current file if the graph does not go into stop mode.
Well, I have a product (http://www.videophill.com) that does exactly what you described (it is used for broadcast compliance recording) - and I found that the only way to do it is this:
create a DShow graph that will be used only to capture the audio and video
then, at the end of the graph, insert SampleGrabber filters, both for audio and video
then, use IWMWriter to create and save the WMV file, using samples fetched from the SampleGrabber filters
when the time comes, close one IWMWriter and create another one.
That way, you won't lose a single frame when switching the output files.
Of course, there is also the question of queueing and storing the samples (when switching the writers) and of properly re-aligning the audio/video timestamps, but from my research that's the only 'normal' way to do it, and I have used it in practice.
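A sketch of the writer side of that scheme (Windows Media Format SDK; error handling omitted, file names hypothetical):

#include <wmsdk.h>

// Open a new IWMWriter on the given file, using the same A/V profile
// for every rotated file so the streams stay compatible.
IWMWriter *StartNewWriter(IWMProfile *profile, const WCHAR *path)
{
    IWMWriter *writer = NULL;
    WMCreateWriter(NULL, &writer);
    writer->SetProfile(profile);
    writer->SetOutputFilename(path);
    writer->BeginWriting();
    return writer;
}

// Every 60 minutes: open the next file first, swap the pointer the
// SampleGrabber callbacks write to (under a lock), then close the old
// writer, so no sample ever has nowhere to go:
//   IWMWriter *next = StartNewWriter(profile, L"capture_02.wmv");
//   ... swap the active writer pointer ...
//   old->EndWriting(); old->Release();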
The solution is to write a custom DShow filter with two input pins, in your case one for the audio stream and the other for the video stream. Inside that filter (it doesn't have to be inside from an architecture point of view, because you can also use callbacks, for example, and do the job somewhere else) you would create the ASF files. While switching files, A/V data would be stored in a cache (e.g. a big enough circular buffer). You can also watch and adjust A/V sync in that filter. For writing ASF files I would recommend the Windows Media Format SDK. You can also add output pins if you'd like to pass A/V data further for preview, parallel streaming, etc.
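A sketch of the switch-time cache mentioned above (hypothetical types; the filter boilerplate is omitted): samples are parked, timestamps intact, while the next ASF file is opened, then drained into it.

#include <deque>
#include <vector>

// One buffered media sample from either input pin.
struct CachedSample {
    LONGLONG startTime, stopTime;      // DirectShow 100-ns media times
    std::vector<unsigned char> bytes;  // copied sample payload
    bool isVideo;                      // which pin it arrived on
};

// Bound this by total bytes so a slow file open can't exhaust memory;
// drain it into the new ASF writer as soon as the writer is ready.
std::deque<CachedSample> switchCache;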
GMFBridge is a viable but complicated solution. A more direct approach I have implemented in the past is querying your ASF writer for the IWMWriterAdvanced2 interface and setting a custom sink. Through that interface you have methods to remove and add sinks on your ASF writer. The sink connected automatically will write to the file that you specified. One way to write wherever you want to is:
1.) remove all default sinks:
pWriterAdv->RemoveSink(NULL);
2.) register a custom sink:
pWriterAdv->AddSink((IWMWriterSink*)&streamSink);
The custom sink can be a class that implements IWMWriterSink, which requires implementing callback methods that are called, e.g., when the ASF header is written (OnHeader(/* [in] */ INSSBuffer *pHeader)) and when a data packet is written (OnDataUnit(/* [in] */ INSSBuffer *pDataUnit)). In your implementation you can then write them wherever you want; for example, add methods on this class that let you specify the file name to write to.
Note that this solution does not quite get you where you want to be if you need the header information written out in each of the 60-minute files - after the initial header you will only get ASF packet data. A workaround could be to re-write the initial header before any packet data of each file; however, this will produce an unindexed (non-seekable) ASF file.
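A skeleton of such a sink (COM boilerplate and error handling elided; the file-handling comments are hypothetical):

class StreamSink : public IWMWriterSink
{
public:
    // Called once with the ASF header; cache it so it can be re-written
    // at the start of each rotated file (with the seekability caveat above).
    STDMETHODIMP OnHeader(INSSBuffer *pHeader)
    { /* cache and write the header bytes */ return S_OK; }

    STDMETHODIMP IsRealTime(BOOL *pfRealTime)
    { *pfRealTime = TRUE; return S_OK; }

    // The writer asks the sink for buffers; a real sink must hand back
    // its own INSSBuffer implementation here.
    STDMETHODIMP AllocateDataUnit(DWORD cbDataUnit, INSSBuffer **ppDataUnit);

    // Called for every ASF data packet; write it to the current file.
    STDMETHODIMP OnDataUnit(INSSBuffer *pDataUnit)
    {
        BYTE *data; DWORD len;
        pDataUnit->GetBufferAndLength(&data, &len);
        /* fwrite(data, 1, len, m_currentFile); */
        return S_OK;
    }

    STDMETHODIMP OnEndWriting()
    { /* flush and close the current file */ return S_OK; }

    // IUnknown (QueryInterface/AddRef/Release) omitted for brevity.
};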
