I am playing audio in my code like this:
// decode routine
QAudioFormat format;
format.setFrequency(aCodecCtx->sample_rate);
format.setChannels(2);
format.setSampleSize(16);
format.setCodec("audio/pcm");
format.setByteOrder(QAudioFormat::LittleEndian);
format.setSampleType(QAudioFormat::SignedInt);

QAudioDeviceInfo info(QAudioDeviceInfo::defaultOutputDevice());
if (!info.isFormatSupported(format)) {
    cout << "raw audio format not supported by backend, cannot play audio." << endl;
    format = info.nearestFormat(format);
}

QAudioOutput *audio = new QAudioOutput(format);
connect(audio, SIGNAL(stateChanged(QAudio::State)), SLOT(stateChanged(QAudio::State)));

if (!buffer->open(QBuffer::ReadWrite))
    cout << "Couldn't open buffer" << endl;
cout << "buffer.size() = " << buffer->size() << endl;

audio->start(buffer);
I was running this code in a worker thread because the decoding routine is heavy, but no sound played. When I moved the code to the main thread, everything worked fine.
Why is that? The QAudioOutput documentation doesn't say anything about which thread it needs to run on.
I forgot to start an event loop in the worker thread. Without one, the thread exits immediately, which is why no audio plays.
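For reference, a minimal sketch of the fix (the class and function names here are just illustrative, not from my actual code). QThread's default run() already calls exec(), so the event loop only dies if you override run() and return without starting one:

class AudioThread : public QThread
{
protected:
    void run() override
    {
        setupAudio(); // create the QAudioOutput and call start(), as in the snippet above
        exec();       // keep the event loop running so the audio keeps playing
    }
};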
I'm currently working on a program for predicting the locations of satellites in real time, something similar to this. The underlying library uses the system time as input.
time_t now(time(0));
This program accurately predicts the real-time positions of satellites when I run it as a C++ console application from Qt Creator.
The problem appears when I use it in a fully-fledged Qt GUI application with a QApplication object in the main function. There, the prediction function is called periodically from the timer event handler, so the positions are updated every 2 seconds. Unfortunately, the output doesn't match (either on the GUI or when I print it). It is as if the orbital propagator were using a different time when calculating the satellite positions.
void TrackingManager::timerEvent(QTimerEvent *event)
{
    int nNumSats = m_Satellites.size();
    //std::cout << __func__ << " - Number of satellites = " << nNumSats << std::endl;
    std::vector<SatPosition> vSatPositions;
    if (nNumSats >= 0)
    {
        time_t now(time(0));
        std::cout << __func__ << " time(0) = " << asctime(gmtime(&now)) << std::endl;
        for (int i = 0; i < nNumSats; i++)
        {
            // Get satellite names and calculate position, altitude etc.
            SatPosition spPos;
            GetInstantPredict(m_Satellites[i], now, spPos);
            vSatPositions.push_back(spPos);
        }
        emit UpdateSatPosition(vSatPositions);
    }
}
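For completeness, the timer driving this is the plain QObject timer; a sketch of how it is set up (the member name m_nTimerId is just illustrative):

TrackingManager::TrackingManager(QObject *parent)
    : QObject(parent)
{
    // QObject::startTimer() makes Qt deliver a QTimerEvent to timerEvent()
    // every 2000 ms, once the application's event loop is running.
    m_nTimerId = startTimer(2000);
}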
Even more confusing, the program works fine when I run it under the debugger (GDB on Ubuntu). It is as if GDB somehow manages to "fix" the problem. Does this make any sense?
I am new to asio.
Here is the guide I was following while writing my daytime TCP server: https://think-async.com/Asio/asio-1.18.1/doc/asio/tutorial/tutdaytime3.html . I was trying to build a reasonable example that would show that asynchronous code is actually asynchronous. I didn't modify anything else, just this small piece of code in the tcp_server class. I am adding the delay in order to see that, while we are waiting for the timer to expire, we can still gracefully handle other client connections/requests. So, did I miss something? Because in my case the delay basically doesn't work ;(
void handle_accept(const tcp_connection::pointer &new_connection,
                   const asio::error_code &error) {
    asio::steady_timer timer(io_context_, asio::chrono::seconds(5));
    std::cout << "Before timer" << std::endl;
    timer.async_wait(std::bind(&tcp_server::handle_wait, this, error, new_connection));
}

void handle_wait(asio::error_code &ec, const tcp_connection::pointer &new_connection) {
    if (!ec) {
        new_connection->start();
    }
    std::cout << "Inside timer" << std::endl;
    start_accept();
}
void handle_accept(const tcp_connection::pointer &new_connection,
                   const asio::error_code &error) {
    asio::steady_timer timer(io_context_, asio::chrono::seconds(5));
    std::cout << "Before timer" << std::endl;
    timer.async_wait(std::bind(&tcp_server::handle_wait, this, error, new_connection));
}
What happens is:
1. The timer is a local variable.
2. async_wait returns immediately, without blocking.
3. The function exits, destroying the timer. On destruction, the timer cancels all pending operations (with error::operation_aborted).

Make sure the lifetime of the timer extends long enough for it to expire (or at least until it stops being relevant). In your case there will probably already be a tcp::acceptor member in your class, and the timer could reside next to it.
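A minimal sketch of one way to do that without adding a member, by tying the timer's lifetime to the completion handler itself (same tcp_server members as in the tutorial):

void handle_accept(const tcp_connection::pointer &new_connection,
                   const asio::error_code &error) {
    // Heap-allocate the timer; capturing the shared_ptr in the handler
    // keeps it alive until the wait completes (or is cancelled).
    auto timer = std::make_shared<asio::steady_timer>(io_context_,
                                                      asio::chrono::seconds(5));
    std::cout << "Before timer" << std::endl;
    timer->async_wait([this, timer, new_connection](asio::error_code ec) {
        handle_wait(ec, new_connection);
    });
}

Either way works; the point is only that the steady_timer object must still exist when it fires.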
I'm creating a music player for PC and want to visualize the FFT of the song. I've created a class that buffers 1024 points of data, computes the FFT, and displays it (the display is handled by another class). The program was developed on my laptop, which runs Debian Testing x64; my work PC runs CentOS 7 x64. When I compiled the program on my work PC (both machines use Qt 5.7.0), the FFT visualization was garbage. Snooping into my code, I found that the sample type provided by QAudioBuffer from QAudioProbe was SignedInt on my work PC, while it was Float on my laptop. Here is the code that is called whenever QAudioProbe emits that data has been buffered:
void SpectrumController::setAudioBuffer(QAudioBuffer buffer){
    // Used to momentarily stop the process.
    if (!enableBuffering) return;
    // Only process stereo frames
    if (buffer.format().channelCount() != 2) return;
    if (buffer.format().sampleType() == QAudioFormat::SignedInt){
        //qWarning() << "Signed";
        QAudioBuffer::S16S *data = buffer.data<QAudioBuffer::S16S>();
        bufferData(data, buffer.frameCount());
    }
    else if (buffer.format().sampleType() == QAudioFormat::UnSignedInt){
        //qWarning() << "Unsigned";
        QAudioBuffer::S16U *data = buffer.data<QAudioBuffer::S16U>();
        bufferData(data, buffer.frameCount());
    }
    else if (buffer.format().sampleType() == QAudioFormat::Float){
        //qWarning() << "Float";
        QAudioBuffer::S32F *data = buffer.data<QAudioBuffer::S32F>();
        bufferData(data, buffer.frameCount());
    }
}
template<typename T>
void SpectrumController::bufferData(T *data, qint32 N){
    for (qint32 i = 0; i < N; i++){
        //if (qAbs(data[i].left) > largest){largest = qAbs(data[i].left); qDebug() << "Largest" << largest;}
        //currentBuffer << ((qreal)data[i].left/(largest));
        //qWarning() << "Added data" << currentBuffer.last();
        currentBuffer << data[i].left;
        if (datcounter < 100000){
            *writer << data[i].left;
            *writer << "\n";
            datcounter++;
        }
        else if (writeFile->isOpen()){
            qWarning() << "Closed file";
            writeFile->close();
        }
        if (currentBuffer.size() == FFT_SIZE){
            dataBuffer << currentBuffer;
            currentBuffer.clear();
            if (!isRunning) run();
        }
    }
}
What I ended up doing is writing to a file the first 100,000 data points gathered on both my laptop and my work PC in order to plot them.
This is what I got.
My guess is that the difference lies in the base system's handling of the MP3 decoding, which is what Qt uses under the hood; I believe that is GStreamer, and CentOS ships a much older version. The plot on the right corresponds to my laptop, while the plot on the left corresponds to my work PC.
Any ideas on how to fix this? Or am I just stuck with no way of accessing the raw audio data correctly?
UPDATE:
Even though this is not a fix or anything like that, the data in the other channel (data[i].right) did contain more appropriate data. I'm using the right channel for now.
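As a side note on making the FFT input consistent across platforms, one approach is to normalize every incoming sample type to floats in [-1, 1) before buffering. A rough sketch with per-type overloads instead of the template (appendSample is a hypothetical helper that pushes one value into currentBuffer; the scale factors are the standard ones for 16-bit PCM):

void SpectrumController::bufferData(QAudioBuffer::S16S *data, qint32 N){
    for (qint32 i = 0; i < N; i++)
        appendSample(static_cast<qreal>(data[i].left) / 32768.0);             // signed 16-bit
}
void SpectrumController::bufferData(QAudioBuffer::S16U *data, qint32 N){
    for (qint32 i = 0; i < N; i++)
        appendSample((static_cast<qreal>(data[i].left) - 32768.0) / 32768.0); // unsigned 16-bit
}
void SpectrumController::bufferData(QAudioBuffer::S32F *data, qint32 N){
    for (qint32 i = 0; i < N; i++)
        appendSample(static_cast<qreal>(data[i].left));                       // already in [-1, 1)
}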
I'm trying to set up a simple working example to play a .raw file, and the audio seems to be distorted. When the .raw file plays, I can still make out everything; it's just fairly distorted, like listening to a radio station going out of range.
QString mResourcePath = "test.raw";
QFile audio_file(mResourcePath);
if (audio_file.open(QIODevice::ReadOnly)) {
    audio_file.seek(4); // skip wav header
    QByteArray audio_data = audio_file.readAll();
    audio_file.close();

    QBuffer audio_buffer(&audio_data);
    audio_buffer.open(QIODevice::ReadOnly);
    qDebug() << audio_buffer.size();

    QAudioFormat format;
    format.setSampleSize(8);
    format.setSampleRate(8000);
    format.setChannelCount(1);
    format.setCodec("audio/pcm");
    format.setByteOrder(QAudioFormat::BigEndian);
    format.setSampleType(QAudioFormat::UnSignedInt);

    QAudioDeviceInfo info(QAudioDeviceInfo::defaultOutputDevice());
    if (!info.isFormatSupported(format)) {
        qWarning() << "raw audio format not supported by backend, cannot play audio.";
        return;
    }
    qDebug() << info.deviceName();

    QAudioOutput output(info, format);
    output.start(&audio_buffer);

    // ...then wait for the sound to finish
    QEventLoop loop;
    QObject::connect(&output, SIGNAL(stateChanged(QAudio::State)), &loop, SLOT(quit()));
    do {
        loop.exec();
    } while (output.state() == QAudio::ActiveState);
}
The title of your question indicates that you wish to play u-law audio, which is logarithmic PCM. Yet the line
format.setCodec("audio/pcm");
initializes playback for linear PCM. Two possible solutions:
1. Initialize playback with the appropriate log-PCM codec. The docs report that QAudioDeviceInfo::supportedCodecs() will return a list of supported codecs.
2. Convert the log-PCM samples to linear PCM in real time. It's pretty low impact and can be performed using a lookup table.
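For option 2, a sketch of the classic G.711 u-law expansion driven by a 256-entry lookup table (the function and table names here are illustrative):

#include <cstdint>

// Expand one 8-bit u-law code to a linear 16-bit PCM sample (G.711).
static int16_t ulawToLinear(uint8_t u)
{
    u = ~u;                                  // u-law bytes are stored inverted
    const int sign     = u & 0x80;
    const int exponent = (u >> 4) & 0x07;
    const int mantissa = u & 0x0F;
    const int sample   = (((mantissa << 3) + 0x84) << exponent) - 0x84;
    return static_cast<int16_t>(sign ? -sample : sample);
}

// Build the table once; decoding then costs one array lookup per sample.
static int16_t g_ulawTable[256];
static void initUlawTable()
{
    for (int i = 0; i < 256; ++i)
        g_ulawTable[i] = ulawToLinear(static_cast<uint8_t>(i));
}

After converting, you would play the result as 16-bit signed linear PCM (setSampleSize(16), QAudioFormat::SignedInt).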
I have the following scenario:
QProcess *p;
// later
p->start();
// later
p->terminate(); // there might be unread data in stdout
// later
p->start();
I read the process's stdout. After I call p->start() the second time, could there still be unread data left in the stdout buffer from the first run? That would be a problem for me. Do I need to flush the buffers or something?
Okay, I've checked the sources. The QProcess::start() method explicitly clears both output buffers, so it should be okay, at least in that sense:
void QProcess::start(const QString &program, const QStringList &arguments, OpenMode mode)
{
    Q_D(QProcess);
    if (d->processState != NotRunning) {
        qWarning("QProcess::start: Process is already running");
        return;
    }
#if defined QPROCESS_DEBUG
    qDebug() << "QProcess::start(" << program << "," << arguments << "," << mode << ")";
#endif
    d->outputReadBuffer.clear();
    d->errorReadBuffer.clear();
    // ...
I still think it's bad style to reuse the same object, though.
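That said, if you do want the trailing output from the first run, drain it before restarting. A sketch (program and arguments are placeholders):

p->terminate();
p->waitForFinished();                              // let the process actually exit
QByteArray leftover = p->readAllStandardOutput();  // grab anything still buffered
// ... use leftover ...
p->start(program, arguments);                      // buffers are cleared here anyway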