I'm working with Qt 5.13.2, specifically QSoundEffect, on a Yocto-based embedded Linux box. Qt has been configured to use ALSA rather than PulseAudio. ALSA's aplay plays a WAV file smoothly, but QSoundEffect playback is notably choppy. I've been trying to adjust ALSA's configuration in .asoundrc to smooth things out. For example:
pcm.!default {
    type hw
    card 0
    rate 44100
}
ctl.!default {
    type hw
    card 0
    periods 100
    period_size 4410
    buffer_size 35280
}
This does solve the choppy/stuttering playback, but it has the undesirable side effect of blocking simultaneous QSoundEffect plays. If I don't use a .asoundrc file at all, I get simultaneous playback, but of course the stuttering returns.
So the question is: what are the default values for these settings (which are not well documented, by the way)? Or better yet, which setting should I be looking at? Incidentally, if I use the defaults by not having a .asoundrc file, I see "(snd_pcm_recover) underrun occurred" messages when I play a QSoundEffect.
I figured out the first part. Having any .asoundrc file present, regardless of whether it contains the buffer or period settings, overrides whatever settings exist in /etc/asound.conf. That at least is consistent with the documentation.
It took some digging, but apparently what I need is called mixing, and no mixing happens by default. Fortunately, the ALSA documentation has a working example.
From alsa-project.org
pcm.!default {
    type plug
    slave.pcm "dmixer"
}
pcm.dmixer {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:1,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 44100
    }
    bindings {
        0 0
        1 1
    }
}
ctl.dmixer {
    type hw
    card 0
}
I put this into a .asoundrc file and changed hw:1,0 to hw:0,0 because my default sound card is card 0. Voila! I can now play multiple QSoundEffects simultaneously.
I'm still getting buffer underrun notices, so there's likely some work left to figure out the right period size and buffer size.
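For anyone experimenting with the same thing, this is the kind of adjustment I mean. It is only a sketch, and the period/buffer values are guesses to try against your own hardware, not known-good numbers:

pcm.dmixer {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:0,0"
        rate 44100
        period_time 0
        period_size 2048    # try doubling the example's 1024 if underruns persist
        buffer_size 16384   # keep the buffer an integer multiple of the period size
    }
}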
I wrote code in AX 2009 to poll a directory on a network drive every second, waiting for a response file from another system. I noticed that in a file explorer window I could see the file appear, yet my code was not seeing and processing the file until several seconds later, up to 9 seconds (and 9 polls) after the file appeared!
The AX code calls System.IO.Directory::GetFiles() using ClrInterop:
interopPerm = new InteropPermission(InteropKind::ClrInterop);
interopPerm.assert();
files = System.IO.Directory::GetFiles(#POLLDIR,'*.csv');
// etc...
CodeAccessPermission::revertAssert();
After much experimentation, it emerges that the first call to ::GetFiles() in my program's lifetime starts a notional "ticking clock" with a period of 10 seconds. Only calls made on those 10-second ticks find any new files that may have appeared, though every call still reports files that were found on an earlier tick.
If the file is not there when I start the program, then all the other calls to ::GetFiles(), 1 second after the first call, 2 seconds after, and so on up to 9 seconds after, simply do not see the file, even though it may have been sitting there since 0.5 seconds after the first call!
Then, reliably and repeatably, the call 10 seconds after the first call will find the file. Then no calls from 11 to 19 seconds will see any new file that might have appeared, yet the call 20 seconds after the first call will reliably see any new files. And so on, every 10 seconds.
Further investigation revealed that if the polled directory is on the AX AOS machine, this does not happen, and the file is found immediately, as one would expect, on the call after the file appears in the directory.
But this figure of 10s is reliable and repeatable, no matter what network drive I poll, no matter what server it's on.
Our network certainly doesn't have 10s of latency to see files; as I said, a file explorer window on the polled directory sees the file immediately.
What is going on?
Sounds like your issue is due to SMB caching; from this TechNet page:
Name, type, and ID: Directory Cache [DWORD] DirectoryCacheLifetime
Registry key the cache setting is controlled by: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters

This is a cache of recent directory enumerations performed by the client. Subsequent enumeration requests made by client applications, as well as metadata queries for files in the directory, can be satisfied from the cache. The client also uses the directory cache to determine the presence or absence of a file in the directory, and uses that information to prevent clients from repeatedly attempting to open files which are known not to exist on the server. This cache is likely to affect distributed applications running on multiple computers accessing a set of files on a server, where the applications use an out-of-band mechanism to signal each other about modification/addition/deletion of files on the server.
In short, try setting the registry key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters\DirectoryCacheLifetime to 0.
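One way to do that is from an elevated command prompt (a sketch; a sign-out or a restart of the Workstation service may be needed before the change takes effect):

reg add "HKLM\System\CurrentControlSet\Services\Lanmanworkstation\Parameters" /v DirectoryCacheLifetime /t REG_DWORD /d 0 /f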
Thanks to Jan B. Kjeldsen, I have been able to solve my problem using FileSystemWatcher. Here is my implementation in X++:
class SelTestThreadDirPolling
{
}

public server static Container SetStaticFileWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    InteropPermission interopPerm;
    System.IO.FileSystemWatcher fw;
    System.IO.WatcherChangeTypes watcherChangeType;
    System.IO.WaitForChangedResult res;
    Container cont;
    str fileName;
    str oldFileName;
    str changeType;
    ;
    interopPerm = new InteropPermission(InteropKind::ClrInterop);
    interopPerm.assert();

    fw = new System.IO.FileSystemWatcher();
    fw.set_Path(_dirPath);
    fw.set_IncludeSubdirectories(false);
    fw.set_Filter(_filenamePattern);

    watcherChangeType = ClrInterop::parseClrEnum('System.IO.WatcherChangeTypes', 'Created');
    res = fw.WaitForChanged(watcherChangeType, _timeoutMs);

    if (res.get_TimedOut())
        return conNull();

    fileName = res.get_Name();
    // changeType can be: Created, Deleted, Renamed or Changed
    changeType = System.Enum::GetName(watcherChangeType.GetType(), res.get_ChangeType());
    fw.Dispose();
    CodeAccessPermission::revertAssert();

    if (changeType == 'Renamed')
        oldFileName = res.get_OldName();

    cont += fileName;
    cont += changeType;
    cont += oldFileName;
    return cont;
}
void waitFileSystemWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    container cResult;
    str filename, changeType, oldFilename;
    ;
    cResult = SelTestThreadDirPolling::SetStaticFileWatcher(_dirPath, _filenamePattern, _timeoutMs);
    if (conLen(cResult) > 0)
    {
        [filename, changeType, oldFilename] = cResult;
        info(strfmt("filename=%1, changeType=%2, oldFilename=%3", filename, changeType, oldFilename));
    }
    else
    {
        info("TIMED OUT");
    }
}

void run()
{
    ;
    this.waitFileSystemWatcher(@'\\myserver\mydir', 'filepattern*.csv', 10000);
}
I should acknowledge the following for forming the basis of my X++ implementation:
https://blogs.msdn.microsoft.com/floditt/2008/09/01/how-to-implement-filesystemwatcher-with-x/
I would guess DAXaholic's answer is correct, but you could try other solutions like EnumerateFiles.
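If you want to try that route, here is a minimal C# sketch of the API the answer mentions (the path and pattern are examples; from AX you would call it through CLR interop, as in the question):

foreach (string file in System.IO.Directory.EnumerateFiles(@"\\myserver\mydir", "*.csv"))
{
    // EnumerateFiles streams results as they are found,
    // rather than building the whole array up front like GetFiles does.
    System.Console.WriteLine(file);
}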
In your case I would rather wait for the files than poll for them.
Using FileSystemWatcher, there will be a minimal delay from file creation until your process wakes up. It is more tricky to use, but avoiding polling is a good thing. I have never used it over a network, though.
Okay, I am currently trying to make voice chat software using NAudio and C#.
But I currently have a problem: latency seems to get worse and worse the longer the application runs.
Now, I am a total beginner, so I have no idea what the cause could be.
But to troubleshoot, I would like to know if I can get the total latency and see how much it grows over time.
Total latency = input buffer + network latency + output buffer (and more if there is any; I am using UDP).
So if I have something like:
Label.text = TotalLatency();
it will get updated all the time.
while (!bStop)
{
    byte[] datanbefore = waveStream.GetBuffer();
    autoResetEvent.WaitOne();
    waveStream.Position = 0;
    captureBuffer.Read(offset, waveStream, halfBuffer, LockFlag.None);
    readFirstBufferPart = !readFirstBufferPart;
    offset = readFirstBufferPart ? 0 : halfBuffer;

    //TODO: Fix this ugly way of initializing differently.
    //Mute the mic when the button is checked.
    if (MuteMic.Checked)
    {
        waveStream = new MemoryStream(halfBuffer);
    }

    byte[] datanaudio = waveStream.GetBuffer();
    udpClient.Send(datanaudio, datanaudio.Length, otherPartyIP.Address.ToString(), 5550);
}
So here is the sending part. I am not really sure how the buffering works, as I started the application from a free sample and have been changing it here and there; some parts still remain, but I think the buffering can be improved.
while (!bStop)
{
    // Receive data.
    byte[] byteData = udpClient.Receive(ref remoteEP);
    waveProvider.AddSamples(byteData, 0, byteData.Length);
}
Here is the receive part, and it's much simpler: it just gets the data from UDP, adds it to a buffer, and plays it.
You can work out roughly the input and output latency by knowing the buffer sizes of WaveIn and WaveOut. By default in NAudio they are each 100ms.
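For instance (a sketch assuming NAudio's WaveInEvent and WaveOutEvent; the values are just examples to experiment with):

// Smaller device buffers reduce the fixed part of the latency,
// at the cost of a higher risk of underruns.
var waveIn = new NAudio.Wave.WaveInEvent { BufferMilliseconds = 50 };
var waveOut = new NAudio.Wave.WaveOutEvent { DesiredLatency = 100 };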
For network latency, you could try timestamping your audio packets although the clocks of both machines would need to be in sync.
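A rough sketch of that idea, reusing the names from your loops (it assumes both clocks are synchronized, e.g. via NTP, so the one-way figure is only approximate):

// Sending side: prepend an 8-byte UTC timestamp to each packet.
byte[] stamp = System.BitConverter.GetBytes(System.DateTime.UtcNow.Ticks);
byte[] packet = new byte[stamp.Length + datanaudio.Length];
System.Buffer.BlockCopy(stamp, 0, packet, 0, stamp.Length);
System.Buffer.BlockCopy(datanaudio, 0, packet, stamp.Length, datanaudio.Length);
udpClient.Send(packet, packet.Length, otherPartyIP.Address.ToString(), 5550);

// Receiving side: split the timestamp off and measure the delay.
long sentTicks = System.BitConverter.ToInt64(byteData, 0);
System.TimeSpan networkLatency = System.DateTime.UtcNow - new System.DateTime(sentTicks, System.DateTimeKind.Utc);
waveProvider.AddSamples(byteData, 8, byteData.Length - 8);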
I'm getting the OMX_ErrorUnsupportedSetting error event after providing the buffers to an audio decoder component on a Raspberry Pi. I have tried everything that came to mind to change the parameters, but the callback still arrives. Is there any way in the OpenMAX standard to investigate which parameter is causing that event?
This is what I'm doing:
Created the component;
disabled all the ports;
set state to idle;
set port format to use OMX_AUDIO_CodingAAC;
set port definition to use OMX_AUDIO_CodingAAC, 4 buffers of 6144 bytes each;
set the profile to these values (not sure if needed): profileType.nSampleRate = 48000; profileType.nFrameLength = 0; profileType.nChannels = 6; profileType.nBitRate = 288000; profileType.nAudioBandWidth = 0 (see the sketch after this list);
set OMX_PARAM_CODECCONFIGTYPE with bCodecConfigIsComplete to 1;
set OMX_IndexParamBrcmDecoderPassThrough to true.
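For context, the profile step above looks roughly like this in C. This is only a sketch: the handle and port index are placeholders, and checking the return code of each OMX_SetParameter call individually is one way to narrow down which setting a component rejects:

OMX_AUDIO_PARAM_AACPROFILETYPE profileType;
OMX_ERRORTYPE err;

memset(&profileType, 0, sizeof(profileType));
profileType.nSize = sizeof(profileType);
/* nVersion must also be filled in, as for any IL struct */
profileType.nPortIndex = inputPort;   /* placeholder: the decoder's input port */
profileType.nSampleRate = 48000;
profileType.nChannels = 6;
profileType.nBitRate = 288000;
profileType.nFrameLength = 0;
profileType.nAudioBandWidth = 0;

err = OMX_SetParameter(handle, OMX_IndexParamAudioAac, &profileType);
if (err != OMX_ErrorNone)
{
    /* this return code is more specific than the later event callback */
}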
After all the buffers are sent to the component, I suddenly get the OMX_ErrorUnsupportedSetting event and the port is not enabled. Any idea what I may be doing wrong, or how I can identify the parameter which is causing the error?
I've been told by the manufacturer that the reason this is happening is that no audio decoder except PCM is available at the moment.
I need to send data through the serial port in VxWorks. I am using the following code, but it is not working. Can anyone point out what went wrong?
int f;
if(f=open("/tyCo/1",O_RDWR,0)<0)
{
printf("Error opening serial port.");
return 1;
}
write(f,"hello",5);
After running this code, no data comes through the serial port; instead, it comes through the terminal (Tornado shell). The system has two serial devices, /tyCo/1 and /tyCo/0. I tried them both, but the problem persists.
Thanks in advance,
Likhin.
Have you set the baud rate?
if (ioctl(m_fd, FIOBAUDRATE, rate) == ERROR)
{
    // throw error
}
It is possible that you are using the wrong name for the device, and that Tornado Shell is set to your default device. From vxdev.com:
If a matching device name cannot be found, then the I/O function is directed at a default device. You can set this default device to be any device in the system, including no device at all, in which case failure to match a device name returns an error. You can obtain the current default path by using ioDefPathGet( ). You can set the default path by using ioDefPathSet( ).
The 3rd parameter of the open call is, if I am not wrong, the mode. I do not really understand what it is needed for in VxWorks, except for code compatibility with UNIX. In short, try giving it a value like 0644 or 0666. I think this will help.
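One more thing worth checking: in the original snippet, if(f=open("/tyCo/1",O_RDWR,0)<0) parses as f = (open(...) < 0) because < binds tighter than =, so f ends up holding 0 or 1 rather than the real descriptor, and write(f, ...) then goes to descriptor 0, the shell console, which would match the symptom described. A minimal corrected sketch (the device name and baud rate are examples):

int fd;

fd = open("/tyCo/1", O_RDWR, 0644);         /* explicit mode, per the answer above */
if (fd == ERROR)
{
    printf("Error opening serial port.\n");
    return 1;
}
if (ioctl(fd, FIOBAUDRATE, 9600) == ERROR)  /* set the baud rate explicitly */
{
    printf("Error setting baud rate.\n");
    close(fd);
    return 1;
}
if (write(fd, "hello", 5) != 5)
{
    printf("Write failed.\n");
}
close(fd);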
The Adobe documentation says that when listening for a keypress event from a phone you should listen for Key.Down, however when I trace Key.getCode() for keypresses I see a number, not the string "Key.Down". I am testing this locally in Device Central and do not have a phone to test with at present. Here is my code -
keyListener = new Object();
keyListener.onKeyDown = function() {
    trace(Key.getCode()); // outputs 40
    switch (Key.getCode()) {
        case (Key.DOWN): // according to the docs
            pressDown();
            break;
    }
};
Key.addListener(keyListener); // the listener must be registered to receive key events
My question is: is this simply because I'm testing in Device Central, and when I run it on the phone will I need to listen for Key.Down? Or is the documentation wrong? Also, is the numeric code (40) consistent across all devices? What gives, Adobe?
Thanks, all
Key.DOWN is equal to 40, so it will recognize them as the same. You can use whichever one you prefer; however, I would recommend Key.DOWN because it will be more easily recognizable for those who don't have key codes memorized (most of us).
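For example (an AS2 sketch):

trace(Key.DOWN);                  // 40: the constant is just a named key code
trace(Key.getCode() == Key.DOWN); // true when the down arrow was the last key pressed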
These are the key code values for JavaScript; however, I think they are pretty much universal.