I am running a QML app built with Qt 5.14.2 and have noticed that my sound effects will not play (produce sound) unless background music is also playing. I have confirmed via console logging that there are no errors, that the audio file is valid, etc. (11 kHz, 16-bit PCM, 1-channel WAV). I also tried a 41 kHz sampling rate, with no change.
I can reproduce this on both Win32 and Android 8. The audio files are asset files (not in a QRC resource).
Sample code:
SoundEffect {
    id: effect
    source: "click.wav"
    loops: 1
    volume: 0.8
    muted: false
}

function play() {
    effect.play()
}
The problem with sample code in this type of situation is that a simplistic sample may well work. It's possible that playing a 48 kHz stereo file first somehow disrupts the audio player, so producing a minimal reproducible example is not realistic. Hopefully, then, someone knows of a general Qt issue on this topic; I see posts going back to 2017 with similar questions, but no solution.
The WAV file should play without depending on other WAV files or sound effects playing.
Something important to note: the sound effect audio file is only 0.477 seconds long.
After much trial and error, I narrowed this down to a Qt audio playback issue: when the audio file is short (e.g. 0.5 s), it sometimes does not play as a SoundEffect.
As a workaround, I appended 1 second of silence to the end of the audio file, and now it plays consistently.
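For anyone who wants to reproduce the workaround, here is a minimal sketch of the padding step. It assumes the canonical 44-byte PCM WAV header (a single "fmt " chunk followed immediately by the "data" chunk); real files may carry extra chunks, and the file names are placeholders:

#include <cstdint>
#include <cstring>
#include <fstream>
#include <iterator>
#include <vector>

static uint32_t le32(const uint8_t *p) {
    return p[0] | (p[1] << 8) | (p[2] << 16) | (uint32_t(p[3]) << 24);
}

static void putLe32(uint8_t *p, uint32_t v) {
    p[0] = uint8_t(v); p[1] = uint8_t(v >> 8);
    p[2] = uint8_t(v >> 16); p[3] = uint8_t(v >> 24);
}

int main() {
    std::ifstream in("click.wav", std::ios::binary);
    std::vector<uint8_t> wav((std::istreambuf_iterator<char>(in)),
                             std::istreambuf_iterator<char>());

    // Bail out if the file does not have the simple layout we assume.
    if (wav.size() < 44 || std::memcmp(&wav[36], "data", 4) != 0)
        return 1;

    uint32_t byteRate = le32(&wav[28]);  // bytes per second of audio
    wav.insert(wav.end(), byteRate, 0);  // 1 s of silence (zeros for 16-bit PCM)

    putLe32(&wav[4],  uint32_t(wav.size() - 8));   // patch RIFF chunk size
    putLe32(&wav[40], uint32_t(wav.size() - 44));  // patch data chunk size

    std::ofstream out("click_padded.wav", std::ios::binary);
    out.write(reinterpret_cast<const char *>(wav.data()),
              static_cast<std::streamsize>(wav.size()));
    return 0;
}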
Related
I have coded two versions of a video player, one based on QMediaPlayer and one on VLC-Qt. In both cases I get an incorrect value for the total video duration: the player tells me the total time is 7 seconds, but in fact it is approximately 5 minutes. And of course, the position slider is wrong as well.
I was confused and thought maybe I had done something wrong. But I tested this video file with the Microsoft video player and saw the same problem.
Video for testing can be found at https://1drv.ms/u/s!AgCzZ90Ttbz65jqiluS2NS95Id0U
My guess is that the file contains incorrect duration information, or perhaps the codec reports that information incorrectly.
Can anybody clarify the reason for the problem and how it should be fixed?
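For what it's worth, here is a minimal sketch (Qt 5, assuming the multimedia module; the file path is a placeholder) that prints the duration QMediaPlayer reads from the container metadata. If this also reports about 7 seconds for the test file, the duration stored in the container header itself is wrong, and the file would need to be remuxed rather than the player code fixed:

#include <QCoreApplication>
#include <QDebug>
#include <QMediaPlayer>
#include <QUrl>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    QMediaPlayer player;
    // durationChanged fires once the backend has parsed the container metadata.
    QObject::connect(&player, &QMediaPlayer::durationChanged,
                     [](qint64 ms) { qDebug() << "duration:" << ms << "ms"; });
    player.setMedia(QUrl::fromLocalFile("test_video.avi"));

    return app.exec();
}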
My application must read one video track and several audio tracks, and it must be able to select one section of the file and play it in a loop. I have created a setup with Media Foundation, using the sequencer source and creating several topologies with the start and end points of the section I want to loop. It works, except that playback takes 0.5 to 1 second to stabilize each time it jumps back to the starting point.
First, I tried it with individual audio files and one video file. This was quite bad for some files: sometimes all the files were completely out of sync, sometimes the video froze for several seconds and then ran very fast to catch up with the audio.
I got a good improvement by using a single file that contains the video and the multiple audio tracks. However, for most files there is still a problem with the smoothness of the transition.
With a poor-quality AVI video file I could make it work smoothly, which suggests that the method I use is correct. I have noticed that the smoothness of the loop is strongly related to the CPU load a file generates when simply playing it.
I call SetTopology on the session with a series of topologies, so normally it should preroll the next one during playback of the current one, right? Or am I missing something there?
My app also runs on Mac, where I use a similar setup with AVFoundation, and it works fine with the same media files I use on Windows.
What can I do to make the looping work smoothly with better-quality video on Windows? Is there anything to be done about it?
When I play the media file without looping, I notice that if I preroll it to some point and then hit the START button, the media starts instantly and without a glitch. Could it work better if I used two independent simple playback setups: start the first, preroll the second, then stop the first and start the second programmatically at the looping point?
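For reference, the way I queue the segment topologies looks roughly like this (a trimmed sketch: BuildSegmentTopology stands in for my own code that builds a partial topology with the desired start and stop times, and most error handling is omitted):

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <propidl.h>

HRESULT QueueLoopSegments(IMFMediaSession *session,
                          IMFTopology **topologies, DWORD count)
{
    // With dwSetTopologyFlags == 0, each call queues the topology behind the
    // current one; my understanding is that the session resolves and prerolls
    // the next queued topology while the current one is still playing.
    for (DWORD i = 0; i < count; ++i) {
        HRESULT hr = session->SetTopology(0, topologies[i]);
        if (FAILED(hr))
            return hr;
    }

    PROPVARIANT start;
    PropVariantInit(&start);  // VT_EMPTY: start from the current position
    return session->Start(&GUID_NULL, &start);
}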
I want to play all the MP3 music files on my computer.
library(tuneR)
# find every .mp3 file on the c:, d: and e: drives
x <- list.files(c("c:/", "d:/", "e:/"), pattern = "\\.mp3$",
                full.names = TRUE, recursive = TRUE)
# play each file in turn
sapply(x, function(y) play(y))
There is a little problem: when one track has finished playing, I have to close the mplayer window before the next track will play, and when that one finishes I have to close the mplayer window again.
How can I make it play through automatically?
The official documentation of the tuneR package says that, in the function play(object, player, ...), if no player and no further arguments are given under Windows, the default is: "/play /close".
So it should work. But if it doesn't, maybe you should specify a player other than mplayer?
I am recording FLV videos with a Red5 server and playing them back in a Flex app. I am aware that Red5 does not properly inject the FLV metadata, so I am using an external command-line tool to get the metadata in there.
Because I am injecting the metadata, the duration of the video is correct.
The problem I am having, and this is true with every FLV player I try (even third-party stand-alone video players), is that the playhead time never starts at 0. When I load the FLV and, say, the video is 10 seconds long, the current-time label on the playhead starts at 1-2 seconds instead of 0, and the slider's current-time indicator is likewise positioned 1-2 seconds along the bar. From what I can see, though, the video plays back fine.
Is there a byte in the FLV that I need to change so that the playhead starts at 0? I realize this probably has something to do with Red5, so if anyone knows any workarounds or potential causes to watch out for, I would really appreciate it!
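In case it helps with diagnosis, here is a minimal sketch that dumps the timestamps of the first few FLV tags, to check whether the recorded timestamps themselves start at 1-2 seconds instead of 0 (the file name and tag count are placeholders; the layout follows the standard FLV tag format):

#include <cstdint>
#include <cstdio>

static uint32_t be24(const uint8_t *p) {
    return (uint32_t(p[0]) << 16) | (p[1] << 8) | p[2];
}

static uint32_t be32(const uint8_t *p) {
    return (uint32_t(p[0]) << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
}

int main() {
    FILE *f = fopen("recording.flv", "rb");
    if (!f) return 1;

    uint8_t hdr[9];
    if (fread(hdr, 1, 9, f) != 9) return 1;
    fseek(f, long(be32(hdr + 5)), SEEK_SET);  // DataOffset: skip the FLV header
    fseek(f, 4, SEEK_CUR);                    // skip PreviousTagSize0

    for (int i = 0; i < 10; ++i) {            // first 10 tags
        uint8_t tag[11];
        if (fread(tag, 1, 11, f) != 11) break;
        uint32_t dataSize = be24(tag + 1);
        // 24-bit timestamp, with the extended byte as the high 8 bits
        uint32_t ts = be24(tag + 4) | (uint32_t(tag[7]) << 24);
        printf("tag type=%u timestamp=%u ms\n", tag[0], ts);
        fseek(f, long(dataSize) + 4, SEEK_CUR);  // skip payload + PreviousTagSize
    }
    fclose(f);
    return 0;
}

If the first audio/video tags show timestamps around 1000-2000 ms, the offset is baked into the recording itself rather than added by the players.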
Just to update this in case someone else encounters it: it turned out that the version of Red5 I was using (0.9, I believe) was the issue. I upgraded to 1.0RC1 and the video timeline was immediately corrected to 0.00-10.00 (for a 10-second video).
I had been afraid to upgrade to 1.0RC1 because I feared the Java app I created would run into issues, since I had developed it on an earlier version and had read so many posts about things breaking after an upgrade. But I guess I got lucky: it works perfectly!
I'm using two custom push filters to inject audio and video (uncompressed RGB) into a DirectShow graph. I'm making a video capture application, so I'd like to encode the frames as they come in and store them in a file.
Up until now, I've used the ASF Writer to encode the input to a WMV file, but it appears the renderer is too slow to process high resolution input (such as 1920x1200x32). At least, FillBuffer() seems to only be able to process around 6-15 FPS, which obviously isn't fast enough.
I've tried increasing the cBuffers count in DecideBufferSize(), but that only pushes the problem to a later point, of course.
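For reference, the change I tried looks roughly like this (a sketch in the style of the DirectShow base classes; CPushPinVideo and the buffer count of 8 are placeholders from my code):

#include <streams.h>  // DirectShow base classes

HRESULT CPushPinVideo::DecideBufferSize(IMemAllocator *pAlloc,
                                        ALLOCATOR_PROPERTIES *pRequest)
{
    CheckPointer(pAlloc, E_POINTER);
    CheckPointer(pRequest, E_POINTER);

    // One buffer per uncompressed frame; raise the count so FillBuffer()
    // is not starved while the downstream encoder still holds samples.
    VIDEOINFOHEADER *pvi = (VIDEOINFOHEADER *) m_mt.Format();
    pRequest->cbBuffer = pvi->bmiHeader.biSizeImage;
    if (pRequest->cBuffers < 8)
        pRequest->cBuffers = 8;

    ALLOCATOR_PROPERTIES actual;
    HRESULT hr = pAlloc->SetProperties(pRequest, &actual);
    if (FAILED(hr))
        return hr;
    return (actual.cbBuffer < pRequest->cbBuffer) ? E_FAIL : S_OK;
}

As noted, though, this only buys a deeper queue; it does not make the encoder keep up.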
What are my options to speed up the process? What's the right way to do live high-resolution encoding via DirectShow? I eventually want to end up with a WMV video, but maybe that has to be a post-processing step.
There are good answers already posted to your question here: High resolution capture and encoding too slow. The task is too demanding for the CPU in your system, which is simply not fast enough to perform real-time video encoding in the configuration you set it up for.