I want to fade out the volume of an audio file while fading in the sound of a video in QML.
This should be no problem with animations, but I'm hitting a wall here.
It seems like the volume property is somehow shared between all instances of all media elements in QML. See for example the following code:
import QtQuick 2.0
import QtMultimedia 5.0

Rectangle
{
    id: mainScreen
    focus: true

    Video
    {
        id: video
        anchors.fill: parent
        source: "path/to/file.mp4"
        volume: 1
        onVolumeChanged: console.warn("video: " + volume)
        autoPlay: true
    }

    Audio
    {
        id: audio
        source: "path/to/file.mp3"
        volume: 1
        onVolumeChanged: console.warn("audio: " + volume)
        autoPlay: true
    }

    Keys.onPressed:
    {
        audio.volume = Math.random()
    }
}
When I press a key, the onVolumeChanged handlers of both video and audio are called.
Is there a way to control the volume of the elements independently?
Or should I file a Qt bug report? This is the OpenGL MSVC2010 build of Qt 5.2.0 in case it matters.
Kakadu, you were right. I hit this bug and the provided patch fixes it!
With that patch, everything works as-is.
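For anyone trying the same thing, the crossfade itself then boils down to something like this (a minimal sketch placed inside the Rectangle above; the durations and from/to values are arbitrary):

ParallelAnimation
{
    running: true
    // fade the audio out while the video's sound fades in
    NumberAnimation { target: audio; property: "volume"; from: 1; to: 0; duration: 2000 }
    NumberAnimation { target: video; property: "volume"; from: 0; to: 1; duration: 2000 }
}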
An image often being the easiest way to explain something, here is a little screengrab of the problem I'm having:
If you look at the right side of the window, you can see that the content is resized with a visible lag / delay. It's a problem that happens in quite a lot of applications, but I was wondering if there is a way to fix this in a Qt application using QQuickView and QML content.
Basically my application is created like this:
QQuickView view;
view.resize(400, 200);
view.setResizeMode(QQuickView::ResizeMode::SizeRootObjectToView);
view.setSource(...);
The QML's content is just an item with 2 rectangles to highlight the problem.
Edit: here is a simplified version of the QML file (yes, the simplified version also suffers from the same problem ;p)
import QtQuick 2.12

Item {
    Rectangle {
        color: "black"
        anchors { fill: parent; margins: 10 }
    }
}
Edit2: Running this small QML snippet through the qmlscene executable also shows the same delay / lag.
Edit3: The same problem occurs on some Linux distros but not on others: on my Ubuntu it works fine, but my CentOS 7 shows the same delay / glitches as on Windows. Both Qt versions were 5.12.3. On an old OSX it works fine (tested with Qt 5.9). I'm really lost now ^^
Is there any way to prevent this kind of delay? The solution will probably be platform-specific, since the problem seems to come from the fact that the native frame is resized before Qt gets a chance to handle the event, so the content is resized with a one-frame delay... but I'd like to know if anyone has an idea on how to handle this.
Any help or pointer appreciated :)
Regards,
Damien
As you mentioned in your update, the content gets resized with a one-frame delay.
There is a fairly simple hack to handle this.
Use a nativeEventFilter, handle WM_NCCALCSIZE with pMsg->wParam == TRUE, and remove 1 px from the top or the bottom:
if (pMsg->message == WM_NCCALCSIZE && pMsg->wParam == TRUE)
{
    LPNCCALCSIZE_PARAMS szr = reinterpret_cast<LPNCCALCSIZE_PARAMS>(pMsg->lParam);
    if (szr->rgrc[0].top != 0)
    {
        szr->rgrc[0].top -= 1;
    }
}
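For completeness, here is roughly how such a filter can be hooked up (an untested sketch for Qt 5 on Windows; the class and variable names are made up):

// Rough sketch, Qt 5 signature (Qt 6 uses qintptr* for the result parameter).
#include <QAbstractNativeEventFilter>
#include <QGuiApplication>
#include <windows.h>

class ResizeFilter : public QAbstractNativeEventFilter
{
public:
    bool nativeEventFilter(const QByteArray &eventType, void *message, long * /*result*/) override
    {
        if (eventType == "windows_generic_MSG") {
            MSG *pMsg = static_cast<MSG *>(message);
            if (pMsg->message == WM_NCCALCSIZE && pMsg->wParam == TRUE) {
                LPNCCALCSIZE_PARAMS szr = reinterpret_cast<LPNCCALCSIZE_PARAMS>(pMsg->lParam);
                if (szr->rgrc[0].top != 0)
                    szr->rgrc[0].top -= 1;
            }
        }
        return false; // let Qt continue processing the event
    }
};

// Installed somewhere in main(), before the event loop starts:
// ResizeFilter filter;
// app.installNativeEventFilter(&filter);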
Regards, Anton
QCoreApplication::setAttribute(Qt::AA_UseDesktopOpenGL);  // pick one of these three;
QCoreApplication::setAttribute(Qt::AA_UseOpenGLES);       // they must be set before the
QCoreApplication::setAttribute(Qt::AA_UseSoftwareOpenGL); // Q(Gui)Application is constructed
There is no obvious lag after I use OpenGL.
If it still lags for you, try the following code:
// In the window's native event handler (msg being the Windows message id):
if (msg == WM_NCCALCSIZE) {
    *result = WVR_REDRAW;
    return true;
}
(I posted this initially on the Xamarin Forums, but then decided I might get a faster answer here?)
TL;DR: Some layouts will count a tap on a transparent background, others won't. Setting InputTransparent on a container sets it for all of its children, and I feel like children should be able to override the parent. I need to create elements that overlay another element and pass taps through a transparent region but still have tappable buttons. When I try this with a Grid, it doesn't work. I don't want to go back to AbsoluteLayouts. I'm mostly working in iOS right now, I'm not quite sure if it's a problem in Android yet. Windows Phone/UWP isn't on the table.
Longer version:
I'm rewriting some layouts that worked OK in an older Xamarin Forms (1.3, I think). We recently upgraded to 2.1, and it wreaked havoc on the layout code's bad decisions. I'm tasked with updating the layouts to behave themselves. While I recognize 2.2 has been released, I just tried an upgrade and everything I'm describing seems true in that version as well, so it's not a 2.1 vs. 2.2 issue; or at least, if some improvements were made, they aren't helping me.
It's a mapping application, so the centerpiece of all layouts is an expensive, temperamental OpenGL element. This element very much does not like to be reparented, so I've adopted a layout sort of like this imaginary XAML:
<ContentPage>
  <CustomLayout>
    <OurHeaderControl />
    <TheMapControl />
    <OurFooterControl />
    <MapOverlay />
  </CustomLayout>
</ContentPage>
The purpose of "MapOverlay" is to implement our workflows by adding Xamarin elements on top of the header/footer areas and/or the map. For example, one layout adds a list of directions to the bottom above the footer, so it has less room for the map to appear. The custom layout understands this and lays out the map after the overlay so it can ask for the correct map bounds.
In this layout, I cannot tap on anything the MapOverlay is over. I can make it InputTransparent and tap those things, but then all of its children are also not tappable. This was not true in the old layouts.
Here are the only differences I see between the old layouts and the new:
The old layouts were a complicated mess of AbsoluteLayouts. It looked something like this (I didn't write it):
<ContentPage>
  <AbsoluteLayout> // "main layout"
    <AbsoluteLayout> // "map layout"
      <Map /> // An AbsoluteLayout containing the OpenGL view.
    </AbsoluteLayout>
    <AbsoluteLayout> // "child layout"
      <SubPage /> // An AbsoluteLayout
    </AbsoluteLayout>
  </AbsoluteLayout>
</ContentPage>
The main layout contains AbsoluteLayouts to constrain the child views. One child view is itself an AbsoluteLayout that contains the Map and a handful of other elements associated with it. The other child is the overlay, which is always an AbsoluteLayout that contains the elements relevant to that overlay. These layouts all reference each other in cycles and update each other as layout events change. It's a fascinating ping-ponging that eventually settles down. Usually. Sometimes things just disappear. Obviously there's a reason I'm rewriting it.
But I can click on what I need to click on at every layer, and I don't get that.
So, let's talk about what I need to work, and maybe figure out whether it's a bug that it's not working, or whether it was just a coincidence that it worked with the other layouts. Here's a non-XAML page layout that demonstrates it; my project has its roots in the days when you couldn't use XAML in shared libraries:
I need to be able to tap both buttons in this UI and have them respond.
using Xamarin.Forms;

public class MyPage : ContentPage {
    public MyPage() {
        var mainLayout = new AbsoluteLayout();

        // Two buttons will be overlaid.
        var overlaidButton = new Button() {
            Text = "Overlaid",
            Command = new Command((o) => DisplayAlert("Upper!", "Overlaid button.", "Ah."))
        };
        mainLayout.Children.Add(overlaidButton, new Rectangle(0.25, 0.25, AbsoluteLayout.AutoSize, AbsoluteLayout.AutoSize), AbsoluteLayoutFlags.PositionProportional);

        // The top overlay layout will be a grid.
        var overlay = new Grid() {
            RowDefinitions = { new RowDefinition() { Height = new GridLength(1.0, GridUnitType.Star) } },
            ColumnDefinitions = {
                new ColumnDefinition() { Width = new GridLength(1.0, GridUnitType.Star) },
                new ColumnDefinition() { Width = new GridLength(1.0, GridUnitType.Star) },
            },
            BackgroundColor = Color.Transparent
        };
        var overlayingButton = new Button() {
            Text = "Overlaying",
            Command = new Command((o) => DisplayAlert("Upper!", "Overlaying button.", "Ah."))
        };
        overlay.Children.Add(overlayingButton, 0, 1);
        mainLayout.Children.Add(overlay, new Rectangle(0, 0, 1.0, 1.0), AbsoluteLayoutFlags.All);

        // This pair of property sets makes the overlaid button clickable, but not the overlaying!
        // overlay.InputTransparent = true;
        // overlayingButton.InputTransparent = false;

        Content = mainLayout;
    }
}
This only lets me tap the "overlaying" button even when I change the Grid to an AbsoluteLayout.
I'm stumped. It took me 2 weeks to debug the initial layouts and come up with a new solution. I really don't want to disassemble all of our layouts and put everything in one big AbsoluteLayout or a custom layout. In WPF, there were two kinds of transparent: "transparent background" meant the background could still hit test, and "null background" meant the background would not hit test. Is there some way to overlay layouts in Xamarin like this?
Or, more appropriately: why is the convoluted nest of numerous AbsoluteLayouts in our old layouts working like I need it to, but this much simpler layout isn't?
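Just to illustrate the WPF distinction I mean (plain WPF XAML, not Forms; the Grids are only placeholders):

<!-- Hit-testable: taps on the empty area are swallowed by this element -->
<Grid Background="Transparent" />

<!-- Not hit-testable: taps on the empty area fall through to whatever is underneath -->
<Grid Background="{x:Null}" />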
Updates
Here's some additional information I remembered:
This behavior is iOS specific. On Android, both the example code and our code work.
I'm not the first person to have this problem: On StackOverflow. On Xamarin's Forums.
In general it seems as if the behavior on iOS differs from the other two platforms in regard to how InputTransparent is handled in a Grid. I'm not particularly certain whether I'd classify the current behavior as a bug at this time, but I understand that it's frustrating to run into a disparity in platform behavior.
There is a fix of sorts for your situation, though, if I'm understanding it correctly. It appears a similar report was filed before, and the iOS behavior was discussed via this SO link. The question is posed in the scope of a non-Forms iOS app, but the logic can be applied here.
By using a custom renderer (let's use a CustomGrid as an example), you can give the Grid an iOS-specific implementation that follows the aforementioned link's manner of finding underlying views:
CustomGrid.cs (PCL):
public class CustomGrid : Grid
{
    public CustomGrid() { }
}
CustomGrid.cs (iOS):
using CoreGraphics;
using UIKit;
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

[assembly: ExportRenderer(typeof(CustomGrid), typeof(CustomGridRenderer))]
public class CustomGridRenderer : ViewRenderer
{
    public override UIView HitTest(CGPoint point, UIEvent uievent)
    {
        UIView hitView = base.HitTest(point, uievent);
        if (hitView == this)
        {
            // The grid itself was hit: return null so the touch falls through
            // to whatever sits underneath the overlay.
            return null;
        }
        return hitView;
    }
}
With this approach you do not need to explicitly set InputTransparent on iOS, and any taps on the Grid itself are sent through to whatever sits below it. Since Android works with InputTransparent, though, in this particular case you can wrap that setting inside a Device.OnPlatform statement and skip implementing an Android custom renderer if you don't want to:
Device.OnPlatform(Android: () =>
{
    overlay.InputTransparent = true;
});
Using your above code modified to use the CustomGrid and iOS renderer, I'm able to tap both buttons.
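Concretely, the only change in the page code is swapping the Grid for the subclass (a sketch; everything else stays exactly as in your example):

// Use the CustomGrid so the iOS HitTest override kicks in.
var overlay = new CustomGrid() {
    RowDefinitions = { new RowDefinition() { Height = new GridLength(1.0, GridUnitType.Star) } },
    ColumnDefinitions = {
        new ColumnDefinition() { Width = new GridLength(1.0, GridUnitType.Star) },
        new ColumnDefinition() { Width = new GridLength(1.0, GridUnitType.Star) },
    },
    BackgroundColor = Color.Transparent
};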
SpinBoxStyle from QtQuick.Controls.Styles allows you to change the appearance of a SpinBox, and a part of that is the ability to redesign the up/down arrow buttons. However neither SpinBox nor the style gives you the ability to query the up/down arrow button state, so you can't check if it is pressed or hovered over.
This seems like too much of an oversight, so what part of the API docs have I missed?
I've tried adding a MouseArea to the control delegate itself, but for some reason it never receives any events. The controls still work, though, which suggests that they are 'stealing' the events first.
SpinBox {
    style: SpinBoxStyle {
        incrementControl: Rectangle {
            implicitHeight: 10
            implicitWidth: 10
            color: "blue"
            MouseArea {
                anchors.fill: parent
                hoverEnabled: true
                onEntered: console.log("Hello") // Never printed
            }
        }
    }
}
Apparently you're supposed to use the styleData properties to detect hovered and pressed states, but they aren't documented. Please create a bug report for that.
import QtQuick 2.3
import QtQuick.Controls 1.2
import QtQuick.Controls.Styles 1.2

SpinBox {
    style: SpinBoxStyle {
        incrementControl: Rectangle {
            implicitHeight: 10
            implicitWidth: 10
            color: styleData.upHovered && !styleData.upPressed
                   ? Qt.lighter("blue")
                   : (styleData.upPressed ? Qt.darker("blue") : "blue")
        }
    }
}
I'm not sure why the style was implemented this way, but if you look further into the source code, you can see that there are always MouseAreas for the up and down controls. This is very confusing to me: if you're not supposed to provide an interactive control, because there will always be MouseAreas shadowing it, why call them incrementControl and decrementControl? Names like increment and decrement might suffice, given that they can barely receive any interaction (clicking works, at least, for some reason). If you find this a bit confusing too, you may also want to file a separate bug report for the API.
git log --follow -p shows that this code hasn't changed much since the introduction of styles, so I'd say the current implementation (and API) is just outdated, and hopefully there are opportunities for improving this in the future.
I'm trying to stop a QML video and show its last frame when playback has finished. Does anybody know how to do this? (Sorry, this seems to be not as trivial as it sounds...)
At the moment, my problem is that the Video element simply becomes invisible/hidden after playback is done. (onVisibleChanged is never called.)
When I use the hack in onStatusChanged in my code, the video disappears for a moment after the end and then shows the end of the video.
What I'm doing is simply:
Video {
    anchors.fill: parent
    fillMode: VideoOutput.PreserveAspectFit
    source: "path/to/file"
    autoPlay: true

    onStatusChanged: {
        console.warn("StatusChanged: " + status + " | " + MediaPlayer.Loaded)
        if (status == MediaPlayer.EndOfMedia) {
            // seek a bit before the end of the video since the last frames
            // are the same here, anyway
            seek(metaData.duration - 200)
            play()
            pause()
        }
    }

    onVisibleChanged: {
        console.log(visible)
    }
}
It's possible that I'm missing something, but I could not find anything on this topic in the docs. Also, using separate MediaPlayer and VideoOutput does not change the behavior.
For the record, I'm using the latest Qt 5.2 on Windows (msvc2010+OpenGL-build).
I'm still looking for a better solution to this. What I've come up with is to pause the video one second before it's over:
MediaPlayer {
    autoLoad: true
    id: video

    onPositionChanged: {
        if (video.position > 1000 && video.duration - video.position < 1000) {
            video.pause();
        }
    }
}
Why one second? On my machine if you try to pause it about 500ms before the end, the video manages to run to completion and disappear from view without even registering the pause() call. Thus 1 second is sort of a good safe value for me.
Frankly, I'd prefer it if there were a more explicit way to tell the MediaPlayer what to do at the end of the video. I know for a fact that GStreamer, which is what Qt uses on Linux, notifies you when the video is almost over so that you can decide what to do next, e.g. pause the video or loop it seamlessly.
I share your pain (I'm mainly interested in OSX and iOS, where the same problem occurs). The only solution I have is to pair each video (which at least are "canned" app resources, not dynamic content off the net) with a PNG image of its final frame. When the video starts, enable display of the image under it (although it's not actually visible at that point). When the video ends abruptly, the image is left visible.
This works perfectly on iOS, but on (some?) Macs there may be a slight brightness jump between the video and the image (guessing: something to do with OSX display preferences' screen calibration not affecting video?).
An option to the MediaPlayer or VideoOutput element types to freeze on the last frame would indeed be much simpler.
One other possibility I've considered but not tried would be stacking two videos. The one on top is the main player, but the one underneath would just be seeked to, say, the last millisecond of the video and paused. Then when the main video finishes and disappears, there's as good as the final frame of the video showing there underneath. This would basically be the same as the image-based solution, but with the image underneath being dynamically created by a video player. I've found mobile hardware video, and Qt's wrapping of it, temperamental enough (admittedly more back in the earlier days of Qt 5) that I really don't want to try anything too clever with it at all, though.
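A very rough, untested sketch of that stacking idea (the file path is a placeholder, and I haven't verified how reliably a paused Video holds a frame after seeking):

Item {
    anchors.fill: parent

    // Bottom video: parked near the last frame, never played through.
    Video {
        id: lastFrame
        anchors.fill: parent
        source: "path/to/file.mp4"
        autoPlay: true
        muted: true
        onStatusChanged: {
            if (status == MediaPlayer.Buffered) {
                seek(duration - 1)
                pause()
            }
        }
    }

    // Top video: the one the user actually watches; when it ends and goes
    // blank, the parked frame underneath shows through.
    Video {
        id: mainVideo
        anchors.fill: parent
        source: "path/to/file.mp4"
        autoPlay: true
    }
}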
A few years later and I am facing the same issue. However, I found a workaround (for Qt >= 5.9) that allows me to pause the video within 100 ms of the end:
It seems that the issue is related to the notifyInterval property (introduced in Qt 5.9). It is by default set to 1000ms (same as the 1000ms observed in other answers, not a coincidence I believe). Thus, changing it to a very small value when the video is almost done allows for catching a position very close to the end and pausing the video there:
Video {
    id: videoelem
    source: "file:///my/video.mp4"
    notifyInterval: videoelem.duration > 2000 ? 1000 : 50

    onPositionChanged: {
        if (position > videoelem.duration - 2 * notifyInterval) {
            if (notifyInterval == 1000)
                notifyInterval = 50
            else if (notifyInterval == 50)
                videoelem.pause()
        }
    }

    onStatusChanged: {
        if (status == MediaPlayer.Loaded)
            videoelem.play()
    }
}
Hope this helps somebody!
In the MediaPlayer element: pause the video 100 ms before the end, based on its duration.
MediaPlayer {
    id: video1
    onPlaybackStateChanged: {
        if (playbackState == MediaPlayer.PlayingState) {
            durTrig.stop();
            durTrig.interval = duration - 100;
            durTrig.restart();
        }
    }
}

Timer {
    id: durTrig
    running: false
    repeat: false
    onTriggered: {
        video1.pause();
    }
}
I'm implementing WebRTC video chat. I want to implement the following case:
By default the video element has a background-image set via CSS, so if there is no video input the user sees his (or the interlocutor's) avatar:
No video expected result:
No video actual result:
As you can see from the screenshots, I have black rectangles covering my fancy backgrounds. I want to make these ugly black rectangles transparent and keep my videos' backgrounds visible.
Ideally, I'd like to resolve the problem without introducing any additional markup.
Appreciate your help =)
Update:
"No video" means that user/users don't have web cams and stream has only audio track.
Bingo!
Reading the documentation in depth gave some results =) It was as easy as:
<video poster="image.jpg">
One simple attribute made me happy
Try Alpha transparency in Chrome video, or use waitUntilRemoteStreamStartsFlowing:
function onaddstream(event) {
    remote_video.src = webkitURL.createObjectURL(event.stream);
    // remote_video.mozSrcObject = event.stream;
    waitUntilRemoteStreamStartsFlowing();
}

function waitUntilRemoteStreamStartsFlowing() {
    if (!(remote_video.readyState <= HTMLMediaElement.HAVE_CURRENT_DATA
          || remote_video.paused || remote_video.currentTime <= 0)) {
        // remote stream started flowing!
    } else {
        setTimeout(waitUntilRemoteStreamStartsFlowing, 50);
    }
}