Framebuffer access order - Qt

I've got two applications on my Raspberry Pi drawing to the display. The first is a video stream, the second an overlay with transparency. The video stream is rendered by the official raspivid program, and my overlay by eglfs (Qt QML).
How can I choose the order in which the images are composited so that my overlay sits on top of the video, rather than underneath it as it does now? Who manages access to the framebuffer, and how can I configure it?
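
As far as I understand the legacy (firmware/DispmanX) graphics stack on the Pi, both the raspivid preview and the eglfs-rendered Qt scene end up as DispmanX elements, and the VideoCore compositor stacks them purely by layer number: the element with the higher layer is drawn on top. Below is a minimal sketch using the DispmanX C API that puts a small semi-transparent element on a chosen layer, just to illustrate the mechanism; the layer value 10 is arbitrary and not the layer raspivid or eglfs actually uses.

    // Build with: g++ overlay.cpp -I/opt/vc/include -L/opt/vc/lib -lbcm_host
    #include <bcm_host.h>
    #include <unistd.h>
    #include <cstdint>

    int main() {
        bcm_host_init();

        // Open the primary display (display 0).
        DISPMANX_DISPLAY_HANDLE_T display = vc_dispmanx_display_open(0);

        // Create a small RGBA32 resource filled with a semi-transparent colour.
        const int width = 64, height = 64;
        uint32_t native_handle = 0;
        DISPMANX_RESOURCE_HANDLE_T res =
            vc_dispmanx_resource_create(VC_IMAGE_RGBA32, width, height, &native_handle);

        static uint32_t pixels[width * height];
        for (int i = 0; i < width * height; ++i)
            pixels[i] = 0x80FF0000;  // semi-transparent fill (channel order per VC_IMAGE_RGBA32)

        VC_RECT_T rect;
        vc_dispmanx_rect_set(&rect, 0, 0, width, height);
        vc_dispmanx_resource_write_data(res, VC_IMAGE_RGBA32,
                                        width * 4 /* pitch */, pixels, &rect);

        // Destination rect in screen pixels, source rect in 16.16 fixed point.
        VC_RECT_T dst_rect, src_rect;
        vc_dispmanx_rect_set(&dst_rect, 100, 100, width, height);
        vc_dispmanx_rect_set(&src_rect, 0, 0, width << 16, height << 16);

        VC_DISPMANX_ALPHA_T alpha = { DISPMANX_FLAGS_ALPHA_FROM_SOURCE, 255, 0 };

        // The third argument is the layer: higher layers are composited on top.
        // 10 is just an example value, not what raspivid or Qt eglfs uses.
        DISPMANX_UPDATE_HANDLE_T update = vc_dispmanx_update_start(0);
        vc_dispmanx_element_add(update, display, 10 /* layer */,
                                &dst_rect, res, &src_rect,
                                DISPMANX_PROTECTION_NONE, &alpha,
                                nullptr, DISPMANX_NO_ROTATE);
        vc_dispmanx_update_submit_sync(update);

        sleep(10);  // keep the element on screen for a while
        vc_dispmanx_display_close(display);
        return 0;
    }

So whichever of the two applications ends up on the higher DispmanX layer wins; the question becomes what layer each of them requests, and whether either one exposes a way to change it.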

Related

Scada-LTS Graphical view

I've installed Scada-LTS v2.5 on a Raspberry Pi and now want to create a graphical view. How can I add images as a background? I can see the images in /opt/tomcat/apachev9/webapps/ScadaBR/uploads, but I can't use them in the web browser.

How to prevent Qt from drawing to the screen? (prevent flickering when a video is played with GStreamer)

This is Qt 5 on an embedded Yocto system, with Qt drawing to the framebuffer, no X11. The problem is this: I want to play a video using GStreamer, so I launch gst-launch-1.0 from a touch event in Qt. The trouble is that it flickers, as Qt also tries to render frames.
Next, we tried QMediaPlayer. However, this proprietary GStreamer doesn't support playbin, so I went into QGstreamerPlayerSession and modified the constructor to use gst_parse_launch to set up my pipeline instead of playbin.
This works, in that my video plays. However, there is still the same flickering! I tried to throw up a white rectangle before launching the video, but it still flickers.
How can I prevent Qt from redrawing? Do I need an empty scene before playing the video, or is there a function call to pause redrawing?
I could of course send SIGSTOP to the Qt process, play the video in an external application, then resume it with SIGCONT. That works, but it is obviously a very inelegant and restrictive solution (I still need the app to keep processing in the background, as it's controlling other things as well).
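
One direction that might work, if the UI is QWidget-based (a rough sketch with hypothetical names, not tested on your hardware): disable updates on the top-level widget before launching the external pipeline, and re-enable them when the process exits. This only stops Qt from painting over the framebuffer; it doesn't hand the display over in any formal way, and a QML-based UI would need a different mechanism.

    #include <QProcess>
    #include <QWidget>

    // Hypothetical helper: suppress Qt repaints while an external
    // gst-launch-1.0 pipeline plays a video, then restore them.
    void playVideoWithoutFlicker(QWidget *topLevel, const QString &path)
    {
        // Stop Qt from repainting while the video owns the screen.
        topLevel->setUpdatesEnabled(false);

        auto *player = new QProcess(topLevel);
        QObject::connect(player,
                         QOverload<int, QProcess::ExitStatus>::of(&QProcess::finished),
                         topLevel, [topLevel, player](int, QProcess::ExitStatus) {
            // Video finished: allow repaints again and force a full redraw.
            topLevel->setUpdatesEnabled(true);
            topLevel->update();
            player->deleteLater();
        });

        // Placeholder pipeline; substitute whatever your hardware decoder needs.
        const QString pipeline =
            QStringLiteral("filesrc location=%1 ! decodebin ! autovideosink").arg(path);
        player->start(QStringLiteral("gst-launch-1.0"), pipeline.split(' '));
    }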

A-Frame: how to simulate tracked controllers when developing on desktop?

My HTC Vive is set up in a different room to my developer workstation. When developing for A-Frame, I understand I can: use my desktop monitor instead of a headset; use mouse dragging instead of motion controls; use WASD instead of room-scale tracking. However, what is the preferred way to simulate the tracked controllers?
For example, how can I move the cubes in this demo from my desktop: https://aframe.io/examples/showcase/tracked-controllers
This is not yet released, but we're working on tools that can record the camera and controllers, output to a file, and then let you load it on any device and replay the camera and controller poses and events without needing a headset. The project is led by Diego:
https://github.com/dmarcos/aframe-motion-capture
http://swimminglessonsformodernlife.com/aframe-motion-capture/examples/
This will become the common way to develop for VR without having to enter and re-enter VR to test each code change, and to do it on the go.

Can we have different resolutions for Preview and Capture of the same DirectShow graph?

I have two streams in my DirectShow application, one for preview and one for capture. Right now, we observe that the preview is slow for 1080p and 1280×720 video. Is there any way to use different resolutions for the capture and preview streams? If there is, I can use the high resolution on the capture side only, and I'm fine with displaying a lower resolution in the preview.
Thanks
You can connect the capture output to an Infinite Pin Tee filter. Use one output of the tee to capture, connect the second output to a resize filter, and use that for preview.
On some hardware you can use the preview output directly at a different resolution, but not all hardware supports that. For example, some hardware does not scale the preview, so if you use a lower resolution you only see the top-left part of the video, and other hardware does not have a preview pin at all...
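
For what it's worth, here is a rough sketch of the tee-based graph described above, assuming an existing capture device filter, some resizer filter for the preview branch (stock DirectShow has no general-purpose resize filter, so pScaler is a placeholder), and a mux/file-writer branch for capture. COM is assumed to be initialized already, and error handling and cleanup are omitted.

    #include <dshow.h>
    #pragma comment(lib, "strmiids.lib")

    // Sketch: split the capture pin through the Infinite Pin Tee so the capture
    // branch keeps the full resolution while the preview branch is scaled down.
    // pCaptureDevice, pScaler and pMux are assumed to be created elsewhere.
    HRESULT BuildSplitGraph(IBaseFilter *pCaptureDevice,
                            IBaseFilter *pScaler,   // placeholder resizer filter
                            IBaseFilter *pMux)      // capture sink (e.g. mux + file writer)
    {
        IGraphBuilder *pGraph = nullptr;
        ICaptureGraphBuilder2 *pBuilder = nullptr;
        IBaseFilter *pTee = nullptr;

        CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                         IID_IGraphBuilder, reinterpret_cast<void **>(&pGraph));
        CoCreateInstance(CLSID_CaptureGraphBuilder2, nullptr, CLSCTX_INPROC_SERVER,
                         IID_ICaptureGraphBuilder2, reinterpret_cast<void **>(&pBuilder));
        pBuilder->SetFiltergraph(pGraph);

        // Infinite Pin Tee: one input, a new output pin for every connection.
        CoCreateInstance(CLSID_InfTee, nullptr, CLSCTX_INPROC_SERVER,
                         IID_IBaseFilter, reinterpret_cast<void **>(&pTee));

        pGraph->AddFilter(pCaptureDevice, L"Capture device");
        pGraph->AddFilter(pTee, L"Tee");
        pGraph->AddFilter(pScaler, L"Downscaler");
        pGraph->AddFilter(pMux, L"Capture sink");

        // Capture pin -> tee; the full capture resolution is negotiated here.
        pBuilder->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                               pCaptureDevice, nullptr, pTee);

        // Tee branch 1: straight into the capture/mux branch.
        pBuilder->RenderStream(nullptr, &MEDIATYPE_Video, pTee, nullptr, pMux);

        // Tee branch 2: through the downscaler, then to the default video renderer.
        return pBuilder->RenderStream(nullptr, &MEDIATYPE_Video,
                                      pTee, pScaler, nullptr);
    }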

Apply a filter to a camera video stream publishing to a media server

I am trying to apply some information, like text and an image, as an overlay, to create an effect like a security camera with the time and date on the video along with a PNG-based logo.
I can record video using Flex and FMS or any other media server, but I want to save a modified version of the stream being uploaded.
Use CamTwist (Mac) or WebcamMax (PC) as your video input... that program grabs the webcam and lets you add whatever you like over it (text, date). Then use Flash Media Live Encoder, select CamTwist as your video source, and stream and save your recordings from FMLE.
Sorenson Squeeze is pretty awesome for this and is cross-platform. CamTwist is cool, but it's not really a 'pro' app (that makes me sound way snobbish). It's actually pretty fun and a good suggestion for a simple result, but I haven't found anything better than Sorenson Squeeze for the features and control it gives you.
