I have a 3D environment built in Unity which I want to have as an Item in Qt (QML). I've tried a few different approaches, but none has proved efficient enough, or I've been unable to get it to work.
My current working solution is to do the following each frame:
In Unity, use ReadPixels to copy my RenderTexture (GPU) into a regular Texture2D (RAM).
Encode it to JPG and send the byte array through a TCP socket.
In Qt, instantiate a QImage from the data and save it for later use.
In the render function of QQuickFramebufferObject::Renderer, use glTexImage2D to upload the image to my active texture.
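For reference, the Qt side (the last step) looks roughly like this; a trimmed sketch of my Renderer subclass, with illustrative names (m_image is the QImage built from the socket data):

class UnityFrameRenderer : public QQuickFramebufferObject::Renderer,
                           protected QOpenGLFunctions
{
public:
    void render() override
    {
        initializeOpenGLFunctions();
        // m_image holds the latest frame decoded from the TCP data.
        QImage gl = m_image.convertToFormat(QImage::Format_RGBA8888);
        glBindTexture(GL_TEXTURE_2D, m_textureId);
        // Full RAM-to-GPU upload, every single frame.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, gl.width(), gl.height(),
                     0, GL_RGBA, GL_UNSIGNED_BYTE, gl.constBits());
        // ... draw a textured quad with m_textureId here ...
        update(); // schedule the next frame
    }
private:
    QImage m_image;
    GLuint m_textureId = 0;
};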
Obviously this is not an optimal solution: it manages maybe 10 fps with a 128x64 texture (for testing). My understanding is that the bottleneck is transferring the data from the GPU to RAM and back.
In my latest attempts I have tried to get the ID of the RenderTexture using renderTexture.GetNativeTexturePtr(). Then in Qt I try to read the pixel data through glGetTexImage, but I keep getting zeros in the data. When I later use glDrawPixels, the Qt application crashes.
So my question now is: does anyone know if it's possible to share the texture between processes, and if so, how?
I'd like to take multiple video streams and display them one at a time, with the ability to swap between them. I was thinking about taking the video output from OBS and streaming it to a private server using RTMP and nginx. Then I'd write some code (C/C++ maybe) to swap which stream is being displayed.
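To make the idea concrete, here is the kind of swapping code I have in mind; a rough sketch using OpenCV's FFmpeg backend, where the server URL and stream names are made up:

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    // Hypothetical stream names published to nginx by OBS.
    std::vector<std::string> urls = {
        "rtmp://myserver/live/camera1",
        "rtmp://myserver/live/camera2"
    };
    std::size_t current = 0;
    cv::VideoCapture cap(urls[current]); // decoded through FFmpeg

    cv::Mat frame;
    for (;;) {
        if (!cap.read(frame))
            break;                       // stream ended or dropped
        cv::imshow("output", frame);
        int key = cv::waitKey(1);
        if (key == 'n') {                // swap to the next stream
            current = (current + 1) % urls.size();
            cap.open(urls[current]);
        } else if (key == 'q') {
            break;
        }
    }
    return 0;
}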
My first question is, would this even work? Would I be able to process the video being streamed to the server using this method, or would I need to send it to the server a different way? (preferably still using OBS)
My second question is, what would be a good place to get started with processing the streams? Are there any tutorials or forums that could be helpful?
I've never done any sort of video processing, so if I'm missing a key component I apologize ahead of time.
If I understand you correctly, OBS gives you a couple of options to achieve this. You can create a different scene for each video input and select a scene to display the input of your choice. If you use OBS Studio, it has built-in transition effects. Alternatively, you can add all the video sources to one scene and then move your desired source to the top. Using this method you can resize the sources and display more than one at a time.
Yes, the answer above is right. Or you can use mimoLive.app if you are a macOS user.
I am trying to develop an application with Qt 5.5 and OpenGL. The basic job of the application will be to load simple objects, modify their positions in a scene, and save them together with other attributes (material, name, parent/child relations, etc.).
The only thing I have been struggling with for a week now is that I really don't know how to approach the problem of synchronizing data. Let's say I have some kind of SceneGraph class which takes care of all SceneObjects. Those SceneGraphs should be rendered in a SceneView widget which can be used to modify its objects via transformations. Now how would I tell every SceneView that an object changed its position?
I thought about the Model/View architecture for a moment, but I am not really sure what this implementation should look like.
What would be the best way to handle objects like that in different windows/widgets while still having one single piece of data?
SceneObject:
Holds the mesh information (vertices, UVs, etc.)
Has a name (QString)
Has a material
Has a transform storing position, rotation and scaling information
(Important: these datatypes should be synchronized in all views)
SceneGraph:
Contains different SceneObjects and is passed to SceneViews
SceneView:
The QWidget responsible for drawing the Scene correctly in any QWindow.
Has its own camera to move around.
Handles UserInput and allows transformation of SceneObjects.
You could use signals and slots to observe position updates of SceneObjects and process them in each SceneView.
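A minimal sketch of what that could look like, assuming SceneObject inherits QObject (names are illustrative):

#include <QObject>
#include <QVector3D>

class SceneObject : public QObject
{
    Q_OBJECT
public:
    QVector3D position() const { return m_position; }
    void setPosition(const QVector3D &p)
    {
        if (p == m_position)
            return;
        m_position = p;
        emit positionChanged(m_position); // notify every observer
    }
signals:
    void positionChanged(const QVector3D &newPosition);
private:
    QVector3D m_position;
};

// When a SceneView receives the SceneGraph, it subscribes once per object:
// connect(object, &SceneObject::positionChanged,
//         this, &SceneView::scheduleRepaint);

There is still only one SceneObject instance; every SceneView just holds a pointer to it and repaints when notified, so the data stays synchronized across all windows.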
I have a lot of 2D, time-variant data (aka a movie) that I'd like to visualise inside a Qt interface. The idea is that the results can be viewed as a movie and browsed using a time-slider, and individual data points should be selectable to get more information about that point. (The data being shown is generated from simulations and then converted to RGB through some colormap, so I'm not really looking for a component that plays mp4.)
I have some experience using a QGraphicsScene, which makes it easy to get the cursor location & react to mouse events. But is it suitable for video? Or am I better off with some kind of QImage directly on a widget?
OK, so it works well in PyQt, not so well in PySide.
I'm using a QPixmap wrapped in a QGraphicsPixmapItem that gets added to the scene. To update the frame, I change the contents of the pixmap object and call update() on the scene.
Performance is good enough for video (although I don't need high frame rates for this project).
In PySide I ran into weird issues when I used more than one pixmap item; in PyQt it works just fine.
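For reference, the pattern in C++ Qt terms (the Python bindings mirror this API; here setPixmap on the item triggers the repaint, which saves the explicit scene update):

#include <QApplication>
#include <QColor>
#include <QGraphicsPixmapItem>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QImage>
#include <QTimer>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QGraphicsScene scene;
    // One pixmap item stays in the scene; only its contents change.
    QGraphicsPixmapItem *item = scene.addPixmap(QPixmap());
    QGraphicsView view(&scene);
    view.show();

    int hue = 0;
    QTimer timer;
    QObject::connect(&timer, &QTimer::timeout, [&]() {
        // Stand-in for one colormapped simulation frame.
        QImage frame(128, 64, QImage::Format_RGB32);
        frame.fill(QColor::fromHsv(hue++ % 360, 255, 255));
        item->setPixmap(QPixmap::fromImage(frame)); // repaints just the item
    });
    timer.start(40); // roughly 25 fps

    return app.exec();
}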
I have the following algorithm (working):
Acquire image from webcam
Process image
Send image to GUI and show it
The GUI is programmed with Qt, and all image acquisition and processing is done with OpenCV. There are three classes involved; call them Acquire, Process and Gui.
Acquire (inherits from QObject) grabs the image and calls Process (does not inherit from QObject) to do the image processing. Process returns the result to Acquire, which emits a signal caught by Gui (inherits from QObject), which converts the image (in Mat format) to a QImage and draws it.
I am introducing changes into the Process class and I would like to have visual feedback. As everything is executed inside Qt's event loop, I cannot use the cv::namedWindow and cv::imshow functions (nothing appears).
The question is: is there any quick way to get visual debugging output showing what is happening inside Process, without making Process and Gui friends, connecting them through the signal/slot mechanism, or any other solution that involves big changes to the program structure?
You can create another class and put all the code for debug output into it. Connect Process to this class to send it debug information.
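A minimal sketch of such a class (names are illustrative; QMetaObject::invokeMethod with a queued connection keeps it safe even if Process runs outside the GUI thread, and avoids adding signals to Process itself):

#include <QImage>
#include <QLabel>
#include <QMetaObject>
#include <QPixmap>
#include <opencv2/opencv.hpp>

class DebugViewer : public QLabel
{
    Q_OBJECT
public slots:
    void showFrame(const QImage &img) { setPixmap(QPixmap::fromImage(img)); }
};

// Create one instance in the GUI thread and make it reachable from Process,
// e.g. through a global pointer or a singleton:
// DebugViewer *g_debugViewer = new DebugViewer;

// Called from anywhere inside Process:
void debugShow(DebugViewer *viewer, const cv::Mat &bgr)
{
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);      // OpenCV stores BGR
    QImage img(rgb.data, rgb.cols, rgb.rows,
               static_cast<int>(rgb.step), QImage::Format_RGB888);
    // img.copy() so the QImage owns its pixels after rgb is destroyed.
    QMetaObject::invokeMethod(viewer, "showFrame", Qt::QueuedConnection,
                              Q_ARG(QImage, img.copy()));
}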
I'm running a Qt embedded application and mplayer, both of them on the framebuffer.
When I start video playback through mplayer, I get a lot of flicker around the movie.
See the following movie:
http://youtu.be/kbKpfjLHzTY
How can I fix this?
For Qt 4 with QWS, the embedded Linux graphics subsystem that writes directly to the framebuffer, you can run the following in the -qws server's GUI thread before invoking mplayer:
QWSServer *server = QWSServer::instance();
if (server)
    server->enablePainting(false); // Suspend Qt's drawing.
You can use SIGCHLD or something similar to figure out when mplayer is finished and re-enable painting. Another way would be to position mplayer's output window and use a QWSEmbedWidget to tell Qt not to draw there.
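For the first approach, QProcess's finished() signal can do the re-enabling for you instead of a raw SIGCHLD handler (a sketch; the mplayer arguments and file path are illustrative):

#include <QProcess>
#include <QStringList>
#include <QWSServer>

class MplayerLauncher : public QObject
{
    Q_OBJECT
public:
    void play(const QString &file)
    {
        if (QWSServer *server = QWSServer::instance())
            server->enablePainting(false); // suspend Qt's drawing first
        QProcess *mplayer = new QProcess(this);
        connect(mplayer, SIGNAL(finished(int,QProcess::ExitStatus)),
                this, SLOT(onFinished()));
        mplayer->start("mplayer", QStringList() << "-vo" << "fbdev" << file);
    }
private slots:
    void onFinished()
    {
        if (QWSServer *server = QWSServer::instance())
            server->enablePainting(true); // resume Qt's drawing
    }
};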
Both QWS and mplayer open the framebuffer and draw to it directly; there is nothing to marshal access to the display device. The QWS subsystem allows multiple Qt applications to draw to the screen at the same time, but it has no control over other processes accessing the framebuffer. For this reason, X11 or other display servers like Wayland can be used instead; this is generally the method used in Qt 5.