Our company needs to capture the rendering of a Qt3D scene. For this I created a small example application that illustrates how we do the capturing.
On the left-hand side you will find the 3D scene and on the right-hand side there is a QLabel with a QPixmap showing the captured screen.
Now, for some reason I really don't understand, the captured screenshot looks different from the 3D scene on the left-hand side. Even more confusing, the saved PNG image looks different from the QLabel on the right-hand side, and it is actually the PNG that I need for my use case.
It saves the following PNG file:
I already tried to insert the QRenderCapture at different places in the frame graph, but none of them gave reasonable results.
This is my frameGraph->dumpObjectTree() output:
Qt3DExtras::QForwardRenderer::
Qt3DRender::QRenderSurfaceSelector::
Qt3DRender::QViewport::
Qt3DRender::QCameraSelector::
Qt3DRender::QClearBuffers::
Qt3DRender::QFrustumCulling::
Qt3DRender::QCamera::
Qt3DRender::QCameraLens::
Qt3DCore::QTransform::
Qt3DRender::QRenderCapture:: // Insertion of QRenderCapture
Qt3DRender::QFilterKey::
It seems that I capture the 3D scene somewhere in the middle of rendering, so there might be a timing issue. (Maybe using a QEventLoop is not admissible in this case, but I don't see why it wouldn't be.)
The documentation of QRenderCapture implies that QRenderCapture should be the last leaf node of the frame graph, which is what I did.
From QRenderCapture documentation:
The QRenderCapture is used to capture rendering into an image at any render stage. Capturing must be initiated by the user and one image is returned per capture request. User can issue multiple render capture requests simultaneously, but only one request is served per QRenderCapture instance per frame.
And also:
Used to request render capture. Only one render capture result is produced per requestCapture call even if the frame graph has multiple leaf nodes. The function returns a QRenderCaptureReply object, which receives the captured image when it is done. The user is responsible for deallocating the returned object.
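For reference, this is roughly what I understand "last leaf node" to mean: parenting the capture node to the deepest node of the existing branch (QFrustumCulling in the tree above) instead of to the QCamera. This is only a sketch of one of the placements I tried, not a confirmed fix:

// Sketch: make QRenderCapture the last leaf of the existing frame graph branch
// by parenting it to the deepest frame graph node instead of the QCamera.
// Needs #include <Qt3DRender/QFrustumCulling>.
auto frameGraph = view->defaultFrameGraph();
auto frustumCulling = frameGraph->findChild<Qt3DRender::QFrustumCulling*>();
auto renderCapture = new Qt3DRender::QRenderCapture(frustumCulling);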
Can someone here help me out?
#include <QApplication>
#include <QEventLoop>
#include <QFrame>
#include <QHBoxLayout>
#include <QLabel>
#include <QPixmap>
#include <QPushButton>
#include <Qt3DCore/QEntity>
#include <Qt3DRender/QRenderCapture>
#include <Qt3DRender/QCamera>
#include <Qt3DExtras/QSphereMesh>
#include <Qt3DExtras/QDiffuseSpecularMaterial>
#include <Qt3DExtras/QForwardRenderer>
#include <Qt3DExtras/Qt3DWindow>
Qt3DCore::QEntity* transparentSphereEntity() {
    auto entity = new Qt3DCore::QEntity;
    auto meshMaterial = new Qt3DExtras::QDiffuseSpecularMaterial();
    meshMaterial->setAlphaBlendingEnabled(true);
    meshMaterial->setDiffuse(QColor(255, 0, 0, 50));
    auto mesh = new Qt3DExtras::QSphereMesh();
    mesh->setRadius(1.0);
    entity->addComponent(mesh);
    entity->addComponent(meshMaterial);
    return entity;
}
int main(int argc, char* argv[])
{
    QApplication a(argc, argv);
    auto frame = new QFrame;
    auto view = new Qt3DExtras::Qt3DWindow();

    auto camera = new Qt3DRender::QCamera;
    camera->lens()->setPerspectiveProjection(45.0f, 1., 0.1f, 10000.0f);
    camera->setPosition(QVector3D(0, 0, 10));
    camera->setUpVector(QVector3D(0, 1, 0));
    camera->setViewCenter(QVector3D(0, 0, 0));
    view->defaultFrameGraph()->setCamera(camera);

    auto frameGraph = view->defaultFrameGraph();
    frameGraph->dumpObjectTree();
    auto camSelector = frameGraph->findChild<Qt3DRender::QCamera*>();
    auto renderCapture = new Qt3DRender::QRenderCapture(camSelector);
    frameGraph->dumpObjectTree();

    auto rootEntity = new Qt3DCore::QEntity();
    view->setRootEntity(rootEntity);
    auto sphere = transparentSphereEntity();
    sphere->setParent(rootEntity);

    auto btnScreenshot = new QPushButton("Take Screenshot");
    auto labelPixmap = new QLabel;
    frame->setLayout(new QHBoxLayout);
    frame->layout()->addWidget(QWidget::createWindowContainer(view));
    frame->layout()->addWidget(btnScreenshot);
    frame->layout()->addWidget(labelPixmap);
    frame->setMinimumSize(1000, 1000 / 3);
    frame->show();

    QObject::connect(btnScreenshot, &QPushButton::clicked, [&]() {
        QEventLoop loop;
        auto reply = renderCapture->requestCapture();
        QObject::connect(reply, &Qt3DRender::QRenderCaptureReply::completed, [&] {
            reply->image().save("./data/test.png");
            labelPixmap->setPixmap(QPixmap::fromImage(reply->image()));
            reply->deleteLater(); // the caller is responsible for deallocating the reply
            loop.quit();
        });
        loop.exec();
    });

    return a.exec();
}
Is it possible for QTextLayout to render several characters but process/handle them as one character? For example, rendering a code point like [U+202E] so that, when moving the caret or calculating positions, it is treated as one character.
Edited:
Please check the following issue, where I explain what I'm trying to do. It is for the edbee Qt component, which uses QTextLayout for line rendering.
https://github.com/edbee/edbee-lib/issues/127
Possibly it isn't possible with QTextLayout; the documentation is quite limited.
According to Qt docs:
"The class has a rather low level API and unless you intend to implement your own text rendering for some specialized widget, you probably won't need to use it directly." - https://doc.qt.io/qt-5/qtextlayout.html#details
You should probably use a QLineEdit or a QTextEdit (each has a method called setReadOnly(bool)).
Before answering the question, I will point out that the CursorMode enum (https://doc.qt.io/qt-5/qtextlayout.html#CursorMode-enum) seems very promising for this problem, but to me, the documentation isn't clear on how to use it or set it.
Now, to answer your question with regard to QLineEdit and QTextEdit: it's a bit complicated, but it works the same way for both, so let's look at QTextEdit.
Firstly, mouse clicks: QTextEdit has a signal called cursorPositionChanged(), which will be helpful here. You'll want to connect that to a custom slot, which can make use of the function moveCursor(QTextCursor::MoveOperation operation, QTextCursor::MoveMode mode = QTextCursor::MoveAnchor) (https://doc.qt.io/qt-5/qtextedit.html#moveCursor). Notice that there are very helpful enumeration values for you here in QTextCursor::MoveOperation regarding word hopping (https://doc.qt.io/qt-5/qtextcursor.html#MoveOperation-enum). How do we put all of this together? Well, probably the right way to do it is to determine the width of the chars to the left of the cursor's position and the width of the chars to the right of the cursor's position when the cursorPositionChanged() signal is emitted and go to the side of the word that has less width. However, I'm not sure how to do that. At this point I'd settle with checking the number of chars to the left and right and going to the side with less.
Secondly, keyboard presses: this goes a bit outside my knowledge, but almost everything drawable and interactable inherits from QWidget. Take a look at https://doc.qt.io/qt-5/qwidget.html#keyPressEvent; it's likely that overriding it in your own QTextEdit subclass is necessary to get the left and right arrow key presses to jump words (once you have that part, it's pretty easy: just use the same function as in the last section for moving the cursor, or, in the case of QLineEdit, cursorWordForward()/cursorWordBackward()).
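Here's a rough sketch of what that override could look like (untested; it assumes you only want the plain Left/Right arrow keys to jump by words, and that QTextCursor::PreviousWord/NextWord give acceptable word-hopping behaviour):

#include <QTextEdit>
#include <QKeyEvent>
#include <QTextCursor>

// Sketch: subclass QTextEdit and remap plain Left/Right arrow presses to
// word-wise cursor movement; everything else falls through to the default.
class WordJumpTextEdit : public QTextEdit
{
protected:
    void keyPressEvent(QKeyEvent *event) override
    {
        if (event->modifiers() == Qt::NoModifier) {
            if (event->key() == Qt::Key_Left) {
                moveCursor(QTextCursor::PreviousWord, QTextCursor::MoveAnchor);
                return;
            }
            if (event->key() == Qt::Key_Right) {
                moveCursor(QTextCursor::NextWord, QTextCursor::MoveAnchor);
                return;
            }
        }
        QTextEdit::keyPressEvent(event); // default handling for all other keys
    }
};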
All this being said, I've so far been assuming that you're not deleting anything or selecting anything. Selection can be a real pain depending on if you allow multiple selections, but the functions are all there in the documentation to implement those things.
Example of mouse click impl:
myclass.hpp
#pragma once

#include <QTextEdit>
#include <QTextCursor>
#include <QObject>
#include <QString>

int distance_to_word_beginning_or_end(const QString &str, int index, bool beginning);

// Must derive from QObject and declare Q_OBJECT so the slot can be connected.
class MyClass : public QObject {
    Q_OBJECT
public:
    MyClass();
    ~MyClass();
private:
    QTextEdit *text_edit;
public slots:
    void text_edit_changed_cursor_location();
};
myclass.cpp
#include "myclass.hpp"
int distance_to_word_beginning_or_end(const QString &str, int index, bool beginning)
{
// return the distance from the beginning or end of the word from the index given
int inc_or_dec = (beginning) ? -1 : 1;
int distance = 0;
while (index >= 0 && index < str.length())
{
if (str.at(index) == ' ' || str.at(index) == '\n' || str.at(index) == '\t')
{
return distance;
}
distance++;
index += inc_or_dec;
}
return --distance;
}
MyClass::MyClass()
{
text_edit = new QTextEdit();
QObject::connect(text_edit, &QTextEdit::cursorPositionChanged, this, &MyClass::text_edit_changed_cursor_location);
}
MyClass::~MyClass()
{
delete text_edit;
}
void MyClass::text_edit_changed_cursor_location()
{
QString text_edit_string = text_edit->text();
QTextCursor text_edit_cursor = text_edit->textCursor();
auto current_position = text_edit_cursor.position();
QTextCursor new_text_cursor;
int distance_to_beginning = distance_to_word_beginning_or_end(text_edit_string, current_position, true);
int distance_to_end = distance_to_word_beginning_or_end(text_edit_string, current_position, false);
auto movement_type;
if (distance_to_beginning > distance_to_end)
{
new_text_cursor.setPosition(current_position + distance_to_end);
} else {
new_text_cursor.setPosition(current_position - distance_to_beginning);
}
text_edit->setTextCursor(new_text_cursor);
}
I've installed VTK 8.2.0 with CMake for use with Qt, but when trying to run some of the VTK examples I run into the issues shown below:
Note: the colour banding in the image comes from the GIF compression and is not part of the issue.
#include <vtkSmartPointer.h>
#include <vtkActor.h>
#include <vtkCubeSource.h>
#include <vtkPolyData.h>
#include <vtkPolyDataMapper.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkRenderer.h>
#include <cstdlib> // for EXIT_SUCCESS

int main(int, char *[])
{
    // Create a cube.
    vtkSmartPointer<vtkCubeSource> cubeSource =
        vtkSmartPointer<vtkCubeSource>::New();

    // Create a mapper and actor.
    vtkSmartPointer<vtkPolyDataMapper> mapper =
        vtkSmartPointer<vtkPolyDataMapper>::New();
    mapper->SetInputConnection(cubeSource->GetOutputPort());
    vtkSmartPointer<vtkActor> actor =
        vtkSmartPointer<vtkActor>::New();
    actor->SetMapper(mapper);

    // Create a renderer, render window, and interactor.
    vtkSmartPointer<vtkRenderer> renderer =
        vtkSmartPointer<vtkRenderer>::New();
    vtkSmartPointer<vtkRenderWindow> renderWindow =
        vtkSmartPointer<vtkRenderWindow>::New();
    renderWindow->AddRenderer(renderer);
    vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor =
        vtkSmartPointer<vtkRenderWindowInteractor>::New();
    renderWindowInteractor->SetRenderWindow(renderWindow);

    // Add the actor to the scene.
    renderer->AddActor(actor);
    renderer->SetBackground(.3, .2, .1);

    // Render and interact.
    renderWindow->Render();
    renderWindowInteractor->Start();
    return EXIT_SUCCESS;
}
https://vtk.org/Wiki/VTK/Examples/Cxx/GeometricObjects/Cube
For all of the examples, it appears that the previous frame is not being cleared and the background is not being shown over it.
I don't appear to get any error messages when compiling or running the program. Might it be a driver issue?
Any help would be much appreciated!
I want to create an OpenGL data processor using QGLFunctions, shaders, and framebuffers. I don't need any widgets. But to create valid shader and framebuffer instances, I need a valid QGLContext that supports the appropriate GL extensions.
With a null context, of course, nothing works. With a context on a null QPaintDevice it doesn't either. With a QPixmap as the device it creates a valid context, but that context lacks the GL extensions needed for shaders and framebuffers.
#include <QGLFramebufferObject>
#include <QGLShaderProgram>
#include <QtOpenGL/QGLFunctions>
#include <QPixmap>
#include <QDebug>
// ...
void GLProcessor::init()
{
    auto format = QGLFormat::defaultFormat();
    if (!context()) {
        m_context = new QGLContext(format, new QPixmap(1, 1));
        bool ok = m_context->create();
        qDebug() << "CREATING CONTEXT " << ok;
        Q_ASSERT(context()->isValid());
    }
    context()->makeCurrent();
    initializeGLFunctions(context());

    m_binFBO = new QGLFramebufferObject(lowsize, lowsize, QGLFramebufferObject::NoAttachment, GL_TEXTURE_2D, GL_RED);
    m_outFBO = new QGLFramebufferObject(lowsize, 1, QGLFramebufferObject::NoAttachment, GL_TEXTURE_2D, GL_RED);
    setupShaders();
    // ...
}
There is an option, of course, to do what is always done: create a QGLWidget and hide it, just to get its context. But that feels inelegant. P.S. I don't need CUDA, OpenCL, AMP and so on; for my task I need OpenGL.
How do I use shaders and framebuffers in Qt 4 without creating a QGLWidget?
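One thing I am considering (just a sketch, untested) is QGLPixelBuffer: a small pbuffer creates its own GL context without any widget, and that context can be made current before creating the FBOs and shaders. Whether the pbuffer context exposes the required extensions presumably still depends on the driver:

#include <QGLPixelBuffer>
#include <QGLFramebufferObject>
#include <QDebug>

// Sketch: obtain a valid, current GL context from a 1x1 pbuffer instead of a
// QGLWidget, then create FBOs/shaders against that context.
bool initOffscreenContext()
{
    if (!QGLPixelBuffer::hasOpenGLPbuffers()) {
        qDebug() << "pbuffers are not supported on this system";
        return false;
    }

    static QGLPixelBuffer pbuffer(1, 1, QGLFormat::defaultFormat());
    if (!pbuffer.makeCurrent()) // makes the pbuffer's own context current
        return false;

    if (!QGLFramebufferObject::hasOpenGLFramebufferObjects())
        return false; // the context still lacks the FBO extension

    QGLFramebufferObject fbo(256, 256); // placeholder size
    // A QGLShaderProgram and the actual processing pass would be created here,
    // against the now-current pbuffer context.
    return fbo.isValid();
}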
We have a QTableView which is filled with some arbitrary data. The user can reorder the rows of the table because the verticalHeader is made movable. Here is a sample code:
#include <QApplication>
#include <QTableWidget>
#include <QDebug>
#include <QVBoxLayout>
#include <QPushButton>
#include <QHeaderView>

int main(int argc, char* argv[])
{
    QApplication a(argc, argv);
    QWidget base;

    QTableWidget* tablWid = new QTableWidget(&base);
    tablWid->verticalHeader()->setSectionsMovable(true);
    tablWid->verticalHeader()->setDragEnabled(true);

    //////////////////////////////////////////////////////////////////////////
    // Fill the model with some data
    tablWid->model()->insertColumn(0);
    tablWid->model()->insertRows(0, 10);
    for (int i = 0; i < 10; ++i)
        tablWid->model()->setData(tablWid->model()->index(i, 0), "Item " + QString::number(i));
    //////////////////////////////////////////////////////////////////////////

    QPushButton* dumpButton = new QPushButton("Dump Model", &base);
    QObject::connect(dumpButton, &QPushButton::clicked, [tablWid]()->void {
        for (int j = 0; j < tablWid->model()->rowCount(); ++j) {
            qDebug() << tablWid->model()->index(j, 0).data().toString();
        }
    });

    QVBoxLayout* baseLay = new QVBoxLayout(&base);
    baseLay->addWidget(tablWid);
    baseLay->addWidget(dumpButton);
    base.show();
    return a.exec();
}
We want to read the cell contents in the same order as they are shown in the QTableView (as seen in the view, NOT as stored in the model). Currently, calling model->data() gives us the cell contents in the order they are stored in the model, NOT as seen in the view (the order changes when vertical sections are moved).
How is it possible to read the cell contents in this way?
So if I understood it correctly, you want to reorder the columns by moving the headers and then want to know what the view looks like.
I believe (90% certain) that reordering the headers does not trigger any change in the model, so if you just start printing the data of the model you will only see the data in the order it had initially, before you swapped/reordered a column via its header.
But you can maintain your own little data structure that keeps track of the order of the headers: when you reorder a header, the columnMoved() slot will be invoked, and at that point you can use the columnViewportPosition() method to figure out the positions of all the columns and update your small data structure storing the order of the columns.
So while printing the data you should always assume that the headers are in the order stored in your own data structure.
Hope that will do what you are looking for!
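Alternatively, here is a sketch (untested against the code above) that reads the rows in their visual order by asking the vertical header for its visual-to-logical mapping via QHeaderView::logicalIndex(), instead of tracking the order manually:

// Sketch: iterate the rows in the order they appear on screen by mapping each
// visual row back to its logical (model) row through the vertical header.
QObject::connect(dumpButton, &QPushButton::clicked, [tablWid]()->void {
    QHeaderView* vHeader = tablWid->verticalHeader();
    for (int visualRow = 0; visualRow < tablWid->model()->rowCount(); ++visualRow) {
        const int logicalRow = vHeader->logicalIndex(visualRow);
        qDebug() << tablWid->model()->index(logicalRow, 0).data().toString();
    }
});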
I'm trying to make an app using Kinect (OpenNI), processing the image (OpenCV) with a GUI.
I have tested OpenNI+OpenCV and OpenCV+Qt.
Normally when we use OpenCV+Qt we can make a QWidget to show the content of the camera (VideoCapture): capture a frame and keep updating it by querying the device for new frames.
With OpenNI and OpenCV I have seen examples using a for loop to pull data from the Kinect sensors (image, depth), but I don't know how to make this polling routine more straightforward; I mean, something similar to the OpenCV frame querying.
The idea is to embed the images captured from the Kinect in a QWidget. The QWidget will have (for now) two buttons, "Start Kinect" and "Quit", and below them the painting section that shows the captured data.
Any thoughts?
You can try the QTimer class to query the Kinect at fixed time intervals. In my application I use the code below.
void UpperBodyGestures::refreshUsingTimer()
{
    QTimer *timer = new QTimer(this);
    connect(timer, SIGNAL(timeout()), this, SLOT(MainEventFunction()));
    timer->start(30); // poll the sensor roughly every 30 ms
}

void UpperBodyGestures::on_pushButton_Kinect_clicked()
{
    InitKinect();
    ui.pushButton_Kinect->setEnabled(false);
}

// modify the main function to call the refreshUsingTimer function
UpperBodyGestures w;
w.show();
w.refreshUsingTimer();
return a.exec();
Then to query the frame you can use the label widget. I'm posting an example code below:
// Query the depth data from OpenNI
const XnDepthPixel* pDepth = depthMD.Data();

// Wrap it in an OpenCV matrix for manipulation etc.
cv::Mat DepthBuf(480, 640, CV_16UC1, (void*)pDepth);

// Normalize the depth image to the 0-255 range (can't remember the max range, so assuming 10k)
DepthBuf.convertTo(DepthBuf, CV_8UC1, 255.0 / 10000.0);

// QImage::Format_RGB888 expects 3 channels, so expand the grayscale image first
cv::Mat RgbBuf;
cv::cvtColor(DepthBuf, RgbBuf, CV_GRAY2RGB);

// Convert the OpenCV image to a QImage object
QImage qimage((const unsigned char*)RgbBuf.data, RgbBuf.size().width, RgbBuf.size().height, RgbBuf.step, QImage::Format_RGB888);

// Display the QImage in the myLabel widget defined in the UI
ui.myLabel->setPixmap(QPixmap::fromImage(qimage).scaled(QSize(300, 300), Qt::KeepAspectRatio, Qt::FastTransformation));