Some background on my issue. My goal is to optimize the drawing of images coming from a webcam. The images arrive as QVideoFrame, are currently loaded into a QImage, and are drawn from there. This solution works fine, but drawing a QImage is very slow on X11: drawing one image takes about 20 ms, which doesn't sound like much, but doing it for every frame cuts the framerate of the camera feed in half.
I did some research and testing: drawing QPixmaps on X11 can be about 10 times faster than drawing QImages.
This is how the drawing is currently done:
if (mVFcurrentFrame.map(QAbstractVideoBuffer::ReadOnly))
{
    QImage image(mVFcurrentFrame.bits(),
                 mVFcurrentFrame.width(),
                 mVFcurrentFrame.height(),
                 mVFcurrentFrame.bytesPerLine(),
                 imageFormat);
    painter->drawImage(0, 0, image); // Takes about 20 ms
    mVFcurrentFrame.unmap();
}
What I have tried so far:
Converting the QImage to a QPixmap. This works, but the conversion is as slow as painting the QImage.
Loading the QVideoFrame straight into a QPixmap with QPixmap::loadFromData(). So far I have been unable to make it work.
So my question is: can I convert a QVideoFrame straight to a QPixmap and draw that instead of using a QImage, and how would you do the QVideoFrame-to-QPixmap conversion without a QImage in between?
If this isn't possible, could I thread the QImage-to-QPixmap conversion, or optimize the drawing in some other way?
This is my problem too: camera frames are shown very slowly in a QLabel.
My code is here:
QCamera *camera = new QCamera(this);
camera->setCaptureMode(QCamera::CaptureViewfinder);

QVideoProbe *videoProbe = new QVideoProbe(this);
bool ret = videoProbe->setSource(camera);
if (ret) {
    connect(videoProbe, SIGNAL(videoFrameProbed(const QVideoFrame &)),
            this, SLOT(present(const QVideoFrame &)));
}
camera->start();
...
...
bool MainWindow::present(const QVideoFrame &frame)
{
    QVideoFrame cloneFrame(frame);
    if (cloneFrame.map(QAbstractVideoBuffer::ReadOnly))
    {
        QImage img(cloneFrame.size(), QImage::Format_ARGB32);
        qt_convert_NV21_to_ARGB32(cloneFrame.bits(),
                                  (quint32 *)img.bits(),
                                  cloneFrame.width(),
                                  cloneFrame.height());
        label->setPixmap(QPixmap::fromImage(img));
        cloneFrame.unmap();
    }
    return true;
}
A related question:
I get raw video data from the V4L2 driver using VIDIOC_DQBUF, and I want to render these frames in Qt using QVideoFrame, as described here: https://blog.katastros.com/a?ID=9f708708-c5b3-4cb3-bbce-400cc8b8000c
This code works, but it has huge performance issues. Here is the problematic part:
QVideoFrame f(size, QSize(width, height), width, QVideoFrame::Format_YUV420P);
if (f.map(QAbstractVideoBuffer::WriteOnly)) {
    memcpy(f.bits(), data, size);
    f.setStartTime(0);
    f.unmap();
    emit newFrameAvailable(f);
}
The memcpy for my 4K video reduces the framerate from 35 fps to 5 fps on my ARM-based embedded system.
This constructor is supposed to construct a video frame from a buffer with the given pixel format and size in pixels, but I cannot find any example of its use:
QVideoFrame::QVideoFrame(QAbstractVideoBuffer *buffer, const QSize &size, QVideoFrame::PixelFormat format)
I just need to pass a valid buffer to the QVideoFrame; I don't need to map or unmap it. Something like this:
unsigned char * pBuffer = get_pointer_to_a_frame();
QVideoFrame frame((QAbstractVideoBuffer *) pBuffer, QSize(width, height), QVideoFrame::Format_YUV420P);
frame.setStartTime(0);
emit newFrameAvailable(frame);
Any zero-copy QVideoFrame usage would be welcome.
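For reference, here is a minimal sketch of how that constructor is meant to be used with a custom buffer, assuming Qt 5's QAbstractVideoBuffer; the class name V4L2FrameBuffer and the ownership notes are illustrative assumptions, not something taken from the code above:

#include <QAbstractVideoBuffer>
#include <QVideoFrame>

// Wraps an existing V4L2 buffer so a QVideoFrame can use it without a memcpy.
class V4L2FrameBuffer : public QAbstractVideoBuffer
{
public:
    V4L2FrameBuffer(uchar *data, int numBytes, int bytesPerLine)
        : QAbstractVideoBuffer(NoHandle),
          m_data(data), m_numBytes(numBytes), m_bytesPerLine(bytesPerLine) {}

    MapMode mapMode() const override { return ReadOnly; }

    uchar *map(MapMode, int *numBytes, int *bytesPerLine) override
    {
        if (numBytes)
            *numBytes = m_numBytes;
        if (bytesPerLine)
            *bytesPerLine = m_bytesPerLine;   // luma-plane stride for YUV420P
        return m_data;                        // hand out the driver's buffer, no copy
    }

    void unmap() override {}

private:
    uchar *m_data;
    int m_numBytes;
    int m_bytesPerLine;
};

// Usage with the names from the snippets above:
//   QVideoFrame frame(new V4L2FrameBuffer(pBuffer, size, width),
//                     QSize(width, height), QVideoFrame::Format_YUV420P);
//   frame.setStartTime(0);
//   emit newFrameAvailable(frame);
// The frame releases the wrapper once it is no longer referenced, but the
// underlying V4L2 memory must stay valid (i.e. don't requeue it with
// VIDIOC_QBUF) until every consumer is done with the frame.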
I have an application where I copy some raw image data into a QImage directly:
QImage* img = new QImage(desc.Width, desc.Height, QImage::Format_RGB32);
for (int y = 0; y < img->height(); y++)
{
    memcpy(img->scanLine(y), &rawData[y * pRes->RowPitch], pRes->RowPitch);
}
return img;
Later this QImage is drawn via a call to
painter.drawPixmap();
Unfortunately drawPixmap() cannot handle a QImage directly, so it first has to be converted:
m_bgImage = new QPixmap();
m_bgImage->convertFromImage(image);
For timing reasons I would like to drop this additional conversion step.
Thus my question: are there any functions in QPixmap that allow direct manipulation of the image data, as in QImage?
My idea would be to start with a QPixmap from the very beginning, copy the raw image data into the QPixmap object and then use it directly.
Thanks :-)
First of all, you won't need that loop to create the QImage. You can do:
QImage* img = new QImage((const uchar *)rawData, desc.Width, desc.Height,
                         pRes->RowPitch /* bytes per line, matching your memcpy */, QImage::Format_RGB32);
Then you can
painter.drawImage(QPointF(0,0),*img);
Note that a QImage constructed this way does not copy the pixel data, so rawData must remain valid for as long as the image is used. If there is a specific reason to use QPixmap (such as QPixmap caching), you will have no choice but to convert the image to a QPixmap first.
Is there a way to pass an image to QPixmap as an object rather than as a file path in Qt (C++)? For example, in the following code the processed image should be displayed in a label via QPixmap, but the only way I have managed to do it is to save the image to disk and then load it back into a QPixmap. That is not efficient, so could anybody help me? I'm new to Qt and have little experience.
void MainWindow::on_pushButton_3_clicked()
{
    cv::Mat hsv_img, seg_img, infected_area, hsv_infected, seg_input, hsv_seg;
    Mat filter_img, hsv;
    cv::Mat input_image = imread(this->file_name.toAscii().data());
    medianBlur(input_image, filter_img, 7);
    cvtColor(filter_img, hsv, CV_BGR2HSV);
    // Call the HSV segmentation function to segment the leaf
    hsvSeg(filter_img, hsv, hsv_img, seg_img, 14.0, 0.0, 117.0, 255.0, 133.0, 179.0);
    cvtColor(seg_img, hsv_seg, CV_BGR2HSV);
    // Call HSV segmentation again to segment the infected areas
    hsvSeg(seg_img, hsv_seg, hsv_infected, infected_area, 0.09*255, 0.01*255, 0.1*255, 0.14*255, 1.0*255, 1.0*255);
    imwrite("C:/Image.jpg", filter_img);
    this->ui->ImageView_2->setPixmap(QPixmap("C:/Image.jpg")); // here is the problem
}
Try this:
cvtColor(mat, mat, CV_BGR2RGB);
QImage qimg((uchar*)mat.data, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
ui->label->setPixmap(QPixmap::fromImage(qimg));
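As a rough sketch of the same idea factored into a reusable helper, assuming an 8-bit, 3-channel BGR cv::Mat as input; the helper name matToPixmap is made up here, and the usage line reuses the question's own names:

#include <opencv2/opencv.hpp>
#include <QImage>
#include <QPixmap>

// Convert a BGR cv::Mat to a QPixmap without going through a file on disk.
static QPixmap matToPixmap(const cv::Mat &bgr)
{
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, CV_BGR2RGB);                    // Qt expects RGB byte order
    QImage view(rgb.data, rgb.cols, rgb.rows,
                static_cast<int>(rgb.step), QImage::Format_RGB888);
    return QPixmap::fromImage(view);                       // copies the pixels, so 'rgb' may go out of scope
}

// Usage in the slot above, instead of imwrite() + QPixmap(path):
// this->ui->ImageView_2->setPixmap(matToPixmap(filter_img));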
I'm trying to make an app with a GUI that uses the Kinect (OpenNI) and processes the images with OpenCV.
I have tested OpenNI+OpenCV and OpenCV+Qt.
Normally, when we use OpenCV+Qt, we can make a QWidget to show the content of the camera (VideoCapture): capture a frame, then update by querying the device for new frames.
With OpenNI and OpenCV I see examples using a for loop to pull data from the Kinect sensors (image, depth), but I don't know how to make this polling routine more straightforward, i.e. similar to the OpenCV frame querying.
The idea is to embed the images captured from the Kinect in a QWidget. The QWidget will have (for now) two buttons, "Start Kinect" and "Quit", and below them a painting area to show the captured data.
Any thoughts?
You can try the QTimer class to query the Kinect at fixed time intervals. In my application I use the code below.
void UpperBodyGestures::refreshUsingTimer()
{
    QTimer *timer = new QTimer(this);
    connect(timer, SIGNAL(timeout()), this, SLOT(MainEventFunction()));
    timer->start(30);
}

void UpperBodyGestures::on_pushButton_Kinect_clicked()
{
    InitKinect();
    ui.pushButton_Kinect->setEnabled(false);
}
// Modify the main function to call refreshUsingTimer():
QApplication a(argc, argv);
UpperBodyGestures w;
w.show();
w.refreshUsingTimer();
return a.exec();
Then, to display the frame, you can use a label widget. An example is below:
// Query the depth data from OpenNI
const XnDepthPixel* pDepth = depthMD.Data();
// Wrap it in an OpenCV Mat for manipulation etc.
cv::Mat DepthBuf(480, 640, CV_16UC1, (unsigned char*)pDepth);
// Normalize the depth image to the 0-255 range (assuming a max range of 10000)
cv::Mat Depth8U;
DepthBuf.convertTo(Depth8U, CV_8UC1, 255.0 / 10000.0);
// Expand to 3 channels so the buffer layout matches QImage::Format_RGB888
cv::Mat DepthRGB;
cvtColor(Depth8U, DepthRGB, CV_GRAY2RGB);
// Convert the OpenCV image to a QImage object
QImage qimage((const unsigned char*)DepthRGB.data, DepthRGB.size().width,
              DepthRGB.size().height, DepthRGB.step, QImage::Format_RGB888);
// Display the QImage in the myLabel object
ui.myLabel->setPixmap(QPixmap::fromImage(qimage).scaled(
    QSize(300, 300), Qt::KeepAspectRatio, Qt::FastTransformation));
I am looking for a way to simply paste a QImage into a bigger one, starting at a given (x, y). Right now I am copying the whole QImage pixel by pixel.
QImage srcImage(100, 100, QImage::Format_ARGB32);
QImage destImage(200, 200, QImage::Format_ARGB32);
QPoint destPos(25, 25); // the location to draw the source image within the destination

srcImage.fill(Qt::red);
destImage.fill(Qt::white);

QPainter painter(&destImage);
painter.drawImage(destPos, srcImage);
painter.end();
Yes: use a QPainter to paint onto a QPaintDevice. QImage is a QPaintDevice, so this works.