I'm trying to display a 3D scene (OpenGL + OpenCV) in a QGraphicsView object in Qt. The scene has 5 planes: top, bottom, right, left and front. I'm taking images from my webcam and mapping them to the front plane. I have successfully displayed 4 of the 5 planes; the front plane is missing.
I followed this tutorial to load the OpenGL scene: http://doc.trolltech.com/qq/qq26-openglcanvas.html
However, I don't know how to prepare the IplImage so it can be displayed in the Qt object. Do you have any suggestions?
This is something that I salvaged from a blog posting; you should tailor it to fit your needs.
It will give you a QImage that you can display using Qt.
QImage qImg;
CvCapture *cvCapture;

constructor()
{
    // set up the capture device (camera 0)
    cvCapture = cvCaptureFromCAM(0);
}

getQImageFromIplImage()
{
    // grab a frame from the capture device;
    // the returned pointer is owned by the capture, do not release it
    IplImage *frame = cvQueryFrame(cvCapture);
    // create an IplImage with 8-bit colour depth and 3 channels
    IplImage *iplImg = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_8U, 3);
    // copy the captured frame into the new image, converting the pixel data
    // from OpenCV's default BGR order to the RGB order Qt expects
    cvCvtColor(frame, iplImg, CV_BGR2RGB);
    // wrap the converted RGB pixel data in a QImage (no deep copy, so keep iplImg alive)
    qImg = QImage((uchar *)iplImg->imageData, iplImg->width, iplImg->height,
                  iplImg->widthStep, QImage::Format_RGB888);
}
For the full code, check out:
http://www.morethantechnical.com/2009/03/05/qt-opencv-combined-for-face-detecting-qwidgets/
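Since the front plane in your scene is an OpenGL quad, the usual next step is to upload that QImage as a texture. A minimal sketch, assuming a current GL context and a caller-managed GLuint texture id (uploadFrameTexture is just an illustrative name, not from your code):

#include <QGLWidget>
#include <QImage>

// Uploads a QImage as an OpenGL texture; call it with a current GL context.
// QGLWidget::convertToGLFormat() flips the image vertically and converts it
// to the RGBA byte order glTexImage2D() expects.
void uploadFrameTexture(const QImage &frame, GLuint &textureId)
{
    QImage glImage = QGLWidget::convertToGLFormat(frame);

    if (textureId == 0)
        glGenTextures(1, &textureId);

    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, glImage.width(), glImage.height(),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, glImage.bits());
}

Bind the resulting texture id and draw the front plane's quad with the usual texture-coordinate calls.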
I have an application where I copy some raw image data into a QImage directly:
QImage* img = new QImage(desc.Width, desc.Height, QImage::Format_RGB32);
for (int y = 0; y < img->height(); y++)
{
    memcpy(img->scanLine(y), &rawData[y * pRes->RowPitch], pRes->RowPitch);
}
return img;
Later this QImage is drawn via a call to
painter.drawPixmap();
Unfortunately drawPixmap() cannot handle a QImage directly, so it first has to be converted:
m_bgImage = new QPixmap();
m_bgImage->convertFromImage(image);
Due to timing reasons I would like to drop this additional conversion step.
Thus my question: is there any function in QPixmap that allows direct pixel data manipulation, as in QImage?
My idea would be to start with a QPixmap from the very beginning, copy the raw image data into the QPixmap object and then use it directly.
Thanks :-)
First of all, you don't need that loop to create the QImage. You can do:
QImage* img = new QImage(reinterpret_cast<const uchar*>(rawData), desc.Width, desc.Height, pRes->RowPitch, QImage::Format_RGB32); // RowPitch is already the byte stride, as your memcpy implies; note the QImage wraps rawData without copying it
Then you can:
painter.drawImage(QPointF(0, 0), *img);
If there is any specific reason to use QPixmap (like QPixmap caching), you will have no choice but to convert it to a QPixmap first.
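For completeness, a minimal sketch of painting that wrapped image directly in a paintEvent(); the widget class and the members m_rawData, m_desc and m_rowPitch are assumptions standing in for your rawData, desc and pRes->RowPitch:

#include <QWidget>
#include <QPainter>
#include <QImage>

// Paints the raw buffer directly, skipping the QPixmap conversion.
void ImageWidget::paintEvent(QPaintEvent *)
{
    // wrap the existing buffer; no pixel data is copied here
    QImage img(reinterpret_cast<const uchar *>(m_rawData),
               m_desc.Width, m_desc.Height, m_rowPitch, QImage::Format_RGB32);

    QPainter painter(this);
    painter.drawImage(QPointF(0, 0), img);   // drawImage() accepts a QImage directly
}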
I recently started to learn Qt and I'm now working on a GCS project. It must show a map built from tiled images, some graphics items like the plane and its path, and, on top of everything, some gauges.
So we have 3 kinds of items:
A tiled map in the background, which changes when scrolling.
In the middle, a picture of the airplane that moves as the GPS data changes, together with its path.
On top of all of these items, 3 or 4 gauges (speed meter, horizon gauge and altimeter) that must stay fixed somewhere in the QGraphicsView and not move when scrolling up/down or left/right.
The question is: what is the best way to implement this?
Here is a first look at my project:
In this first look the gauges are not on top of the map, but I want them to be! I want a bigger map area with the gauges drawn inside it!
And here is the map updater code:
void mainMap::update()
{
    m_scene->clear();
    QString TilePathTemp;
    QImage *imageTemp = new QImage();
    int X_Start = visibleRect().topLeft().x() / 256;
    int X_Num = qCeil((float)visibleRect().bottomRight().x() / 256.0f - (float)visibleRect().topLeft().x() / 256.0f);
    int Y_Start = visibleRect().topLeft().y() / 256;
    int Y_Num = qCeil((float)visibleRect().bottomRight().y() / 256.0f - (float)visibleRect().topLeft().y() / 256.0f);
    LastCenterPoint->setX(visibleRect().center().x());
    LastCenterPoint->setY(visibleRect().center().y());
    X_Start = (X_Start - X_MAP_MARGIN) > 0 ? (X_Start - X_MAP_MARGIN) : 0;
    Y_Start = (Y_Start - Y_MAP_MARGIN) > 0 ? (Y_Start - Y_MAP_MARGIN) : 0;
    X_Num += X_MAP_MARGIN;
    Y_Num += Y_MAP_MARGIN;
    qDebug() << "XS:" << X_Start << " Num:" << X_Num;
    qDebug() << "YS:" << Y_Start << " Num:" << Y_Num;
    for (int x = X_Start; x <= X_Start + X_Num; x++) {
        for (int y = Y_Start; y <= Y_Start + Y_Num; y++) {
            if (Setting->value("MapType", gis::Hybrid).toInt() == gis::Hybrid)
                TilePathTemp = Setting->value("MapPath", "/Users/M410/Documents/Map").toString() + "/Hybrid/gh_" + QString::number(x) + "_" + QString::number(y) + "_" + QString::number(ZoomLevel) + ".jpeg";
            else if (Setting->value("MapType", gis::Sattelite).toInt() == gis::Sattelite)
                TilePathTemp = Setting->value("MapPath", "/Users/M410/Documents/Map").toString() + "/Sattelite/gs_" + QString::number(x) + "_" + QString::number(y) + "_" + QString::number(ZoomLevel) + ".jpeg";
            else if (Setting->value("MapType", gis::Street).toInt() == gis::Street)
                TilePathTemp = Setting->value("MapPath", "/Users/M410/Documents/Map").toString() + "/Street/gm_" + QString::number(x) + "_" + QString::number(y) + "_" + QString::number(ZoomLevel) + ".jpeg";
            QFileInfo check_file(TilePathTemp);
            // check if the file exists and, if yes, is it really a file and not a directory?
            if (check_file.exists() && check_file.isFile()) {
                // qDebug() << "Exist!";
                imageTemp->load(TilePathTemp);
                QPixmap srcImage = QPixmap::fromImage(*imageTemp);
                //QPixmap srcImage("qrc:/Map/File1.jpeg");
                QGraphicsPixmapItem *item = new QGraphicsPixmapItem(srcImage);
                item->setPos(QPointF(x * 256, y * 256));
                m_scene->addItem(item);
                // centerOn( width() / 2.0f , height() / 2.0f );
            } else {
                qDebug() << "NOT Exist!";
            }
        }
    }
}
Really, you should consider using QML. The advantage of using QML instead of QGraphicsView is you can iterate a lot faster than if you were working directly in C++. The primary downside is generally increased memory usage and incompatibility with QWidgets.
So if you need unique graphics, and very little "standard widget" stuff, you should use QML first and then QGraphicsView ONLY IF requirements dictate it.
Specific to your project though, Qt has a Map type which could be useful: https://doc.qt.io/qt-5/qml-qtlocation-map.html
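If the rest of your application stays widget-based, you can still follow that advice by embedding the QML scene in a QQuickWidget. A minimal sketch, assuming a hypothetical main.qml that declares the QtLocation Map with your gauge items anchored on top of it (anchored items do not move when the map pans):

// Embeds a QML scene (map + gauges) inside a widget-based application.
// Requires QT += quickwidgets in the .pro file; main.qml is a hypothetical
// file containing the QtLocation Map and the gauge items.
#include <QApplication>
#include <QQuickWidget>
#include <QUrl>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QQuickWidget view;
    view.setResizeMode(QQuickWidget::SizeRootObjectToView);
    view.setSource(QUrl(QStringLiteral("qrc:/main.qml")));
    view.resize(800, 600);
    view.show();

    return app.exec();
}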
I have created a simple application and I need to export a pixmap to a 16-bit BMP image. I have several pixmap items, so I have a for loop like this, where I first create a QImage and convert it to Format_RGB16:
for (QList<image_handler *>::iterator it = imageItems->begin(); it != imageItems->end(); it++)
{
    ...
    // image_handler inherits QPixmap
    QFile export_image(path + "/img_" + code + ".bmp");
    QImage export_img = (*it)->toImage().convertToFormat(QImage::Format_RGB16);
    export_img.save(&export_image, "BMP");
    ...
}
where image_handler is my custom QPixmap subclass. The images are exported to the given path with the correct filenames. However, when I look at the file properties (in Windows) I can see that the image depth is 24-bit. Unfortunately I need them to be 16-bit.
What am I doing wrong here? Or is this a bug in Qt? If so, how can I export 16-bit BMPs from a pixmap?
It turns out that Qt forcibly converts the image before saving it to BMP.
qt-src/src/gui/image/qbmphandler.cpp:777:
bool QBmpHandler::write(const QImage &img)
{
    QImage image;
    switch (img.format()) {
    case QImage::Format_ARGB8565_Premultiplied:
    case QImage::Format_ARGB8555_Premultiplied:
    case QImage::Format_ARGB6666_Premultiplied:
    case QImage::Format_ARGB4444_Premultiplied:
        image = img.convertToFormat(QImage::Format_ARGB32);
        break;
    case QImage::Format_RGB16:
    case QImage::Format_RGB888:
    case QImage::Format_RGB666:
    case QImage::Format_RGB555:
    case QImage::Format_RGB444:
        image = img.convertToFormat(QImage::Format_RGB32);
        break;
    default:
        image = img;
    }
...
So, if you need to save a 16-bit BMP, you'll have to do it manually: fill in the header yourself and write the pixel data using QImage::bits() and QImage::byteCount().
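Along those lines, here is a minimal sketch of such a manual writer (a BITMAPFILEHEADER plus BITMAPINFOHEADER with BI_BITFIELDS RGB565 masks); treat it as a starting point and verify the output against whatever has to read these files:

#include <QImage>
#include <QFile>
#include <QDataStream>

// Writes a QImage as a 16-bit (RGB565) BMP. Assumes the Qt RGB16 layout,
// whose 4-byte-aligned scanlines already match BMP's row padding rule.
bool save16BitBmp(const QImage &src, const QString &fileName)
{
    QImage img = src.convertToFormat(QImage::Format_RGB16);  // RGB565 in Qt

    QFile file(fileName);
    if (!file.open(QIODevice::WriteOnly))
        return false;

    QDataStream out(&file);
    out.setByteOrder(QDataStream::LittleEndian);

    const quint32 headerSize = 14 + 40 + 12;              // file header + info header + 3 masks
    const quint32 rowBytes   = img.bytesPerLine();
    const quint32 dataSize   = rowBytes * img.height();

    // BITMAPFILEHEADER
    out << quint8('B') << quint8('M');
    out << quint32(headerSize + dataSize);                 // total file size
    out << quint32(0);                                     // reserved
    out << quint32(headerSize);                            // offset to pixel data

    // BITMAPINFOHEADER
    out << quint32(40);                                    // header size
    out << qint32(img.width()) << qint32(img.height());    // positive height = bottom-up rows
    out << quint16(1) << quint16(16);                      // planes, bits per pixel
    out << quint32(3);                                     // BI_BITFIELDS
    out << quint32(dataSize);
    out << qint32(2835) << qint32(2835);                   // ~72 dpi in pixels per metre
    out << quint32(0) << quint32(0);                       // colours used / important

    // RGB565 channel masks
    out << quint32(0xF800) << quint32(0x07E0) << quint32(0x001F);

    // pixel data, bottom row first
    for (int y = img.height() - 1; y >= 0; --y)
        out.writeRawData(reinterpret_cast<const char *>(img.constScanLine(y)), rowBytes);

    return true;
}

In the loop above you would then call save16BitBmp((*it)->toImage(), path + "/img_" + code + ".bmp") instead of QImage::save().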
Is there a way to pass an image to QPixmap as an in-memory object rather than as a file path in Qt (C++)? For example, in the following code the processed image should be displayed in a label via QPixmap, but the only way I found was to save it to disk first and then pass the file path to QPixmap. That is not efficient, so could anybody help me? I'm new to Qt and have no experience.
void MainWindow::on_pushButton_3_clicked()
{
    cv::Mat hsv_img, seg_img, infected_area, hsv_infected, seg_input, hsv_seg;
    Mat filter_img, hsv;
    cv::Mat input_image = imread(this->file_name.toAscii().data());
    medianBlur(input_image, filter_img, 7);
    cvtColor(filter_img, hsv, CV_BGR2HSV);
    hsvSeg(filter_img, hsv, hsv_img, seg_img, 14.0, 0.0, 117.0, 255.0, 133.0, 179.0); // call hsv segmentation function to segment the leaf
    cvtColor(seg_img, hsv_seg, CV_BGR2HSV);
    hsvSeg(seg_img, hsv_seg, hsv_infected, infected_area, 0.09*255, 0.01*255, 0.1*255, 0.14*255, 1.0*255, 1.0*255); // call hsv segmentation to segment the infected areas
    imwrite("C:/Image.jpg", filter_img);
    this->ui->ImageView_2->setPixmap(QPixmap("C:/Image.jpg")); // here is the problem
}
Try this:
// convert from OpenCV's BGR channel order to the RGB order QImage expects
cvtColor(mat, mat, CV_BGR2RGB);
// wrap the cv::Mat's pixel data in a QImage (no copy is made here)
QImage qimg((uchar*)mat.data, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
ui->label->setPixmap(QPixmap::fromImage(qimg));
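Applied to your function, a sketch under the assumption that ImageView_2 is a QLabel and filter_img is the image you want to show (QPixmap::fromImage() deep-copies the pixels, so the temporary QImage that merely wraps the Mat is safe to use here):

// Replaces the imwrite()/QPixmap(path) round trip at the end of on_pushButton_3_clicked()
cv::Mat rgb;
cv::cvtColor(filter_img, rgb, CV_BGR2RGB);   // convert a copy, leaving filter_img in BGR

QImage view((const uchar*)rgb.data, rgb.cols, rgb.rows,
            (int)rgb.step, QImage::Format_RGB888);

// QPixmap::fromImage() copies the pixel data, so 'rgb' may go out of scope afterwards
this->ui->ImageView_2->setPixmap(QPixmap::fromImage(view));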
I'm trying to make an app using the Kinect (OpenNI), processing the images with OpenCV, all behind a Qt GUI.
I have tested OpenNI+OpenCV and OpenCV+Qt separately.
Normally when we use OpenCV+Qt we can make a QWidget to show the content of the camera (VideoCapture): capture a frame and update the widget by querying the device for new frames.
With OpenNI and OpenCV I see examples using a for loop to pull data from the Kinect sensors (image, depth), but I don't know how to make this polling routine more straightforward, I mean similar to the OpenCV frame querying.
The idea is to embed the images captured from the Kinect in a QWidget. The QWidget will have (for now) 2 buttons, "Start Kinect" and "Quit", and below them a painting section to show the captured data.
Any thoughts?
You can try the QTimer class to query the Kinect at fixed time intervals. In my application I use the code below.
void UpperBodyGestures::refreshUsingTimer()
{
    QTimer *timer = new QTimer(this);
    connect(timer, SIGNAL(timeout()), this, SLOT(MainEventFunction()));
    timer->start(30);   // fire roughly every 30 ms
}

void UpperBodyGestures::on_pushButton_Kinect_clicked()
{
    InitKinect();
    ui.pushButton_Kinect->setEnabled(false);
}

// modify the main function to call the refreshUsingTimer function
QApplication a(argc, argv);
UpperBodyGestures w;
w.show();
w.refreshUsingTimer();
return a.exec();
Then, to display the queried frame you can use a label widget. I'm posting some example code below:
// Query the depth data from OpenNI
const XnDepthPixel* pDepth = depthMD.Data();
// Wrap it in an OpenCV Mat for manipulation etc.
cv::Mat DepthBuf(480, 640, CV_16UC1, (void*)pDepth);
// Normalize the depth image to the 0-255 range
// (can't remember the sensor's max range, so assuming 10k here)
DepthBuf.convertTo(DepthBuf, CV_8UC1, 255.0 / 10000.0);
// QImage::Format_RGB888 expects three channels, so expand the grayscale image
cv::cvtColor(DepthBuf, DepthBuf, CV_GRAY2RGB);
// Convert the OpenCV image to a QImage object
QImage qimage((const unsigned char*)DepthBuf.data, DepthBuf.size().width,
              DepthBuf.size().height, DepthBuf.step, QImage::Format_RGB888);
// Display the QImage in the label widget defined in the UI
ui.myLabel->setPixmap(QPixmap::fromImage(qimage).scaled(QSize(300, 300),
                      Qt::KeepAspectRatio, Qt::FastTransformation));
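The capture-and-display code above would normally live in the MainEventFunction() slot the timer is connected to; a rough outline, where g_Context, g_DepthGenerator and depthMD are assumed member names from the OpenNI setup, not taken from the original code:

// Slot fired by the QTimer roughly every 30 ms.
void UpperBodyGestures::MainEventFunction()
{
    // wait until a new frame is available, then refresh the depth metadata
    g_Context.WaitAnyUpdateAll();
    g_DepthGenerator.GetMetaData(depthMD);

    // ... then run the depth -> cv::Mat -> QImage -> QLabel code shown above ...
}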