I would like to make the buffer of a cv::Mat point to the buffer of a QImage rather than copy the data of the QImage into the cv::Mat.
cv::Mat const reference_qimage_to_mat(QImage const &img, int format)
{
    cv::Mat mat(img.height(), img.width(), format);
    for(int i = 0; i != mat.rows; ++i)
    {
        //pseudo code, wouldn't work
        //mat.ptr(i) = img.scanLine(i);
    }
    return mat;
}
I tried searching for the answer on Google, but I could only find how to copy the data of a QImage into a cv::Mat. Thanks.
The cv::Mat object is simply a header for the image data, so you can do it when constructing your object:
cv::Mat mat(img.height(), img.width(), type, img.bits());
where type depends on your data: CV_8UC1 for a single channel, CV_8UC3 for RGB, and so on.
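Putting that together, a minimal non-copying wrapper might look like the sketch below (assuming an 8-bit, 3-channel QImage; the extra step argument is passed because QImage pads each scanline to a 4-byte boundary):
#include <opencv2/core/core.hpp>
#include <QImage>

// The returned Mat header points at the QImage's buffer; no pixel data is copied,
// so the QImage must stay alive (and keep its size/format) while the Mat is used.
// img is taken by non-const reference because img.bits() must return a writable pointer.
cv::Mat reference_qimage_to_mat(QImage &img, int type = CV_8UC3)
{
    return cv::Mat(img.height(), img.width(), type,
                   img.bits(), img.bytesPerLine());
}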
Related
I need to read raw data (bgra5551) into a QImage, display it, and then write the QImage back out as raw data (bgra5551). Qt only supports Format_RGB555; is there any way to make Format_RGB555 transparent (alpha channel 255)?
I tried this:
QImage getImage(uchar* data, int width, int height)
{
    return QImage(data, width, height, QImage::Format_RGB555).copy();
}

uchar* getData(QImage img)
{
    return img.bits();
}
QImage imgRGB555 = getImage(bgra5551, width, height);
// problem1: imgRGB555 is not transparent.
uchar* org = getData(imgRGB555);
// problem2: bgra5551 != org
I'm creating a socket-based program to send a screenshot from one user to another. I need to convert the screenshot to a byte array before sending it. After I convert my screenshot to a QByteArray, I insert 4 bytes at the beginning of the array to mark that it is a picture (the number 20 tells me it is a picture and not text or something else).
After I send the byte array via a socket to the other user, on receipt I read the first 4 bytes to know what it is. Since it is a picture, I then convert it from a QByteArray to a QPixmap to show it on a label. I use secondPixmap.loadFromData(byteArray, "JPEG") to load it, but it does not load any picture.
This is a sample of my code:
void MainWindow::shootScreen()
{
    originalPixmap = QPixmap(); // clear image for low memory situations
                                // on embedded devices.
    originalPixmap = QGuiApplication::primaryScreen()->grabWindow(0);
    scaledPixmap = originalPixmap.scaled(500, 500);

    QByteArray bArray;
    QBuffer buffer(&bArray);
    buffer.open(QIODevice::WriteOnly);
    originalPixmap.save(&buffer, "JPEG", 5);
    qDebug() << bArray.size() << "diz0";

    byteArray = QByteArray();
    QDataStream ds(&byteArray, QIODevice::ReadWrite);
    int32_t c = 20;
    ds << c;
    ds << bArray;
}

void MainWindow::updateScreenshotLabel()
{
    this->ui->label->setPixmap(secondPixmap.scaled(this->ui->label->size(), Qt::KeepAspectRatio, Qt::SmoothTransformation));
}

void MainWindow::on_pushButton_clicked()
{
    shootScreen();
}

void MainWindow::on_pushButton_2_clicked()
{
    secondPixmap = QPixmap();
    QDataStream ds(&byteArray, QIODevice::ReadOnly);
    qint32 code;
    ds >> code;
    secondPixmap.loadFromData(byteArray, "JPEG");
    updateScreenshotLabel();
}
Your MainWindow::on_pushButton_2_clicked implementation looks odd. You have...
QDataStream ds(&byteArray,QIODevice::ReadOnly);
which creates a read-only QDataStream that will read its input data from byteArray. But later you have...
secondPixmap.loadFromData(byteArray,"JPEG");
which attempts to read the QPixmap directly from the same QByteArray -- bypassing the QDataStream completely.
You can also make use of the QDataStream stream operators that read a QPixmap from, and write one to, a QDataStream. So I think you're looking for something like...
QDataStream ds(&byteArray, QIODevice::ReadOnly);
qint32 code;
ds >> code;
if (code == 20)
    ds >> secondPixmap;
And likewise for your MainWindow::shootScreen implementation. You could reduce your code a fair bit by making use of QDataStream & operator<<(QDataStream &stream, const QPixmap &pixmap).
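Something along these lines, for example (a sketch reusing the question's member names; the pixmap is serialized straight into the stream, so no intermediate JPEG buffer is needed):
void MainWindow::shootScreen()
{
    originalPixmap = QGuiApplication::primaryScreen()->grabWindow(0);

    byteArray.clear();
    QDataStream ds(&byteArray, QIODevice::WriteOnly);
    ds << qint32(20);      // type tag: 20 means "picture"
    ds << originalPixmap;  // serialized by QDataStream & operator<<(QDataStream &, const QPixmap &)
}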
The task is to copy a frame from a QVideoFrame, possibly do something to that image, and display the manipulated image in QML.
...
m_lastFrame = QImage(videoFrame.width(), videoFrame.height(), QImage::Format_ARGB32);
memcpy(m_lastFrame.bits(), videoFrame.bits(),videoFrame.mappedBytes());
...
The above code causes a crash, since m_lastFrame's buffer is 32 bytes smaller than the mapped frame (3686400 vs 3686432); videoFrame.mappedBytes() reports 3686432 bytes. What am I doing wrong here? Or how should I calculate the size of m_lastFrame?
The code is running on Mac OS X 10.9.5 with Qt 5.1.1.
Some additional code:
...
if (videoFrame.map(QAbstractVideoBuffer::ReadOnly)) {
    m_lastFrame = QImage(videoFrame.width(), videoFrame.height(), QImage::Format_ARGB32);
    memcpy(m_lastFrame.bits(), videoFrame.bits(), videoFrame.mappedBytes() - 32);
    ...
}
...
Since that doesn't always work, see also the comment at convert QVideoFrame to QImage, i.e.
QImage Camera::imageFromVideoFrame(const QVideoFrame& buffer) const
{
QImage img;
QVideoFrame frame(buffer); // make a copy we can call map (non-const) on
frame.map(QAbstractVideoBuffer::ReadOnly);
QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(
frame.pixelFormat());
// BUT the frame.pixelFormat() is QVideoFrame::Format_Jpeg, and this is
// mapped to QImage::Format_Invalid by
// QVideoFrame::imageFormatFromPixelFormat
if (imageFormat != QImage::Format_Invalid) {
img = QImage(frame.bits(),
frame.width(),
frame.height(),
// frame.bytesPerLine(),
imageFormat);
} else {
// e.g. JPEG
int nbytes = frame.mappedBytes();
img = QImage::fromData(frame.bits(), nbytes);
}
frame.unmap();
return img;
}
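If it helps, one possible way to hook that helper up is through QVideoProbe from Qt 5 Multimedia; the camera pointer here is a hypothetical QCamera* member, not something from the question:
QVideoProbe *probe = new QVideoProbe(this);
if (probe->setSource(camera)) {  // camera: hypothetical QCamera* member
    connect(probe, &QVideoProbe::videoFrameProbed, this,
            [this](const QVideoFrame &frame) {
                m_lastFrame = imageFromVideoFrame(frame);  // uses the helper above
            });
}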
You can try creating a QImage by first mapping the QVideoFrame (with QAbstractVideoBuffer::ReadOnly) in the following way:
bool CameraFrameGrabber::present(const QVideoFrame &frame)
{
    if (frame.isValid()) {
        QVideoFrame cloneFrame(frame);
        cloneFrame.map(QAbstractVideoBuffer::ReadOnly);
        const QImage image(cloneFrame.bits(),
                           cloneFrame.width(),
                           cloneFrame.height(),
                           QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat()));
        emit frameAvailable(image);
        qDebug() << cloneFrame.mappedBytes();
        cloneFrame.unmap();
        return true;
    }
    return false;
}
If you want the QImage in any other format, just change the last parameter when creating the image to whichever format you like:
QImage::Format_xxx
instead of
QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat())
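For instance, a sketch that forces Format_RGB32 (assuming the frame data actually is RGB32-compatible); taking copy() gives a deep copy, so the image stays valid after cloneFrame.unmap():
const QImage image = QImage(cloneFrame.bits(),
                            cloneFrame.width(),
                            cloneFrame.height(),
                            QImage::Format_RGB32).copy(); // deep copy, independent of the mapped buffer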
I receive a byte array over the network which contains a PNG file. I need to load it into a pixmap and set it as a texture on my QGLWidget. As I run the program below, pixmap is empty in the debugger (it does not contain anything), while bytes contains the whole byte array received from the network.
void MainWindow::dataFromServer(QByteArray bytes)
{
    // QByteArray bytes;
    QBuffer buffer(&bytes);
    QPixmap pixmap;
    // pixmap = QPixmap::grabWidget(this);
    buffer.open(QIODevice::WriteOnly);
    pixmap.save(&buffer, "PNG"); // writes pixmap into bytes in PNG format
    emit sendPixmapToWidget(pixmap);
}
And here I set the pixmap as the texture:
void GlWidget::pixmapCatchFromForm(QPixmap pixmap)
{
    deleteTexture(texture);
    // image->loadFromData(bytes, "PNG");
    texture = bindTexture(pixmap);
    qDebug() << texture; // returns 1
    updateGL();
}
QPixmap::save(..) saves the QPixmap's contents to the buffer; surely you want to use QPixmap::loadFromData(..) to do the opposite?
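In other words, something like this sketch, assuming bytes already holds the PNG data received from the network:
void MainWindow::dataFromServer(QByteArray bytes)
{
    QPixmap pixmap;
    if (pixmap.loadFromData(bytes, "PNG"))   // decode the received PNG bytes
        emit sendPixmapToWidget(pixmap);
    else
        qDebug() << "failed to decode PNG data";
}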
Is there any way to store exact cv::Mat format data in SQLite3 using Qt? I will be using the same cv::Mat format in the future.
I tried converting the image to unsigned char* and then storing it, but that didn't work for me. Any other technique?
You can serialize cv::Mat to QByteArray (see kde/libkface):
QByteArray mat2ByteArray(const cv::Mat &image)
{
QByteArray byteArray;
QDataStream stream( &byteArray, QIODevice::WriteOnly );
stream << image.type();
stream << image.rows;
stream << image.cols;
const size_t data_size = image.cols * image.rows * image.elemSize();
QByteArray data = QByteArray::fromRawData( (const char*)image.ptr(), data_size );
stream << data;
return byteArray;
}
Then store to DB.
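The "store to DB" step could look roughly like this (a sketch using QtSql, assuming an already-open QSQLITE connection and a hypothetical table images(id INTEGER PRIMARY KEY, mat BLOB)):
QSqlQuery query;
query.prepare("INSERT INTO images (mat) VALUES (:mat)");
query.bindValue(":mat", mat2ByteArray(image));  // the serialized Mat is stored as a BLOB
if (!query.exec())
    qDebug() << query.lastError().text();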
To convert from QByteArray after reading from DB:
cv::Mat byteArray2Mat(const QByteArray & byteArray)
{
QDataStream stream(byteArray);
int matType, rows, cols;
QByteArray data;
stream >> matType;
stream >> rows;
stream >> cols;
stream >> data;
cv::Mat mat( rows, cols, matType, (void*)data.data() );
return mat.clone();
}
It works for me.
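And the read-back side, again assuming the hypothetical images table:
QSqlQuery query;
query.prepare("SELECT mat FROM images WHERE id = :id");
query.bindValue(":id", 1);
if (query.exec() && query.next()) {
    cv::Mat mat = byteArray2Mat(query.value(0).toByteArray());
    // byteArray2Mat clones, so mat owns its data and outlives the query
}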