Video too fast in Qt using OpenCV

I am playing a video on a label in Qt, using OpenCV. The video plays, but it is too fast. How can I decrease the playback speed? I tried using setCaptureProperty but it is not working. My code is as follows:
HeaderFile Declarations:
CvCapture *capture;
IplImage *frame;
cv::Mat source_image;
cv::Mat dest_image;
QTimer *imageTimer;
Button click slot:
void MainWindow::onButtonClick()
{
    capture = cvCaptureFromFile("/mp.mp4");
    while (capture)
    {
        frame = cvQueryFrame(capture);
        source_image = frame;
        cv::resize(source_image, source_image, cv::Size(420, 180), 0, 0);
        cv::cvtColor(source_image, source_image, CV_BGR2RGB);
        QImage qimg = QImage((const unsigned char*)source_image.data, source_image.cols, source_image.rows, QImage::Format_RGB888);
        label->setPixmap(QPixmap::fromImage(qimg));
        label->resize(label->pixmap()->size());
    }
}
Somebody please guide me on this... Thank you :)

I use a QTimer for this instead of a while loop, like the following:
void on_button_click()
{
    cap.open(0);
    timer->start(50);
}

void readframe()
{
    // display image in label
    cap >> frame;
    Mat2QImage(); // convert cv::Mat to QImage
    ...
    // setPixmap();
    ...
}
and in the main window:
connect(timer, SIGNAL(timeout()), this, SLOT(readframe()));
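If you want to keep the original playback speed rather than guessing an interval, a minimal sketch (assuming the C API from the question, and that the container actually reports its FPS) is to derive the timer interval from CV_CAP_PROP_FPS:
// Minimal sketch: derive the timer interval from the video's reported frame rate.
// Assumes `capture` was opened with cvCaptureFromFile() as in the question.
double fps = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
if (fps <= 0.0)
    fps = 25.0; // some files/backends don't report FPS; fall back to a guess
timer->start(qRound(1000.0 / fps)); // e.g. 40 ms for a 25 FPS video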

Related

How to save a frame using QMediaPlayer?

I want to save an image of a frame from a QMediaPlayer. After reading the documentation, I understood that I should use QVideoProbe. I am using the following code:
QMediaPlayer *player = new QMediaPlayer();
QVideoProbe *probe = new QVideoProbe;
connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame)));
qDebug() << probe->setSource(player); // Returns true, hopefully.
player->setVideoOutput(myVideoSurface);
player->setMedia(QUrl::fromLocalFile("observation.mp4"));
player->play(); // Start receiving frames as they get presented to myVideoSurface
But unfortunately, probe->setSource(player) always returns false for me, and thus my slot processFrame is not triggered.
What am I doing wrong? Does anybody have a working example of QVideoProbe?
You're not doing anything wrong. As @DYangu pointed out, your media object instance does not support monitoring video. I had the same problem (and the same for QAudioProbe, but that doesn't interest us here). I found a solution by looking at this answer and this one.
The main idea is to subclass QAbstractVideoSurface. Once you've done that, Qt will call the QAbstractVideoSurface::present(const QVideoFrame &frame) method of your implementation and you will be able to process the frames of your video.
As it is said here, usually you will just need to reimplement two methods:
supportedPixelFormats, so that the producer can select an appropriate format for the QVideoFrame
present, which allows you to display the frame
But at the time, I searched in the Qt source code and happily found this piece of code which helped me to do a full implementation. So, here is the full code for using a "video frame grabber".
VideoFrameGrabber.cpp:
#include "VideoFrameGrabber.h"
#include <QtWidgets>
#include <qabstractvideosurface.h>
#include <qvideosurfaceformat.h>

VideoFrameGrabber::VideoFrameGrabber(QWidget *widget, QObject *parent)
    : QAbstractVideoSurface(parent)
    , widget(widget)
    , imageFormat(QImage::Format_Invalid)
{
}

QList<QVideoFrame::PixelFormat> VideoFrameGrabber::supportedPixelFormats(QAbstractVideoBuffer::HandleType handleType) const
{
    Q_UNUSED(handleType);
    return QList<QVideoFrame::PixelFormat>()
        << QVideoFrame::Format_ARGB32
        << QVideoFrame::Format_ARGB32_Premultiplied
        << QVideoFrame::Format_RGB32
        << QVideoFrame::Format_RGB24
        << QVideoFrame::Format_RGB565
        << QVideoFrame::Format_RGB555
        << QVideoFrame::Format_ARGB8565_Premultiplied
        << QVideoFrame::Format_BGRA32
        << QVideoFrame::Format_BGRA32_Premultiplied
        << QVideoFrame::Format_BGR32
        << QVideoFrame::Format_BGR24
        << QVideoFrame::Format_BGR565
        << QVideoFrame::Format_BGR555
        << QVideoFrame::Format_BGRA5658_Premultiplied
        << QVideoFrame::Format_AYUV444
        << QVideoFrame::Format_AYUV444_Premultiplied
        << QVideoFrame::Format_YUV444
        << QVideoFrame::Format_YUV420P
        << QVideoFrame::Format_YV12
        << QVideoFrame::Format_UYVY
        << QVideoFrame::Format_YUYV
        << QVideoFrame::Format_NV12
        << QVideoFrame::Format_NV21
        << QVideoFrame::Format_IMC1
        << QVideoFrame::Format_IMC2
        << QVideoFrame::Format_IMC3
        << QVideoFrame::Format_IMC4
        << QVideoFrame::Format_Y8
        << QVideoFrame::Format_Y16
        << QVideoFrame::Format_Jpeg
        << QVideoFrame::Format_CameraRaw
        << QVideoFrame::Format_AdobeDng;
}

bool VideoFrameGrabber::isFormatSupported(const QVideoSurfaceFormat &format) const
{
    const QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat());
    const QSize size = format.frameSize();
    return imageFormat != QImage::Format_Invalid
        && !size.isEmpty()
        && format.handleType() == QAbstractVideoBuffer::NoHandle;
}

bool VideoFrameGrabber::start(const QVideoSurfaceFormat &format)
{
    const QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat());
    const QSize size = format.frameSize();
    if (imageFormat != QImage::Format_Invalid && !size.isEmpty()) {
        this->imageFormat = imageFormat;
        imageSize = size;
        sourceRect = format.viewport();
        QAbstractVideoSurface::start(format);
        widget->updateGeometry();
        updateVideoRect();
        return true;
    } else {
        return false;
    }
}

void VideoFrameGrabber::stop()
{
    currentFrame = QVideoFrame();
    targetRect = QRect();
    QAbstractVideoSurface::stop();
    widget->update();
}

bool VideoFrameGrabber::present(const QVideoFrame &frame)
{
    if (frame.isValid())
    {
        QVideoFrame cloneFrame(frame);
        cloneFrame.map(QAbstractVideoBuffer::ReadOnly);
        const QImage image(cloneFrame.bits(),
                           cloneFrame.width(),
                           cloneFrame.height(),
                           QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat()));
        emit frameAvailable(image); // this is very important
        cloneFrame.unmap();
    }
    if (surfaceFormat().pixelFormat() != frame.pixelFormat()
            || surfaceFormat().frameSize() != frame.size()) {
        setError(IncorrectFormatError);
        stop();
        return false;
    } else {
        currentFrame = frame;
        widget->repaint(targetRect);
        return true;
    }
}

void VideoFrameGrabber::updateVideoRect()
{
    QSize size = surfaceFormat().sizeHint();
    size.scale(widget->size().boundedTo(size), Qt::KeepAspectRatio);
    targetRect = QRect(QPoint(0, 0), size);
    targetRect.moveCenter(widget->rect().center());
}

void VideoFrameGrabber::paint(QPainter *painter)
{
    if (currentFrame.map(QAbstractVideoBuffer::ReadOnly)) {
        const QTransform oldTransform = painter->transform();
        if (surfaceFormat().scanLineDirection() == QVideoSurfaceFormat::BottomToTop) {
            painter->scale(1, -1);
            painter->translate(0, -widget->height());
        }
        QImage image(
            currentFrame.bits(),
            currentFrame.width(),
            currentFrame.height(),
            currentFrame.bytesPerLine(),
            imageFormat);
        painter->drawImage(targetRect, image, sourceRect);
        painter->setTransform(oldTransform);
        currentFrame.unmap();
    }
}
VideoFrameGrabber.h
#ifndef VIDEOFRAMEGRABBER_H
#define VIDEOFRAMEGRABBER_H

#include <QtWidgets>

class VideoFrameGrabber : public QAbstractVideoSurface
{
    Q_OBJECT
public:
    VideoFrameGrabber(QWidget *widget, QObject *parent = 0);

    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType handleType = QAbstractVideoBuffer::NoHandle) const;
    bool isFormatSupported(const QVideoSurfaceFormat &format) const;

    bool start(const QVideoSurfaceFormat &format);
    void stop();
    bool present(const QVideoFrame &frame);

    QRect videoRect() const { return targetRect; }
    void updateVideoRect();
    void paint(QPainter *painter);

private:
    QWidget *widget;
    QImage::Format imageFormat;
    QRect targetRect;
    QSize imageSize;
    QRect sourceRect;
    QVideoFrame currentFrame;

signals:
    void frameAvailable(QImage frame);
};

#endif // VIDEOFRAMEGRABBER_H
Note: in the .h, you will see I added a signal taking an image as a parameter. This will allow you to process your frames anywhere in your code. At the time, this signal took a QImage as a parameter, but you can of course use a QVideoFrame if you want to.
Now, we are ready to use this video frame grabber:
QMediaPlayer* player = new QMediaPlayer(this);
// no more QVideoProbe
VideoFrameGrabber* grabber = new VideoFrameGrabber(this);
player->setVideoOutput(grabber);
connect(grabber, SIGNAL(frameAvailable(QImage)), this, SLOT(processFrame(QImage)));
Now you just have to declare a slot named processFrame(QImage image) and you will receive a QImage each time the present method of your VideoFrameGrabber is entered.
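As an illustration, a minimal processFrame slot (MyClass and the m_frameCounter member are assumptions, and saving to disk is just one possible use) might be:
// Hedged sketch of a receiver slot: save each probed frame to disk.
void MyClass::processFrame(QImage image)
{
    // the image arrives already converted from the QVideoFrame by the grabber
    image.save(QString("frame_%1.png").arg(m_frameCounter++)); // m_frameCounter: assumed int member
}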
I hope that this will help you!
From the Qt QVideoProbe documentation:
bool QVideoProbe::setSource(QMediaObject *mediaObject)
Starts monitoring the given mediaObject.
If there is no media object associated with mediaObject, or if it is zero, this probe will be deactivated and this function will return true.
If the media object instance does not support monitoring video, this
function will return false.
Any previously monitored objects will no longer be monitored. Passing
in the same object will be ignored, but monitoring will continue.
So it seems your "media object instance does not support monitoring video".
TL;DR: https://gist.github.com/JC3/a7bab65acbd7659d1e57103d2b0021ba (only file)
I had a similar issue (5.15.2; in my case I was on Windows, definitely using the DirectShow back-end; the probe attachment was returning true and the sample grabber was in the graph, but the callback wasn't firing).
I never figured it out but needed to get something working so I kludged one out of a QAbstractVideoSurface, and it's been working well so far. It's a bit simpler than some of the other implementations in this post, and it's all in one file.
Note that Qt 5.15 or higher is required if you intend to both process frames and play them back with this, since the multi-surface QMediaPlayer::setVideoOutput wasn't added until 5.15. If all you want to do is process video you can still use the code below as a template for pre-5.15, just gut the formatSource_ parts.
Code:
VideoProbeSurface.h (the only file; link is to Gist)
#ifndef VIDEOPROBESURFACE_H
#define VIDEOPROBESURFACE_H

#include <QAbstractVideoSurface>
#include <QVideoSurfaceFormat>

class VideoProbeSurface : public QAbstractVideoSurface {
    Q_OBJECT
public:
    VideoProbeSurface (QObject *parent = nullptr)
        : QAbstractVideoSurface(parent)
        , formatSource_(nullptr)
    {
    }
    void setFormatSource (QAbstractVideoSurface *source) {
        formatSource_ = source;
    }
    QList<QVideoFrame::PixelFormat> supportedPixelFormats (QAbstractVideoBuffer::HandleType type) const override {
        return formatSource_ ? formatSource_->supportedPixelFormats(type)
                             : QList<QVideoFrame::PixelFormat>();
    }
    QVideoSurfaceFormat nearestFormat (const QVideoSurfaceFormat &format) const override {
        return formatSource_ ? formatSource_->nearestFormat(format)
                             : QAbstractVideoSurface::nearestFormat(format);
    }
    bool present (const QVideoFrame &frame) override {
        emit videoFrameProbed(frame);
        return true;
    }
signals:
    void videoFrameProbed (const QVideoFrame &frame);
private:
    QAbstractVideoSurface *formatSource_;
};

#endif // VIDEOPROBESURFACE_H
I went for the quickest-to-write implementation possible, so it just forwards supported pixel formats from another surface (my intent was to both probe and play back to a QVideoWidget) and you get whatever format you get. I just needed to grab subimages into QImages, though, which handles most common formats. But you could modify this to force any formats you want (e.g. you might want to just return formats supported by QImage, or filter out source formats not supported by QImage), etc.
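As a hedged sketch of that filtering idea (not part of the original class), supportedPixelFormats could be narrowed to formats QImage can hold directly:
// Sketch: forward only the source formats that map to a valid QImage format.
QList<QVideoFrame::PixelFormat> supportedPixelFormats (QAbstractVideoBuffer::HandleType type) const override {
    QList<QVideoFrame::PixelFormat> usable;
    const auto source = formatSource_ ? formatSource_->supportedPixelFormats(type)
                                      : QList<QVideoFrame::PixelFormat>();
    for (const auto &fmt : source)
        if (QVideoFrame::imageFormatFromPixelFormat(fmt) != QImage::Format_Invalid)
            usable << fmt;
    return usable;
}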
Example setup:
QMediaPlayer *player = ...;
QVideoWidget *widget = ...;
// forward surface formats provided by the video widget:
VideoProbeSurface *probe = new VideoProbeSurface(...);
probe->setFormatSource(widget->videoSurface());
// same signal signature as QVideoProbe's signal:
connect(probe, &VideoProbeSurface::videoFrameProbed, ...);
// the key move is to render to both the widget (for viewing)
// and probe (for processing). fortunately, QMediaPlayer can
// take a list:
player->setVideoOutput({ widget->videoSurface(), probe });
Notes
The only really sketchy thing I had to do was const_cast the QVideoFrame on the receiver side (for read-only access), since QVideoFrame::map() isn't const:
if (const_cast<QVideoFrame&>(frame).map(QAbstractVideoBuffer::ReadOnly)) {
    ...;
    const_cast<QVideoFrame&>(frame).unmap();
}
But the real QVideoProbe would make you do the same thing so, I don't know what's up with that -- it's a strange API. I ran some tests with sw, native hw, and copy-back hw renderers and decoders and map/unmap in read mode seem to be functioning OK, so, whatever.
Performance-wise, the video will bog down if you spend too much time in the callback, so design accordingly. However, I didn't test QueuedConnection, so I don't know if that'd still have the issue (although the fact that the signal parameter is a reference would make me wary of trying it, as well as conceivable issues with the GPU releasing the memory before the slot ends up being called). I don't know how QVideoProbe behaves in this regard, either. I do know that, at least on my machine, I can pack and queue Full HD (1920 x 1080) resolution QImages to a thread pool for processing without slowing down the video.
You probably also want to implement some sort of auto-unmapper utility object for exception safe unmap(), etc. But again, that's not unique to this, same thing you'd have to do with QVideoProbe.
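A hedged sketch of such a guard (an assumed helper, not part of Qt):
// RAII guard: maps a QVideoFrame on construction and unmaps it when the
// scope exits, even if an exception is thrown in between.
struct ScopedFrameMap {
    QVideoFrame &frame;
    bool mapped;
    ScopedFrameMap(QVideoFrame &f, QAbstractVideoBuffer::MapMode mode)
        : frame(f), mapped(f.map(mode)) {}
    ~ScopedFrameMap() { if (mapped) frame.unmap(); }
};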
So hopefully that helps somebody else.
Example QImage Use
PS, an example of packing arbitrarily-formatted QVideoFrames into a QImage:
void MyVideoProcessor::onFrameProbed(const QVideoFrame &frame) {
    if (const_cast<QVideoFrame&>(frame).map(QAbstractVideoBuffer::ReadOnly)) {
        auto imageFormat = QVideoFrame::imageFormatFromPixelFormat(frame.pixelFormat());
        QImage image(frame.bits(), frame.width(), frame.height(), frame.bytesPerLine(), imageFormat);
        // *if* you want to use this elsewhere you must force detach:
        image = image.copy();
        // but if you don't need to use it past unmap(), you can just
        // use the original image instead of a copy.
        // <---- now do whatever with the image, e.g. save() it.
        // if you *haven't* copied the image, then, before unmapping,
        // kill any internal data pointers just to be safe:
        image = QImage();
        const_cast<QVideoFrame&>(frame).unmap();
    }
}
Notes about that:
Constructing a QImage directly from the data is fast and essentially free: no copies are done.
The data buffers are only technically valid between map and unmap, so if you intend to use the QImage outside of that scope, you'll want to use copy() (or anything else that forces a detach) to force a deep copy.
You also probably want to ensure that the original not-copied QImage is destructed before calling unmap. It's unlikely to cause problems but it's always a good idea to minimize how many invalid pointers are hanging around at any given time, and also the QImage docs say "The buffer must remain valid throughout the life of the QImage and all copies that have not been modified or otherwise detached from the original buffer". Best to be strict about it.

How to save an image in a QGraphicsView into a bmp/jpg

I am a newbie to Qt.
The question is this: after loading an image into the QGraphicsView, I use a QRubberBand to select the cropping area of the image. Selecting the cropping region works at the moment, but I don't know how to save that cropped region into a jpg/bmp afterwards. Note that I made a ui component for the QGraphicsView called CGraphicsView.
void CGraphicsView::mousePressEvent(QMouseEvent* event)
{
    mypoint = event->pos();
    rubberBand = new QRubberBand(QRubberBand::Rectangle, this); // new rectangle band
    rubberBand->setGeometry(QRect(mypoint, QSize()));
    rubberBand->show();
}

void CGraphicsView::mouseMoveEvent(QMouseEvent *event)
{
    if (rubberBand)
    {
        rubberBand->setGeometry(QRect(mypoint, event->pos()).normalized()); // area bounding
    }
}

void CGraphicsView::mouseReleaseEvent(QMouseEvent *event)
{
    if (rubberBand)
    {
        QRect myRect(mypoint, event->pos());
        rubberBand->hide(); // hide on mouse release
        QImage copyImage; // <= this QImage holds nothing
        copyImage = copyImage.copy(myRect);
    }
}
There is a special method in Qt that allows you to get a screenshot of the view.
QString fileName = "path";
QPixmap pixMap = QPixmap::grabWidget(graphicsView, rectRegion);
pixMap.save(fileName);
The save() method can save the picture in different formats and with compression.
Also, with the grabWidget() method you can grab other widgets too. Moreover, this method takes a QRect as an argument, so you can take a screenshot of exactly the region you need.
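Note that QPixmap::grabWidget() is deprecated in Qt 5; the equivalent with QWidget::grab() would be:
// Qt 5 variant: QWidget::grab() replaces the static QPixmap::grabWidget().
QPixmap pixMap = graphicsView->grab(rectRegion);
pixMap.save(fileName);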
You can save a part of your scene to an image like this:
QPixmap pixmap = QPixmap(myRect.size());
QString filename = QFileDialog::getSaveFileName(this->parentWidget(), tr("Save As"), tr("image.png"));
if (!filename.isEmpty())
{
    QPainter painter(&pixmap);
    painter.setRenderHint(QPainter::Antialiasing);
    scene->render(&painter, pixmap.rect(), myRect, Qt::KeepAspectRatio);
    painter.end();
    pixmap.save(filename, "PNG");
}
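One caveat: myRect from the rubber band is in view coordinates; if the view is scrolled or zoomed, you likely want to map it into scene coordinates before rendering (a small sketch, assuming this runs inside the CGraphicsView subclass):
// Sketch: map the rubber-band rectangle from view to scene coordinates first.
QRectF sceneRect = mapToScene(myRect).boundingRect();
scene->render(&painter, pixmap.rect(), sceneRect, Qt::KeepAspectRatio);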

Efficient way of displaying a continuous stream of QImages

I am currently using a QLabel to do this, but this seems to be rather slow:
void Widget::sl_updateLiveStreamLabel(spImageHolder_t _imageHolderShPtr) // slot
{
    QImage *imgPtr = _imageHolderShPtr->getImagePtr();
    m_liveStreamLabel.setPixmap(QPixmap::fromImage(*imgPtr).scaled(this->size(), Qt::KeepAspectRatio, Qt::FastTransformation));
    m_liveStreamLabel.adjustSize();
}
Here I am generating a new QPixmap object for each new image that arrives. Since QPixmap operations are restricted to the GUI Thread, this also makes the GUI feel poorly responsive.
I've seen there are already some discussions on this, most of them advising the use of QGraphicsView or QGLWidget, but I have not been able to find a quick example of how to properly use those, which is what I am looking for.
I'd appreciate any help.
QPixmap::fromImage is not the only problem. Using QPixmap::scaled or QImage::scaled should also be avoided. However, you can't display a QImage directly in a QLabel or QGraphicsView. Here is my class that displays a QImage directly and scales it to the size of the widget:
Header:
class ImageDisplay : public QWidget {
    Q_OBJECT
public:
    ImageDisplay(QWidget* parent = 0);
    void setImage(QImage* image);
private:
    QImage* m_image;
protected:
    void paintEvent(QPaintEvent* event);
};
Source:
ImageDisplay::ImageDisplay(QWidget *parent) : QWidget(parent) {
    m_image = 0;
    setSizePolicy(QSizePolicy::Fixed, QSizePolicy::Fixed);
}

void ImageDisplay::setImage(QImage *image) {
    m_image = image;
    repaint();
}

void ImageDisplay::paintEvent(QPaintEvent*) {
    if (!m_image) { return; }
    QPainter painter(this);
    painter.drawImage(rect(), *m_image, m_image->rect());
}
I tested it on a 3000x3000 image scaled down to 600x600 size. It gives 40 FPS, while QLabel and QGraphicsView (even with fast image transformation enabled) give 15 FPS.
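A minimal usage sketch (m_frame is an assumed QImage member; it must outlive the widget's repaints, since ImageDisplay stores only the pointer):
// Sketch: display each incoming frame; ImageDisplay does not copy the image.
ImageDisplay *display = new ImageDisplay(this);
layout()->addWidget(display); // assumes a layout has been set on the parent
// ... whenever a new frame arrives:
m_frame = newFrame;           // keep the QImage alive in a member
display->setImage(&m_frame);  // triggers a repaint scaled to the widget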
Setting up a QGraphicsView and QGraphicsScene is quite straightforward:
int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    // Create the scene and set its dimensions
    QGraphicsScene scene;
    scene.setSceneRect(0.0, 0.0, 400.0, 400.0);

    // create an item that will hold an image
    QGraphicsPixmapItem *item = new QGraphicsPixmapItem(0);

    // load an image and set it to the pixmap item
    QPixmap pixmap("pathToImage.png"); // example filename pathToImage.png
    item->setPixmap(pixmap);

    // add the item to the scene
    scene.addItem(item);
    item->setPos(200, 200); // set the item's position in the scene

    // create a view to look into the scene
    QGraphicsView view(&scene);
    view.setRenderHints(QPainter::Antialiasing);
    view.show();

    return app.exec();
}
I recommend not using QLabel, but writing your own class. Every call to setPixmap causes the layout system to recalculate the sizes of items, and this can propagate up to the topmost parent (QMainWindow), which is quite a big overhead.
Conversion and scaling are also a bit costly.
Finally, the best approach is to use a profiler to detect where the biggest problem is.

How to monitor changes to an arbitrary widget?

I am starting a Qt5 application with a rather complex design based on Qt Widgets. It runs on a Beagleboard with a touchscreen. Instead of an LCD display, I will have a rather weird local invention: a laser painting on an acrylic plate. It has no driver yet. To actually update the screen I must create a screenshot of the window as a bitmap, turn it to grayscale and feed it to a proprietary library, which will handle the laser. It should look cute when ready. Unfortunately, the laser blinks on update, so I cannot just take screenshots on a timer, or it will be jerky like hell.
I need to run a function every time a meaningful update of the GUI happens, while preferably ignoring things like a button being pressed and released. Is there some way to create a hook without subclassing every single Qt widget I will use? The only way I know of is to override paintEvent on everything. I want a simpler solution.
Possible assumptions are: the application will be running under an X server with a dummy display, and it will be the only GUI app running. Some updates happen without user input.
The code below does it. It doesn't dig too deeply into the internals of Qt, it merely leverages the fact that backing store devices are usually QImages. It could be modified to accommodate OpenGL-based backing stores as well.
The WidgetMonitor class is used to monitor the widgets for content changes. An entire top-level window is monitored no matter which particular widget is passed to the monitor(QWidget*) method. You only need to call the monitor method for one widget in the window you intend to monitor - any widget will do. The changes are sent out as a QImage of window contents.
The implementation installs itself as an event filter in the target window widget and all of its children, and monitors the repaint events. It attempts to coalesce the repaint notifications by using the zero-length timer. The additions and removals of children are tracked automagically.
When you run the example, it creates two windows: a source window, and a destination window. They may be overlapped so you need to separate them. As you resize the source window, the size of the destination's rendition of it will also change appropriately. Any changes to the source children (time label, button state) propagate automatically to the destination.
In your application, the destination could be an object that takes the QImage contents, converts them to grayscale, resizes appropriately, and passes them to your device.
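For instance, a hedged sketch of such a destination slot (deviceWidth, deviceHeight and sendToLaser() are placeholders for your proprietary library; mon is the WidgetMonitor instance from the example below):
// Sketch: convert monitored window contents to grayscale and hand them to the device.
QObject::connect(&mon, &WidgetMonitor::newContents, [&](const QImage &img, QWidget *) {
    QImage gray = img.convertToFormat(QImage::Format_Grayscale8) // Qt 5.5+
                     .scaled(deviceWidth, deviceHeight, Qt::KeepAspectRatio);
    sendToLaser(gray); // placeholder for the proprietary laser library call
});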
I do not quite understand how your laser device works if it can't gracefully handle updates. I presume that it is a raster-scanning laser that runs continuously in a loop that looks roughly like this:
while (1) {
    for (line = 0; line < nLines; ++line) {
        drawLine();
    }
}
You need to modify this loop so that it works as follows:
newImage = true;
QImage localImage;
while (1) {
    if (newImage) {
        localImage = sharedImage; // sharedImage: latest frame from the notification slot (assumed name)
        newImage = false;
    }
    for (line = 0; line < localImage.height(); ++line) {
        drawLine(line, localImage);
    }
}
You'd be flipping the newImage flag from the notification slot connected to the WidgetMonitor. You may well find out that leveraging QImage, and Qt's functionality in general, in your device driver code, will make it much easier to develop. Qt provides portable timers, threads, collections, etc. I presume that your "driver" is completely userspace, and communicates via a serial port or ethernet to the micro controller that actually controls the laser device.
If you will be writing a kernel driver for the laser device, then the interface would be probably very similar, except that you end up writing the image bitmap to an open device handle.
// https://github.com/KubaO/stackoverflown/tree/master/questions/surface-20737882
#include <QtWidgets>
#include <array>

const char kFiltered[] = "WidgetMonitor_filtered";

class WidgetMonitor : public QObject {
    Q_OBJECT
    QVector<QPointer<QWidget>> m_awake;
    QBasicTimer m_timer;
    int m_counter = 0;
    void queue(QWidget *window) {
        Q_ASSERT(window && window->isWindow());
        if (!m_awake.contains(window)) m_awake << window;
        if (!m_timer.isActive()) m_timer.start(0, this);
    }
    void filter(QObject *obj) {
        if (obj->isWidgetType() && !obj->property(kFiltered).toBool()) {
            obj->installEventFilter(this);
            obj->setProperty(kFiltered, true);
        }
    }
    void unfilter(QObject *obj) {
        if (obj->isWidgetType() && obj->property(kFiltered).toBool()) {
            obj->removeEventFilter(this);
            obj->setProperty(kFiltered, false);
        }
    }
    bool eventFilter(QObject *obj, QEvent *ev) override {
        switch (ev->type()) {
        case QEvent::Paint: {
            if (!obj->isWidgetType()) break;
            if (auto *window = static_cast<QWidget *>(obj)->window()) queue(window);
            break;
        }
        case QEvent::ChildAdded: {
            auto *cev = static_cast<QChildEvent *>(ev);
            if (auto *child = qobject_cast<QWidget *>(cev->child())) monitor(child);
            break;
        }
        default:
            break;
        }
        return false;
    }
    void timerEvent(QTimerEvent *ev) override {
        if (ev->timerId() != m_timer.timerId()) return;
        qDebug() << "painting: " << m_counter++ << m_awake;
        for (auto w : m_awake)
            if (auto *img = dynamic_cast<QImage *>(w->backingStore()->paintDevice()))
                emit newContents(*img, w);
        m_awake.clear();
        m_timer.stop();
    }
public:
    explicit WidgetMonitor(QObject *parent = nullptr) : QObject{parent} {}
    explicit WidgetMonitor(QWidget *w, QObject *parent = nullptr) : QObject{parent} {
        monitor(w);
    }
    Q_SLOT void monitor(QWidget *w) {
        w = w->window();
        if (!w) return;
        filter(w);
        for (auto *obj : w->findChildren<QWidget *>()) filter(obj);
        queue(w);
    }
    Q_SLOT void unMonitor(QWidget *w) {
        w = w->window();
        if (!w) return;
        unfilter(w);
        for (auto *obj : w->findChildren<QWidget *>()) unfilter(obj);
        m_awake.removeAll(w);
    }
    Q_SIGNAL void newContents(const QImage &, QWidget *w);
};

class TestWidget : public QWidget {
    QVBoxLayout m_layout{this};
    QLabel m_time;
    QBasicTimer m_timer;
    void timerEvent(QTimerEvent *ev) override {
        if (ev->timerId() != m_timer.timerId()) return;
        m_time.setText(QTime::currentTime().toString());
    }
public:
    explicit TestWidget(QWidget *parent = nullptr) : QWidget{parent} {
        m_layout.addWidget(&m_time);
        m_layout.addWidget(new QLabel{"Static Label"});
        m_layout.addWidget(new QPushButton{"A Button"});
        m_timer.start(1000, this);
    }
};

int main(int argc, char **argv) {
    QApplication app{argc, argv};
    TestWidget src;
    QLabel dst;
    dst.setFrameShape(QFrame::Box);
    for (auto *w : std::array<QWidget *, 2>{&dst, &src}) {
        w->show();
        w->raise();
    }
    QMetaObject::invokeMethod(&dst, [&] { dst.move(src.frameGeometry().topRight()); },
                              Qt::QueuedConnection);
    WidgetMonitor mon(&src);
    src.setWindowTitle("Source");
    dst.setWindowTitle("Destination");
    QObject::connect(&mon, &WidgetMonitor::newContents, [&](const QImage &img) {
        dst.resize(img.size());
        dst.setPixmap(QPixmap::fromImage(img));
    });
    return app.exec();
}
#include "main.moc"

Video not playing continuously

I am playing video in Qt using OpenCV. I have a tiled view of 6 cameras from which I am playing video. The problem is that if one of the videos stops playing, i.e. finishes, then the GUI freezes and exits. The error I get says I must reimplement QApplication::notify() and catch the exceptions there. How do I do this?
The code I am using is as follows.
Somewhere in a function
void MainWindow::ActivateWindow()
{
    // some code to set the index of the stacked widget
    if (stackWidget->currentIndex() == 9)
    {
        const int imagePeriod = 1000 / 25;
        imageTimer->setInterval(imagePeriod);
        connect(imageTimer, SIGNAL(timeout()), this, SLOT(demoSlot()));
        imageTimer->start();
    }
}
In slot demoSlot
void MainWindow::demoSlot()
{
    captureCamera1 = cvCaptureFromFile("/root/mp.mp4");
    captureCamera2 = cvCaptureFromFile("/root/mp.mp4");
    captureCamera3 = cvCaptureFromFile("/root/mp.mp4");
    while (imageTimer->isActive())
    {
        frameCamera1 = cvQueryFrame(captureCamera1);
        frameCamera2 = cvQueryFrame(captureCamera2);
        frameCamera3 = cvQueryFrame(captureCamera3);
        sourceImageCam1 = frameCamera1;
        sourceImageCam2 = frameCamera2;
        sourceImageCam3 = frameCamera3;
        cv::resize(sourceImageCam1, sourceImageCam1, cv::Size(400, 100), 0, 0);
        cv::resize(sourceImageCam2, sourceImageCam2, cv::Size(400, 100), 0, 0);
        cv::resize(sourceImageCam3, sourceImageCam3, cv::Size(400, 100), 0, 0);
        cv::cvtColor(sourceImageCam1, sourceImageCam1, CV_BGR2RGB);
        cv::cvtColor(sourceImageCam2, sourceImageCam2, CV_BGR2RGB);
        cv::cvtColor(sourceImageCam3, sourceImageCam3, CV_BGR2RGB);
        QImage tempImage1 = QImage((const unsigned char*)sourceImageCam1.data, sourceImageCam1.cols, sourceImageCam1.rows, QImage::Format_RGB888);
        QImage tempImage2 = QImage((const unsigned char*)sourceImageCam2.data, sourceImageCam2.cols, sourceImageCam2.rows, QImage::Format_RGB888);
        QImage tempImage3 = QImage((const unsigned char*)sourceImageCam3.data, sourceImageCam3.cols, sourceImageCam3.rows, QImage::Format_RGB888);
        labelCameraCapture1->setPixmap(QPixmap::fromImage(tempImage1)); // label to display video
        labelCameraCapture2->setPixmap(QPixmap::fromImage(tempImage2));
        labelCameraCapture3->setPixmap(QPixmap::fromImage(tempImage3));
        labelCameraCapture1->resize(labelCameraCapture1->pixmap()->size());
        labelCameraCapture2->resize(labelCameraCapture2->pixmap()->size());
        labelCameraCapture3->resize(labelCameraCapture3->pixmap()->size());
        cvWaitKey(20);
        qApp->processEvents();
    }
    if (imageTimer->isActive())
    {
        imageTimer->stop();
    }
    else
    {
        imageTimer->start();
    }
}
In the header file:
CvCapture *captureCamera1;
CvCapture *captureCamera2;
CvCapture *captureCamera3;
IplImage *frameCamera1;
IplImage *frameCamera2;
IplImage *frameCamera3;
cv::Mat sourceImageCam1;
cv::Mat sourceImageCam2;
cv::Mat sourceImageCam3;
This will do the trick; changing it to 3 movies is simple.
class MainWindow : public QMainWindow {
    Q_OBJECT
public:
    explicit MainWindow(QWidget *parent) /* ... */;
    // prepare timer and so on
public slots:
    void startVideo() {
        vid1.release(); // cv::VideoCapture has release(), not close()
        vid1.open("/root/mp.mp4");
        imageTimer->start();
    }
    void demoSlot() {
        cv::Mat frame;
        vid1 >> frame;
        cv::cvtColor(frame, frame, CV_BGR2RGB);
        QImage img((uchar*) frame.data, frame.cols, frame.rows, frame.step, QImage::Format_RGB888);
        label1->setPixmap(QPixmap::fromImage(img));
    }
private:
    // ...
    QTimer *imageTimer;
    cv::VideoCapture vid1;
};
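The wiring hinted at by the "prepare timer and so on" comment might look like this in the constructor (the 40 ms interval is an assumption; ideally derive it from the video's FPS):
// Sketch: drive demoSlot() from imageTimer instead of a blocking while loop.
imageTimer = new QTimer(this);
imageTimer->setInterval(40); // ~25 FPS
connect(imageTimer, SIGNAL(timeout()), this, SLOT(demoSlot()));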
Check if the frame captured from a camera is NULL, and if so, simply skip the processing steps for that camera.
Also, it is better not to mix the C++ and C interfaces (I mean cv::Mat and IplImage).
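A minimal sketch of both suggestions combined (the C++ interface plus an empty-frame guard; vid1 follows the answer above):
// Sketch: skip this tick if the capture produced no frame (e.g. the video ended).
cv::Mat frame;
vid1 >> frame;
if (frame.empty())
    return; // or rewind: vid1.set(CV_CAP_PROP_POS_FRAMES, 0);
// ... otherwise convert and display as in demoSlot() above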
