I need help with OpenCV. System: Windows XP, OpenCV 2.4.7.2, Qt 5.2.1. I have a memory leak problem. About two minutes after this function starts running, I get an error like this: "std::bad_alloc at memory location 0x0012a9e8...
void EnclosingContour(IplImage* _image)
{
    assert(_image != 0);

    // Clone src image and convert to gray
    clone_image = cvCreateImage(cvGetSize(_image), IPL_DEPTH_8U, 1);
    cvConvertImage(_image, clone_image, CV_BGR2GRAY);

    // Some images for processing
    dst = cvCreateImage(cvGetSize(_image), IPL_DEPTH_8U, 1);
    temp = cvCreateImage(cvGetSize(_image), IPL_DEPTH_8U, 1);

    // Make ROI
    if (ui.chb_ROI->isChecked()) {
        cvSetImageROI(clone_image, cvRect(ui.spb_x1->value(), ui.spb_y1->value(), ui.spb_x2->value(), ui.spb_y2->value()));
    }

    // Create image for processing
    bin = cvCreateImage(cvGetSize(clone_image), IPL_DEPTH_8U, 1);
    bin = cvCloneImage(clone_image);

    // Canny before
    if (ui.chb_canny_before->isChecked()) {
        cvCanny(bin, bin, ui.hsl_threshold_1->value(), ui.hsl_threshold_2->value());
    }

    // Adaptive threshold
    if (Adaptive == true) {
        cvAdaptiveThreshold(bin, dst, ui.hsl_adaptive->value(), 0, 0, 3, 5);
        bin = cvCloneImage(dst);
        cvReleaseImage(&dst);
    }

    // Morphology operations
    if (morphology == true) {
        cvMorphologyEx(bin, bin, temp, NULL, operations, 1);
        cvReleaseImage(&temp);
    }

    // Canny after
    if (ui.chb_canny_after->isChecked()) {
        cvCanny(bin, bin, ui.hsl_threshold_1->value(), ui.hsl_threshold_2->value());
    }

    // Zero ROI
    cvZero(clone_image);
    cvCopyImage(bin, clone_image);
    cvResetImageROI(clone_image);

    // Show
    cvNamedWindow("bin", 1);
    cvShowImage("bin", clone_image);
    cvReleaseImage(&clone_image);

    // Storage for contours
    storage = cvCreateMemStorage(0);
    contours = 0;

    // Find contours
    if (ui.chb_ROI->isChecked()) {
        int contoursCont = cvFindContours(bin, storage, &contours, sizeof(CvContour), CV_RETR_LIST, method, cvPoint(ui.spb_x1->value(), ui.spb_y1->value()));
    } else {
        int contoursCont = cvFindContours(bin, storage, &contours, sizeof(CvContour), CV_RETR_LIST, method, cvPoint(0, 0));
    }
    assert(contours != 0);

    // All contours
    for (CvSeq* current = contours; current != NULL; current = current->h_next) {
        // Draw rectangle over all contours
        CvRect r = cvBoundingRect(current, 1);
        cvRectangleR(_image, r, cvScalar(0, 0, 255, 0), 3, 8, 0);
        // Show width of rect
        ui.textEdit_2->setText(QString::number(r.width));
    }

    // Clean resources
    cvReleaseMemStorage(&storage);
    cvReleaseImage(&bin);
}
You are releasing 'temp' and 'dst' inside 'if' blocks. This is a sure recipe for a memory leak, since they may not be released at all.
On a side note, you are using the C interface of OpenCV, which is deprecated and will be removed soon. If you switch to the C++ interface (i.e. if you use Mat instead of IplImage*), then memory leaks from unreleased images cannot happen: a Mat releases its own memory automatically when it goes out of scope.
I want to display an image received in a short[] of pixels from a server.
The server(C++) writes the image as an unsigned short[] of pixels (12 bit depth).
My java application gets the image by a CORBA call to this server.
Since java does not have ushort, the pixels are stored as (signed) short[].
This is the code I'm using to obtain a BufferedImage from the array:
private WritableImage loadImage(short[] pixels, int width, int height) {
    int[] intPixels = new int[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        intPixels[i] = (int) pixels[i];
    }
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    WritableRaster raster = (WritableRaster) image.getData();
    raster.setPixels(0, 0, width, height, intPixels);
    return SwingFXUtils.toFXImage(image, null);
}
And later:
WritableImage orgImage = convertShortArrayToImage2(image.data, image.size_x, image.size_y);
// load it into the widget
Platform.runLater(() -> {
    imgViewer.setImage(orgImage);
});
I've checked that width=1280 and height=1024, and the pixels array is 1280x1024, which matches the raster width and height.
However, I'm getting an array-out-of-bounds error on the line:
raster.setPixels(0, 0, width, height, intPixels);
I have tried ALL of the image types, and all of them produce the same error except for:
TYPE_USHORT_GRAY: which I thought would be the one, but it shows an all-black image
TYPE_BYTE_GRAY: which shows the image in negative(!) and with a lot of grain(?)
TYPE_BYTE_INDEXED: which looks like the above, but colorized in a funny way
I have also tried masking off the sign bits when converting from short to int, without any difference:
intPixels[i] = (int) pixels[i] & 0xffff;
So... I'm quite frustrated after days of looking for a solution on the internet. Any help is very welcome.
Edit: The following is an example of the images received, converted to JPG on the server side. Not sure if it is useful, since I think it has had pixel rescaling (sqrt) applied:
Well, finally I solved it.
Probably not the best solution, but it works and could help someone out there...
Since the image is grayscale with 12-bit depth, I used a BufferedImage of type TYPE_BYTE_GRAY, but I had to downsample it to 8 bits, scaling the array of pixels from 0-4095 to 0-255.
I had an issue establishing the upper and lower limits of the scale. I tested with the average of the n highest/lowest values, which worked reasonably well, until someone sent me a link to a Java program implementing the zscale algorithm (used in the DS9 tool, for example) for getting the limits of the range of grayscale values to be displayed:
find it here
From that point I modified the previous code, and it worked like a charm:
//https://github.com/Caltech-IPAC/firefly/blob/dev/src/firefly/java/edu/caltech/ipac/visualize/plot/Zscale.java
Zscale.ZscaleRetval retval = Zscale.cdl_zscale(pixels, width, height,
        bitsVal, contrastVal, opt_sizeVal, len_stdlineVal, blankValueVal);
double Z1 = retval.getZ1();
double Z2 = retval.getZ2();
try {
    int[] ints = new int[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        if (pixels[i] < Z1) {
            pixels[i] = (short) Z1;
        } else if (pixels[i] > Z2) {
            pixels[i] = (short) Z2;
        }
        ints[i] = (int) ((pixels[i] - Z1) * 255 / (Z2 - Z1));
    }
    BufferedImage bImg = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    bImg.getRaster().setPixels(0, 0, width, height, ints);
    return SwingFXUtils.toFXImage(bImg, null);
} catch (Exception ex) {
    System.out.println(ex.getMessage());
}
return null;
Qt: 5.14.1
SDL: 2.0.12
OS: Windows 10
I'm working on a video player and I'm using Qt for UI and SDL for rendering frames.
I created the SDL window by passing my rendering widget's (inside a layout) winId() handle.
This works perfectly when I start a non-threaded Play().
However, this causes some issues with playback when resizing or moving the app window. Nothing serious, but since the play code is non-threaded, my frame queues fill up, which then causes the video to speed up until it catches up to the audio.
I solved that by putting my play code inside a Win32 thread created with the CreateThread function.
Now when I move the window the video continues to play as intended, but when resizing the app, the rendering widget stops refreshing, and only the last frame displayed before the resize event is shown.
I can confirm that the video is still running and the correct frames are still being produced. The displayed image can even be resized, but it's never refreshed.
A similar thing happened when I was testing Qt threads with SDL. Consider this code:
class TestThread : public QThread
{
public:
    TestThread(QObject *parent = NULL) : QThread(parent)
    {
    }

    void run() override
    {
        for (;;)
        {
            SDL_Delay(1000 / 60);
            // Basic square bouncing animation
            SDL_Rect spos;
            spos.h = 100;
            spos.w = 100;
            spos.y = 100;
            spos.x = position;
            SDL_SetRenderDrawColor(RendererRef, 0, 0, 0, 255);
            SDL_RenderFillRect(RendererRef, 0);
            SDL_SetRenderDrawColor(RendererRef, 0xFF, 0x0, 0x0, 0xFF);
            SDL_RenderFillRect(RendererRef, &spos);
            SDL_RenderPresent(RendererRef);
            if (position >= 500)
                dir = 0;
            else if (position <= 0)
                dir = 1;
            if (dir)
                position += 5;
            else
                position -= 5;
        }
    }
};
// a call from an "Init SDL and Start Thread" button
...
// create a new SDL borderless resizable window
WindowRef = SDL_CreateWindow("test", 10, 10, 1280, 800, SDL_WINDOW_RESIZABLE | SDL_WINDOW_BORDERLESS);
// create and start the thread
test_thread = new TestThread();
test_thread->start();
...
This will create a window separate from the Qt app window and will start rendering a bouncing square. However, if any resize event occurs in the Qt app, the rendering context is lost, and the same thing that happens in my video player happens here.
I also found out that if I remove the SDL_RenderPresent call from the thread object and put it in the main Qt window, the rendering continues after a resize event. However, this has proved completely unreliable and will sometimes completely freeze my app.
I also can't figure out why my completely separate SDL window and renderer still freeze on resize.
I presume there is a clash somewhere between the SDL renderer/window and Qt's drawing machinery, but I'm at a loss here.
Also, it's only the resize stuff; everything else works.
Thanks.
Answer:
The SDL_Renderer needs to be destroyed and recreated on window resize, as does any SDL_Texture created with the previous renderer.
The same thing happens even without Qt.
However, I think this is just a workaround and not a real solution.
Simple code to reproduce the issue:
int position = 0;
int dir = 0;
SDL_Window *window = NULL;
SDL_Renderer *sdlRenderer_ = NULL;

DWORD WINAPI MyThreadFunction(LPVOID lpParam)
{
    for (;;)
    {
        SDL_Delay(1000 / 60);
        // Basic square bouncing animation
        SDL_Rect spos;
        spos.h = 100;
        spos.w = 100;
        spos.y = 100;
        spos.x = position;
        SDL_SetRenderDrawColor(sdlRenderer_, 0, 0, 0, 255);
        SDL_RenderFillRect(sdlRenderer_, 0);
        SDL_SetRenderDrawColor(sdlRenderer_, 0xFF, 0x0, 0x0, 0xFF);
        SDL_RenderFillRect(sdlRenderer_, &spos);
        SDL_RenderPresent(sdlRenderer_);
        if (position >= 500)
            dir = 0;
        else if (position <= 0)
            dir = 1;
        if (dir)
            position += 5;
        else
            position -= 5;
    }
}

int APIENTRY wWinMain(_In_ HINSTANCE hInstance, _In_opt_ HINSTANCE hPrevInstance, _In_ LPWSTR lpCmdLine, _In_ int nCmdShow)
{
    SDL_Init(SDL_INIT_VIDEO);
    window = SDL_CreateWindow("test", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 600, 600, SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE);
    if (!window)
        printf("Unable to create window");
    sdlRenderer_ = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_TARGETTEXTURE);
    if (!sdlRenderer_)
        printf("Unable to create renderer");
    HANDLE playHandle = CreateThread(0, 0, MyThreadFunction, 0, 0, 0);
    if (playHandle == NULL)
    {
        return 0;
    }
    SDL_Event e;
    while (1)
    {
        SDL_PollEvent(&e);
        if (e.type == SDL_WINDOWEVENT)
        {
            switch (e.window.event)
            {
            case SDL_WINDOWEVENT_SIZE_CHANGED:
                int mWidth = e.window.data1;
                int mHeight = e.window.data2;
                SDL_DestroyRenderer(sdlRenderer_); // rendering stops on resize if this is commented out
                sdlRenderer_ = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_TARGETTEXTURE);
                break;
            }
        }
    }
    return 0;
}
EDIT:
The real solution:
The renderer doesn't need to be recreated. Separate the Qt code from the SDL main thread by making a dedicated thread, and create all the SDL objects in that thread, because the SDL_Renderer needs to be created in the thread that handles SDL events.
Use SDL_PushEvent to signal that thread to render to the screen.
This way the textures don't need to be recreated.
Here is the code I have; whenever I run it, it throws an "Iterator not incrementable" error at run time. If I comment out the sp++ line or the asteroids.push_back(*sp); line, it runs fine, so it has something to do with those lines. I saw in a previous post that the sp->getSize() call increments the pointer as well and may be the cause of the issue? Thanks for the help!
while (sp != asteroids.end()) {
    if (sp->getSize() == .5 || sp->getSize() == 0.25) {
        glPushMatrix();
        glScalef(.1, .1, .1);
        glTranslatef(3, 3, 0);
        sp->display_asteriod(sp->getSize(), random, randomTwo);
        glPopMatrix();
        asteroidCount++;
        spawn.setSize(sp->getSize());
        //spawn.setLife(it->getLife());
        random = ((double) rand() / (RAND_MAX + 1));
        randomTwo = ((double) rand() / (RAND_MAX + 1)) * 7;
        spawn = createAsteroid(spawn);
        x_speed_asteriod = (spawn.getXDirection()) * (spawn.getRandomVelocity()); // + x_speed_asteriod;
        y_speed_asteriod = (spawn.getYDirection()) * (spawn.getRandomVelocity()); // + y_speed_asteriod;
        spawn.setXSpeed(x_speed_asteriod);
        spawn.setYSpeed(y_speed_asteriod);
        if (spawn.getRandomAxis() == 0) {
            glRotatef(spawn.getAngleRotation(), 1, 0, 0);
        } else if (spawn.getRandomAxis() == 1) {
            glRotatef(spawn.getAngleRotation(), 0, 1, 0);
        } else if (spawn.getRandomAxis() == 2) {
            glRotatef(spawn.getAngleRotation(), 0, 0, 1);
        }
        //it = asteroids.begin() + asteroidCount;
        //asteroids.insert(it, spawn);
        //asteroids.resize(asteroidCount);
        asteroids.push_back(*sp);
        glPushMatrix();
        glScalef(.1, .1, .1);
        glTranslatef(spawn.getXPosition() - 3, spawn.getYPosition() - 3, 0);
        spawn.display_asteriod(spawn.getSize(), random, randomTwo);
        glPopMatrix();
    } else {
        sp++;
    }
}
Your iterator sp is being invalidated by the call to push_back: you are modifying the asteroids vector, but you are still using the old iterator that you obtained before the modification.
This post contains a summary of the rules for when iterators are invalidated.
Keeping track of new items to work on is often done safely with a queue (or a deque), like this:
#include <deque>

vector<Asteroid> asteroids;
deque<Asteroid> asteroid_queue;

// add all current asteroids into the queue
asteroid_queue.assign(asteroids.begin(), asteroids.end());

while (!asteroid_queue.empty())
{
    // grab the next asteroid to process
    Asteroid asteroid = asteroid_queue.front();
    // remove it from the queue
    asteroid_queue.pop_front();

    do_some_work();

    // if necessary, add a new asteroid... be careful not to end up in an infinite loop
    asteroid_queue.push_back(..);
}
I am trying to draw a 10-millisecond grid in a QGraphicsScene in Qt. I am not very familiar with Qt; it's the first time I've used it, and only because the application needs to be portable between Windows and Linux.
I actually don't have a problem drawing the grid; it's just the performance when the grid gets big. The grid has to be able to change size to fit the SceneRect if/when new data is loaded into the program to be displayed.
This is how I do it at the moment. I hate this, but it's the only way I can think of doing it...
void Plotter::drawGrid() {
    unsigned int i;
    QGraphicsLineItem *line;
    QGraphicsTextItem *text;
    char num[11];
    QString label;
    unsigned int width = scene->sceneRect().width();
    unsigned int height = scene->sceneRect().height();

    removeGrid();

    for (i = 150; i < width; i += 10) {
        line = new QGraphicsLineItem(i, 0, i, scene->sceneRect().height(), 0, scene);
        line->setPen(QPen(QColor(0xdd, 0xdd, 0xdd)));
        line->setZValue(0);

        _itoa_s(i - 150, num, 10);
        label = num;
        label += " ms";
        text = new QGraphicsTextItem(label, 0, scene);
        text->setDefaultTextColor(Qt::white);
        text->setX(i);
        text->setY(height - 10);
        text->setZValue(2);
        text->setScale(0.2);

        // pointers to items stored in a list for removal later
        gridList.append(line);
        gridList.append(text);
    }

    for (i = 0; i < height; i += 10) {
        line = new QGraphicsLineItem(150, i, width, i, 0, scene);
        line->setPen(QPen(QColor(0xdd, 0xdd, 0xdd)));
        line->setZValue(0);
        gridList.append(line);
    }
}
When scene->sceneRect().width() gets too big, however, the application becomes very sluggish. I have tried using a QGLWidget, but the improvements in speed are marginal at best.
I ended up using a 10x10-pixel square pixmap and drawing it as the backgroundBrush of my QGraphicsView, as suggested in the link in the comment under my initial question.
I'm making a program that will display a few images from a directory beside each other.
When I scale the images to fit within the height of the window (i.e. QGraphicsPixmapItem->scale(...)), it runs fairly well on Windows, but unbearably slowly on Linux (Ubuntu 11.04).
If the images are not scaled, performance is similar on both systems.
I'm not sure if it has to do with the way each OS caches memory: when I run the program under Linux, the memory used is always constant at around 5 MB, whereas it's closer to 15-30 MB under Windows, depending on the images loaded.
Here is the related code:
MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent)
{
    scene = new QGraphicsScene(this);
    view = new QGraphicsView(scene);
    setCentralWidget(view);
    setWindowTitle(tr("ImgVw"));

    bestFit = true;
    view->setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
    view->setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
    view->setDragMode(QGraphicsView::ScrollHandDrag);
    // set up custom style sheet
    view->setStyleSheet("QGraphicsView { border-style: none; padding: 5px; background-color: #000; }");

    // Get image files from folder
    QDir dir("test_img_folder");
    QStringList fileList = dir.entryList();
    fileList = fileList.filter(QRegExp(".*(\.jpg|\.jpeg|\.png)$"));

    // Create a graphics item for each image
    int count = 0;
    foreach (QString file, fileList)
    {
        if (count >= 0)
        {
            QPixmap g(dir.absolutePath() + QString("/") + file);
            scene->addPixmap(g);
        }
        count++;
        if (count >= 5) break;
    }
}
void MainWindow::resizeEvent(QResizeEvent *event)
{
    int pos = 0;
    foreach (QGraphicsItem *item, scene->items(Qt::AscendingOrder))
    {
        double ratio = 1.0;
        QGraphicsPixmapItem *pixmapItem = (QGraphicsPixmapItem*) item;

        // Resize to fit the window
        if (bestFit) {
            double h = (double) (view->height() - 10) / pixmapItem->pixmap().height();
            ratio = min(h, 1.0);
            pixmapItem->setScale(ratio);
        }

        // Position 5 pixels to the right of the previous image
        item->setPos(pos, 0);
        pos += pixmapItem->pixmap().width() * ratio + 5;
    }

    // Resize the scene to fit the items
    scene->setSceneRect(scene->itemsBoundingRect());
}
You could try different graphics systems, e.g. with the command-line switch -graphicssystem raster|native|opengl, or by setting the environment variable QT_GRAPHICSSYSTEM to "raster", etc.
In my experience, I agree with trying the QT_GRAPHICSSYSTEM environment variable hack. It took me some time, while developing a new real-time Qt4 application with high-bandwidth callbacks, to discover that setting QT_GRAPHICSSYSTEM=raster prevented my Red Hat Linux X11 system from gobbling up CPU time. So I suspect there is a resource issue when QT_GRAPHICSSYSTEM is not set, or is set to 'native'.