I am working with OpenCV in Qt.
I am working on a program that can detect several objects. So far I have been able to make a face, eye and nose detector, but when I try to do full-body detection I get either totally wrong detections, no detections at all, or the program crashes. For the full body I use the same code as for the other detections, just with the haarcascade_fullbody.xml file. Is it not possible to use the same code? Why does it work for the other features and not for the full body?
I have also tried to implement car detection using OpenCV's pretrained models from https://github.com/Itseez/opencv_extra/tree/master/testdata/cv/latentsvmdetector/models_VOC2007 but I get parsing errors.
Thanks in advance!
Code from MainWindow:
void MainWindow::on_btnFullBody_clicked()
{
WriteInLog("Full body detection requested");
QString xml = tr("%1/%2").arg(QApplication::applicationDirPath()).arg(FULL_BODY_FILE);
FeatureDetector detector(xml);
std::vector<QRect> rest;
float scaleFactor= 1.1f;
uint neighbours= 2;
bool ret = detector.DetectFeature(&mSelectedImage, rest, scaleFactor, neighbours);
if (!ret)
{
WriteInLog("No full body has been detected");
}
else
{
QVector<QRect> qRect = QVector<QRect>::fromStdVector(rest);
processedImage(qRect);
WriteInLog("Bodys detected: "+QString::number(qRect.size()));
}
}
Code from DetectFeature:
bool FeatureDetector::DetectFeature(QImage* image, std::vector<QRect> &returnList, float scaleFactor, uint neighbours)
{
returnList.clear();
bool ok = false;
qDebug() << "Starting...";
if (!image->isNull()) {
//Changing from QImage to matrix
QImage temp = image->copy();
cv::Mat res(temp.height(),temp.width(),CV_8UC3,(uchar*)temp.bits(),temp.bytesPerLine());
cv::Mat res_gray;
//Changing the image to grey scale and equalizing the result
cvtColor(res, res_gray,CV_BGR2GRAY);
cv::equalizeHist(res_gray,res_gray);
cv::CascadeClassifier detector;
std::vector< cv::Rect > featureVec;
bool retDetector=true; // detector.load("C:/Users/ansurbcn_2/Pictures/cara.jpg");
qDebug()<<mXmlFilePath;
if (!detector.load(mXmlFilePath.toLatin1().constData()))
{
qDebug() << "Error loading detector";
return false;
}
detector.detectMultiScale(res_gray, featureVec);
//detector.detectMultiScale(res_gray, featureVec, scaleFactor, neighbours, 18|9);
if (retDetector) {
qDebug() << "OK Detector";
}
else {
qDebug() << "Failed Detector";
}
for(size_t i=0; i<featureVec.size();i++)
{
cv::Rect oneFeature =featureVec[i];
QRect qrect(oneFeature.x, oneFeature.y, oneFeature.width, oneFeature.height);
returnList.push_back(qrect);
ok = true;
}
}
return ok;
}
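For reference, here is a minimal sketch (not from the original code) of two adjustments that are often relevant when a cascade that works for faces misbehaves with haarcascade_fullbody.xml: forcing the QImage into a 3-channel RGB layout before wrapping it in a cv::Mat, and passing an explicit minimum window size to detectMultiScale. The detectBodies helper, the Format_RGB888 conversion and the Size(64, 128) value are assumptions for illustration, not anything stated in the question:
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <QImage>
#include <vector>

// Hedged sketch: same idea as DetectFeature, but with a known 3-channel
// layout and an explicit minimum detection window for full bodies.
std::vector<cv::Rect> detectBodies(const QImage &input, cv::CascadeClassifier &detector)
{
    // If the source QImage is e.g. RGB32 (4 channels), wrapping it as CV_8UC3
    // corrupts the pixel data, so convert to a 3-channel layout first.
    QImage temp = input.convertToFormat(QImage::Format_RGB888);
    cv::Mat res(temp.height(), temp.width(), CV_8UC3,
                (uchar*)temp.bits(), temp.bytesPerLine());
    cv::Mat res_gray;
    cv::cvtColor(res, res_gray, CV_RGB2GRAY);   // the buffer is RGB, not BGR
    cv::equalizeHist(res_gray, res_gray);

    std::vector<cv::Rect> bodies;
    // 1.1 / 3 / Size(64, 128) are illustrative values only; full bodies need a
    // larger minimum window than faces or eyes.
    detector.detectMultiScale(res_gray, bodies, 1.1, 3, 0, cv::Size(64, 128));
    return bodies;
}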
I cannot print to paper for some reason, even though I have a functional printer. I use the following code to print a QDialog and a few pictures:
QPrinter printer;
QPainter painter;
painter.begin(&printer);
double xscale = printer.width() / double(window->width());
double yscale = printer.height() / double(window->height());
double scale = qMin(xscale, yscale);
painter.scale(scale, scale);
QPrintDialog printDialog(&printer, this);
if (printDialog.exec() == QDialog::Accepted) {
bool skip = true;
if(ui->generalInfos->isChecked()) {
//window is a QDialog I want to print out
window->render(&painter);
skip = false;
}
QList<Document *> docs;
if(worker) {
//a list with path to pictures
docs = worker->getDocuments();
}
for(auto document : docs) {
if(ui->Documents->isChecked(document->getID())) {
for(auto scan : document->getScans()) {
if(!skip) {
printer.newPage();
}
else {
skip = false;
}
painter.resetTransform();
const QImage image(scan);
const QPoint imageCoordinates(0,0);
xscale = printer.width() / double(image.width());
yscale = printer.height() / double(image.height());
scale = qMin(xscale, yscale);
painter.scale(scale, scale);
painter.drawImage(imageCoordinates,image);
}
}
}
}
painter.end();
and it doesn't work. Nothing is printed and Qt throws an error:
QWin32PrintEngine::newPage: EndPage failed (The parameter is incorrect.)
QWin32PrintEngine::end: EndPage failed (0x31210cf7) (The parameter is incorrect.)
Can someone please help me?
If you simplify your code, you will probably find the solution.
So let's start by selecting the printer, and only then (afterwards!) start painting to it:
QPrinter printer;
QPrintDialog printDialog(&printer, this);
if (printDialog.exec() == QDialog::Accepted)
{
QPainter painter;
painter.begin(&printer);
window->render(&painter);
painter.end();
}
If this works, add more of your old code to the sketch above.
If it doesn't work, something else in your program or your environment (selected printer?) is wrong, so you need to extend your bug hunt beyond what you showed us here.
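If the simplified version prints, the original per-image loop can be folded back in with the same ordering. Here is a rough sketch of what that might look like; the scans list is a hypothetical stand-in for your worker->getDocuments() / document->getScans() traversal, so adapt it to your actual data:
QPrinter printer;
QPrintDialog printDialog(&printer, this);
if (printDialog.exec() == QDialog::Accepted) {
    QPainter painter;
    if (!painter.begin(&printer))          // begin() fails if the chosen device is unusable
        return;

    // Render the dialog first, scaled to fit the page.
    double scale = qMin(printer.width() / double(window->width()),
                        printer.height() / double(window->height()));
    painter.scale(scale, scale);
    window->render(&painter);

    // Then each picture on its own page.
    for (const QString &scan : scans) {    // 'scans' is a placeholder for your document loop
        printer.newPage();
        painter.resetTransform();
        const QImage image(scan);
        scale = qMin(printer.width() / double(image.width()),
                     printer.height() / double(image.height()));
        painter.scale(scale, scale);
        painter.drawImage(QPoint(0, 0), image);
    }
    painter.end();
}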
I want to use pre-made files instead of live capture in the following program to track a person.
Which RealSense SDK API is used to load pre-made files and fetch the frames one by one?
Is it possible to detect/track a person in general video/image files captured with any other camera?
Example Program:
Example Source Link
Source
#include <thread>
#include <iostream>
#include <signal.h>
#include "version.h"
#include "pt_utils.hpp"
#include "pt_console_display.hpp"
#include "pt_web_display.hpp"
#include "or_console_display.hpp"
#include "or_web_display.hpp"
using namespace std;
using namespace rs::core;
using namespace rs::object_recognition;
// Version number of the samples
extern constexpr auto rs_sample_version = concat("VERSION: ",RS_SAMPLE_VERSION_STR);
// Doing the OR processing for a frame can take longer than the frame interval, so we
// keep track of whether or not we are still processing the last frame.
bool is_or_processing_frame = false;
unique_ptr<web_display::pt_web_display> pt_web_view;
unique_ptr<web_display::or_web_display> or_web_view;
unique_ptr<console_display::pt_console_display> pt_console_view;
unique_ptr<console_display::or_console_display> or_console_view;
void processing_OR(correlated_sample_set or_sample_set, or_video_module_impl* impl, or_data_interface* or_data,
or_configuration_interface* or_configuration)
{
rs::core::status st;
// Declare data structure and size for results
rs::object_recognition::localization_data* localization_data = nullptr;
//Run object localization processing
st = impl->process_sample_set(or_sample_set);
if (st != rs::core::status_no_error)
{
is_or_processing_frame = false;
return;
}
// Retrieve recognition data from the or_data object
int array_size = 0;
st = or_data->query_localization_result(&localization_data, array_size);
if (st != rs::core::status_no_error)
{
is_or_processing_frame = false;
return;
}
//Send OR data to ui
if (localization_data && array_size != 0)
{
or_console_view->on_object_localization_data(localization_data, array_size, or_configuration);
or_web_view->on_object_localization_data(localization_data, array_size, or_configuration);
}
is_or_processing_frame = false;
}
int main(int argc,char* argv[])
{
rs::core::status st;
pt_utils pt_utils;
rs::core::image_info colorInfo,depthInfo;
rs::core::video_module_interface::actual_module_config actualModuleConfig;
rs::person_tracking::person_tracking_video_module_interface* ptModule = nullptr;
rs::object_recognition::or_video_module_impl impl;
rs::object_recognition::or_data_interface* or_data = nullptr;
rs::object_recognition::or_configuration_interface* or_configuration = nullptr;
cout << endl << "Initializing Camera, Object Recognition and Person Tracking modules" << endl;
if(pt_utils.init_camera(colorInfo,depthInfo,actualModuleConfig,impl,&or_data,&or_configuration) != rs::core::status_no_error)
{
cerr << "Error: Device is null." << endl << "Please connect a RealSense device and restart the application" << endl;
return -1;
}
pt_utils.init_person_tracking(&ptModule);
//Person Tracking Configuration. Set tracking mode to 0
ptModule->QueryConfiguration()->QueryTracking()->Enable();
ptModule->QueryConfiguration()->QueryTracking()->SetTrackingMode((Intel::RealSense::PersonTracking::PersonTrackingConfiguration::TrackingConfiguration::TrackingMode)0);
if(ptModule->set_module_config(actualModuleConfig) != rs::core::status_no_error)
{
cerr<<"error : failed to set the enabled module configuration" << endl;
return -1;
}
//Object Recognition Configuration
//Set mode to localization
or_configuration->set_recognition_mode(rs::object_recognition::recognition_mode::LOCALIZATION);
//Set the localization mechanism to use CNN
or_configuration->set_localization_mechanism(rs::object_recognition::localization_mechanism::CNN);
//Ignore all objects under 0.7 probability (confidence)
or_configuration->set_recognition_confidence(0.7);
//Enabling object center feature
or_configuration->enable_object_center_estimation(true);
st = or_configuration->apply_changes();
if (st != rs::core::status_no_error)
return st;
//Launch GUI
string sample_name = argv[0];
// Create console view
pt_console_view = move(console_display::make_console_pt_display());
or_console_view = move(console_display::make_console_or_display());
// Create and start remote(Web) view
or_web_view = move(web_display::make_or_web_display(sample_name, 8000, true));
pt_web_view = move(web_display::make_pt_web_display(sample_name, 8000, true));
cout << endl << "-------- Press Esc key to exit --------" << endl << endl;
while (!pt_utils.user_request_exit())
{
//Get next frame
rs::core::correlated_sample_set* sample_set = pt_utils.get_sample_set(colorInfo,depthInfo);
rs::core::correlated_sample_set* sample_set_pt = pt_utils.get_sample_set(colorInfo,depthInfo);
//Increment reference count of images at sample set
for (int i = 0; i < static_cast<uint8_t>(rs::core::stream_type::max); ++i)
{
if (sample_set_pt->images[i] != nullptr)
{
sample_set_pt->images[i]->add_ref();
}
}
//Draw Color frames
auto colorImage = (*sample_set)[rs::core::stream_type::color];
pt_web_view->on_rgb_frame(10, colorImage->query_info().width, colorImage->query_info().height, colorImage->query_data());
//Run OR in a separate thread. Update GUI with the result
if (!is_or_processing_frame) // If we aren't already processing or for a frame:
{
is_or_processing_frame = true;
std::thread recognition_thread(processing_OR, *sample_set,
&impl, or_data, or_configuration);
recognition_thread.detach();
}
//Run Person Tracking
if (ptModule->process_sample_set(*sample_set_pt) != rs::core::status_no_error)
{
cerr << "error : failed to process sample" << endl;
continue;
}
//Update GUI with PT result
pt_console_view->on_person_info_update(ptModule);
pt_web_view->on_PT_tracking_update(ptModule);
}
pt_utils.stop_camera();
actualModuleConfig.projection->release();
return 0;
}
After installing the RealSense SDK, check the realsense_playback_device_sample for how to load the RSSDK capture file.
The short answer is: not really. Besides the images captured from the other camera, you would also need to supply the camera's intrinsic and extrinsic parameters in order to calculate the depth of an object and call the person tracking module.
I would like to pick two points from a point cloud and return their coordinates. To do this, I have used PCL's PointPickingEvent and written a class containing the point cloud, the visualizer, and a vector to store the selected points. My code:
#include <pcl/point_cloud.h>
#include <pcl/PCLPointCloud2.h>
#include <pcl/io/io.h>
#include <pcl/io/pcd_io.h>
#include <pcl/common/io.h>
#include <pcl/io/ply_io.h>
#include <pcl/io/vtk_lib_io.h>
#include <pcl/visualization/pcl_visualizer.h>
using namespace pcl;
using namespace std;
class pickPoints {
public:
pickPoints () {
viewer.reset (new pcl::visualization::PCLVisualizer ("Viewer", true));
viewer->registerPointPickingCallback (&pickPoints::pickCallback, *this);
}
~pickPoints () {}
void setInputCloud (PointCloud<PointXYZ>::Ptr cloud)
{
cloudTemp = cloud;
}
vector<float> getpoints() {
return p;
}
void simpleViewer ()
{
// Visualizer
viewer->addPointCloud<pcl::PointXYZ>(cloudTemp, "Cloud");
viewer->resetCameraViewpoint ("Cloud");
viewer->spin();
}
protected:
void pickCallback (const pcl::visualization::PointPickingEvent& event, void*)
{
if (event.getPointIndex () == -1)
return;
PointXYZ picked_point1,picked_point2;
event.getPoints(picked_point1.x,picked_point1.y,picked_point1.z,
picked_point2.x,picked_point2.y,picked_point2.z);
p.push_back(picked_point1.x); // store points
p.push_back(picked_point1.y);
p.push_back(picked_point1.z);
p.push_back(picked_point2.x);
p.push_back(picked_point2.y);
p.push_back(picked_point2.z);
//cout<<"first selected point: "<<p[0]<<" "<<p[1]<<" "<<p[2]<<endl;
//cout<<"second selected point: "<<p[3]<<" "<<p[4]<<" "<<p[5]<<endl;
}
private:
// Point cloud data
PointCloud<pcl::PointXYZ>::Ptr cloudTemp;
// The visualizer
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer;
// The picked point
vector<float> p;
};
int main()
{
//LOAD;
PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ> ());
pcl::PolygonMesh mesh;
pcl::io::loadPolygonFilePLY("test.ply", mesh);
pcl::fromPCLPointCloud2(mesh.cloud, *cloud);
pickPoints pickViewer;
pickViewer.setInputCloud(cloud); // A pointer to a cloud
pickViewer.simpleViewer();
vector<float> pointSelected;
pointSelected= pickViewer.getpoints();
cout<<pointSelected[0]<<" "<<pointSelected[1]<<" "<<pointSelected[2]<<endl;
cout<<pointSelected[3]<<" "<<pointSelected[4]<<" "<<pointSelected[5]<<endl;
cin.get();
return 0;
}
But when I debug the code, I get nothing. I do know that the SHIFT key has to be held while left-clicking to pick points. Thank you in advance for any help!
I found that the getPoints() method does not work as I expected, but getPoint() works well. Here is code to print out the selected points and store them in a vector:
std::vector<pcl::PointXYZ> selectedPoints;
void pointPickingEventOccurred(const pcl::visualization::PointPickingEvent& event, void* viewer_void)
{
float x, y, z;
if (event.getPointIndex() == -1)
{
return;
}
event.getPoint(x, y, z);
std::cout << "Point coordinate ( " << x << ", " << y << ", " << z << ")" << std::endl;
selectedPoints.push_back(pcl::PointXYZ(x, y, z));
}
void displayCloud(pcl::PointCloud<pcl::PointXYZI>::Ptr cloud, const std::string& window_name)
{
if (cloud->size() < 1)
{
std::cout << window_name << " display failure. Cloud contains no points\n";
return;
}
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer(new pcl::visualization::PCLVisualizer(window_name));
pcl::visualization::PointCloudColorHandlerGenericField<pcl::PointXYZI> point_cloud_color_handler(cloud, "intensity");
viewer->addPointCloud< pcl::PointXYZI >(cloud, point_cloud_color_handler, "id");
viewer->setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 2, "id");
viewer->registerKeyboardCallback(keyboardEventOccurred, (void*)viewer.get());
viewer->registerPointPickingCallback(pointPickingEventOccurred, (void*)&viewer);
while (!viewer->wasStopped() && !close_window){
viewer->spinOnce(50);
}
close_window = false;
viewer->close();
}
You can also find distances between the points pretty easily once they are selected.
if (selectedPoints.size() > 1)
{
float distance = pcl::euclideanDistance(selectedPoints[0], selectedPoints[1]);
std::cout << "Distance is " << distance << std::endl;
}
The selectedPoints vector can be emptied with a keyboardEvent if you want to start over picking points.
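The snippet above references keyboardEventOccurred and a close_window flag without showing them; one possible implementation (the key choices are assumptions) uses 'c' to clear the picked points and Escape to leave the spin loop:
bool close_window = false;

void keyboardEventOccurred(const pcl::visualization::KeyboardEvent& event, void*)
{
    // Assumed bindings: 'c' clears the picked points, Escape closes the viewer loop.
    if (event.keyDown() && event.getKeySym() == "c")
        selectedPoints.clear();
    else if (event.keyDown() && event.getKeySym() == "Escape")
        close_window = true;
}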
I have been browsing previous questions about this issue, but I could not find a solution for my code.
.cpp file of the dialog
------------------------------------------------
#include "everesult.h"
#include "ui_everesult.h"
everesult::everesult(QWidget *parent) :
QDialog(parent),
ui1(new Ui::everesult)
{
ui1->setupUi(this);
}
everesult::~everesult()
{
delete ui1;
}
void everesult::setmodel(QStandardItemModel *model)
{
ui1->listView->setModel(model);
}
void everesult::on_buttonBox_clicked(QAbstractButton *button)
{
EveReprocess M_;
QModelIndex Selectedindex = ui1->listView->currentIndex();
QModelIndex StationIdsindex = ui1->listView->model()->index(0, 1);
int typeID = 0;
int stationID = 0;
stationID = ui1->listView->model()->data(StationIdsindex, Qt::DisplayRole).toInt();
typeID = ui1->listView->model()->data(Selectedindex, Qt::DisplayRole).toInt();
M_.GetMaterials(typeID, stationID);
}
--------------------------------------------------
GetMaterials and replyFinished from the main window.
--------------------------------------------------
void EveReprocess::GetMaterials(int typeId, int stationid)
{
//get typeid from material list
this->modelMaterial = new QSqlQueryModel();
modelMaterial->setQuery(QString("SELECT tm.quantity, tm.materialTypeID, t.typeName FROM invTypeMaterials tm INNER JOIN invTypes t ON t.TypeID = tm.materialTypeId WHERE tm.TypeID=%1 ").arg(typeId));
if (!modelMaterial->query().exec())
qDebug() << modelMaterial->query().lastError();
//Set eve Central Url with typeids
QUrl url = QUrl("http://api.eve-central.com/api/marketstat?");
QUrlQuery q;
int numRows = modelMaterial->rowCount();
for (int row = 0; row < numRows; ++row)
{
QModelIndex index = modelMaterial->index(row, 1);
q.addQueryItem( QString("typeid"), QString::number(modelMaterial->data(index, Qt::DisplayRole).toInt()));
}
q.addQueryItem( QString("usesystem"), QString::number(stationid));
//set created url and connect
url.setQuery(q);
qDebug() << url;
manager = new QNetworkAccessManager(this);
connect(manager, SIGNAL(finished(QNetworkReply*)), this, SLOT(replyFinished(QNetworkReply *)));
manager->get(QNetworkRequest(url) );
}
void EveReprocess::replyFinished(QNetworkReply *reply)
{
qDebug() << "replyFinished called";
if ( reply->error() != QNetworkReply::NoError ) {
qDebug() << "Request failed, " << reply->errorString();
emit replyFinished(false);
return;
}
qDebug() << "Request succeeded";
//process with xmlreader and get values
processSearchResult( reply);
}
Some of the relevant code is here; I think the problem is somewhere in it, as the rest works fine.
The issue showed up after I started using a dialog to let the user pick an int from a list.
Below is the function that calls the dialog I made for this. Sorry about the code format; I will clean it up once it is working.
void EveReprocess::Search_TypeId(QString ItemName, QString SystemName)
{
QList<int> TypeIdList;
QList<int> StationIdList;
modelIds = new QStandardItemModel(10,2,this);
if (!db.isOpen()) return;
this->queryItem = new QSqlQuery;
queryItem->prepare("SELECT typeID FROM invTypes WHERE invTypes.typeName LIKE ? AND invTypes.groupID NOT IN (268,269,270)AND published= 1");
ItemName.prepend("%");
ItemName.append("%");
queryItem->bindValue(0, ItemName);
this->queryStation = new QSqlQuery;
queryStation->prepare("SELECT solarSystemID FROM mapSolarSystems WHERE mapSolarSystems.solarSystemName LIKE ?");
SystemName.prepend("%");
SystemName.append("%");
queryStation->bindValue(0, SystemName);
if(!queryStation->exec() || !queryItem->exec() )
{
qDebug() << queryItem->lastError().text();
qDebug() << queryItem->lastQuery();
qDebug() << queryStation->lastError().text();
qDebug() << queryStation->lastQuery();
}
while( queryStation->next())
{
StationIdList.append(queryStation->value(0).toInt());
}
while(queryItem->next())
{
TypeIdList.append(queryItem->value(0).toInt());
}
for (int i = 0; i < StationIdList.count(); ++i)
{
modelIds->setItem(i,1,new QStandardItem(QString::number(StationIdList.at(i))));
}
for (int i = 0; i < TypeIdList.count(); ++i)
{
modelIds->setItem(i,0,new QStandardItem(QString::number(TypeIdList.at(i))));
}
//
everesult Dialog;
Dialog.setmodel(modelIds);
Dialog.exec();
}
Before you proceed any further: some of your code allows SQL injection. Even when it's not a security hole, it will still lead to bugs. Instead of using string substitution in SQL queries, you should be using bindings.
Your problem is here:
everesult Dialog;
Dialog.setmodel(modelIds);
Dialog.exec();
exec() is a blocking function - it blocks the main event loop until the dialog is dismissed. Thus the signals from the asynchronous network access manager never get delivered to your objects.
You should display the dialog box asynchronously, like so:
everesult * dialog = new everesult;
dialog->setModel(modelIds);
dialog->show();
connect(dialog, SIGNAL(accepted()), dialog, SLOT(deleteLater()));
connect(dialog, SIGNAL(rejected()), dialog, SLOT(deleteLater()));
Note that it's misleading to have type names starting with lower case and variable names starting with upper case. Qt's convention is the opposite, and it's useful to retain it unless you have a good reason to do otherwise.
DO Parameter Binding in SQL Queries
QSqlQuery query;
query.prepare("SELECT .. WHERE tm.TypeID = :typeid");
query.bindValue(":typeid", typeId);
query.exec();
QSqlQueryModel model;
model.setQuery(query);
DON'T DO String Substitution in SQL Queries
setQuery(QString("SELECT ... WHERE tm.TypeID=%1 ").arg(typeId));
Queue class
#ifndef Queue_H
#define Queue_H
#include "Car.h"
#include <iostream>
#include <string>
using namespace std;
const int Q_MAX_SIZE = 20;
class Queue {
private:
int size; // size of the queue
Car carQueue[Q_MAX_SIZE];
int front, rear;
public:
Queue();
~Queue();
bool isEmpty();
bool isFull();
void enqueue(Car c);
void dequeue(); // just dequeue the last car in the queue
void dequeue(Car c); // if a certain car wants to go out of the queue midway.
// Condition: the car is not currently being washed, i.e. it is not the 1st item in the queue
void dequeue(int index); // same as the previous comment
Car getFront();
void getCarQueue(Queue);
int length();
Car get(int);
};
Queue::Queue() {
size = 0;
front = 0;
rear = Q_MAX_SIZE -1;
}
Queue::~Queue() {
while(!isEmpty()) {
dequeue();
}
}
void Queue::enqueue(Car c) {
if (!isFull()) {
rear = (rear + 1) % Q_MAX_SIZE; // circular array
carQueue[rear] = c;
size++;
} else {
cout << "Queue is currently full.\n";
}
}
void Queue::dequeue() {
}
void Queue::dequeue(int index) {
if(!isEmpty()) {
front = (front + 1) % Q_MAX_SIZE;
if(front != index) {
carQueue[index-1] = carQueue[index];
rear--;
size--;
} else {
cout << "Not allowed to dequeue the first car in the queue.\n";
}
} else {
cout << "There are no cars to dequeue.\n";
}
}
bool Queue::isEmpty() {
return size == 0;
}
bool Queue::isFull() {
return (size == Q_MAX_SIZE);
}
Car Queue::getFront() {
return carQueue[front];
}
int Queue::length() {
return size;
}
Car Queue::get(int index) {
return carQueue[index-1];
}
void Queue::getCarQueue(Queue q) {
for(int i = 0; i< q.length(); i++)
cout << q.get(i) << endl; // Error here
}
#endif
error C2679: binary '<<' : no operator found which takes a right-hand operand of type 'Car' (or there is no acceptable conversion)
I get this error, which seems odd to me. Is there anything wrong? Thanks!
cout has no idea how to process a Car object; it has never seen a Car object and doesn't know how to output a Car as text. cout can only process types it knows about: string, char, int, etc. The specific error is because there is no version of operator<< that takes an ostream and a Car.
There are two options:
Create an overload of operator<< that takes an ostream and a Car (see the sketch after these options). That will show cout how to output a Car. This isn't usually done, because there is usually more than one way you would want to display a car.
Write the output statement so that it manually prints out car properties like
cout << c.getMake() << " " << c.getModel()
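A minimal sketch of option 1, assuming Car exposes getMake() and getModel() accessors as in the example above (adapt the body to whatever Car actually provides):
#include <ostream>

// Teaches any ostream (including cout) how to print a Car.
// Note: the accessors must be const-qualified for this const reference to work.
std::ostream& operator<<(std::ostream& os, const Car& c)
{
    os << c.getMake() << " " << c.getModel();
    return os;
}
With an overload like this in scope, the line cout << q.get(i) << endl; in getCarQueue() compiles.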