Find a specific element within a vector

I have a class team that contains information for football teams. I need to read in a file and add each unique team to a vector season.
// Loop to determine unique teams
if (season.size() <= 1)
{
    season.push_back(new_team);
    cout << "First team added!" << endl;
}
vector<team>::iterator point;
point = find(season.begin(), season.end(), new_team);
bool unique_team = (point != season.end());
if (unique_team == true && season.size() > 1)
{
    season.push_back(new_team);
    cout << "New team added!" << endl;
}
cout << "# of Teams: " << season.size() << endl;
system("pause");
Any ideas why this doesn't work? I'm still new to this :-) So feel free to give constructive criticism.

I think your logic may be a little off. The first team should be added when the size of the teams vector is 0, and a team is unique when find() returns teams.end() (i.e. it was not found), so the comparison should be ==, not !=. Say your teams member is a vector of integers; an insertTeam function would look something like this.
void Season::insertTeam(int team)
{
    if (teams.size() == 0)
    {
        teams.push_back(team);
        cout << "First team " << team << " added!" << endl;
    }
    else
    {
        vector<int>::iterator point;
        point = find(teams.begin(), teams.end(), team);
        bool unique_team = (point == teams.end());
        if (unique_team == true && teams.size() > 0)
        {
            teams.push_back(team);
            cout << "New team " << team << " added!" << endl;
        }
    }
}
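One thing to watch with the original team class: std::find compares elements with ==, so team needs an operator== that defines when two teams are the same. A minimal sketch, assuming a hypothetical name member is what identifies a team:
#include <algorithm>
#include <string>
#include <vector>

class team {
public:
    std::string name;  // hypothetical member used for equality
    // two teams count as the same team if their names match
    bool operator==(const team &other) const { return name == other.name; }
};

// find() can then locate a team by value:
// vector<team>::iterator point = find(season.begin(), season.end(), new_team);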

Related

Get length scale factor of STEP CAD file with OpenCASCADE

I am trying to get the length unit conversion factor in OpenCASCADE when importing a STEP format CAD file. In my test file the entity #184 sets the length to metres, which on import is converted to the millimetres that OpenCASCADE uses internally by default.
...
#184=(
LENGTH_UNIT()
NAMED_UNIT(*)
SI_UNIT($,.METRE.)
);
...
I believe the function below is how it should be done, but no matter what I try, the LENGTH_UNIT STEP entity is not matched, and therefore I can't get the scaling factor.
void step_unit_scaling(std::string const &file_name) {
    STEPControl_Reader reader;
    reader.ReadFile(file_name.c_str());
    const Handle(Interface_InterfaceModel) Model = reader.Model();
    Handle(StepData_StepModel) aSM = Handle(StepData_StepModel)::DownCast(Model);
    Standard_Integer NbEntities = Model->NbEntities();
    for (int i = 1; i <= NbEntities; i++) {
        Handle(Standard_Transient) enti = aSM->Entity(i);
        if (enti->IsKind(STANDARD_TYPE(StepBasic_LengthMeasureWithUnit))) {
            Handle(StepBasic_LengthMeasureWithUnit) MWU = Handle(StepBasic_LengthMeasureWithUnit)::DownCast(enti);
            Standard_Real scal_mm = MWU->ValueComponent();
            std::cout << " --- !!! MATCH !!! --- scal_mm = " << scal_mm << std::endl;
        }
    }
}
Does anyone know if this is the correct approach, or if there perhaps is a better way?
If you search for an entity of a given type and nothing matches, you should check the actual entity types to find the error. The following line will print the actual entity type.
std::cout << "Entity type " << enti->DynamicType()->Name() << std::endl;
When I play with STEP files here, I see that your STEP line leads to an entity of type StepBasic_SiUnitAndLengthUnit. With this code I can test for some expected SI units:
if (enti->IsKind(STANDARD_TYPE(StepBasic_SiUnitAndLengthUnit)))
{
    Handle(StepBasic_SiUnitAndLengthUnit) unit =
        Handle(StepBasic_SiUnitAndLengthUnit)::DownCast(enti);
    if (unit->HasPrefix() &&
        (StepBasic_SiUnitName::StepBasic_sunMetre == unit->Name()) &&
        (StepBasic_SiPrefix::StepBasic_spMilli == unit->Prefix()))
    {
        std::cout << "SI Unit is millimetre." << std::endl;
    }
    else if (!unit->HasPrefix() &&
             (StepBasic_SiUnitName::StepBasic_sunMetre == unit->Name()))
    {
        std::cout << "SI Unit is metre." << std::endl;
    }
    else
    {
        std::cout << "I did not understand that unit..." << std::endl;
    }
}
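Once the unit has been identified, the conversion factor to OpenCASCADE's internal millimetres is plain metric arithmetic; here is a sketch extending the if/else above (the factor values are assumptions about which units you expect, not an OpenCASCADE API call):
double scale_to_mm = 1.0;  // default: treat the model as millimetres
if (!unit->HasPrefix() &&
    StepBasic_SiUnitName::StepBasic_sunMetre == unit->Name())
{
    scale_to_mm = 1000.0;  // 1 m = 1000 mm
}
else if (unit->HasPrefix() &&
         StepBasic_SiUnitName::StepBasic_sunMetre == unit->Name() &&
         StepBasic_SiPrefix::StepBasic_spCenti == unit->Prefix())
{
    scale_to_mm = 10.0;    // 1 cm = 10 mm
}
std::cout << "Scale factor to mm: " << scale_to_mm << std::endl;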

Can't delete a pointer in C++

I got this code from a book. When I ran it in Visual Studio, it said to switch strcpy() to strcpy_s(), and after I did that the program seems to terminate at the delete statement. I tried running it in Dev-C++ and it works fine. Does anyone know why? Thank you.
#include "pch.h"
#include <iostream>
#include <cstring>
int main()
{
cout << "Enter a kind of animal: ";
cin >> animal; // ok if input < 20 chars
ps = animal; // set ps to point to string
cout << ps << "!\n"; // ok, same as using animal
cout << "Before using strcpy():\n";
cout << animal << " at " << (int *)animal << endl;
cout << ps << " at " << (int *)ps << endl;
ps = new char[strlen(animal) + 1]; // get new storage
strcpy_s(ps, sizeof(animal), animal); // copy string to new storage
cout << "After using strcpy():\n";
cout << animal << " at " << (int *)animal << endl;
cout << ps << " at " << (int *)ps << endl;
delete[] ps;
return 0;
}
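One likely explanation: the second argument of strcpy_s is the size of the destination buffer, but here ps points to only strlen(animal) + 1 bytes while sizeof(animal) is 20. In Visual Studio debug builds the secure CRT functions fill the destination up to that declared size, which overruns the smaller heap block, and the corruption is then detected when delete[] runs; Dev-C++ simply doesn't perform that check. A sketch of the corrected call, assuming the allocation above:
const std::size_t ps_size = strlen(animal) + 1;
ps = new char[ps_size];           // get new storage
strcpy_s(ps, ps_size, animal);    // pass the real size of the buffer ps points to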

PCL feature matching failure

I am trying to match features between two point clouds. I have tested many different parameter values, but it always produces wrong matches. I am calculating PFH feature descriptors at SIFT keypoints.
Thank you for your suggestions.
Below is the code I used:
// load the both point clouds
pcl::io::loadPCDFile("Tee.pcd", *cloud_1);
pcl::PLYReader Reader;
Reader.read("tee.ply", *cloud_2);
//pcl::io::loadPCDFile("Tee.pcd", *cloud_2);
// Create the filtering object
pcl::PassThrough<pcl::PointXYZRGB> pass;
pass.setInputCloud(cloud_2);
pass.setFilterFieldName("z");
pass.setFilterLimits(0.0, 1.0);
//pass.setFilterLimitsNegative (true);
pass.filter(*cloud_2_filtered);
// Downsample the cloud
const float voxel_grid_leaf_size = 0.009f;
downsample(cloud_1, voxel_grid_leaf_size, downsampledCloud_1);
std::cout << "First cloud: downsampled " << std::endl;
const float voxel_grid_leaf_size2 = 0.003f;
downsample(cloud_2_filtered, voxel_grid_leaf_size2, downsampledCloud_2);
std::cout << "second cloud: downsampled " << std::endl;
// Compute surface normals
const float normal_radius = 0.03;
compute_surface_normals(downsampledCloud_1, normal_radius, normalsFromCloud_1);
compute_surface_normals(downsampledCloud_2, normal_radius, normalsFromCloud_2);
std::cout << "second cloud: normals computed " << std::endl;
// Compute keypoints
const float min_scale = 0.01;
const int nr_octaves = 3;
const int nr_octaves_per_scale = 6;
const float min_contrast = 1.0;
detect_keypoints(cloud_1, min_scale, nr_octaves, nr_octaves_per_scale, min_contrast, keypointsFromCloud_1);
std::cout << "first cloud: keypoints computed " << std::endl;
//const float min_scale1 = 0.1;
detect_keypoints(cloud_2_filtered, min_scale, nr_octaves, nr_octaves_per_scale, min_contrast, keypointsFromCloud_2);
std::cout << "second cloud: keypoints computed " << std::endl;
//visualize_keypoints(cloud_2, keypointsFromCloud_2);
// Compute PFH features
const float feature_radius = 0.08;
compute_PFH_features_at_keypoints(downsampledCloud_1, normalsFromCloud_1, keypointsFromCloud_1, feature_radius, descriptors1);
std::cout << "first cloud: descriptor computed " << std::endl;
compute_PFH_features_at_keypoints(downsampledCloud_2, normalsFromCloud_2, keypointsFromCloud_2, feature_radius, descriptors2);
std::cout << "second cloud: descriptor computed " << std::endl;
// Find feature correspondences
std::vector<int> correspondences;
std::vector<float> correspondence_scores;
find_feature_correspondences(descriptors1, descriptors2, correspondences, correspondence_scores);
// Print out ( number of keypoints / number of points )
std::cout << "First cloud: Found " << keypointsFromCloud_1->size() << " keypoints "
<< "out of " << downsampledCloud_1->size() << " total points." << std::endl;
std::cout << "Second cloud: Found " << keypointsFromCloud_2->size() << " keypoints "
<< "out of " << downsampledCloud_2->size() << " total points." << std::endl;
// Visualize the two point clouds and their feature correspondences
visualize_correspondences(cloud_1, keypointsFromCloud_1, cloud_2_filtered, keypointsFromCloud_2, correspondences, correspondence_scores);
The resulting image is as shown:
Data & Preprocessing
It seems like you are trying to match a point cloud that contains only the object against a point cloud where the object sits inside a larger scene.
To get consistent and robust results, extract all objects from the scene beforehand, match the reference against each detected object, and select the best match.
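One common way to do that extraction in PCL is Euclidean cluster extraction on the filtered scene cloud; a minimal sketch (the parameter values are placeholders you would tune for your data):
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>

pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZRGB>);
tree->setInputCloud(cloud_2_filtered);
std::vector<pcl::PointIndices> cluster_indices;
pcl::EuclideanClusterExtraction<pcl::PointXYZRGB> ec;
ec.setClusterTolerance(0.02);   // points closer than 2 cm belong to the same cluster
ec.setMinClusterSize(100);      // ignore tiny clusters (noise)
ec.setMaxClusterSize(250000);
ec.setSearchMethod(tree);
ec.setInputCloud(cloud_2_filtered);
ec.extract(cluster_indices);    // one PointIndices entry per candidate object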
Descriptor
I experienced way better results using the SHOT descriptor rather than PFH.
Here you can read more on Object Recognition from the authors of PCL where they describe and explain the whole pipeline for object recognition.
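For reference, a minimal sketch of computing SHOT descriptors at the keypoints with PCL, reusing the variable names from the question (it assumes the SIFT keypoints have been copied into a pcl::PointCloud<pcl::PointXYZRGB>):
#include <pcl/features/shot.h>
#include <pcl/point_types.h>

pcl::SHOTEstimation<pcl::PointXYZRGB, pcl::Normal, pcl::SHOT352> shot;
pcl::PointCloud<pcl::SHOT352>::Ptr shot_descriptors_1(new pcl::PointCloud<pcl::SHOT352>);
shot.setInputCloud(keypointsFromCloud_1);   // describe only the keypoints
shot.setSearchSurface(downsampledCloud_1);  // take neighbourhoods from the full downsampled cloud
shot.setInputNormals(normalsFromCloud_1);
shot.setRadiusSearch(0.08);                 // same support radius as the PFH example
shot.compute(*shot_descriptors_1);          // repeat with the _2 variables for the second cloud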

QAbstractVideoSurface generating A Null Image

I've reimplemented the present() method of a QAbstractVideoSurface in order to capture frames from an IP camera.
These are my reimplemented methods (the required ones):
QList<QVideoFrame::PixelFormat> CameraFrameGrabber::supportedPixelFormats(QAbstractVideoBuffer::HandleType handleType) const
{
Q_UNUSED(handleType);
return QList<QVideoFrame::PixelFormat>()
<< QVideoFrame::Format_ARGB32
<< QVideoFrame::Format_ARGB32_Premultiplied
<< QVideoFrame::Format_RGB32
<< QVideoFrame::Format_RGB24
<< QVideoFrame::Format_RGB565
<< QVideoFrame::Format_RGB555
<< QVideoFrame::Format_ARGB8565_Premultiplied
<< QVideoFrame::Format_BGRA32
<< QVideoFrame::Format_BGRA32_Premultiplied
<< QVideoFrame::Format_BGR32
<< QVideoFrame::Format_BGR24
<< QVideoFrame::Format_BGR565
<< QVideoFrame::Format_BGR555
<< QVideoFrame::Format_BGRA5658_Premultiplied
<< QVideoFrame::Format_AYUV444
<< QVideoFrame::Format_AYUV444_Premultiplied
<< QVideoFrame::Format_YUV444
<< QVideoFrame::Format_YUV420P
<< QVideoFrame::Format_YV12
<< QVideoFrame::Format_UYVY
<< QVideoFrame::Format_YUYV
<< QVideoFrame::Format_NV12
<< QVideoFrame::Format_NV21
<< QVideoFrame::Format_IMC1
<< QVideoFrame::Format_IMC2
<< QVideoFrame::Format_IMC3
<< QVideoFrame::Format_IMC4
<< QVideoFrame::Format_Y8
<< QVideoFrame::Format_Y16
<< QVideoFrame::Format_Jpeg
<< QVideoFrame::Format_CameraRaw
<< QVideoFrame::Format_AdobeDng;
}
bool CameraFrameGrabber::present(const QVideoFrame &frame)
{
    //qWarning() << "A frame";
    if (frame.isValid()) {
        //qWarning() << "Valid Frame";
        QVideoFrame cloneFrame(frame);
        cloneFrame.map(QAbstractVideoBuffer::ReadOnly);
        const QImage image(cloneFrame.bits(),
                           cloneFrame.width(),
                           cloneFrame.height(),
                           QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat()));
        qWarning() << "Is created image NULL?" << image.isNull();
        if (!image.isNull())
            emit nextFrameAsImage(image);
        cloneFrame.unmap();
        return true;
    }
    return false;
}
And this is how I used it:
grabber = new CameraFrameGrabber(this);
connect(grabber,&CameraFrameGrabber::nextFrameAsImage,this,&QCmaraTest::on_newFrame);
QMediaPlayer *a = new QMediaPlayer(this);
QString url = "http://Admin:1234#10.255.255.67:8008";
a->setMedia(QUrl(url));
a->setVideoOutput(grabber);
a->play();
The problem is that the image that is created is null. As far as I can tell, this can only be because the frame is valid but does not contain data.
Any ideas what the problem could be?
Important Detail: If I set the stream to a QVideoWidget and simply show that, it works just fine.
So I found out what the problem was.
This:
QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat())
was returning an invalid format, because the IP camera delivers its frames in a YUV format that QImage can't handle. The solution was to force the format, and the only one I found that did not make the program crash was QImage::Format_Grayscale8.
With that, it worked.
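For illustration, the workaround described above amounts to constructing the QImage with a forced format instead of the one derived from the pixel format; a sketch of the changed lines inside present() (Format_Grayscale8 requires Qt 5.5 or later):
const QImage image(cloneFrame.bits(),
                   cloneFrame.width(),
                   cloneFrame.height(),
                   cloneFrame.bytesPerLine(),     // pass the stride explicitly
                   QImage::Format_Grayscale8);    // force a format QImage understands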

Iterate a QList with foreach

I iterate over a QList with a while loop using this code:
QList<Job> jobsList;
jobsList = job.getJobs(650, 654);
QListIterator<Job> iterJobs(jobsList);
while (iterJobs.hasNext())
{
    job = iterJobs.next();
    qDebug() << "IdJob " << job.jobId();
    qDebug() << "jobType " << job.jobType();
}
and all is fine, but how can I do the same with foreach?
Thank you very much.
foreach (Job const& job, jobsList) {
qDebug() << "IdJob " << job.jobId();
qDebug() << "jobType " << job.jobType();
}
foreach (Job job, jobsList)
{
qDebug() << "IdJob " << job.jobId();
qDebug() << "jobType " << job.jobType();
}
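For what it's worth, with C++11 and later a range-based for works as well; iterating over a const reference avoids detaching the list and copying each Job (qAsConst assumes Qt 5.7 or later, otherwise use a const copy of the list):
for (const Job &job : qAsConst(jobsList)) {
    qDebug() << "IdJob " << job.jobId();
    qDebug() << "jobType " << job.jobType();
}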
