I'm trying to compute ISS3D keypoints on a point cloud in PCL. I want to set
the normals because I'm not sure if the ISS keypoint estimation flips them
in the correct direction. However, when I try to set the normals like this:
typedef pcl::PointCloud<pcl::PointXYZRGB> PointCloud;

PointCloud::Ptr detectISSKeypoints(PointCloud::Ptr cloud, pcl::PointCloud<pcl::PointNormal>::Ptr normals, float resolution) {
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr keypoints(new pcl::PointCloud<pcl::PointXYZRGB>);
    pcl::ISSKeypoint3D<pcl::PointXYZRGB, pcl::PointXYZRGB> detector;
    detector.setInputCloud(cloud);
    detector.setNormals(normals);
    pcl::search::KdTree<pcl::PointXYZRGB>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZRGB>);
    detector.setSearchMethod(kdtree);
    detector.setSalientRadius(6 * resolution);
    detector.setNonMaxRadius(6 * resolution);
    detector.setMinNeighbors(6);
    detector.setThreshold21(0.975);
    detector.setThreshold32(0.975);
    detector.setNumberOfThreads(4);
    detector.compute(*keypoints);
    return keypoints;
}
I get an error saying that setNormals expects a const PointCloudNConstPtr&. I tried converting the normals pointer to const pcl::PointCloud<pcl::PointNormal>::ConstPtr, but that didn't work either.
How can I set the normals?
Finally I managed to set the normals like this:
pcl::PointCloud<pcl::Normal>::Ptr normalsCopy (new pcl::PointCloud<pcl::Normal>);
copyPointCloud(*normals, *normalsCopy);
boost::shared_ptr<const pcl::PointCloud<pcl::Normal> > constNormals (normalsCopy);
detector.setNormals(constNormals);
The problem was that ISSKeypoint3D only accepts pcl::Normal, not pcl::PointNormal, and the pointer must be a pointer to a const point cloud.
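For reference, here is a minimal sketch of the whole function with that conversion folded in (same parameters as in the question; untested, so treat it as a sketch rather than a drop-in implementation):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/io.h>          // pcl::copyPointCloud
#include <pcl/keypoints/iss_3d.h>
#include <pcl/search/kdtree.h>

typedef pcl::PointCloud<pcl::PointXYZRGB> PointCloud;

PointCloud::Ptr detectISSKeypoints(PointCloud::Ptr cloud,
                                   pcl::PointCloud<pcl::PointNormal>::Ptr normals,
                                   float resolution)
{
    PointCloud::Ptr keypoints(new PointCloud);

    // ISSKeypoint3D's third template parameter (NormalT) defaults to pcl::Normal,
    // so the PointNormal cloud has to be copied into a pcl::PointCloud<pcl::Normal> first.
    pcl::PointCloud<pcl::Normal>::Ptr normalsCopy(new pcl::PointCloud<pcl::Normal>);
    pcl::copyPointCloud(*normals, *normalsCopy);

    pcl::ISSKeypoint3D<pcl::PointXYZRGB, pcl::PointXYZRGB> detector;
    detector.setInputCloud(cloud);
    // A Ptr converts implicitly to the ConstPtr that setNormals expects.
    detector.setNormals(normalsCopy);

    pcl::search::KdTree<pcl::PointXYZRGB>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZRGB>);
    detector.setSearchMethod(kdtree);
    detector.setSalientRadius(6 * resolution);
    detector.setNonMaxRadius(6 * resolution);
    detector.setMinNeighbors(6);
    detector.setThreshold21(0.975);
    detector.setThreshold32(0.975);
    detector.setNumberOfThreads(4);
    detector.compute(*keypoints);
    return keypoints;
}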
I use pcl::IterativeClosestPoint from the Point Cloud Library.
Right now its documentation seems to be offline; here it is in the Google cache, and there is also a tutorial.
It is possible to call icp.getFitnessScore() to get the mean squared distance between the points of the two clouds. I just can't find any information about what unit that value is expressed in. Does anyone know what the number I get there means? For example, my output was 0.0003192. This seems low, but I have no idea whether it is meters, centimeters, feet, or something else.
Thank you very much.
What unit does icp.getFitnessScore() use?
Like Joy said in his comment, the unit follows that of your input data.
For example, your input point cloud might come from an .obj file, where a point is stored as v 9.322 -1.0778 0.44997. The number returned by icp.getFitnessScore() is based on the same unit as those coordinates (strictly speaking, since it is a mean of squared distances, it is in the square of that unit).
Does anyone know what the number I get there means?
The number you get represents the mean squared distance from each point in source to its closest point in target.
That is to say, if you assume every point in the source has a corresponding point in the target, and the correspondence set comes from closest-point data association, then the number represents the mean squared distance over all correspondences. This can be seen from the source code below.
To make the value more meaningful, you might want to filter out correspondences with a large distance between them (the two point clouds might only partially overlap). The function actually has an optional parameter max_range that does exactly this.
The method getFitnessScore() is defined in pcl::Registration, the base class of pcl::IterativeClosestPoint. The optional parameter max_range defaults to std::numeric_limits<double>::max(), as you can see in the declaration:
/** \brief Obtain the Euclidean fitness score (e.g., sum of squared distances from the source to the target)
 * \param[in] max_range maximum allowable distance between a point and its correspondence in the target
 * (default: double::max)
 */
inline double
getFitnessScore (double max_range = std::numeric_limits<double>::max ());
And the source code of this function is:
template <typename PointSource, typename PointTarget, typename Scalar> inline double
pcl::Registration<PointSource, PointTarget, Scalar>::getFitnessScore (double max_range)
{
  double fitness_score = 0.0;

  // Transform the input dataset using the final transformation
  PointCloudSource input_transformed;
  transformPointCloud (*input_, input_transformed, final_transformation_);

  std::vector<int> nn_indices (1);
  std::vector<float> nn_dists (1);

  // For each point in the source dataset
  int nr = 0;
  for (size_t i = 0; i < input_transformed.points.size (); ++i)
  {
    // Find its nearest neighbor in the target
    tree_->nearestKSearch (input_transformed.points[i], 1, nn_indices, nn_dists);

    // Deal with occlusions (incomplete targets)
    if (nn_dists[0] <= max_range)
    {
      // Add to the fitness score
      fitness_score += nn_dists[0];
      nr++;
    }
  }

  if (nr > 0)
    return (fitness_score / nr);
  else
    return (std::numeric_limits<double>::max ());
}
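For illustration, a minimal usage sketch (the file arguments, the pcl::PointXYZ point type, and the 0.05 threshold are made up for the example, not taken from the question):

#include <iostream>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/registration/icp.h>

int main(int argc, char* argv[])
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile(argv[1], *source);   // assumes two PCD files are passed on the command line
    pcl::io::loadPCDFile(argv[2], *target);

    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);

    // Mean squared nearest-neighbour distance over all source points:
    // if the clouds are in metres, this value is in square metres.
    double score_all = icp.getFitnessScore();

    // Same score, but ignoring correspondences farther apart than 0.05
    // (expressed in the same unit as the input coordinates).
    double score_near = icp.getFitnessScore(0.05);

    std::cout << "fitness (all): " << score_all
              << "  fitness (<= 0.05): " << score_near << std::endl;
    return 0;
}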
I'm using QWT library for my widget, there are some curves on the canvas, like this:
void Plot::addCurve1( double x, double y, const char *CurveName,
                      const char *CurveColor, const char *CurveType )
{
    ...
    *points1 << QPointF(x, y);
    curve1->setSamples( *points1 );
    curve1->attach( this );
    ...
}
So all my curves share the same coordinate system. I'm trying to build a navigation interface, so I can enter a step into a TextEdit (for example) and move by that step, or jump to the start/end of my defined curve.
I've found a method in the QwtPlotPanner class that gives me this ability:
double QWT_widget::move_XLeft()
{
    // getting step from TextEdit
    QString xValStr = _XNavDiscrepancies->toPlainText();
    double xVal = xValStr.toDouble();

    // moveCanvas(int dx, int dy) - the method of QwtPlotPanner
    plot->panner->moveCanvas(xVal, 0);
    x_storage = x_storage - xVal;
    return x_storage;
}
It works OK, but the displacement is in pixels, and I need to tie it to my defined curve and its coordinate system.
The Qwt User's Guide says:
Adjust the enabled axes according to dx/dy
Parameters
dx Pixel offset in x direction
dy Pixel offset in y direction
And this is the only information I've found. How can I convert a pixel step into a step in my coordinate system? I need to go to the end of my curve, so should I take the last QPointF(x, y) of my curve and convert it into a pixel step? Or maybe I'm using the wrong class/method?
Thank you very much :)
Thanks to @Pavel Gridin:
(https://ru.stackoverflow.com/a/876184/251026)
"For conversion from pixels to coordinates and back there are two
methods: QwtPlot::transform and QwtPlot::invTransform"
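Building on that, here is a sketch of how move_XLeft() could convert the step from plot coordinates into the pixel offset that moveCanvas() expects (it reuses the member names from the question and assumes the curve is attached to the xBottom axis; untested):

double QWT_widget::move_XLeft()
{
    // Step entered by the user, in plot (curve) coordinates.
    QString xValStr = _XNavDiscrepancies->toPlainText();
    double xVal = xValStr.toDouble();

    // QwtPlot::transform maps a plot coordinate to a paint-device (pixel) position,
    // so the difference of two transformed values gives the pixel offset for the step.
    double pixel0 = plot->transform(QwtPlot::xBottom, x_storage);
    double pixel1 = plot->transform(QwtPlot::xBottom, x_storage - xVal);
    int dx = qRound(pixel1 - pixel0);

    // moveCanvas() works in pixels, so pass the converted offset.
    plot->panner->moveCanvas(dx, 0);

    x_storage = x_storage - xVal;
    return x_storage;
}

Going the other way (pixels to coordinates) would use QwtPlot::invTransform in the same fashion.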
I need to find out which points of a point cloud are visible from an RGBD sensor located at the origin (0,0,0). I tried to use the VoxelGridOcclusionEstimation class of PCL, which uses a ray tracing technique, to determine the visible region of the cloud as seen by the sensor.
As an experiment, I tried to get the visible region of a sphere whose center satisfies one of the following:
center is on the x axis
center is on the y axis
center is on the z axis
center is in the xz plane
center is in the yz plane
center is in the xy plane
The sensor is at the origin with zero rotation in all cases.
VoxelGridOcclusionEstimation yields weird results. The green region denotes the visible region, while red represents the occluded region.
My code is:
int main(int argc, char * argv[])
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_in(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_occluded(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_visible(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile(argv[1], *cloud_in);

    Eigen::Quaternionf quat(1,0,0,0);
    cloud_in->sensor_origin_ = Eigen::Vector4f(0,0,0,0);
    cloud_in->sensor_orientation_ = quat;

    pcl::VoxelGridOcclusionEstimation<pcl::PointXYZ> voxelFilter;
    voxelFilter.setInputCloud (cloud_in);
    float leaf_size = atof(argv[2]);
    voxelFilter.setLeafSize (leaf_size, leaf_size, leaf_size);
    voxelFilter.initializeVoxelGrid();

    std::vector<Eigen::Vector3i,
                Eigen::aligned_allocator<Eigen::Vector3i> > occluded_voxels;

    for (size_t i = 0; i < cloud_in->size(); i++)
    {
        PointT pt = cloud_in->points[i];
        Eigen::Vector3i grid_cordinates = voxelFilter.getGridCoordinates (pt.x, pt.y, pt.z);
        int grid_state;
        int ret = voxelFilter.occlusionEstimation( grid_state, grid_cordinates );
        if (grid_state == 1)
        {
            cloud_occluded->push_back(cloud_in->points[i]);
        }
        else
        {
            cloud_visible->push_back(cloud_in->points[i]);
        }
    }
    pcl::io::savePCDFile(argv[3], *cloud_occluded);
    pcl::io::savePCDFile(argv[4], *cloud_visible);
    return 0;
}
Your code seems to work, except for the typo and the missing point type definitions. Try it with a different point cloud for better visual analysis.
Edit: On the other hand, this seems to behave oddly with, for example, the milk carton cloud from here: http://pointclouds.org/documentation/tutorials/supervoxel_clustering.php#supervoxel-clustering.
The VoxelGridOcclusionEstimation class works, but the grid width is very important. If it is made very small, there will be unoccupied voxels in the foreground, which lets the cast rays pass through to the background. If it is set very large, the surface will not be represented correctly. This is even harder when the model does not have uniform point density, as is the case with data captured by RGBD sensors.
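One possible way to pick a reasonable grid width (a suggestion, not something from the question) is to relate the leaf size to the average spacing between neighbouring points, e.g. a few multiples of it. A sketch, assuming a pcl::PointXYZ cloud:

#include <cmath>
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/search/kdtree.h>

// Rough estimate of the mean distance from each point to its nearest neighbour.
float averagePointSpacing(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud)
{
    pcl::search::KdTree<pcl::PointXYZ> tree;
    tree.setInputCloud(cloud);

    std::vector<int> indices(2);
    std::vector<float> sqr_dists(2);
    double sum = 0.0;
    int count = 0;

    for (std::size_t i = 0; i < cloud->size(); ++i)
    {
        // k = 2: the first neighbour returned is the query point itself.
        if (tree.nearestKSearch(cloud->points[i], 2, indices, sqr_dists) == 2)
        {
            sum += std::sqrt(sqr_dists[1]);
            ++count;
        }
    }
    return count > 0 ? static_cast<float>(sum / count) : 0.0f;
}

// e.g. float leaf_size = 3.0f * averagePointSpacing(cloud_in);  // the factor 3 is a guess to tune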
I have a drone following a path for movement. That is, it doesn't use a rigidbody so I don't have access to velocity or magnitude and such. It follows the path just fine, but I would like to add banking to it when it turns left or right. I use a dummy object in front of the drone, thinking I could calculate the bank/tilt amount using the transform vectors from the two objects.
I've been working on this for days as I don't have a lot of math skills. Basically I've been copying pieces of code trying to get things to work. Nothing I do works to make the drone bank. The following code manages to spin (not bank).
// Update is called once per frame
void Update () {
    Quaternion rotation = Quaternion.identity;
    Vector3 dir = (dummyObject.transform.position - this.transform.position).normalized;
    float angle = Vector3.Angle( dir, transform.up );
    float rollAngle = CalculateRollAngle(angle);
    rotation.SetLookRotation(dir, transform.right);// + rollIntensity * smoothRoll * right);
    rotation *= Quaternion.Euler(new Vector3(0, 0, rollAngle));
    transform.rotation = rotation;
}

/// <summary>
/// Calculates the roll and smooths it (to compensate for the non-C2-continuous control points algorithm)
/// </summary>
/// <returns>The roll angle.</returns>
/// <param name="rollFactor">Roll factor.</param>
float CalculateRollAngle(float rollFactor)
{
    smoothRoll = Mathf.Lerp(smoothRoll, rollFactor, rollSmoothing * Time.deltaTime);
    float angle = Mathf.Atan2(1, smoothRoll * rollIntensity);
    angle *= Mathf.Rad2Deg;
    angle -= 90;
    TurnRollAngle = angle;
    angle += RollOffset;
    return angle;
}
Assuming you have waypoints the drone is following, you should figure out the angle between the last two (i.e. your "now-facing" and "will be facing" directions). The easy way is to use Vector2.Angle.
I would use this angle to determine the amount I'll tilt the drone's body: the sharper the turn, the harder the banking. I would use a ratio value (public initially so I can manipulate it from the editor).
Next, instead of doing any math, I would rely on the engine to do the rotation for me, so I would go for the Transform.Rotate function. In case banking can go too high and look silly, I would set a maximum for it and Clamp my calculated banking angle between zero and that max.
Without knowing exactly what you do and how, it's not easy to give perfect code, but for a better understanding of the above, here's some (untested, i.e. pseudo) code for the solution I visualize:
public float turnSpeed = 7.0f; //the drone will "rotate toward the new waypoint" by this speed
//bankSpeed + turnBankRatio must be two times "faster" (and/or a smaller degree) than the turning; see details in 'EDIT' for why:
public float bankSpeed = 14.0f; //banking speed
public float turnBankRatio = .5f; //90 degree turn == 45 degree banking
private float turnAngle = 0.0f; //this is the 'x' degree turning angle we'll "Lerp"
private float turnAngleABS = 0.0f; //same as turnAngle but it's an absolute value. Storing to avoid Mathf.Abs() in Update()!
private float bankAngle = 0.0f; //banking degree
private bool isTurning = false; //are we turning right now?
//when the action for the drone to head to the next waypoint is fired, call this guy
private void TurningTrigger() {
    //remove this line after testing, it's some extra safety
    if (isTurning) { Debug.LogError("oops! this must not be possible!"); return; }

    Vector2 droneOLD2DAngle = GetGO2DPos(transform.position);

    //do the code you do for the turning/rotation of the drone here!
    //or use the next waypoint's .position as the new angle if you are OK
    //with the snippet doing the turning for you along with banking. then:
    Vector2 droneNEW2DAngle = GetGO2DPos(transform.position);

    turnAngle = Vector2.Angle(droneOLD2DAngle, droneNEW2DAngle); //turn degree
    turnAngleABS = Mathf.Abs(turnAngle); //avoiding Mathf.Abs() in Update()
    bankAngle = turnAngle * turnBankRatio; //bank angle

    //you can remove this after testing. This is to make sure banking can
    //do a full run before the drone hits the next waypoint!
    if ((turnAngle * turnSpeed) < (bankAngle * bankSpeed)) {
        Debug.LogError("Banking degree too high, or banking speed too low to complete the maneuver!");
    }

    //you can clamp or set turnAngle based on a min/max here
    isTurning = true; //all values were set, turning and banking can start!
}
//get 2D position of a GO (simplified)
private Vector2 GetGO2DPos(Vector3 worldPos) {
    return new Vector2(worldPos.x, worldPos.z);
}
private void Update() {
    if (isTurning) {
        //assuming the drone is banking to the "side" and "side" only
        transform.Rotate(0, 0, bankAngle * Time.deltaTime * bankSpeed, Space.Self); //banking
        //if the drone is facing the next waypoint already, set
        //isTurning to false
    } else if (turnAngleABS > 0.0f) {
        //reset back to the original position (with the same speed as above)
        //at least "normal speed" is a must, otherwise the drone might hit the
        //next waypoint before the banking reset can finish!
        float bankAngle_delta = bankAngle * Time.deltaTime * bankSpeed;
        transform.Rotate(0, 0, -1 * bankAngle_delta, Space.Self);
        turnAngleABS -= (bankAngle_delta > 0.0f) ? bankAngle_delta : -1 * bankAngle_delta;
    }
    //the banking was probably not set back to exactly 0, as Time.deltaTime
    //is not a fixed value. if this happened and looks ugly, reset the
    //drone's "z" to Quaternion.identity.z. if it still looks ugly,
    //you need to test that you don't """over bank""" in the above code
    //by comparing bankAngle_delta + 'calculated banking angle' against
    //the identity.z value, and reset bankAngle_delta if it's too high/low.
    //when you are done, your turning animation is over, so:
}
Again, this code might not perfectly fit your needs (or even compile :P), so focus on the idea and the approach rather than the code itself. Sorry for not being able to put something together and test it myself right now, but I hope it helps. Cheers!
EDIT: Instead of a wall of text, I tried to answer your question in code (still not perfect, but the goal is not to do the job for you, rather to help with some snippets and ideas :)
So, basically, what you have is a distance and an "angle" between two waypoints. This distance and your drone's flight/walk/whatever speed (which I don't know) give the maximum amount of time available for:
1. Turning, so the drone will face in the new direction
2. Banking to the side, and back to zero/"normal"
As there is twice as much action on the banking side, it either has to be done faster (bankSpeed), or through a smaller angle (turnBankRatio), or both, depending on what looks nice and feels real, what your preference is, etc. So it's 100% subjective. It's also your call whether the drone turns and banks quickly as it approaches the next waypoint, or takes things slowly, turning just a little when it has a lot of time/distance and only moving fast when it has to.
As for isTurning:
You set it to true when the drone has reached a waypoint and heads out to the next one AND the variables for (turning and) banking were set properly. When do you set it to false? It's up to you, but the goal is to do so when the maneuver is finished (this was buggy in the snippet the first time, as that "optimal status" could never actually be reached) so the drone can "reset banking". For further details on what's going on, see the code comments. Again, this is just a snippet to support you with a possible solution for your problem. Give it some time and understand what's going on. It really is easy, you just need some time to cope ;) Hope this helps! Enjoy and cheers! :)
I've been trying to apply an angular impulse to a Body object in PlayN, but to no avail. Whatever value (in radians) I enter, the angle of the body never changes. I have also tried setting the torque, with no results.
Example code that doesn't work:
BodyDef def = new BodyDef();
def.type = BodyType.DYNAMIC;
Body body = world.createBody(def);
float degToRad = (float) (180 / Math.PI);
float radials = (float) (50 / degToRad);
// None of the following options work.
body.applyAngularImpulse(radials); // Immediate angular change.
body.applyTorque(radials); // Angular change over time.
How can I get a valid body object to change its angle without manually setting its angular velocity (e.g. with setAngularVelocity)?
Thanks in advance!
I did notice that the torque and angular velocity are reset by calls to the setAwake method, which I never call manually, but which is invoked by the Island class:
public void setAwake(boolean flag) {
    ...
    m_angularVelocity = 0.0f;
    m_torque = 0.0f;
    ...
}
Note: Setting the angular velocity is not an option because I rely on the physics simulation. I've found an article for Box2D angle rotation, but it didn't change the outcome of the applyAngularImpulse method.
I've done some debugging and come to the conclusion that the values I've been using were way too low. When going through the applyAngularImpulse code, I noticed a few things:
public void applyAngularImpulse(float impulse) {
    if (m_type != BodyType.DYNAMIC) {
        return;
    }
    if (isAwake() == false) {
        setAwake(true);
    }
    m_angularVelocity += m_invI * impulse;
}
m_invI is a really low value, something like 2.44882598E-4, so it makes sense that if I use too low an impulse there is just no visible effect. If I calculate the impulse from the question's example and multiply it by the m_invI field:
m_invI = 0.000244882598 = 2.449E-4
impulse = 50 / (180 / PI) = 0.872664626 = 8.727E-1
m_invI * impulse = 2.449E-4 * 8.727E-1 = 0.000213707 = 2.137E-4 <--
That means an angular velocity increment of only about 0.0002 per update cycle, which is way too low to be noticeable.
Also, while browsing the Box2D forums for similar situations, I found that it's considered best practice to include the body's inertia in your angular impulse calculation, like this:
body.applyAngularImpulse(speedValue * body.getInertia());
The inertia tends to be a fairly high value for me (4000+).
I hope this can be of some help to others who have trouble applying an angular impulse or torque.
In my case I forgot to call
body.setFixedRotation(false);
Even though I was setting mass, an inertia of 0 was reported and all torques and angular impulses were ignored.
Apparently it is fixed by default.
I had the same problem.
In my case I had accidentally put my points in CW order (leading to a negative area and zero mass). This was my incorrect code:
// incorrect version
triangleVertices.push(new B2Vec2(width / 2, 0.0)); // top
triangleVertices.push(new B2Vec2(0.0, height)); // left
triangleVertices.push(new B2Vec2(width, heightM)); // right
That looks like CCW, right? Nope, not in my coordinate system (+Y is down).
// corrected version
triangleVertices.push(new B2Vec2(width / 2, 0.0)); // top
triangleVertices.push(new B2Vec2(width, heightM)); // right
triangleVertices.push(new B2Vec2(0.0, height)); // left
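A quick way to catch this kind of mistake is to compute the polygon's signed area before handing the vertices to the physics engine. A sketch (the Vec2 struct here is just a stand-in for whatever vector type you use):

#include <vector>

struct Vec2 { float x, y; };

// Shoelace formula: the result is positive for counter-clockwise winding when +Y points up;
// the interpretation flips when +Y points down, as in the coordinate system above.
float signedArea(const std::vector<Vec2>& pts)
{
    float area = 0.0f;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        const Vec2& a = pts[i];
        const Vec2& b = pts[(i + 1) % pts.size()];
        area += a.x * b.y - b.x * a.y;
    }
    return 0.5f * area;
}

If the sign is not what the engine expects, reverse the vertex order before creating the fixture.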