I am trying to write a sample OpenGL application that uses occlusion queries to identify visible triangles, but the occlusion query always returns a sample count of zero.
I follow the steps below:
1. Set up OpenGL/GLUT.
2. Call lookAt from a point from which the complete model is visible.
3. Enable the depth test, the depth mask, and face culling (and a few other options).
4. Render the complete model.
5. Disable the depth mask.
6. Start a new occlusion query.
7. Render a single triangle/face.
8. End the query and wait until the result is available.
9. Once the result is available, get the sample count.
10. Repeat steps 6 to 9 for all triangles/faces (a minimal code sketch of these steps follows below).
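For clarity, this is roughly what steps 6-9 amount to in raw GL calls. This is a minimal sketch, not the attached sample itself; renderFace() is a placeholder for drawing one triangle of the model, and a loader such as GLEW is assumed for the query functions:

```cpp
// Minimal sketch of steps 6-9 (OpenGL 1.5+ occlusion query, functions assumed
// loaded via GLEW). renderFace() is a placeholder, not part of the original sample.
#include <GL/glew.h>

void renderFace(int faceIndex);   // placeholder: draws a single triangle/face

GLuint countVisibleSamples(int faceIndex)
{
    GLuint query = 0;
    glGenQueries(1, &query);

    glDepthMask(GL_FALSE);                    // step 5: keep the depth buffer from the full-model pass
    glBeginQuery(GL_SAMPLES_PASSED, query);   // step 6: start the occlusion query
    renderFace(faceIndex);                    // step 7: render one triangle/face
    glEndQuery(GL_SAMPLES_PASSED);            // step 8: end the query

    GLuint available = 0;
    while (!available)                        // step 8: wait until the result is ready
        glGetQueryObjectuiv(query, GL_QUERY_RESULT_AVAILABLE, &available);

    GLuint samples = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);  // step 9: passed-sample count

    glDeleteQueries(1, &query);
    return samples;   // 0 here is exactly the symptom described below
}
```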
For a given model I compute the bounding box and enlarge it by some factor (say 1.5) so that the complete model is visible. From each corner of the bbox I run steps 2-10, with that corner as the eye point.
The issue is that for every query the result of step 9 (the output of the occlusion query) is 0, i.e. it reports that the number of visible pixels is zero.
I have attached a sample application here (ObjRender Sample) which loads an OBJ file and performs steps 1-10 above. To reproduce it, just open the VS project, build, and run it. Sample data is included.
I'm trying to figure out how to automatically adjust the maximum iteration value when moving around in the Mandelbrot fractal.
All the examples I've found use a constant of 1000 or less, but that's not enough when zooming into the fractal set.
Is there a way to determine max_iterations based on, for example, where you are in the Mandelbrot space (x_start, x_end, y_start, y_end)?
One method I tried was to repeatedly pre-process a small area in the region of the Mset boundary with increasing iteration counts until the percentage change in status from one repetition to the next was small. The problem was that the required depth varied in different places on the current map, since the "depth" varies across it. How do you find the right place to probe? By logging the "deepest" boundary area during the previous generation (one that will still be within the next zoom area).
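A hedged sketch of that probe idea (the function names, the doubling schedule and the 1% tolerance are my own illustration, not the original code):

```cpp
// Sketch: raise max_iter on a small set of sample points near the boundary
// until the fraction of samples whose classification changes between passes
// drops below a tolerance.
#include <complex>
#include <vector>

// Standard escape-time test: returns the iteration at which |z| exceeds 2,
// or max_iter if the point never escapes within the budget.
static int escapeTime(std::complex<double> c, int max_iter)
{
    std::complex<double> z = 0;
    for (int i = 0; i < max_iter; ++i) {
        z = z * z + c;
        if (std::norm(z) > 4.0)    // |z|^2 > 4  <=>  |z| > 2
            return i;
    }
    return max_iter;
}

// Probe a small set of sample points (e.g. taken from the "deepest" boundary
// area of the previous generation) and keep doubling max_iter until fewer
// than `tol` of the samples change status (escaped vs. not escaped).
int chooseMaxIter(const std::vector<std::complex<double>>& samples,
                  int start_iter = 1000, double tol = 0.01, int hard_cap = 1 << 20)
{
    int max_iter = start_iter;
    std::vector<bool> inside(samples.size());
    for (std::size_t i = 0; i < samples.size(); ++i)
        inside[i] = (escapeTime(samples[i], max_iter) == max_iter);

    while (max_iter < hard_cap) {
        int next_iter = max_iter * 2;
        std::size_t changed = 0;
        for (std::size_t i = 0; i < samples.size(); ++i) {
            bool now_inside = (escapeTime(samples[i], next_iter) == next_iter);
            if (now_inside != inside[i]) ++changed;
            inside[i] = now_inside;
        }
        max_iter = next_iter;
        if (changed < tol * samples.size())
            break;    // classification has stabilised; stop raising the limit
    }
    return max_iter;
}
```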
But my best strategy was to avoid iterating wherever possible:
Away from the boundary of the Mset, areas of equal depth can be "contoured" and then filled with that depth. It was not an easy algorithm. Basically I followed a raster scan, but when I detected a boundary of iteration change (examining all the neighbours to make sure I wasn't close to the edge of the Mset), I would switch to a curve-stitching method to iterate around a contour back to where it started (obviously not recalculating spots I had already done), and then make a second pass filling in the raster lines within the contour with that iteration level. It was fraught with leaks, but eventually I cracked it.
Within the Mset, I followed the same approach, because the very last thing you want to do is to plough across vast areas and hit the iteration limit.
The difficult area is close to the boundary, where the iteration results can't be related to smooth contours shared with the neighbours. The contour-stitching method won't work here, since there is only ever one pixel of a particular depth.
The contour method will also have faults on the lower or Mset sides of this region, but since this area looks chaotic until you zoom deeper, I lived with that.
So having said all that, I simply set the iteration depth as high as I can tolerate, but perhaps you can combine my first paragraph with the area-filling techniques.
BTW, colouring the region adjacent to the Mset looks terrible when a smooth animated playback of the zoom is attempted. For that reason I coloured this area in greyscale, by comparing with neighbours. If there was too much difference, I coloured it 0x808080 at first, then adapted that depending on the predominant depth of the neighbours. All of which required fine tuning!
I'm combining A* pathfinding with a steering AI so I can make the movement look more smooth and natural. To do this, I'm calculating the path from the enemy to the player and using checkpoints on the path for the steering AI to move to. However, from what I have seen, the only way to get the x and y values of a certain point on a path is to use path_get_point_x(path, n) to get the x coordinate of the nth point of the path. But, from what I've seen, the number of points in a path is far too low for me to accurately move the enemy around obstacles. Sometimes the enemy goes through obstacles to get to the next point even though the path traces around the obstacle. I noticed there is a variable called path_position that is a number from 0-1 representing how far along the path you are (1 being finished). Is there a way to use that to predict where the player will be at position 0.3 if they're at position 0.25, for example?
Most of the time objects will take the quickest path, or path of least resistance. Check precise collision detection for tiles that border the pathways and see if that helps keep objects out of collision objects. As for predicting where they will be, you can multiply the speed of the object by the frames per second.
I am working with disparity maps (1024 x 768) obtained via stereo and I am able to get point clouds with XYZRGB pcl::Points. However, not all pixels in the disparity map have valid depth, so there will never be 1024 x 768 = 786432 XYZRGB points. Fortunately I am able to save the point clouds unorganized (i.e. height = 1). Unfortunately, some normal estimation methods, etc., are tailored for organized point clouds. How can I create organized point clouds from this?
I believe that this is not possible.
First of all, an unorganized point cloud (PC) is just a list of points, in arbitrary order, written to a file.
An organized PC, on the other hand, carries information about the order in which the original points were obtained by the depth camera, along with some other information. This information is stored in what we can call a grid.
Once you destroy this grid by omitting some points, there is no algorithm that can put it back together as it originally was.
You can use other methods provided by PCL that do not take an organized PC as an argument. The result will be the same as if you had used an organized point cloud, only a little slower (depending on the size of your input cloud).
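For example, normal estimation works on an unorganized cloud with a KdTree search instead of the organized-only integral-image variant. A minimal sketch, following the standard PCL normal estimation tutorial (the search radius is just an example value):

```cpp
// Sketch: normal estimation on an unorganized cloud using a KdTree search.
// "cloud" is assumed to be the unorganized XYZRGB cloud from the question.
#include <pcl/point_types.h>
#include <pcl/features/normal_estimation.h>
#include <pcl/search/kdtree.h>

pcl::PointCloud<pcl::Normal>::Ptr
estimateNormals(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& cloud)
{
    pcl::NormalEstimation<pcl::PointXYZRGB, pcl::Normal> ne;
    ne.setInputCloud(cloud);

    // A KdTree works on any cloud, organized or not (just slower than the
    // integral-image methods that need the organized grid).
    pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZRGB>());
    ne.setSearchMethod(tree);
    ne.setRadiusSearch(0.03);   // neighbourhood radius in metres; tune for your data

    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.compute(*normals);
    return normals;
}
```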
I assume that you do have the calibration parameters that are necessary to transform the image points and their depth into 3D points, right?
In this case, you simply create a 2D (organized) point cloud and do the following for each pixel of the disparity map (a code sketch follows below):
If the point is valid:
set the corresponding point in the point cloud to the 3D point
else:
set the corresponding point in the cloud to NaN (i.e. a 3D point with NaN as coordinates)
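A minimal sketch of that loop with PCL, assuming a pinhole model with focal length f, baseline b and principal point (cx, cy) as the calibration parameters, and the disparity map held in a cv::Mat. The sketch uses pcl::PointXYZ for brevity; the RGB values can be copied in the same loop:

```cpp
// Sketch: build an organized cloud the size of the disparity map, writing a
// NaN point wherever the disparity is invalid. f, b, cx, cy stand for the
// (assumed) calibration: focal length, baseline and principal point.
#include <limits>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <opencv2/core.hpp>

pcl::PointCloud<pcl::PointXYZ>::Ptr
disparityToOrganizedCloud(const cv::Mat& disparity,   // CV_32F, 1024 x 768
                          float f, float b, float cx, float cy)
{
    const float bad = std::numeric_limits<float>::quiet_NaN();

    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    cloud->width    = disparity.cols;
    cloud->height   = disparity.rows;   // height > 1  =>  organized
    cloud->is_dense = false;            // the cloud may contain NaNs
    cloud->points.resize(cloud->width * cloud->height);

    for (int row = 0; row < disparity.rows; ++row) {
        for (int col = 0; col < disparity.cols; ++col) {
            pcl::PointXYZ& p = cloud->at(col, row);   // (column, row) access
            float d = disparity.at<float>(row, col);
            if (d > 0.0f) {                           // valid disparity
                float z = f * b / d;
                p.x = (col - cx) * z / f;
                p.y = (row - cy) * z / f;
                p.z = z;
            } else {                                  // invalid: mark as NaN
                p.x = p.y = p.z = bad;
            }
        }
    }
    return cloud;
}
```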
I'm very new to PCL.
I am trying to detect the floor under an object, to check whether the object has toppled over or is positioned horizontally.
I've checked API and found the method: pcl::PointCloud< T >::at.
It seems like I could get the Z-value of a point using at. Is that correct?
If yes, I'm confused about how it should work. Mathematically a point is infinitely small, and on my scans I see that the point density gets smaller the more distant the points are in the Z direction.
Will at always return a point? Is the value the mean of the nearest physical points?
As referenced in the documentation, pcl::PointCloud< T >::at returns the information of a single point (the coordinates plus other data, depending on the point format) given column and row indices (roughly the X, Y in the depth image). For this reason, this method only works on organized clouds.
Unfortunately, not every point is a valid point. Unless you filter the point cloud, you could find invalid measurements (points which have NaN components). This is pretty normal, just discard those points using a filter. Your intuition is right, the point density is smaller the further away you go from the sensor.
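For example (a minimal sketch; cloud is assumed to be an organized pcl::PointCloud<pcl::PointXYZ>):

```cpp
// Sketch: read a single point from an organized cloud with at(column, row)
// and skip it if the measurement is invalid (NaN coordinates).
#include <cmath>
#include <cstdio>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

void printDepthAt(const pcl::PointCloud<pcl::PointXYZ>& cloud, int col, int row)
{
    // at() only works on organized clouds (height > 1); it throws otherwise.
    const pcl::PointXYZ& p = cloud.at(col, row);

    if (!std::isfinite(p.z)) {
        // invalid measurement at this pixel; nothing was sensed here
        return;
    }
    // p.z is the depth of that single measured point, not an average of
    // neighbouring points.
    std::printf("depth at (%d, %d) = %f\n", col, row, p.z);
}
```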
As for what you're trying to achieve, you should take a look at the planar segmentation tutorial on the PCL website and at the Table Object Detector software by Nicolas Burrus. The latter extracts a plane, and the clusters of objects on top of it.
I'm writing a 2D game, in which I would like to have crate-like objects. These objects would move around, like real crates do. I have a hypothetical idea of how I would like to achieve that:
Basically I'd store the boxes' corners' coordinates together with their force and velocity unit vectors, and in every update I'd do the following steps:
1. Apply the forces (gravity, collision forces, etc.) accordingly.
2. Modify the velocity vector based on the force.
3. Move every corner of the box, like so:
4. Repeat step 3 for every corner, so I get the real movement of the box.
My questions are: Is this approach heading in the right direction? Is this theory even correct? If not, what would be the correct way to move a box around based on vectors in a 2D environment?
Just to clarify: I'm only dragging corner "A" in the picture, but I want to repeat the dragging for every other corner, with their own vectors. By "dragging" I mean the algorithm I just stated.
Keeping each corner's coordinate and speed makes no sense as you would be storing lots of redundant information. Boxes are rigid objects, which means that there are constraints that must be satisfied at any time instant, namely the distance between any two given corners is fixed. This also translates to a constraint that links the velocities of all four corners and so they are not independent values. With rigid bodies the movement of any point is the sum of two independent movements - the linear movement of the centre of mass (CM) and the rotation around a fixed axis - often, but not always, chosen to be the one that goes through the CM. Hence you only need to store the position and the velocity of the crate's CM (which coincides with the geometric centre of the crate) as well as the angle of rotation and the rate of rotation around the CM.
As for the motion, the gravity field is a constant vector field and hence cannot induce rotation in symmetric objects like those rectangular crates. Instead it only produces accelerated vertical motion of the CM. The same goes for all external forces - one has to take their vector sum and apply it to the CM. Only external forces whose line of action does not go through the CM produce torque and thus cause rotation. Such forces are any external pushes/pulls or the reaction forces that arise when crates collide with each other or hit the ground / a wall. Computing the torque due to external forces is easy, but computing reaction forces can be quite an involved process because of the constrained dynamics that has to be employed. Once the torque has been computed, one has to divide it by the moment of inertia of the crate in order to get the angular acceleration. Often it is more convenient to use another axis and not the one that goes through the CM - Steiner's theorem (the parallel axis theorem) can be employed in this case to compute the moment of inertia around that other axis.
To summarise:
all forces acting on the crate are first added together (as vectors), and the resultant force (divided by the mass of the crate) determines the linear acceleration of the CM;
the torque of all forces is computed and then used to determine the angular acceleration around a given axis (a minimal sketch of one such update step follows below).
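Here is a minimal sketch of one update step under this model. All names (Crate, applyForce, step) are illustrative, not an existing API, and a y-up coordinate system is assumed:

```cpp
// Sketch of one simulation step for a rigid 2D crate, storing only the state
// described above: CM position/velocity plus angle/angular velocity.
#include <cmath>

struct Vec2 { float x, y; };

struct Crate {
    // state
    Vec2  pos    {0.0f, 0.0f};  // centre of mass position
    Vec2  vel    {0.0f, 0.0f};  // centre of mass velocity
    float angle  {0.0f};        // rotation around the CM (radians)
    float angVel {0.0f};        // angular velocity (radians per second)
    // constants
    float mass    {1.0f};
    float inertia {1.0f};       // moment of inertia around the CM: m*(w*w + h*h)/12 for a box
    // per-step accumulators
    Vec2  force  {0.0f, 0.0f};
    float torque {0.0f};
};

// Apply a force at a world-space point; only a force whose line of action
// misses the CM produces torque (2D cross product r x F).
void applyForce(Crate& c, Vec2 f, Vec2 point)
{
    c.force.x += f.x;
    c.force.y += f.y;
    Vec2 r = { point.x - c.pos.x, point.y - c.pos.y };
    c.torque += r.x * f.y - r.y * f.x;
}

// Semi-implicit Euler integration over a timestep dt.
void step(Crate& c, float dt)
{
    // gravity acts on the CM and produces no torque for a symmetric box
    applyForce(c, Vec2{0.0f, -9.81f * c.mass}, c.pos);

    c.vel.x  += (c.force.x / c.mass) * dt;
    c.vel.y  += (c.force.y / c.mass) * dt;
    c.angVel += (c.torque / c.inertia) * dt;

    c.pos.x  += c.vel.x * dt;
    c.pos.y  += c.vel.y * dt;
    c.angle  += c.angVel * dt;

    c.force  = {0.0f, 0.0f};
    c.torque = 0.0f;
}

// Corner positions are derived from the state when needed, e.g. for the
// corner at local offset (hx, hy) from the CM:
Vec2 cornerWorld(const Crate& c, float hx, float hy)
{
    float s = std::sin(c.angle), co = std::cos(c.angle);
    return { c.pos.x + hx * co - hy * s,
             c.pos.y + hx * s + hy * co };
}
```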
See here for some sample problems of rigid body motion and how the physics is actually worked out.
Given your algorithm, if by "velocity vector" you actually mean "the velocity of the CM", then 1 would be correct - all corners move in the same direction (the linear motion of the CM). But 2 would not always be correct - the proper angle of rotation depends on the time over which the torque was applied (e.g. the simulation timestep), and one has to take into account that the lever arm length changes in the meantime as the crate rotates.