Add text next to a Point in PCL visualizer - point-cloud-library

I have an application where I successfully plot 2D laser range data from a LiDAR in real time and run PCL's Euclidean clustering algorithm to paint the cluster points in a different color. I would, however, like to add text next to each detected cluster stating its distance from the sensor. I have the coordinates of the centroid point of each detected cluster, but I run into trouble when I try to use addText:
bool pcl::visualization::PCLVisualizer::addText (const std::string &text, int xpos, int ypos, double r, double g, double b, const std::string &id = "")
text: Text to be printed in window
xpos: position in x
ypos: position in y
r: red
g: green
b: blue
id: Text ID tag
It seems that addText() positions the text at pixel x and y values instead of real-world values (meters). However, PCL's other methods, such as addPoint(), addCircle(), etc., do place their data based on real-world measurements.
Does anyone have experience transforming spatial coordinates to pixels in the PCL visualizer, or has anyone successfully plotted text in other ways?
Below is a screenshot of my application. Clusters are drawn in red with a white circle around the centroid. At the bottom left I'm printing the distance of each cluster. As can be seen, the labels are just stacked on top of each other instead of each appearing next to its own white circle.
Thankful for any help
regards
Screenshot

Okay, I got it to work with the function pcl::visualization::PCLVisualizer::addText3D.
There is no support for erasing/updating all text fields that have been added over time, though, so you always need to know the ID tag of each text and iterate through them to erase/update them.
You can delete text with pcl::visualization::PCLVisualizer::removeText3D.
Do keep in mind, however, that text ID tags share the same namespace as other ID tags (e.g. names you have given to circles, clouds, cylinders, etc.). This means that adding a text with the ID "abc" will fail if a circle named "abc" is already present in your window.
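A minimal sketch of this per-cluster labeling, assuming a viewer and one centroid per cluster (the helper name, the "cluster_label_" ID scheme, and the 0.1 text scale are illustrative, not from my original code):

#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/point_types.h>
#include <cmath>
#include <string>
#include <vector>

// Label every cluster centroid with its distance from the sensor.
// `viewer` and `centroids` are assumed to exist in your application.
void labelClusters(pcl::visualization::PCLVisualizer &viewer,
                   const std::vector<pcl::PointXYZ> &centroids)
{
    for (std::size_t i = 0; i < centroids.size(); ++i)
    {
        const pcl::PointXYZ &c = centroids[i];
        const double dist = std::sqrt(c.x * c.x + c.y * c.y + c.z * c.z);

        // One deterministic ID per cluster index so the label can be
        // refreshed each frame; it must not collide with any shape ID.
        const std::string id = "cluster_label_" + std::to_string(i);
        viewer.removeText3D(id);                      // drop the stale label, if any
        viewer.addText3D(std::to_string(dist) + " m", // label text
                         c,                           // 3D anchor in meters
                         0.1,                         // text scale (illustrative)
                         1.0, 1.0, 1.0,               // RGB: white
                         id);
    }
}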
Below is a visual example of how it looks now: Obstacle distance plotting

Related

DICOM why need overlay and how to read it

Just wondering why we need the overlay and when we will need it.
I have a Scout image with an overlay. What do the dots mean, and what do the numbers or fractions mean?
How are these numbers drawn on the image?
The DICOM standard allows two specific types of overlays (graphic and ROI) along with the image; overlays are stored as a 1-bit image in the Overlay Data (60xx,3000) attribute. A dataset can have up to 16 separate overlay planes (using the repeating-groups encoding).
An overlay plane that represents a region of interest (ROI) has the value "R" in its Overlay Type (60xx,0040) attribute, and ROI Area (60xx,1301), ROI Mean (60xx,1302) and ROI Standard Deviation (60xx,1303) can hold the corresponding values for the ROI. All bits representing the ROI are set to 1, marking the pixels of the actual image data that fall inside the ROI boundary.
A graphic overlay has the value "G" in its Overlay Type (60xx,0040) attribute and is used for expressing reference marks (reference lines), graphic annotations, bitmap text, etc. Again, all visible values in an overlay plane are set to 1.
Overlay Rows (60xx,0010) and Overlay Columns (60xx,0011) specify the width and height of the overlay plane. Overlay Bits Allocated is always 1 and Overlay Bit Position is 0 (other values were used in previous versions of the standard and have been retired). Overlay Origin (60xx,0050) describes the position of the first overlay point with respect to the pixels in the image; 1\1 denotes the upper-left pixel of the image.
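Reading a plane then comes down to unpacking those bits. A minimal sketch, assuming the raw Overlay Data bytes have already been extracted with your DICOM toolkit (the function name is illustrative, and the least-significant-bit-first ordering is my reading of the standard's packing rules):

#include <cstdint>
#include <vector>

// Unpack a 1-bit DICOM overlay plane into one byte per pixel.
// `packed` holds the raw Overlay Data value; rows/cols come from
// Overlay Rows (60xx,0010) and Overlay Columns (60xx,0011). Pixels
// are stored row-major, least significant bit first within each byte.
std::vector<std::uint8_t> unpackOverlay(const std::vector<std::uint8_t> &packed,
                                        int rows, int cols)
{
    std::vector<std::uint8_t> plane(static_cast<std::size_t>(rows) * cols, 0);
    for (std::size_t i = 0; i < plane.size(); ++i)
        plane[i] = (packed[i / 8] >> (i % 8)) & 1;  // 1 = overlay pixel set
    return plane;
}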
Overlays can be used to display any data over an image. You could, for example, allow users to make annotations or graphics marks. You cannot mark the original data, so the overlay is stored in a separate layer.
In your case, the creator of the overlay should explain its meaning.
Here, the meaning of the overlay is: 2/16 -> series number 2 and slice number 16.

Extract pixel coordinates in scilab

I have extracted an edge using image processing, and then selected pixel coordinates on the extracted edge using xclick. Is this correct, or do I need to reverse the y-axis coordinates? (The extracted edge is white on a black background.)
I want to extract the pixel coordinates of the edge automatically, not by mouse selection. Is there any command available in Scilab for this? (I use a Canny edge detector and a morphological filter to extract the edge.)
Please give me some suggestions.
Thanks
1.) Whether to reverse the y coordinate or not depends on the further processing. Any coordinate system can be used if you only need relative measurements and the true orientation of your features is not important (e.g. reversing top and bottom makes no difference if you simply want to count objects or droplets). However, if you want to indicate the features you found by plotting a dot, a line, or a rectangle (e.g. with plot2d or xrect) or a number (e.g. with xnumb) over the image, then it's necessary to match the two coordinate systems. I recommend this second option and plotting your results over the original image, since this is the easiest way to check them.
2.) Automatic coordinate extraction can be done with the find function: it returns the indices of the matrix where the expression is true.
IM=[0,0,0,1;0,0,0,1;0,1,1,1;1,1,0,0]; //edge image, edge = 1, background = 0
disp(IM,"Edge image");
[row,col]=find(IM==1); //row & column indices where IM = 1 (= edge)
disp([row',col'],"Edge coordinates (row, col)");
If your "Egde image" marks the edges not with 1 (or 255, pure white pixel) but with a relatively high number (bright pixel), then you can modify the logical expression of the find function to detect pixels with a value above a certain threshold:
[row,col]=find(IM>0.8); //if edges > a certain threshold, e.g. 0.8
EDIT: For your specific image:
Try the following code:
imagefile="d:\Attila\PROJECTS\Scilab\Stackoverflow\MORPHOLOGICAL_FILTERING.jpg";
//you have to modify this path!
I=imread(imagefile);
IM=imcrop(I,[170,100,950,370]); //discard the thick white border of the image
scf(0); clf(0);
ShowImage(IM,'cropped image');
threshold=100; //try different values between 0-255 (black - white)
[row,col]=find(IM>threshold);
imheight=size(IM,"r"); //image height
row=imheight-row+1; //reverse y axes coordinates (0 is at top)
plot2d(col,row,style=0); //plot over the image (zoom to see the dots)
scf(1); clf(1); //plot separate graph
plot2d(col,row,style=0);
If you play with the threshold parameter, you will see how the darker or whiter pixels are found.

determine rectangle rotation point

I would like to know how to compute rotation components of a rectangle in space according to four given points in a projection plane.
This is hard to convey in a single sentence, so let me explain my needs.
I have a 3D world viewed from a static camera (located in <0,0,0>).
I have a known rectangular shape (a picture, actually) that I want to place in that space.
I can only define points (up to four) in a spherical/rectangular reference frame (the camera looking at <0°,0°> (spherical) or <0,0,1000> (rectangular)).
I consider the given polygon to be my rectangle rotated by (rX, rY, rZ). Three points are supposed to be enough; four points are probably over-constraining. I'm not sure for now.
I want to determine rX, rY and rZ, the rectangle rotation about its center.
--- My first attempt at solving this constraint problem was to fix the first point: given its spherical coordinates, I "project" it onto a camera-facing plane at z=1000. Quite easy; this gives me a point.
Then the second point is considered to lie on a segment from <0,0,0>, which admits an infinity of solutions; but I narrow this down by knowing the width (w) and height (h) of my rectangle: I then get two solutions for my second point, one "in front of" the first point and the other "far away"... I now have an edge of my rectangle. Two, in fact.
And from there, I don't know what to do. Even if I end up with my four points, I don't have a clue how to compute the equivalent rotation...
It's hard to be lost in Mathematics...
To give an idea of the goal of all this: I make photospheres and I want to "insert" images into them. For instance, my photo shows a TV screen, and I want to place a picture on that screen. I know my screen size (or I can guess it), I know the size of the image I want to place (it actually has the same aspect ratio), and I know the four screen corner positions in my space (spherical or Euclidean). My software allows me to place an image in the scene, rotate it as I want, and zoom it (to give a feeling of depth)... I can do all this manually, but it is a long trial-and-error process and never exact. I would like to be able to type in the screen corner positions and get the final image placement and rotation attributes in one click...
The question in pictures:
Images presenting steps of the problem
Note that on the page I present actual images from my app. I had to manually rotate and scale the picture to make it fit the screen, but it is not photoshopped. The parameters found are:
Scale: 0.86362
rX = 18.9375
rY = -12.5875
rZ = -0.105881
center position: <-9.55, 18.76, 1000>
Note: rotation is not enough to set the picture up; we also need scale and translation. I assume the scale can be found once a first edge is fixed (the first two points determine two candidate solutions as initial constraints, and since I then know the edge length and the picture's width and height, I can deduce the scale). But the software kindly allows me to modify the picture's width and height, so the real constraint is just to make sure the four points describe a rectangle in space, which is simple to check with vectors. The problem here seems to be placing the fourth point as a valid rectangle corner, and then deducing the rotation from that rectangle. As for the translation, it is the center (diagonal cross) of the points once they are fixed.
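For reference, this is the classic perspective pose-estimation problem. If OpenCV is available (an assumption; it is not the asker's software), solvePnP recovers the rotation and translation of a known rectangle from its four projected corners. A sketch with placeholder values:

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    // Known rectangle (w x h), corners in its own frame, centered on its middle.
    const double w = 4.0, h = 3.0;  // illustrative dimensions
    std::vector<cv::Point3d> object = {
        {-w / 2, -h / 2, 0}, {w / 2, -h / 2, 0},
        { w / 2,  h / 2, 0}, {-w / 2,  h / 2, 0}};

    // The four observed corners on the projection plane (placeholder values,
    // expressed relative to the optical axis).
    std::vector<cv::Point2d> image = {
        {-120, -80}, {110, -70}, {115, 90}, {-118, 85}};

    // Camera at the origin looking along +Z; f = 1000 mimics the question's
    // projection plane at z = 1000, with the principal point at <0,0>.
    const double f = 1000.0;
    cv::Mat K = (cv::Mat_<double>(3, 3) << f, 0, 0, 0, f, 0, 0, 0, 1);

    cv::Mat rvec, tvec;
    cv::solvePnP(object, image, K, cv::noArray(), rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);  // 3x3 rotation matrix; decompose into rX/rY/rZ
                             // with whatever Euler convention your software uses
    return 0;
}

With the rectangle's true size as part of the input, the scale question disappears, and tvec directly gives the rectangle's center position.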

Find the closest tile along a path

I have a tile-based game and I need to find the closest tile within a 32px radius. Say the user is at 400, 200 and clicks at 500, 400. I need to create a path or line from the player to the mouse position on click, and the closest tile underneath that path within 32px (or 2 tiles) must be chosen. The map is tiled at 16px.
A function to check whether a tile exists at a given tile position is available: Map.at(x,y).
I just don't know the maths to use to work this out.
The black blocks are within 16px, the red ones within 32px. The grey block is the tile to be destroyed, and the blue line is the invisible path between the player and the mouse.
If you work in terms of tile coordinates, the problem becomes a line-drawing problem from the tile the user is at to the tile the mouse was clicked in. A line-drawing algorithm generates, in sequence, all the tiles along a straight-line path between those two tiles. Just pick the first one where Map.at(x,y) satisfies your requirements and exit the line-drawer.
A number of line drawing algorithms exist. Two simple ones are DDA and Bresenham's. Both generate the discrete "pixels" (tiles, in your question) in the correct order. The DDA is the simple choice if floating point arithmetic can be used in your application. Bresenham's uses only integer math.
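A minimal DDA-style sketch (Map.at and the 16px tile size come from the question; the stub and the other names are illustrative):

#include <algorithm>
#include <cmath>
#include <optional>

// Stand-in for the question's Map.at(x, y): true if a tile exists there.
bool mapAt(int x, int y) { return false; }

struct Tile { int x, y; };

// Step along the player->mouse line in small increments and return the
// first existing tile, giving up beyond maxDist pixels (e.g. 32).
std::optional<Tile> firstTileOnPath(double px, double py,
                                    double mx, double my, double maxDist)
{
    const double tileSize = 16.0;                    // map is tiled at 16 px
    const double len = std::hypot(mx - px, my - py);
    const int steps = std::max(1, static_cast<int>(std::ceil(len / tileSize)) * 2);

    for (int i = 0; i <= steps; ++i)
    {
        const double t = static_cast<double>(i) / steps;
        if (t * len > maxDist) break;                // outside the search radius
        const int tx = static_cast<int>(std::floor((px + t * (mx - px)) / tileSize));
        const int ty = static_cast<int>(std::floor((py + t * (my - py)) / tileSize));
        if (mapAt(tx, ty))
            return Tile{tx, ty};
    }
    return std::nullopt;
}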
With a lot of games it's not necessarily a straight line, but a search for the shortest path. If that's where you are heading, then you might want to look at the A* algorithm.

Matlab Bwareaopen equivalent function in OpenCV

I'm trying to find a similar or equivalent function to Matlab's bwareaopen in OpenCV.
In Matlab, bwareaopen(image, P) removes from a binary image all connected components (objects) that have fewer than P pixels.
In my 1-channel image I simply want to remove small regions that are not part of bigger ones. Is there any trivial way to solve this?
Take a look at cvBlobsLib; it has functions to do what you want. In fact, the code example on the front page of that link does exactly what you want, I think.
Essentially, you can use CBlobResult to perform connected-component labeling on your binary image, and then call Filter to exclude blobs according to your criteria.
There is no such function, but you can:
1) find the contours
2) compute each contour's area
3) filter out all external contours with an area less than your threshold
4) create a new black image
5) draw the remaining contours on it
6) mask it with the original image
A sketch of these steps follows the list.
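A minimal OpenCV (C++) sketch of those steps; the function name and the minArea threshold are assumptions you tune to your data:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Remove connected components smaller than minArea from a binary image
// by redrawing only the sufficiently large contours onto a black mask.
cv::Mat removeSmallRegions(const cv::Mat &binary, double minArea)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary.clone(), contours,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::Mat mask = cv::Mat::zeros(binary.size(), CV_8UC1);  // new black image
    for (std::size_t i = 0; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) >= minArea)        // area filter
            cv::drawContours(mask, contours, static_cast<int>(i),
                             cv::Scalar(255), cv::FILLED);

    cv::Mat result;
    cv::bitwise_and(binary, mask, result);  // mask with the original image
    return result;
}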
I had the same problem and came up with a function that uses connectedComponentsWithStats():
import cv2

def bwareaopen(img, min_size, connectivity=8):
    """Remove small objects from binary image (approximation of
    bwareaopen in Matlab for 2D images).

    Args:
        img: a binary image (dtype=uint8) to remove small objects from
        min_size: minimum size (in pixels) for an object to remain in the image
        connectivity: pixel connectivity; either 4 (connected via edges) or 8
            (connected via edges and corners)

    Returns:
        the binary image with small objects removed
    """
    # Find all connected components (called here "labels")
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
        img, connectivity=connectivity)

    # Check the size of all connected components (area in pixels)
    for i in range(num_labels):
        label_size = stats[i, cv2.CC_STAT_AREA]

        # Remove connected components smaller than min_size
        if label_size < min_size:
            img[labels == i] = 0

    return img
For clarification regarding connectedComponentsWithStats(), see:
How to remove small connected objects using OpenCV
https://www.programcreek.com/python/example/89340/cv2.connectedComponentsWithStats
https://python.hotexamples.com/de/examples/cv2/-/connectedComponentsWithStats/python-connectedcomponentswithstats-function-examples.html
The closest OpenCV solution to your question is morphological opening or closing.
Say you have white regions in your image that you need to remove. You can use morphological opening. Opening is erosion + dilation, in that order. Erosion is when the white regions in your image are shrunk. Dilation is (the opposite) where white regions in your image are enlarged. When you perform an opening operation, your small white region is eroded until it vanishes. Larger white features will not vanish but will be eroded from the boundary. The subsequent dilation step restores their original size. However, since the small element(s) vanished during the erosion step, they will not appear in the final image after dilation.
For example, consider this image, where we want to remove the small white regions but retain the 3 large white ellipses. Running the following code removes the small white regions and displays the clean image.
import cv2
import numpy as np

im = cv2.imread('sample.png')
clean = cv2.morphologyEx(im, cv2.MORPH_OPEN, np.ones((10, 10), np.uint8))
cv2.imshow("Clean image", clean)
cv2.waitKey(0)
The clean image output would be like this.
The command above uses a square block of size 10 as the kernel. You can modify this to suit your requirement. You can even generate a more advanced kernel using the function getStructuringElement().
Note that if your image is inverted, i.e. black noise on a white background, you simply need to use the morphological closing operation (the cv2.MORPH_CLOSE method) instead of opening. This reverses the order of operations: the image is first dilated and then eroded.
