pcl::ConcaveHull<PointXYZRGB> reconstruct returns a point cloud with all RGB values as black - point-cloud-library

When I run pcl::ConcaveHull::reconstruct on a PointXYZRGB cloud whose RGB values were previously set from the scene's colors, the returned cloud has RGB values that are all zero (black).

Use ConcaveHull::setKeepInformation(true) before calling reconstruct; with it enabled, the hull copies the original point data (including the RGB fields) into the output cloud instead of leaving those fields at their defaults.
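A minimal sketch of that fix, assuming a cloud already filled with colored points and standard PCL headers (the alpha value is an arbitrary placeholder you would tune for your data):

```cpp
#include <pcl/point_types.h>
#include <pcl/surface/concave_hull.h>

int main() {
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
  // ... fill `cloud` with colored points ...

  pcl::ConcaveHull<pcl::PointXYZRGB> hull;
  hull.setInputCloud(cloud);
  hull.setAlpha(0.1);              // hull "tightness"; tune for your data
  hull.setKeepInformation(true);   // keep original point data (incl. RGB) in the output
  pcl::PointCloud<pcl::PointXYZRGB> hull_points;
  hull.reconstruct(hull_points);   // hull_points now retains the input colors
  return 0;
}
```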

Related

How to add normal map to material in Qt 3D?

I'm trying to apply a normal map to QDiffuseSpecularMaterial or QMetalRoughMaterial. I use QTextureImage to load the textures. When I apply the normal map, the material just becomes black; however, I don't have any issues with the other maps (baseColor, ambientOcclusion, metalness and roughness).
What I've tried without success:
Changed the direction of the vertex normals, but the vertex normals are correct
Swapped the RGB channels in the normal map texture - all the combinations. Also tried a grayscale texture
Mapped the values of the texture loaded in QPaintedTextureImage from the (0, 255) range to (0, 1)
Thought that normal maps might not work with QPointLight, so I also added a QDirectionalLight to the scene

max dot size in R ggplot

I am trying to write an R script that saves a series of maps with dots on them. For the maps I used ggmap, and geom_point for the dots; there is one map for each day in a certain time range.
The size of the dots depends on a certain variable, but I have a problem scaling them. I later need to turn all the maps into an animation of the dots' size changing through time, which means I need a single global scale for the dots, ranging from the smallest value (zero) to the largest value of that variable (the global maximum). In most maps that maximum is not reached.
I tried with:
scale_size_area(max_size = max(my_variable))
because scale_size_area allows plotting very tiny dots for the 0 values. I was hoping this would scale the dots correctly, using the global maximum as the maximum size, but it doesn't seem to work: every map still has a locally biggest dot drawn at the same size as the biggest dot of every other map. (An example image showed two points with different values drawn at the same size.)
I hope I explained my problem clearly. I'd be glad to hear some suggestions.
To set the maximum and minimum values of a scale (size or otherwise), use limits:
scale_size_area(limits = c(NA, max(your_variable)))
The NA tells ggplot to compute the minimum from the data.

Is it possible to use color.scale to create color graded polygons in a polar.plot with plotrix?

I have been trying to map sound levels in multiple directions from a single sound source. I have average dB readings from 45 degree intervals around the source. I have plotted these using the polar.plot function in the plotrix package with my data represented as a polygon.
I would like to color the polygon so that higher values are more easily distinguished from lower ones using a color gradient (e.g. red for higher values, green for lower ones). I have attempted to do this using the color.scale function (also from plotrix).
dB <- runif(9, min = 17, max = 24)
azimuth <- seq(0, 360, by = 45)
plot1 <- polar.plot(dB, azimuth, main = "Directional Signal Levels (dB)",
    start = 90, clockwise = TRUE, rp.type = "polygon",
    radial.lim = c(0, 24),
    poly.col = color.scale(dB, c(0, 1, 1), c(1, 1, 0), 0),
    boxed.radial = FALSE)
However, this seems to only generate a solid red polygon.
Is there a way to get the polygon to use the specified color gradient I have provided? Or is there another package that will allow me to specify the color gradient for a polygon if this one will not?

Extract pixel coordinates in scilab

I extracted an edge using image processing, then selected pixel coordinates on the extracted edge using xclick. Is this correct, or do I need to reverse the y-axis coordinate? (The extracted edge is white on a black background.)
I want to extract the pixel coordinates of the edge automatically, not by mouse selection. Is there a command available in Scilab for this? (I use a Canny edge detector and a morphological filter to extract the edge.)
Please give me some suggestions
Thanks
1.) Whether to reverse the y coordinate or not depends on the further processing. Any coordinate system can be used if you only need relative measurements and the true orientation of your features is not important (e.g. reversing top and bottom makes no difference if you simply want to count objects or droplets). However, if you want to mark the features you found by plotting a dot, a line, or a rectangle (e.g. with plot2d or xrect) or a number (e.g. with xnumb) over the image, then you have to match the two coordinate systems. I recommend this second option and plotting your result over the original image, since that is the easiest way to check your results.
2.) Automatic coordinate extraction can be done with the find function: it returns the indices of the matrix elements where the expression is true.
IM=[0,0,0,1;0,0,0,1;0,1,1,1;1,1,0,0]; //edge image, edge = 1, background = 0
disp(IM,"Edge image");
[row,col]=find(IM==1); //row & column indices where IM = 1 (= edge)
disp([row',col'],"Edge coordinates (row, col)");
If your edge image marks the edges not with 1 (or 255, a pure white pixel) but with a relatively high value (a bright pixel), then you can modify the logical expression of the find function to detect pixels with a value above a certain threshold:
[row,col]=find(IM>0.8); //if edges > a certain threshold, e.g. 0.8
EDIT: For your specific image:
Try the following code:
imagefile="d:\Attila\PROJECTS\Scilab\Stackoverflow\MORPHOLOGICAL_FILTERING.jpg";
//you have to modify this path!
I=imread(imagefile);
IM=imcrop(I,[170,100,950,370]); //discard the thick white border of the image
scf(0); clf(0);
ShowImage(IM,'cropped image');
threshold=100; //try different values between 0-255 (black - white)
[row,col]=find(IM>threshold);
imheight=size(IM,"r"); //image height
row=imheight-row+1; //reverse y axes coordinates (0 is at top)
plot2d(col,row,style=0); //plot over the image (zoom to see the dots)
scf(1); clf(1); //plot separate graph
plot2d(col,row,style=0);
If you play with the threshold parameter, you will see how the darker or whiter pixels are found.

How to replicate adding/mixing of HSV values in RGB space

At the moment I'm doing a colourizing effect using additive blending in HSV space: I have a diff value in HSV space that is added to each pixel of an image texture to get the desired color effect. But this is turning out to be expensive, because the fragment shader has to do two costly conversions for the addition:
RGB -> HSV
HSV addition
HSV -> RGB
Is there a better way to do this? The diff value will be provided in HSV only. And the final color representation is in RGB to draw.
Many Thanks,
Sak
You can get a similar effect to HSV manipulations by using a color matrix in RGB. For example, a rotation around the r=g=b axis is similar to a hue addition (adding x degrees in the hue channel is similar to a rotation of x degrees around r=g=b in RGB). A translation along the r=g=b axis is similar to a value addition (I believe adding x to the value channel should be similar to adding x to each of r, g, and b). And a uniform scale perpendicular to the r=g=b axis is similar to a saturation addition; I don't know off the top of my head the exact translation between adding x to saturation and scaling in RGB, but it shouldn't be too hard to work out. You can compose these matrices into a single matrix and implement the whole effect as one matrix multiply on the RGB value.
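The hue-rotation part of that idea can be sketched in plain Python using Rodrigues' rotation formula about the unit axis (1,1,1)/sqrt(3); the function names here are mine, and a real shader would bake the resulting matrix into a uniform rather than compute it per fragment:

```python
import math

def hue_rotation_matrix(degrees):
    """3x3 matrix rotating RGB space around the r=g=b axis.

    Rotating RGB by `degrees` around r=g=b approximates adding
    `degrees` to the hue channel in HSV.
    """
    theta = math.radians(degrees)
    c, s = math.cos(theta), math.sin(theta)
    k = 1.0 / math.sqrt(3.0)  # each component of the unit axis (k, k, k)
    # Rodrigues: R = cos(t)*I + sin(t)*[k]_x + (1 - cos(t)) * k k^T
    d = c + k * k * (1 - c)          # diagonal entries
    p = k * k * (1 - c) + k * s      # off-diagonal, +sin term
    m = k * k * (1 - c) - k * s      # off-diagonal, -sin term
    return [
        [d, m, p],
        [p, d, m],
        [m, p, d],
    ]

def apply(mat, rgb):
    """Multiply a 3x3 matrix by an RGB triple."""
    return tuple(sum(mat[i][j] * rgb[j] for j in range(3)) for i in range(3))
```

As a sanity check, rotating pure red by 120 degrees lands exactly on pure green, and grays (which lie on the r=g=b axis itself) are left unchanged for any angle, which matches the hue-addition analogy.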
