GLSL Shader - 2D Rim Lighting Effect

How can I create a gradient like the one seen in the image below using GLSL?
What is the best method for achieving a smooth transition from opaque at the edges of the polygon being drawn to transparent at its center?

The effect in your reference image is produced by what is called a distance transform. It is a very useful and common operation, widely applied in image processing, computer vision, robot path planning, and so on. For each pixel of an image, it computes the 2D Euclidean distance from that pixel to the nearest edge of the polygon, and the output is an image whose pixel values encode those minimum distances. To visualize the result, the distances are mapped to gray scale: in your reference image, the bright ridge is made of the pixels farthest from the boundary, while the dark areas hold much smaller values because they lie very close to the polygon boundary.
In terms of implementation, a brute-force approach is to draw the 2D shape you want to transform and, in the fragment shader, compute the distance from the current fragment position to every edge of the polygon, writing the minimum value to a framebuffer. The polygon's geometry can be stored in another texture. You end up with a 2D texture whose pixel values encode the shortest distance to the edges of the polygon.
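For illustration, here is a minimal CPU-side C++ sketch of that per-pixel computation (Vec2, distanceToSegment and distanceToPolygon are names invented for this example); inside a fragment shader the same point-to-segment math would run per fragment, with the polygon vertices supplied through a uniform array or a texture.

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    struct Vec2 { float x, y; };

    // Euclidean distance from point p to the segment a-b.
    float distanceToSegment(Vec2 p, Vec2 a, Vec2 b) {
        float abx = b.x - a.x, aby = b.y - a.y;
        float apx = p.x - a.x, apy = p.y - a.y;
        float lenSq = abx * abx + aby * aby;
        // Project p onto the segment and clamp to its endpoints.
        float t = lenSq > 0.0f
            ? std::clamp((apx * abx + apy * aby) / lenSq, 0.0f, 1.0f)
            : 0.0f;
        float dx = p.x - (a.x + t * abx), dy = p.y - (a.y + t * aby);
        return std::sqrt(dx * dx + dy * dy);
    }

    // Minimum distance from p to any edge of a closed polygon.
    float distanceToPolygon(Vec2 p, const std::vector<Vec2>& polygon) {
        float best = std::numeric_limits<float>::max();
        for (size_t i = 0; i < polygon.size(); ++i)
            best = std::min(best,
                distanceToSegment(p, polygon[i], polygon[(i + 1) % polygon.size()]));
        return best;
    }

Mapping that distance to an alpha or gray value (opaque near zero distance, fading out as the distance grows) gives exactly the edge-to-center gradient asked about.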
A ready-made implementation of this transform is also available in the OpenCV library.
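As a rough sketch of the OpenCV route, assuming the polygon has already been rasterized as a filled white shape on a black background (computeDistanceField is an invented helper name):

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // mask: 8-bit single-channel image, polygon filled with 255 on a 0 background.
    cv::Mat computeDistanceField(const cv::Mat& mask) {
        cv::Mat dist;
        // For every non-zero pixel, compute the distance to the nearest zero pixel.
        cv::distanceTransform(mask, dist, cv::DIST_L2, 3);
        // Normalize to [0, 1] so the field can be displayed or used as an alpha gradient.
        cv::normalize(dist, dist, 0.0, 1.0, cv::NORM_MINMAX);
        return dist;
    }

To match the question (opaque at the edge, transparent at the center), the normalized distance can simply be inverted before being used as alpha.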

Related

What algorithms exist for generating a texture atlas from a mesh?

I have a triangle mesh along with a function which defines the material properties at each point in 3d space. Using a given resolution in object space, I generate a triangular texture for each triangle; specifically, these each end up being right triangles with size corresponding to the actual triangle in the mesh. I have two problems, however: 1) the output texture atlas is large and obviously contains large amounts of dead space, and 2) each vertex for each triangle needs to have its own texcoord, since each triangle's texture ends up in a different part of the atlas.
What algorithms exist to generate a texture atlas from a mesh with known textures for each triangle? I'm looking to share as many texcoords as possible across the mesh, which means that adjacent triangles should have correspondingly adjacent textures in the atlas. Not everything can be shared -- since 3d objects can't always flatten into a 2d surface with constant texture resolution -- but I'm hoping to maximize this.

Determining visible colors of defined cubes with face textures

I have an arbitrary number of cubes, each defined by a pair of (x,y,z) coordinates representing opposite vertices. (Specifically, they represent the vertex with minimal coordinates and the vertex with maximal coordinates, respectively.) Each cube has a texture assigned to each face, from a simple 2D PNG file. (So for N cubes, there are 6N texture assignments.) These cubes are not rotated, so all faces are either parallel to or orthogonal to a single external "ground plane".
Wherever two faces overlap on the same plane, the last-defined cube's face is rendered. So for instance, if I define a red cube and then a blue cube, anywhere the two cubes have overlapping coplanar faces, it would appear blue, not red.
Assuming I can easily read the pixel values from the PNG texture files, how can I calculate the average visible color of the entire set of cubes from a given view angle? For simplicity, assume the viewing angles are all multiples of 90 degrees on any axis, so they're all pointing normal to a set of cube faces.
EDIT Oh, and one more possible kink: some textures have transparent pixels. Where this occurs, a face using these textures will be invisible and show through to any other cubes behind it. This must be accounted for in the color average as well.
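For concreteness, the input described above might be laid out roughly like this (all names are invented for illustration):

    #include <array>
    #include <cstdint>
    #include <vector>

    // Hypothetical representation of the scene described in the question.
    struct RGBA { uint8_t r, g, b, a; };      // a == 0 means fully transparent

    struct Texture {
        int width = 0, height = 0;
        std::vector<RGBA> pixels;             // decoded from the PNG file
    };

    struct AxisAlignedCube {
        float minX, minY, minZ;               // vertex with minimal coordinates
        float maxX, maxY, maxZ;               // vertex with maximal coordinates
        std::array<Texture, 6> faces;         // one texture per face
    };

    // Cubes kept in definition order: where coplanar faces overlap,
    // the face of the later cube in this list is the one that is visible.
    using Scene = std::vector<AxisAlignedCube>;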

3d point cloud registration - parts of textured sphere

A spherical object is photographed from 6 different sides (cube faces). Since the radius and the camera distance are known, the z coordinate of every pixel in the images can be calculated.
So I have multiple point clouds (nearly half spheres) of the same physical object as pcl::PointCloud<pcl::PointXYZRGB>.
I know the rough rotational relationship between the models (90-degree rotations), but to stitch them together into a full sphere correctly I need to know the rigid transform more precisely. How can I achieve this? The shapes have no significance in this case, so stitching by color matching would be good, but the examples in the documentation all seem to consider only shape, not color.
The overlap of the partial models is about 40 degrees.

Depth interpolation for surface removal with perspective projection

This seems like a question for which an answer should be readily available on the web or in books, but so far my search has led only to dead ends.
I'm trying to draw 3D lines in real-time with hidden surface removal (the lines are edges of solid objects).
So I have two 3D points that were projected to 2D points using perspective projection, and for each point I have computed its depth. Now I want to draw the line segment that joins the two points, and for hidden surface removal to work I have to compute, for each intermediary 2D point on the projected 2D line, the depth of the corresponding 3D point (the one that projects onto that intermediary 2D point).
My problem is that, since the depth function isn't linear when you do perspective projection, I can't interpolate the depth of the 2 original 3D points to compute the depth of the intermediary point.
So how do I compute the depth of each point on the line with a method that's compatible with the constraints of real-time rendering?
Thanks in advance for any help.
Use homogeneous coordinates, which can be linearly interpolated in screen space: http://www.cs.unc.edu/~olano/papers/2dh-tri/
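In practice this comes down to the fact that, after perspective projection, 1/w (and any attribute divided by w) varies linearly across the screen-space segment. A minimal C++ sketch of recovering depth along the projected line (names invented, and assuming z plays the role of w as in a standard perspective divide):

    // Perspective-correct depth along a projected 2D line segment.
    // z0, z1: view-space depths of the two original 3D endpoints.
    // t:      interpolation parameter measured in *screen space* (0 at p0, 1 at p1).
    // Because 1/z is linear in screen space, interpolate the reciprocals and invert.
    float depthAt(float z0, float z1, float t) {
        float invZ = (1.0f - t) / z0 + t / z1;
        return 1.0f / invZ;
    }

Any other attribute that needs to vary along the edge can be handled the same way: interpolate attribute/z and 1/z linearly in screen space, then divide.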

Detect Shapes in an array of points

I have an array of points. I want to know whether this array of points represents a circle, a square or a triangle.
Where should I begin? (I use C#)
Thanks
Jon
Depending on your problem, a good approach may be to use the Hough transform and its derived algorithms.
It consists of a transformation from image space to another space whose coordinates represent the parameters of the object you are looking for (angle and offset for a line; center coordinates and radius for a circle).
The algorithm transforms each point of your array into points in that parameter space. You then search the new space for locations where many transformed points accumulate; from those peaks you read off the parameters of your object.
Of course, you need to run it once to recognize the lines (so you will know how many lines are in your bitmap and where they are) and again to recognize the circles (it is not exactly the same algorithm).
You may have a look at this lecture (for the Hough circle transform); the line version is easy to find as well.
EDIT: you can also have a look at these answers:
Shape recognition algorithm(s)
Detecting an object on the image based on geometrical form
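If the point array can be rasterized into a small binary image, OpenCV's ready-made Hough functions give a quick way to experiment with this (a rough C++ sketch; C# wrappers such as Emgu CV expose the same calls). The parameter values below are placeholders that would need tuning:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    // binaryImage: an 8-bit single-channel image with the input points drawn on it
    // (dilating or blurring the dots first usually helps the voting).
    void detectShapes(const cv::Mat& binaryImage) {
        // Circle detection: each result is (center_x, center_y, radius).
        std::vector<cv::Vec3f> circles;
        cv::HoughCircles(binaryImage, circles, cv::HOUGH_GRADIENT,
                         1.0 /*dp*/, 20.0 /*min distance between centers*/,
                         100.0 /*Canny threshold*/, 30.0 /*accumulator threshold*/);

        // Line detection: each result is (x1, y1, x2, y2). A triangle shows up as
        // three dominant segments, a square as four.
        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(binaryImage, lines, 1.0 /*rho*/, CV_PI / 180.0 /*theta*/,
                        30 /*votes*/, 10.0 /*min length*/, 5.0 /*max gap*/);
    }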
Imagine it is each of these shapes one by one and try to fit each shape to the data. For a square, you could find the four extreme points and try charting out a square that goes through all of them.
Once you have a candidate shape in place, measure the distance between each point and the part of the shape nearest to it, then square these distances and add them up. The shape with the smallest sum of squares is probably your best bet.
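A rough C++ sketch of that scoring step for the circle case, using the centroid as the center and the mean distance as the radius (circleFitError is an invented name; the same kind of error would be computed for the candidate square and triangle, and the smallest total wins):

    #include <cmath>
    #include <vector>

    struct Point { double x, y; };

    // Fit a circle crudely (centroid + mean radius) and return the sum of
    // squared distances between each point and the fitted circle.
    double circleFitError(const std::vector<Point>& pts) {
        if (pts.empty()) return 0.0;

        double cx = 0, cy = 0;
        for (const Point& p : pts) { cx += p.x; cy += p.y; }
        cx /= pts.size(); cy /= pts.size();

        double radius = 0;
        for (const Point& p : pts)
            radius += std::hypot(p.x - cx, p.y - cy);
        radius /= pts.size();

        double error = 0;
        for (const Point& p : pts) {
            double d = std::hypot(p.x - cx, p.y - cy) - radius;
            error += d * d;   // squared distance to the nearest part of the circle
        }
        return error;
    }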
Use the Hough Transform.
I'm going to take a wild stab and say if you have 3 points the shape represents a triangle, 4 points is some kind of quadrilateral, any more than that is a circle.
Perhaps there's more information about your problem that you could provide.
