Any advice, please, as to how I would go about plotting 'by hand' a stock-standard 2D sine graph 'rotated' into a 3D view, like in the diagram?
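A minimal sketch of one way to do this, assuming "by hand" means computing the 3D-to-2D projection yourself and that matplotlib is acceptable for the final line drawing (the question does not name a library):

```python
import numpy as np
import matplotlib.pyplot as plt

# The sine curve as 3D points lying in the z = 0 plane.
x = np.linspace(0, 4 * np.pi, 200)
pts = np.column_stack([x, np.sin(x), np.zeros_like(x)])

# Rotate by an azimuth about the z axis, then an elevation about the x axis.
a, e = np.radians(-60), np.radians(25)
Rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0,          0,         1]])
Rx = np.array([[1, 0,          0],
               [0, np.cos(e), -np.sin(e)],
               [0, np.sin(e),  np.cos(e)]])
view = pts @ Rz.T @ Rx.T

# Orthographic projection: keep two axes as the screen, drop the third (depth).
plt.plot(view[:, 0], view[:, 2])
plt.axis("equal")
plt.show()
```

Varying the two angles reproduces the usual tilted "3D view" of a flat curve.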
I have a .las file and I performed the following operations:

1. Convert the point cloud to an RGB image.
2. Convert the point cloud to a ground-truth matrix.
3. Crop the images and corresponding ground-truth matrices to a fixed size of 256x256.
4. Train a UNet (image and ground-truth label).
5. Inference: get a prediction matrix with each pixel representing a label.
So I have a predicted matrix, but I don't know how to map it back to the point cloud to see what the 3D predicted classification looks like. I'm using Julia.
Unfortunately for your goal, you pre-processed your 3D data by converting it to 2D and then cropping the 2D image further. You can plot the 2D data with colors for the differently labeled points to show the 2D results, but that alone is unlikely to get you back to a true 3D plot you can move a viewpoint through in a 3D viewer. If you can, modify your preprocessing so that you train your network on the 3D data directly.
I've used binning to do the 3D-to-2D projection, so the same binning can be used to go back from 2D to 3D.
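A minimal sketch of that inverse mapping, assuming the 2D image was built by binning points on a regular (x, y) grid whose origin and cell size were kept from the projection step. The question mentions Julia; this sketch uses Python/NumPy for brevity, and the names (points, pred, x0, y0, cell) are illustrative:

```python
import numpy as np

def labels_for_points(points, pred, x0, y0, cell):
    """Map a 2D prediction matrix back onto the 3D points it was binned from.

    points : (N, 3) array of x, y, z coordinates
    pred   : (H, W) matrix of predicted class labels
    x0, y0 : origin of the binning grid used in the 3D -> 2D projection
    cell   : bin size used in that projection
    """
    cols = ((points[:, 0] - x0) / cell).astype(int)
    rows = ((points[:, 1] - y0) / cell).astype(int)
    # Keep only points that fall inside this predicted tile.
    ok = (rows >= 0) & (rows < pred.shape[0]) & (cols >= 0) & (cols < pred.shape[1])
    labels = np.full(len(points), -1)          # -1 marks points outside the tile
    labels[ok] = pred[rows[ok], cols[ok]]
    return labels
```

Coloring the point cloud by the returned labels in any 3D viewer then shows the predicted classification in 3D.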
I have a matrix that contains points of the form (x, y, z). My goal is to transform these points so that they lie in a 2D plane, while keeping the viewing angle shown in the figures. I'm working with R and want to use ggplot2 instead of the plotly package.
Can anyone help me with this?
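A sketch of the projection math, using the same rotate-then-drop-depth idea as the sine-curve sketch above, packaged as a function. It assumes an orthographic camera; the question asks for R/ggplot2, and these two matrix products translate line-for-line into R, after which the returned columns can be plotted as ordinary 2D data:

```python
import numpy as np

def view_project(points, elev_deg, azim_deg):
    """Project an (N, 3) array of (x, y, z) points to 2D screen coordinates
    as seen from the given viewing angles (orthographic, no perspective)."""
    e, a = np.radians(elev_deg), np.radians(azim_deg)
    Rz = np.array([[np.cos(a), -np.sin(a), 0],   # azimuth about z
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
    Rx = np.array([[1, 0,          0],           # elevation about x
                   [0, np.cos(e), -np.sin(e)],
                   [0, np.sin(e),  np.cos(e)]])
    view = points @ Rz.T @ Rx.T
    return view[:, [0, 2]]  # screen axes; view[:, 1] is the discarded depth
```

Choosing elev_deg and azim_deg to match the figures preserves the viewing angle.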
I'm looking to build a spherical mesh out of equilateral triangles. I've calculated the points of the triangle and managed to create a flat mesh out of them.
I'm now stuck on how to translate these points from the flat surface onto a sphere.
My goal is to achieve something similar to the result in this video, where the creator does something close to what I'm trying to do by projecting points from a plane onto a sphere.
I've tried searching online for "projecting triangle onto a sphere", but I've not managed to find anything that brings me closer to a solution.
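A minimal sketch of the simplest approach, radial projection: assuming the sphere is centered at the origin and the flat mesh sits away from the center (e.g. on a tangent plane), push every vertex along its direction from the center until it lies on the sphere. The triangle indices are unchanged; only the vertex positions move:

```python
import numpy as np

def project_to_sphere(vertices, radius):
    """vertices: (N, 3) positions of the flat mesh (none at the origin).
    Returns the same vertices scaled onto a sphere of the given radius."""
    norms = np.linalg.norm(vertices, axis=1, keepdims=True)
    return radius * vertices / norms
```

Note that radial projection stretches triangles near the edges of the patch; the usual way to get near-equilateral triangles over a whole sphere is to subdivide an icosahedron and renormalize the new vertices in exactly this way.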
I'm working on a project where I have a cloud of points in space as input data, and my goal is to create a surface.

I started by computing a regression plane for the cloud, then projected my points onto the plane using dot products: the plane is represented by a point and a normal, so I construct the axes of the plane's space using cross products and then project each point onto those axes.

Then I triangulate in 2D (that's the point of the whole operation).

My problem is that my points are now in the plane's space, and I want to get them back to their initial positions (inverting the transformation) so that my surface lies ON my points.
thank you :)
The best way is to keep the original positions and make the triangulation give you the indices rather than the positions. I hope it will help!
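A sketch of that round trip, assuming SciPy is available: fit the regression plane with an SVD, express the points in plane coordinates, triangulate in 2D, and, per the answer above, keep only the triangle indices so the surface can be drawn directly on the original, untouched 3D points (no inverse transform needed):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_cloud(points):
    """points: (N, 3) array. Returns (M, 3) triangle indices into `points`."""
    centered = points - points.mean(axis=0)
    # The first two right singular vectors span the best-fit plane;
    # the third is its normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ vt[:2].T        # 2D coordinates in the plane's space
    return Delaunay(uv).simplices   # indices reference the original points

# Usage: tri = triangulate_cloud(points); each face is points[tri[i]],
# already at the initial positions.
```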
This seems like a question whose answer should be readily available on the web or in books, but my search has so far led me only to dead ends.
I'm trying to draw 3D lines in real-time with hidden surface removal (the lines are edges of solid objects).
So I have two 3D points that were projected to 2D points using perspective projection, and for each point I have computed its depth. Now I want to draw the line segment that joins the two points, and for hidden surface removal to work I have to compute, for each intermediate 2D point on the projected 2D line, the depth of the corresponding 3D point (the 3D point that projects onto that intermediate 2D point).

My problem is that, since depth is not a linear function of screen position under perspective projection, I can't simply interpolate between the depths of the two original 3D points to get the depth of an intermediate point.
So how do I compute the depth of each point on the line with a method that's compatible with the constraints of real-time rendering?
Thanks in advance for any help.
Use homogeneous coordinates, which can be linearly interpolated in screen space: http://www.cs.unc.edu/~olano/papers/2dh-tri/
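A minimal sketch of what that means in practice: depth itself is not linear along the 2D segment, but 1/w (with w the clip-space coordinate from the perspective projection, or camera-space z in the simple pinhole case) is linear in screen space, so interpolate that per pixel and invert:

```python
def depth_along_segment(w0, w1, t):
    """Perspective-correct depth at screen-space parameter t in [0, 1]
    along a segment whose endpoints have clip-space w values w0 and w1."""
    inv_w = (1.0 - t) / w0 + t / w1   # 1/w is linear in screen space
    return 1.0 / inv_w

# The same trick handles any vertex attribute a:
#   a(t) = ((1-t) * a0/w0 + t * a1/w1) / ((1-t)/w0 + t/w1)
```

Since 1/w can be advanced by a constant increment per pixel, the per-pixel cost is one addition plus a division, which is cheap enough for real-time line rasterization.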