Why is the other side of the plane not shown? - Qt

I created a plane in Blender and added it as a QCustom3DItem to a Qt graph.
While rotating the graph I noticed that I can't see the other side of the plane. Why?
#include <QtWidgets>
#include <Q3DBars>
#include <QCustom3DItem>

using namespace QtDataVisualization;

MainWidget::MainWidget(QWidget *parent) : QWidget(parent)
{
    resize(800, 600);
    auto vLayout = new QVBoxLayout(this);
    auto graph = new Q3DBars;
    auto widget = QWidget::createWindowContainer(graph);
    vLayout->addWidget(widget);

    auto bar = new QCustom3DItem;
    bar->setMeshFile(":mesh/planey.obj");
    bar->setScaling(QVector3D(.1f, .8f, .1f));
    graph->addCustomItem(bar);
}
# Blender v2.81 (sub 16) OBJ File: ''
# www.blender.org
o Plane
v 0.000000 2.000000 1.000000
v -0.000000 0.000000 1.000000
v 0.000000 2.000000 -1.000000
v -0.000000 0.000000 -1.000000
vt 1.000000 0.000000
vt 0.000000 1.000000
vt 0.000000 0.000000
vt 1.000000 1.000000
vn 1.0000 -0.0000 0.0000
s off
f 2/1/1 3/2/1 1/3/1
f 2/1/1 4/4/1 3/2/1

Because back-face culling occurs.
The renderer is set to show only the triangles whose front side, determined by the winding order of their vertices, faces the camera. Triangles oriented the other way around are considered "back faces", and hence skipped for performance reasons.
Unfortunately, this seems to be hardcoded in Qt, as can be seen in abstract3drenderer.cpp:
void Abstract3DRenderer::initializeOpenGL()
{
    m_context = QOpenGLContext::currentContext();

    // Set OpenGL features
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    // ...
I'm not sure you can override this, since the 3D renderer is private.
One possible workaround is to provide an .obj file with two planes: one with normals facing one direction, and the other with normals rotated 180°. This can create other problems, though, such as Z-fighting...
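As a rough illustration of that workaround, here is a minimal Python sketch (not from the original question; the output file name and vertex data are assumptions) that writes a plane whose faces are duplicated with reversed winding and a flipped normal, so back-face culling always leaves one copy visible. The two copies are coplanar, so the Z-fighting caveat above still applies:

# Sketch: write a two-sided plane as a Wavefront .obj file.
# Every face is emitted twice; the second copy reverses the vertex
# order (flips the winding) and uses an inverted normal, so one copy
# survives back-face culling from either side.
verts = [(0.0, 2.0, 1.0), (0.0, 0.0, 1.0), (0.0, 2.0, -1.0), (0.0, 0.0, -1.0)]
faces = [(2, 3, 1), (2, 4, 3)]  # 1-based vertex indices, as .obj uses

with open("two_sided_plane.obj", "w") as f:
    for v in verts:
        f.write("v %.6f %.6f %.6f\n" % v)
    f.write("vn 1.0 0.0 0.0\n")    # normal 1: front side
    f.write("vn -1.0 0.0 0.0\n")   # normal 2: flipped, for the back copy
    for a, b, c in faces:
        f.write("f %d//1 %d//1 %d//1\n" % (a, b, c))  # front-facing copy
        f.write("f %d//2 %d//2 %d//2\n" % (c, b, a))  # reversed winding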

Related

How can I fill the curves with a solid white or transparent color in gnuplot?

Just wondering if I can fill the area under the curve with a color/option that does not distort the graph features. I have tried the various fill styles (solid, impulses, filled) with no luck. Any suggestion is welcome!
I just used: splot w lines lc rgb "dark-violet"
This is an ugly hack, but it kind of works: plot each trace twice, once as a curve and once as a solid white "wall" that covers whatever lies behind it. The latter requires a slight abuse of the filledcurves plot style in 3-D plots. You have to handle the depth ordering manually, so you must plot back to front.
Example:
f(x,t) = exp(-(x-t)**2)   # some function in lieu of data
set xr [-10:20]
set yr [-0.5:10.5]
set zr [0:1]
set xlabel "x"
set ylabel "t"
set xyplane at 0
set samples 1000
unset key

set multiplot
do for [t=10:0:-1] {
    splot '+' using 1:(t):(f($1,t)) w filledcurves lc rgb "white" lw 1
    if (t == 10) {   # after the first splot, do not plot borders and tics again
        unset border
        unset xtics
        unset ytics
        unset ztics
        unset xlabel
        unset ylabel
        unset zlabel
    }
    splot '+' using 1:(t):(f($1,t)) w l ls 1
}
unset multiplot
which gives the stacked curves, each hiding whatever lies behind it.
You could use polygons to make the curves. Here is an example:
set terminal wxt size 600,400
#set title "Graph Title"
#set xlabel "X"
#set ylabel "Y"
#set ylabel "Z"
# sets background color
set object 1 rectangle from screen -0.1,-0.1 to screen 1.1,1.1 fillcolor rgb "#ffffff" behind
# allows rendering of polygons with hidden line removal
set hidden3d back offset 0 trianglepattern 2 undefined 1 altdiagonal bentover
# displays borders 0x7F = 0b1111111
set border 0x7F linecolor rgb "#555555"
# displays the x, y and z axis
#set xzeroaxis linewidth 0.5 linetype 1
#set yzeroaxis linewidth 0.5 linetype 2
#set zzeroaxis linewidth 0.5 linetype 3
# displays the x, y and z grid
set grid xtics linecolor rgb "#888888" linewidth 0.2 linetype 9
set grid ytics linecolor rgb "#888888" linewidth 0.2 linetype 9
set grid ztics linecolor rgb "#888888" linewidth 0.2 linetype 9
# moves the x, y grid to 0
set xyplane at 0
# makes the x, y, and z axis proportional
#set view equal xyz
# moves the key out of the graph
set key outside vertical bottom right
splot "Data.dat" title "Data" with lines linewidth 1.0 linecolor rgb "#8888FF"
Data.dat:
0.000000 0.000000 4.562285
0.000000 0.000000 0.000000
1.000000 0.000000 2.139039
1.000000 0.000000 0.000000
2.000000 0.000000 3.120719
2.000000 0.000000 0.000000
0.000000 1.000000 2.562285
0.000000 1.000000 0.000000
1.000000 1.000000 4.139039
1.000000 1.000000 0.000000
2.000000 1.000000 3.120719
2.000000 1.000000 0.000000
0.000000 2.000000 2.562285
0.000000 2.000000 0.000000
1.000000 2.000000 1.139039
1.000000 2.000000 0.000000
2.000000 2.000000 3.120719
2.000000 2.000000 0.000000
Generated by Blender and a Python script:
https://github.com/lowlevel86/blender-to-gnuplot
You could also use command-line tools if you want to read vertex data instead of polygon data. You could replace "Data.dat" in the gnuplot file with:
"<grep -E '[0-9]' Data.dat | awk '{print $1 \" \"$2\" \"$3; print $1\" \"$2\" 0.0\\n\"}'"
or maybe
"<grep -E '[0-9]|^$' Data.dat | awk '{print (!NF)?\"\":$1 \" \"$2\" \"$3\"\\n\"$1\" \"$2\" 0.0\\n\"}'"

Save an rgl 3D scatterplot in a lossless format like SVG or PDF

What I want to achieve is to show a 3D graph in rgl, rotate it into the view I would like to show, and then save it to a file. I know that I can do this with the rgl.snapshot function like this:
library(rgl)
x <- runif(20)
y <- runif(20)
z <- runif(20)
plot3d(x, y, z)
rgl.snapshot("rgl.snapshot.png")
The problem is that rgl.snapshot produces a file at screen resolution, which is not high enough for print, and I have no way to influence the resolution the file gets saved in. Even better would be to save the file in a vector format like PDF or SVG.
My idea was to save the rotation of the current view and use it with another function that produces a non-interactive 3D scatterplot, like scatter3D from the plot3D package. To save the rotation matrix I did the following:
rotationMatrix <- rgl.projection()
You can also do it like this:
rotationMatrix <- par3d()$modelMatrix
The rotation matrix looks like this:
$model
[,1] [,2] [,3] [,4]
[1,] 0.9584099 0.0000000 0.0000000 -0.4726846
[2,] 0.0000000 0.3436644 0.9792327 -0.6819317
[3,] 0.0000000 -0.9442102 0.3564116 -3.6946754
[4,] 0.0000000 0.0000000 0.0000000 1.0000000
$proj
[,1] [,2] [,3] [,4]
[1,] 3.732051 0.000000 0.000000 0.00000
[2,] 0.000000 3.732051 0.000000 0.00000
[3,] 0.000000 0.000000 -3.863703 -14.36357
[4,] 0.000000 0.000000 -1.000000 0.00000
$view
x y width height
0 0 256 256
Now my question is how to get from this rotation matrix to the arguments phi and theta used by the scatter3D function.
library(plot3D)
# phi = ?
# theta = ?
pdf("scatter3D.pdf")
scatter3D(x, y, z, pch=20, phi = 20, theta =30, col="black")
dev.off()
I know there is math to extract rotation angles from a rotation matrix, but I don't really get how to apply it in my case, especially since the matrix has 4 rows and columns where I would expect 3 of each. (The extra row and column are the homogeneous part: the upper-left 3x3 block holds the rotation and scaling, and the last column the translation.) The next problem is that scatter3D uses only two rotation axes (theta gives the azimuthal direction and phi the colatitude), so I would have to convert from a 3-axis rotation to the same rotation expressed with two axes. I think the rotation axis of phi is defined by the rotation of theta.
If there is another way to save a rgl snapshot in a lossless format I would be happy to learn about it!
The latest version of rgl (R-forge only; see "How do I install the latest version of rgl?" for how to get it) has a function rglToBase() that returns the phi and theta values you'd need. There's also rgl.postscript(), as mentioned in my Apr 24 comment, which saves in a lossless format (but can't save everything).
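For intuition, here is a rough Python sketch (not rgl's actual implementation) of how two view angles could be extracted from such a 4x4 model matrix, under the assumption that the view is a rotation about z followed by a rotation about x, after dividing out rgl's per-axis data scaling. Mapping the second angle onto scatter3D's phi convention may still need an offset:

import numpy as np

# The 4x4 model matrix from par3d()$modelMatrix (values from the question)
M = np.array([[0.9584099,  0.0000000, 0.0000000, -0.4726846],
              [0.0000000,  0.3436644, 0.9792327, -0.6819317],
              [0.0000000, -0.9442102, 0.3564116, -3.6946754],
              [0.0000000,  0.0000000, 0.0000000,  1.0000000]])

R = M[:3, :3]                       # rotation combined with per-axis scaling
R = R / np.linalg.norm(R, axis=0)   # normalize columns to remove the scaling

# If R = Rx(a) @ Rz(b), then R[0,0] = cos b, R[0,1] = -sin b,
# R[1,2] = -sin a, and R[2,2] = cos a.
theta = np.degrees(np.arctan2(-R[0, 1], R[0, 0]))  # azimuth-like angle
a = np.degrees(np.arctan2(-R[1, 2], R[2, 2]))      # elevation-like angle
print(theta, a)  # 0.0 and -70.0 for this matrix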
Edited to add: A very new addition is the writeASY() function. This writes out Asymptote source code to draw the image in various formats, mainly intended for LaTeX documents. See http://asymptote.sourceforge.net. This is still a little limited (subscenes aren't supported, surface lighting isn't perfect, etc.) but it's getting there. Suggestions would be welcome.

Why doesn't ggtern plot some points?

I'm trying to make a plot from a data.frame that contains positive and negative values, and I cannot plot all the points. Does anyone know if it is possible to adapt the code so that all points are plotted?
example = data.frame(X1=c(-1,-0.5,1),X2=c(-1,0.7,1),X3=c(1,-0.5,2))
ggtern(data=example, aes(x=X1,y=X2,z=X3)) + geom_point()
Well, actually your points are getting plotted, but they lie outside the plot region.
To understand why: each row of your data must sum to unity, else it will be coerced that way. Therefore, what you will actually be plotting is the following:
example = example / matrix(rep(rowSums(example),ncol(example)),nrow(example),ncol(example))
example
X1 X2 X3
1 1.000000 1.000000 -1.000000
2 1.666667 -2.333333 1.666667
3 0.250000 0.250000 0.500000
Now the rows sum to unity:
print(rowSums(example))
[1] 1 1 1
You have negative values, which are nonsensical in terms of 'composition'; however, negative concentrations can still be plotted, as they numerically represent points outside the ternary area. Let's expand the limits and render to see where they lie:
ggtern(data=example, aes(x=X1,y=X2,z=X3)) +
geom_mask() +
geom_point() +
limit_tern(5,5,5)

Simple Correspondence Analysis in R - Not all objects appear in plot?

I feel like this may be a dumb question, but I have spent a long time looking for an answer and can't seem to find one. It's hard even to know what to search for, so if this is answered somewhere else that you know of, a link is all I need.
I am trying to do a simple CA in R using the vegan package, and it works fine, except that the plot I generate only shows 60 "sites" when in reality I have 135. Does anyone know why this might happen? I need to be able to show all of the objects. My code is below:
library(vegan)
CPUE.matrix <- read.csv("CPUE_Matrix_CA.csv", header=TRUE, row.names=1)
cpue.ca <- cca(CPUE.matrix)
plot(cpue.ca, type="n")
points(cpue.ca, display = "sites", cex = 1.3, bg=labels, pch=20, col="red")
text(cpue.ca, display = "spec", cex=0.9, col="black")
To give you an idea of what my data look like:
head(CPUE.matrix)
Black.Rockfish Brown.Rockfish Copper.Rockfish Pacific.Cod
1974_G57 0.000000 0.0000000 0.4731183 0.00
1974_H66 0.000000 1.6666667 2.0000000 0.00
1974_H67 0.000000 0.0000000 0.0000000 0.00
1974_H78 2.726236 0.0000000 2.6171869 0.00
1974_H79 0.000000 0.5660377 0.0000000 0.00
1974_H80 0.000000 0.1600000 0.0000000 0.08
Quillback.Rockfish
1974_G57 0.5677419
1974_H66 0.6666667
1974_H67 0.6037736
The data are 5 species of fish at 135 locations, with the catch per unit effort of each species at each location in the cells. When I plot, not all of the locations show up in the plot.
Just to allow closing out the question: as in the comments thread, it turns out that many sites were overlapping each other. One way to make this more obvious, as has been mentioned in other SO questions, is to plot with partial transparency. That way, overlapping items appear darker than a single item. See, for example, R: Scatterplot with too many points
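To illustrate the transparency trick with a generic sketch (Python with synthetic data, not the original vegan code):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Synthetic data: 135 "sites" that share only a few distinct coordinates
pts = rng.integers(0, 5, size=(135, 2)) + rng.normal(0, 0.02, size=(135, 2))

# With alpha < 1, stacked points render darker than isolated ones,
# making the overlap visible.
plt.scatter(pts[:, 0], pts[:, 1], color="red", alpha=0.3, s=60)
plt.show()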

Transforming captured co-ordinates into screen co-ordinates

I think this is probably a simple maths question but I have no idea what's going on right now.
I'm capturing the positions of "markers" on a webcam, and I have a list of markers and their coordinates. Four of the markers are the outer corners of a work surface, and the fifth (green) marker is a widget.
Here's some example data:
Top left marker (a=98, b=86)
Top right marker (c=119, d=416)
Bottom left marker (e=583, f=80)
Bottom right marker (g=569, h=409)
Widget marker (x=452, y=318)
I'd like to somehow transform the webcam's widget position into a coordinate to display on the screen, where the top left is (0,0) rather than (98,86), while taking into account the warped angles of the webcam capture.
Where would I even begin? Any help appreciated.
In order to compute the warping, you need to compute a homography between the four corners of your input rectangle and the screen.
Since your webcam polygon seems to have an arbitrary shape, a full perspective homography is needed to convert it to a rectangle. It's not that complicated, and you can solve it with a standard mathematical tool (which should be easily available) known as the Singular Value Decomposition, or SVD.
Background information:
Planar transformations like this can be described by a homography, which is a 3x3 matrix H such that if any point x1 on or in your webcam polygon is multiplied by H, i.e. H*x1, we get the corresponding point x2 on the (rectangular) screen.
Note that these points are represented by their homogeneous coordinates, which is nothing but adding a third coordinate (the reason for which is beyond the scope of this post). So if the coordinates of x1 were (100,100), the homogeneous representation would be the column vector x1 = [100;100;1] (where ; starts a new row).
Ok, so now we have 8 homogeneous vectors representing 4 points on the webcam polygon and the 4 corners of your screen - this is all we need to compute a homography.
Computing the homography:
A little math:
I'm not going to derive the math, but briefly, this is how we solve it:
We know that the 3x3 matrix H,
H =
h11 h12 h13
h21 h22 h23
h31 h32 h33
where hij is the element of H in the ith row and jth column,
can be used to get the new screen coordinates via x2 = H*x1. The result will be something like x2 = [12;23;0.1], so to get actual screen coordinates we normalize by the third element: x2 = (120,230), which is (12/0.1, 23/0.1).
So this means each point in your webcam polygon (WP) can be multiplied by H (and then normalized) to get your screen coordinates (SC), i.e.
SC1 = H*WP1
SC2 = H*WP2
SC3 = H*WP3
SC4 = H*WP4
where SCi refers to the ith point in screen coordinates and
WPi means the same for the webcam polygon
Computing H: (the quick and painless explanation)
Pseudocode:
for n = 1 to 4
{
    // WP_n refers to the nth point in the webcam polygon
    X = WP_n;

    // SC_n refers to the nth point in the screen coordinates,
    // corresponding to the nth point in the webcam polygon.
    // For example, WP_1 and SC_1 are the top-left points for the webcam
    // polygon and the screen coordinates respectively.
    x = SC_n(1); y = SC_n(2);

    // A is the matrix which we'll solve to get H.
    // A(i,:) is the ith row of A.
    // Here we're stacking 2 rows per point correspondence onto A.
    // X(i) is the ith element of the vector X (the webcam polygon
    // coordinates, e.g. (120,230)).
    A(2*n-1,:) = [0 0 0 -X(1) -X(2) -1 y*X(1) y*X(2) y];
    A(2*n,:)   = [X(1) X(2) 1 0 0 0 -x*X(1) -x*X(2) -x];
}
Once you have A, just compute svd(A), which will decompose it into U, S, VT (such that A = U*S*VT). The right-singular vector corresponding to the smallest singular value is H (once you reshape it into a 3x3 matrix).
With H, you can retrieve the "warped" coordinates of your widget marker location by multiplying it with H and normalizing.
Example:
In your particular example if we assume that your screen size is 800x600,
WP =
98 119 583 569
86 416 80 409
1 1 1 1
SC =
0 799 0 799
0 0 599 599
1 1 1 1
where each column holds one pair of corresponding points.
Then we get:
H =
-0.0155 -1.2525 109.2306
-0.6854 0.0436 63.4222
0.0000 0.0001 -0.5692
Again, I'm not going into the math, but if we normalize H by h33, i.e. divide each element in H by -0.5692 in the example above, we get:
H =
0.0272 2.2004 -191.9061
1.2042 -0.0766 -111.4258
-0.0000 -0.0002 1.0000
This gives us a lot of insight into the transformation.
[-191.9061;-111.4258] defines the translation of your points (in pixels).
[0.0272 2.2004;1.2042 -0.0766] defines the affine part of the transformation (essentially scaling, rotation, and shear).
The last element is 1.0000 because we scaled H by it, and
[-0.0000 -0.0002] denotes the projective part of the transformation of your webcam polygon.
Also, you can check that H is accurate by multiplying SC = H*WP and normalizing each column with its last element:
SC = H*WP
0.0000 -413.6395 0 -411.8448
-0.0000 0.0000 -332.7016 -308.7547
-0.5580 -0.5177 -0.5554 -0.5155
Dividing each column by its last element (e.g. in column 2, -413.6395/-0.5177 and 0/-0.5177) gives:
SC
-0.0000 799.0000 0 799.0000
0.0000 -0.0000 599.0000 599.0000
1.0000 1.0000 1.0000 1.0000
Which is the desired result.
Widget Coordinates:
Now your widget coordinates can be transformed as well: H*[452;318;1], which (after normalizing) is (561.4161, 440.9433).
After warping, the green + marks the widget point at its transformed position.
Notes:
There are some nice pictures in this article explaining homographies.
You can play with transformation matrices here.
MATLAB Code:
WP =[
98 119 583 569
86 416 80 409
1 1 1 1
];
SC =[
0 799 0 799
0 0 599 599
1 1 1 1
];
A = zeros(8,9);
for i = 1 : 4
X = WP(:,i);
x = SC(1,i); y = SC(2,i);
A(2*i-1,:) = [0 0 0 -X(1) -X(2) -1 y*X(1) y*X(2) y];
A(2*i,:) = [X(1) X(2) 1 0 0 0 -x*X(1) -x*X(2) -x];
end
[U S V] = svd(A);
H = transpose(reshape(V(:,end),[3 3]));
H = H/H(3,3);
A =
0 0 0 -98 -86 -1 0 0 0
98 86 1 0 0 0 0 0 0
0 0 0 -119 -416 -1 0 0 0
119 416 1 0 0 0 -95081 -332384 -799
0 0 0 -583 -80 -1 349217 47920 599
583 80 1 0 0 0 0 0 0
0 0 0 -569 -409 -1 340831 244991 599
569 409 1 0 0 0 -454631 -326791 -799
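For reference, here is a rough NumPy translation of the same computation (a sketch; the 800x600 screen size is the same assumption as above):

import numpy as np

# Marker positions from the webcam (columns: TL, TR, BL, BR)
WP = np.array([[98, 119, 583, 569],
               [86, 416,  80, 409]], dtype=float)
# Matching corners of an assumed 800x600 screen
SC = np.array([[0, 799,   0, 799],
               [0,   0, 599, 599]], dtype=float)

A = np.zeros((8, 9))
for i in range(4):
    X1, X2 = WP[:, i]
    x, y = SC[:, i]
    A[2 * i]     = [0, 0, 0, -X1, -X2, -1, y * X1, y * X2, y]
    A[2 * i + 1] = [X1, X2, 1, 0, 0, 0, -x * X1, -x * X2, -x]

# H is the right singular vector for the smallest singular value of A
_, _, Vt = np.linalg.svd(A)
H = Vt[-1].reshape(3, 3)
H /= H[2, 2]  # normalize so that h33 == 1

# Map the widget marker (452, 318) through H and dehomogenize
w = H @ np.array([452.0, 318.0, 1.0])
print(w[:2] / w[2])  # approximately (561.4, 440.9), as above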
Due to perspective effects, linear or even bilinear transformations may not be accurate enough.
Look up "correct perspective mapping" (and search Google for more on this phrase); maybe this is what you need...
Since your input area isn't a rectangle of the same aspect ratio as the screen, you'll have to apply some sort of transformation to do the mapping.
What I would do is take the proportions of where the inner point lies with respect to the outer sides, and map those same proportions onto the screen.
To do this, calculate the amount of free space above, below, to the left, and to the right of the inner point, and use the ratios to find out where on the screen the point should be.
(Diagram from the original answer: the free space above, below, left of, and right of the inner point; image was at http://img230.imageshack.us/img230/5301/mapkg.png)
Once you have the measurements, place the inner point at the fractional position
x = left / (left + right)
y = above / (above + below)
scaled by the screen width and height respectively.
This way, no matter how skewed the webcam frame is, you can still map to the full regular rectangle on the screen.
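A short Python sketch of that measurement step (assuming straight quadrilateral edges; the marker values come from the example data above, and the 800x600 screen size is an assumption):

import numpy as np

def dist_to_line(p, a, b):
    # Perpendicular distance from point p to the line through a and b
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    d = b - a
    cross = d[0] * (p - a)[1] - d[1] * (p - a)[0]  # 2D cross product
    return abs(cross) / np.linalg.norm(d)

tl, tr = (98, 86), (119, 416)    # marker corners from the question
bl, br = (583, 80), (569, 409)
widget = (452, 318)

left = dist_to_line(widget, tl, bl)    # free space toward each edge
right = dist_to_line(widget, tr, br)
above = dist_to_line(widget, tl, tr)
below = dist_to_line(widget, bl, br)

fx = left / (left + right)   # fractional position in the rectangle
fy = above / (above + below)
print(fx * 800, fy * 600)    # pixel position on an 800x600 screen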
Try the following: split the original rectangle and this figure with their 2 diagonals. The diagonals' crossing is (k, l). You get 4 distorted triangles (ab-cd-kl, cd-ef-kl, ef-gh-kl, gh-ab-kl), and the point (x, y) is in one of them.
(4 triangles are better than 2, since the distortion doesn't then depend on which diagonal is chosen.)
You need to find which triangle the point (x, y) is in. That takes only 2 checks:
Check if it's in ab-cd-ef. If true, go on with ab-cd-ef (in your case it's not, so we proceed with cd-ef-gh).
We don't check cd-ef-gh itself, but go straight to one half of it: cd-gh-kl. The point is there. (Otherwise it would have been in ef-gh-kl.)
There are excellent algorithms to check whether a point is in a polygon using only its vertices.
Now you only need to map the point back from the distorted triangle cd-gh-kl. The point (x, y) is a linear combination of its 3 corner points:
x = c * a1 + g * a2 + k * (1 - a1 - a2)
y = d * a1 + h * a2 + l * (1 - a1 - a2)
a1 + a2 <= 1
2 variables (a1, a2) with 2 equations. I guess you can derive the solution formulae on your own.
Then you just take the linear combination of a1 and a2 with the corresponding points' coordinates in the original rectangle. In this case, with W (width) and H (height), it's:
X = width * a1 + width * a2 + width / 2 * (1 - a1 - a2)
Y = 0 * a1 + height * a2 + height / 2 * (1 - a1 - a2)
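As a sketch of those last two steps (a hypothetical helper, not from the original answer), the pair of equations is a 2x2 linear system, and the resulting weights carry over to the target triangle:

import numpy as np

def map_through_triangle(p, tri_src, tri_dst):
    # Solve p = a1*P1 + a2*P2 + (1 - a1 - a2)*P3 for (a1, a2) in the
    # source (webcam) triangle, then reuse the same weights in the
    # target (screen) triangle.
    P1, P2, P3 = (np.asarray(v, dtype=float) for v in tri_src)
    Q1, Q2, Q3 = (np.asarray(v, dtype=float) for v in tri_dst)
    M = np.column_stack((P1 - P3, P2 - P3))   # 2x2 system matrix
    a1, a2 = np.linalg.solve(M, np.asarray(p, dtype=float) - P3)
    return a1 * Q1 + a2 * Q2 + (1 - a1 - a2) * Q3

# Illustrative only: triangle cd-gh-kl and its image in the rectangle;
# (344, 250) stands in for the diagonal crossing (k, l).
widget = (452, 318)
cam_tri = [(119, 416), (569, 409), (344, 250)]
scr_tri = [(800, 0), (800, 600), (400, 300)]
print(map_through_triangle(widget, cam_tri, scr_tri))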
More on how to do this in Objective-C in Xcode, related to Jacob's post, can be found here: calculate the V from A = USVt in Objective-C with SVD from LAPACK in Xcode
The "Kabcsh Algorithm" does exactly this: it creates a rotation matrix between two spaces given N matched pairs of positions.
http://en.wikipedia.org/wiki/Kabsch_algorithm
