Hexagonal tiling of a hemisphere - math

I need to have a hexagonal grid on a spherical surface, like shown here.
Right now I am creating a flat hexagonal grid
and then projecting it onto the surface of a hemisphere, like here.
But as you can see, it has a funny artifact: the hexagons at the edge are disproportionately large. There should be a better way to do this so that all the hexagons are nearly equal in size.
I tried the solution @Spektre suggested, but my code was producing the following plot.
I was using a=sqrt(x*x+y*y)/r * (pi/2) because I wanted to map the planar distance, which runs over [0,r], to an angle a with bounds [0,pi/2].
But with just a=sqrt(x*x+y*y)/r it works well.
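To spell out why (this just restates the arc-length relation used in the answer below): the planar distance l=sqrt(x*x+y*y) is treated as an arc length on the sphere, and an arc of length l on a sphere of radius r subtends the angle

    a = l / r

so for l in [0, (pi/2)*r] the angle a already covers [0, pi/2]; multiplying by pi/2 again over-rotates every point toward the rim.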
New development with the task, new problem
Now the hexagons are not equal in size throughout the shape. I want a uniform (area-wise) shape for them across the dome and the cylinder. I am confused about how to manage this.

Here is what I have in mind:
create planar hex grid on XY plane
the center of your grid must be the center of your sphere; I chose (0,0,0), and the size of the grid should be at least 2*radius of your sphere.
convert planar coordinates to spherical
so the distance from (0,0,0) to a point's coordinate in the XY plane is the arc length traveled on the surface of your sphere. So if the processed point is (x,y,z) and the sphere radius is r, then the latitude position on the sphere is:
a=sqrt(x*x+y*y)/r;
so we can directly compute the z coordinate:
z=r*cos(a);
and scale x,y to the surface of the sphere:
a=r*sin(a)/sqrt(x*x+y*y);
x*=a; y*=a;
If the z coordinate is negative then you have crossed the half sphere and should handle it differently (skip the hex, convert to a cylinder, or whatever).
Here is a small OpenGL/C++ example of this (deg is the usual degree-to-radian conversion constant and pi is the usual constant, both assumed defined elsewhere):
//---------------------------------------------------------------------------
const int _gx=15;           // hex grid size
const int _gy=15;
const int _hy=(_gy+1)<<1;   // hex points size
const int _hx=(_gx+1);
double hex[_hy][_hx][3];    // hex grid points
//---------------------------------------------------------------------------
void hexgrid_init(double r) // set hex[][][] to planar hex grid points at xy plane
{
    double x0,y0,x,y;
    double sx,sy,sz;
    int i,j;
    // hex sizes
    sz=sqrt(8.0)*r/double(_hy);
    sx=sz*cos(60.0*deg);
    sy=sz*sin(60.0*deg);
    // center points around (0,0)
    x0=(0.5*sz)-double(_hy/4)*(sz+sx);
    y0=-double(_hx)*(sy);
    if (int(_gx&1)==0) x0-=sz+sx;
    if (int(_gy&1)==0) y0-=sy; else y0+=sy;
    // even rows of points
    for (y=y0,i=0;i<_hy;i+=2,y+=sy+sy)
        for (x=x0,j=0;j<_hx;j++,x+=sz)
        {
            hex[i][j][0]=x;
            hex[i][j][1]=y;
            hex[i][j][2]=0.0;
            x+=sz+sx+sx; j++; if (j>=_hx) break;
            hex[i][j][0]=x;
            hex[i][j][1]=y;
            hex[i][j][2]=0.0;
        }
    // odd rows of points
    for (y=y0+sy,i=1;i<_hy;i+=2,y+=sy+sy)
        for (x=x0+sx,j=0;j<_hx;j++,x+=sx+sx+sz)
        {
            hex[i][j][0]=x;
            hex[i][j][1]=y;
            hex[i][j][2]=0.0;
            x+=sz; j++; if (j>=_hx) break;
            hex[i][j][0]=x;
            hex[i][j][1]=y;
            hex[i][j][2]=0.0;
        }
}
//---------------------------------------------------------------------------
void hexgrid_half_sphere(double r0) // convert planar hex grid to half sphere at (0,0,0) with radius r0
{
    int i,j;
    double x,y,z,a,l;
    for (i=0;i<_hy;i++)
        for (j=0;j<_hx;j++)
        {
            x=hex[i][j][0];
            y=hex[i][j][1];
            z=hex[i][j][2];
            l=sqrt(x*x+y*y);        // distance from center on xy plane (arclength)
            if (l<=1e-10)           // center point: avoid division by zero
            {
                hex[i][j][2]=r0;    // it maps to the pole of the sphere
                continue;
            }
            a=l/r0;                 // convert arclength to angle
            z=r0*cos(a);            // compute z coordinate (sphere)
            if (z>=0.0)             // still on the half sphere -> scale x,y onto the surface
            {
                a=r0*sin(a)/l;
            }
            else                    // past the half sphere -> continue as a cylinder
            {
                z=0.5*pi*r0-l;      // excess arclength runs down the cylinder wall
                a=r0/l;             // project x,y onto the cylinder of radius r0
            }
            x*=a;
            y*=a;
            hex[i][j][0]=x;
            hex[i][j][1]=y;
            hex[i][j][2]=z;
        }
}
//---------------------------------------------------------------------------
void hex_draw(int x,int y,GLuint style) // draw hex x = <0,_gx) , y = <0,_gy)
{
    y<<=1;
    if ((x&1)==0) y++;
    if ((x<0)||(x+1>=_hx)) return;
    if ((y<0)||(y+2>=_hy)) return;
    glBegin(style);
    glVertex3dv(hex[y+1][x  ]);
    glVertex3dv(hex[y  ][x  ]);
    glVertex3dv(hex[y  ][x+1]);
    glVertex3dv(hex[y+1][x+1]);
    glVertex3dv(hex[y+2][x+1]);
    glVertex3dv(hex[y+2][x  ]);
    glEnd();
}
//---------------------------------------------------------------------------
And usage:
hexgrid_init(1.5);
hexgrid_half_sphere(1.0);

int x,y;
glColor3f(0.0,0.2,0.3);
for (y=0;y<_gy;y++)
    for (x=0;x<_gx;x++)
        hex_draw(x,y,GL_POLYGON);
glLineWidth(2);
glColor3f(1.0,1.0,1.0);
for (y=0;y<_gy;y++)
    for (x=0;x<_gx;x++)
        hex_draw(x,y,GL_LINE_LOOP);
glLineWidth(1);
And preview:
For more info and ideas see related:
Make a sphere with equidistant vertices
Turning a cylinder into a sphere without pinching at the poles

Related

Qt3D: QVector3D::unproject of QPickEvent mouse position yields wrong 3D coordinates

I know this question has been asked multiple times but I still can't figure out what is wrong in my code. I have a QEntity with a QObjectPicker attached to it. What I'm trying to achieve is that the user can move the object in the plane parallel to the near and far plane. But at the moment I'm only trying to project back the mouse coordinates to obtain the world intersection.
A QPickEvent already provides the world intersection but I need to compute it myself so that it has the same depth as the first world intersection of when the user pressed the mouse button. This way I can compute the difference vector and move the object by that difference.
To this end I implemented the following procedure (projectionMatrix is a projection matrix manually set on the camera and stored in a variable, which is why I use it here directly):
float distance = pickEvent->distance();
float posY = height() - pickEvent->position().y() - 1.0f;
QVector3D screenCoordinates = QVector3D(pickEvent->position().x(), posY, distance);
QMatrix4x4 modelViewMatrix = camera->viewMatrix() * object->transform()->matrix();
QVector3D mouseIn3D = screenCoordinates.unproject(modelViewMatrix,
                                                  projectionMatrix,
                                                  QRect(0, 0, width(), height()));
object is the object I'm trying to move. An example of values I obtain are:
pick distance is 600.787
pick event world intersection is (-72.0421, 27.4382, 697.146)
the projection of the screen coordinates mouseIn3D is (168.434, 708.616, 29.4223)
Obviously this is not correct, as the world intersection and the projected mouse coordinates differ a lot. I also tried 0.f and 1.f for distance instead of 600.787, because some people said those are the depth values of the near and far planes, but this didn't work either and gave wrong coordinates.
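One thing worth checking, as a hedged suggestion rather than a confirmed fix: QVector3D::unproject mirrors gluUnProject, so the z it expects is a normalized window-space depth in [0,1] (the same range the successful project call further down returns, 0.90164), not a camera-space distance such as 600.787. A minimal sketch of obtaining a consistent depth, assuming the same pickEvent, camera, projectionMatrix, width() and height() as above; note that worldIntersection() is already in world space, so only the view matrix is applied:

QRect viewport(0, 0, width(), height());
// Project the picked world point once to get its normalized window depth...
QVector3D winPos = pickEvent->worldIntersection().project(
        camera->viewMatrix(),   // no object transform: the point is in world space
        projectionMatrix,
        viewport);
float depth = winPos.z();       // normalized depth in [0,1]
// ...then reuse that depth to unproject later mouse positions at the same depth.
float posY = height() - pickEvent->position().y() - 1.0f;
QVector3D mouseIn3D = QVector3D(pickEvent->position().x(), posY, depth)
        .unproject(camera->viewMatrix(), projectionMatrix, viewport);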
On the other hand, when I project the 3D local intersection provided by the QPickEvent to the screen, the 2D pixel coordinates are correct:
QVector3D pointOnModel = pickEvent->localIntersection();
QVector3D projected = pointOnModel.project(modelViewMatrix,
                                           projectionMatrix,
                                           QRect(0, 0, width(), height()));
qDebug() << "x y" << pickEvent->position().x() << height() - pickEvent->position().y();
qDebug() << "projected" << projected;
This yields
x y 949 1118
projected QVector3D(949, 1118, 0.90164)
Does anyone have an idea what I'm doing wrong?

Unity - Find a point for a gameobject to look at the mouse while camera is at any angle

I have a 3D game where I want an arrow to point in the direction the mouse cursor indicates, based on the mouse's angle to the object in a 2D view.
From the camera looking straight down at the board (a 90 degree x-angle), it works fine. The image below is with the camera at a 90 degree x-angle facing down on my game, with the arrow facing my cursor:
But when we take a step back and put the camera at a 45 degree x-angle, the direction the arrow faces is a bit off. The image below is with the arrow facing my mouse cursor while my camera is at a 45 degree x-angle:
Now let's look at the above image, but with the camera shifted back to a 90 degree x-angle:
My current code is:
// Get the vectors of the 2 points: the pivot point (the ball start) and the position of the mouse.
Vector2 objectPoint = Camera.main.WorldToScreenPoint(_arrowTransform.position);
Vector2 mousePoint = (Vector2)Input.mousePosition;
float angle = Mathf.Atan2(mousePoint.y - objectPoint.y, mousePoint.x - objectPoint.x) * 180 / Mathf.PI;
_arrowTransform.rotation = Quaternion.AngleAxis(-angle, Vector2.up) * Quaternion.Euler(90f, 0f, 0f);
What would I have to add to my Mathf.Atan2() call to compensate for the camera rotation on the x and/or y axis, so that when the user moves the camera however they please it still provides an accurate direction?
EDIT: The solution was in MotoSV's answer, using Plane. This allowed me to get the exact point, no matter what my camera angle was, based on my mouse position. The code that worked for me is below:
void Update()
{
    Plane groundPlane = new Plane(Vector3.up, new Vector3(_arrowTransform.position.x, _arrowTransform.position.y, _arrowTransform.position.z));
    Ray ray = _mainCamera.ScreenPointToRay(Input.mousePosition);
    float distance;
    if (groundPlane.Raycast(ray, out distance))
    {
        Vector3 point = ray.GetPoint(distance);
        _arrowTransform.LookAt(point);
    }
}
Although this does not answer your question directly with regards to the Mathf.Atan2 method, it is an alternative approach that may be useful.
This would be placed onto the game object that represents the arrow:
public class MouseController : MonoBehaviour
{
    private Camera _camera;

    private void Start()
    {
        _camera = GameObject.FindGameObjectWithTag("MainCamera").GetComponent<Camera>();
    }

    private void Update()
    {
        Plane groundPlane = new Plane(Vector3.up, this.transform.position);
        Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
        float distance;
        Vector3 axis = Vector3.zero;
        if (groundPlane.Raycast(ray, out distance))
        {
            Vector3 point = ray.GetPoint(distance);
            axis = (point - this.transform.position).normalized;
            axis = new Vector3(axis.x, 0f, axis.z);
        }
        this.transform.rotation = Quaternion.LookRotation(axis);
    }
}
The basic idea is to:
Create a Plane instance centred at the game object's position
Convert the mouse screen position into a Ray that heads into the world, relative to the camera's current position and rotation
Then cast that ray onto the Plane created in step #1
If the ray intersects the plane, then you can use the GetPoint method to find out where on the plane the ray hit
Then create a direction vector from the centre of the plane to the intersect point and create a LookRotation based on the vector
You can find out more information about the Plane class on the Unity - Plane documentation page.

Advanced rectangles collision in processing

Coded in Processing (processing.org):
I want to know when the mouse or another shape collides with a rectangle.
This is very easy, but I have one problem: I want it to work when the rectangle is rotated (example: rotate(radians(90))).
Both Kevin and Asad's contributions are useful.
In terms of using the 2D renderer, you need to roll your own functionality for that. For this you should be familiar with a few bits and bobs of linear algebra (mainly vectors and matrices, and just a few operations anyway).
I am going to assume you're already familiar with 2D transformations (using pushMatrix()/popMatrix() along with translate(), rotate(), scale()). If not, I warmly recommend the 2D Transformations Processing tutorial.
I am going to explain some of the concepts only briefly (as it's a big topic on its own).
If you have used translate()/rotate()/scale() before, it's all been matrix operations handled for you behind the scenes. In 2D, a transformation can be stored in a 3x3 matrix like so:
  X  Y  T
[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ]
The rotation and scale are stored in the 1st and 2nd columns (2 values each) while the translation is stored in the last column. In theory you could have a 2x3 matrix instead of a 3x3 one, but an NxN matrix has a few nice properties. One of them is being simple to multiply with a vector: positions can be stored as vectors, and we'd like to transform a vector by multiplying it with a transformation matrix. If you look at a vector as a single-column matrix, the 3x3 form allows that multiplication (see the matrix multiplication rules here).
In short:
You can store transformations in a matrix
You can apply these transformations to a vector using multiplication (a small worked example follows)
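As a quick worked example of that multiplication (my illustration, using the matrix layout above): translating a point (x, y) by (tx, ty) looks like this:

[ 1  0  tx ]   [ x ]   [ x + tx ]
[ 0  1  ty ] * [ y ] = [ y + ty ]
[ 0  0  1  ]   [ 1 ]   [ 1      ]

The 1 appended to the position vector is what lets the translation column take part in the product; rotation and scale would occupy the top-left 2x2 block instead.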
Back to your issue: to check whether a point is within a box that has transformations applied, you can do this:
convert the test point's coordinate system to the box's transformed coordinate system by:
inverting the box's transformation matrix and
multiplying the point to the inverted transformation matrix.
This may be hard to comprehend at first, but one way to look at it is to imagine you rotate the whole 'world' (coordinate system) so your rotated box is straight (essentially rotating in the opposite direction, or inverting the transformation), then check if the point is in the box.
Luckily all these matrix operations don't need to be implemented from scratch: PMatrix2D deals with this.
Here is a basic commented sketch explaining all the above:
Box box1, box2;

void setup(){
  size(400,400);
  box1 = new Box(200,100);
  box1.translate(75,100);
  box1.rotate(radians(30));
  box1.scale(1.1);
  box2 = new Box(100,200);
  box2.translate(275,150);
  box2.rotate(radians(-5));
  box2.scale(.95);
}
void draw(){
  background(255);
  box1.update(mouseX,mouseY);
  box2.update(mouseX,mouseY);
  box1.draw();
  box2.draw();
}
class Box{
  PMatrix2D coordinates = new PMatrix2D();        // box coordinate system
  PMatrix2D reverseCoordinates = new PMatrix2D(); // inverted coordinate system
  PVector reversedTestPoint = new PVector();      // allocate reversed point as vector
  PVector testPoint = new PVector();              // allocate regular point as vector
  float w,h;                                      // box width and height
  boolean isHovered;

  Box(float w,float h){
    this.w = w;
    this.h = h;
  }
  // whenever we update the regular coordinate system, we update the reversed one too
  void updateReverseCoordinates(){
    reverseCoordinates = coordinates.get(); // clone the original coordinate system
    reverseCoordinates.invert();            // simply invert it
  }
  void translate(float x,float y){
    coordinates.translate(x,y);
    updateReverseCoordinates();
  }
  void rotate(float angle){
    coordinates.rotate(angle);
    updateReverseCoordinates();
  }
  void scale(float s){
    coordinates.scale(s);
    updateReverseCoordinates();
  }
  boolean isOver(float x,float y){
    reversedTestPoint.set(0,0); // reset the reverse test point
    testPoint.set(x,y);         // set the x,y coordinates we want to test
    // transform the passed x,y coordinates to the reversed coordinates using matrix multiplication
    reverseCoordinates.mult(testPoint,reversedTestPoint);
    // simply test the bounding box
    return ((reversedTestPoint.x >= 0 && reversedTestPoint.x <= w) &&
            (reversedTestPoint.y >= 0 && reversedTestPoint.y <= h));
  }
  void update(float x,float y){
    isHovered = isOver(x,y);
  }
  void draw(){
    if(isHovered) fill(127);
    else          fill(255);
    pushMatrix();
    applyMatrix(coordinates);
    rect(0,0,w,h);
    popMatrix();
  }
}
You're looking for the modelX() and modelY() functions. Just pass in mouseX and mouseY (z is 0) to find the position of the mouse in rotated space. Similarly, pass in the position of your rectangles to find their rotated points.
Here's the example from the reference:
void setup() {
  size(500, 500, P3D);
  noFill();
}

void draw() {
  background(0);
  pushMatrix();
  // start at the middle of the screen
  translate(width/2, height/2, -200);
  // some random rotation to make things interesting
  rotateY(1.0); //yrot);
  rotateZ(2.0); //zrot);
  // rotate in X a little more each frame
  rotateX(frameCount / 100.0);
  // offset from center
  translate(0, 150, 0);
  // draw a white box outline at (0, 0, 0)
  stroke(255);
  box(50);
  // the box was drawn at (0, 0, 0), store that location
  float x = modelX(0, 0, 0);
  float y = modelY(0, 0, 0);
  float z = modelZ(0, 0, 0);
  // clear out all the transformations
  popMatrix();
  // draw another box at the same (x, y, z) coordinate as the other
  pushMatrix();
  translate(x, y, z);
  stroke(255, 0, 0);
  box(50);
  popMatrix();
}

In Qt drawPoint method does not plot anything if negative valued parameters are supplied

In Qt Creator the drawPoint() method does not plot a point if negative-valued parameters are passed.
The following is code for Bresenham's circle algorithm, but it is not working in Qt Creator: it just plots the circle in one quadrant.
Bresenham::Bresenham(QWidget *parent) : QWidget(parent)
{
}

void Bresenham::paintEvent(QPaintEvent *e)
{
    Q_UNUSED(e);
    QPainter qp(this);
    drawPixel(&qp);
}

void Bresenham::drawPixel(QPainter *qp)
{
    QPen pen(Qt::red, 2, Qt::SolidLine);
    qp->setPen(pen);

    int x = 0, y, d, r = 100;
    y = r;
    d = 3 - 2 * r;
    do
    {
        qp->drawPoint(x, y);
        qp->drawPoint(y, x);
        qp->drawPoint(y, -x);
        qp->drawPoint(x, -y);
        qp->drawPoint(-x, -y);
        qp->drawPoint(-y, -x);
        qp->drawPoint(-x, y);
        qp->drawPoint(-y, x);
        if (d < 0)
        {
            d = d + 4 * x + 6;
        }
        else
        {
            d = d + (4 * x - 4 * y) + 10;
            y = y - 1;
        }
        x = x + 1;
    } while (x < y);
}
You need to translate the Qt coordinate system to the classic Cartesian one. Choose a new center QPoint orig and replace all
qp->drawPoint(x,y);
with
qp->drawPoint(orig + QPoint(x,y));
The Qt coordinate system's origin is at (0,0) and the y-axis is inverted. For instance, a segment from A(2,7) to B(6,1) looks like this:
Notice how there is only the positive-x, positive-y quadrant. For simplicity assume that no negative coordinates exist.
Note:
For performance reasons it is better to compute all the points first and then draw them all using
QPainter::drawPoints(const QPoint *points, int pointCount);
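A minimal sketch of that batching combined with the origin shift (my adaptation of the question's routine, not the original poster's code; orig is a chosen center such as the widget's midpoint):

void Bresenham::drawPixel(QPainter *qp)
{
    QPen pen(Qt::red, 2, Qt::SolidLine);
    qp->setPen(pen);

    const QPoint orig(width() / 2, height() / 2); // new origin: widget center
    QVector<QPoint> pts;
    int x = 0, y = 100, d = 3 - 2 * 100;          // r = 100
    while (x < y)
    {
        // collect all eight octant mirrors of (x, y) around orig
        pts << orig + QPoint( x,  y) << orig + QPoint( y,  x)
            << orig + QPoint( y, -x) << orig + QPoint( x, -y)
            << orig + QPoint(-x, -y) << orig + QPoint(-y, -x)
            << orig + QPoint(-x,  y) << orig + QPoint(-y,  x);
        if (d < 0) d += 4 * x + 6;
        else { d += 4 * (x - y) + 10; --y; }
        ++x;
    }
    qp->drawPoints(pts.constData(), pts.size());  // one batched draw call
}

The translated origin keeps every mirrored point inside the widget's visible, positive-coordinate area, and the single drawPoints call avoids per-point call overhead.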

OpenGL FPS Camera movement relative to lookAt target

I have a camera in OpenGL. I had no problem with it until adding an FPS controller. The basic FPS behavior is OK: the camera moves forward, backward, left and right, and rotates towards the direction supplied by mouse input. The problems begin when the camera moves to the sides or the back of the target position. In such a case the camera's local forward/backward/left/right directions aren't updated based on its current forward look, but remain the same as if it were right in front of the target. Example:
If the target object's position is at (0,0,0) and the camera's position is at (-50,0,0) (to the left of the target) and the camera is looking at the target, then to move it back and forth I have to use the keys for left and right movement, while the backward/forward keys move the camera sideways.
Here is the code I use to calculate camera position, rotation and LookAt matrix:
void LookAtTarget(const vec3 &eye, const vec3 &center, const vec3 &up)
{
    this->_eye = eye;
    this->_center = center;
    this->_up = up;
    this->_direction = normalize(center - eye);
    _viewMatrix = lookAt(eye, center, up);
    _transform.SetModel(_viewMatrix);
    UpdateViewFrustum();
}

void SetPosition(const vec3 &position)
{
    this->_eye = position;
    this->_center = position + _direction;
    LookAtTarget(_eye, _center, _up);
}

void SetRotation(float rz, float ry, float rx)
{
    _rotationMatrix = mat4(1);
    vec3 direction(0.0f, 0.0f, -1.0f);
    vec3 up(0.0f, 1.0f, 0.0f);
    _rotationMatrix = eulerAngleYXZ(ry, rx, rz);
    vec4 rotatedDir = _rotationMatrix * vec4(direction, 1);
    this->_center = this->_eye + vec3(rotatedDir);
    this->_up = vec3(_rotationMatrix * vec4(up, 1));
    LookAtTarget(_eye, _center, up);
}
Then in the render loop I set camera's transformations:
while (true)
{
    display();
    fps->print(GetElapsedTime());
    if (glfwGetKey(GLFW_KEY_ESC) || !glfwGetWindowParam(GLFW_OPENED))
    {
        break;
    }
    calculateCameraMovement();
    moveCamera();
    view->GetScene()->GetCurrentCam()->SetRotation(0, -camYRot, -camXRot);
    view->GetScene()->GetCurrentCam()->SetPosition(camXPos, camYPos, camZPos);
}
The lookAt() method comes from the GLM math lib.
I am pretty sure I have to multiply some of the vectors (eye, center, etc.) with the rotation matrix, but I am not sure which ones. I tried to multiply _viewMatrix by _rotationMatrix but it creates a mess. The code for the FPS camera position and rotation calculation is taken from here, but for the actual rendering I use the programmable pipeline.
Update:
I solved the issue by adding a separate method which doesn't calculate the camera matrix using lookAt, but rather uses the usual, basic approach:
void FpsMove(GLfloat x, GLfloat y, GLfloat z, float pitch, float yaw)
{
    _viewMatrix = rotate(mat4(1.0f), pitch, vec3(1, 0, 0));
    _viewMatrix = rotate(_viewMatrix, yaw, vec3(0, 1, 0));
    _viewMatrix = translate(_viewMatrix, vec3(-x, -y, -z));
    _transform.SetModel(_viewMatrix);
}
It solved the problem, but I still want to know how to make it work with the lookAt() method I presented here.
You need to change the forward direction of the camera, which is presumably fixed to (0,0,-1). You can do this by rotating the directions about the y axis by camYRot (as computed in the lookAt function) so that forward is in the same direction that the camera is pointing (in the plane made by the z and x axes).
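A minimal sketch of that correction with GLM (my illustration, not the asker's code: keyForward/keyBack/keyLeft/keyRight, speed, dt and camPos are hypothetical stand-ins, camYRot is assumed to be in the units your glm::rotate expects, and its sign may need flipping to match your conventions):

// Rotate the fixed forward axis about Y by the camera's yaw so that
// key presses move relative to where the camera is actually pointing.
mat4 yaw     = rotate(mat4(1.0f), -camYRot, vec3(0.0f, 1.0f, 0.0f));
vec3 forward = normalize(vec3(yaw * vec4(0.0f, 0.0f, -1.0f, 0.0f)));
vec3 right   = normalize(cross(forward, vec3(0.0f, 1.0f, 0.0f)));

if (keyForward) camPos += forward * speed * dt;
if (keyBack)    camPos -= forward * speed * dt;
if (keyRight)   camPos += right   * speed * dt;
if (keyLeft)    camPos -= right   * speed * dt;

The resulting camPos and yaw/pitch pair can then be fed to SetPosition()/SetRotation() as in the question's render loop.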
