I am working with motion capture data and I want to do "skinning" in Processing. Basically, for every two points I get from my data I have to place a 3D object between them (I'll be using a box for now; its placement and rotation coordinates refer to the center of the object) and rotate it so that it is aligned in all three dimensions with the vector that connects the two points.
Here we can see, on the left, the initially placed box between the two points and, on the right, the correctly rotated box:
The only way I know of to rotate an object in Processing is to use the rotateX(), rotateY() and rotateZ() functions, which rotate an object around the (global?) axes using Euler angles.
Now I am seriously struggling with finding a way to calculate this rotation properly.
I have already written a function for calculating the angle between two vectors:
float calcVectorAngle(PVector p1, PVector p2) {
return acos((p1.dot(p2)) / (mag(p1.x, p1.y, p1.z) * mag(p2.x, p2.y, p2.z)));
}
I then tried feeding it the vector between the two points, paired with a unit vector for each rotation axis:
float x = calcVectorAngle(vector, new PVector(1,0,0));
float y = calcVectorAngle(vector, new PVector(0,1,0));
float z = calcVectorAngle(vector, new PVector(0,0,1));
But when I use these values to rotate the object the rotation is completely off.
A code example:
PVector p1;
PVector p2;
PVector boxSize = new PVector(500, 100, 100);
void setup() {
size(1000,1000,P3D);
p1 = new PVector(100, 100, 0);
p2 = new PVector(900, 900, -1000);
}
void draw() {
background(125);
strokeWeight(2);
stroke(255, 0, 0);
pushMatrix();
line(p1.x, p1.y, p1.z, p2.x, p2.y, p2.z);
PVector midPoint = calcMidPoint(p1, p2);
translate(midPoint.x, midPoint.y, midPoint.z);
PVector rotation = calcRotation(p1, p2);
rotateX(rotation.x);
rotateY(rotation.y);
rotateZ(rotation.z);
box(boxSize.x, boxSize.y, boxSize.z);
popMatrix();
}
PVector calcMidPoint(PVector p1, PVector p2) {
return new PVector((p1.x + p2.x) / 2, (p1.y + p2.y) / 2, (p1.z + p2.z) / 2);
}
PVector calcRotation(PVector p1, PVector p2) {
PVector vector = PVector.sub(p2, p1);
float x = calcVectorAngle(vector, new PVector(1,0,0));
float y = calcVectorAngle(vector, new PVector(0,1,0));
float z = calcVectorAngle(vector, new PVector(0,0,1));
return new PVector(x, y, z);
}
float calcVectorAngle(PVector p1, PVector p2) {
return acos((p1.dot(p2)) / (mag(p1.x, p1.y, p1.z) * mag(p2.x, p2.y, p2.z)));
}
Now I am a little lost.
The axis of rotation is the cross product of the box's default direction (1, 0, 0) and the direction along the line (p2 - p1).
The angle of rotation is the arccosine (acos) of the dot product of the two normalized direction vectors:
PVector currentDirection = new PVector(1, 0, 0);
PVector newDirection = p2.copy().sub(p1).normalize();
PVector rotationAxis = currentDirection.cross(newDirection).normalize();
float rotationAngle = acos(currentDirection.dot(newDirection));
Rotate the box by the angle around the axis:
rotate(rotationAngle, rotationAxis.x, rotationAxis.y, rotationAxis.z);
Complete example:
PVector p1, p2;
PVector boxSize = new PVector(500, 100, 100);
void setup() {
size(1000,1000,P3D);
p1 = new PVector(100, 100, 0);
p2 = new PVector(900, 900, -1000);
}
void draw() {
background(125);
strokeWeight(2);
stroke(255, 0, 0);
line(p1.x, p1.y, p1.z, p2.x, p2.y, p2.z);
PVector midPoint = p1.copy().add(p2).mult(0.5);
PVector currentDirection = new PVector(1, 0, 0);
PVector newDirection = p2.copy().sub(p1).normalize();
PVector rotationAxis = currentDirection.cross(newDirection).normalize();
float rotationAngle = acos(currentDirection.dot(newDirection));
pushMatrix();
translate(midPoint.x, midPoint.y, midPoint.z);
rotate(rotationAngle, rotationAxis.x, rotationAxis.y, rotationAxis.z);
box(boxSize.x, boxSize.y, boxSize.z);
popMatrix();
}
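One thing to watch out for (my addition, not part of the answer above): if the segment happens to be parallel to the box's default +X direction, the cross product is a zero vector and normalizing it is undefined, and floating-point error can push the dot product slightly outside [-1, 1] so that acos() returns NaN. A guarded sketch of the same computation:
// Guarded axis-angle computation (sketch). Assumes the box's long axis is +X.
PVector currentDirection = new PVector(1, 0, 0);
PVector newDirection = PVector.sub(p2, p1).normalize();
PVector rotationAxis = currentDirection.cross(newDirection);
if (rotationAxis.mag() < 1e-6) {
  // Directions are parallel or anti-parallel: any axis perpendicular to X works.
  rotationAxis = new PVector(0, 0, 1);
} else {
  rotationAxis.normalize();
}
// Clamp to avoid NaN from floating-point error before acos().
float d = constrain(currentDirection.dot(newDirection), -1, 1);
float rotationAngle = acos(d);
rotate(rotationAngle, rotationAxis.x, rotationAxis.y, rotationAxis.z);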
I'm really struggling here and I can't get it right, without even knowing why.
I'm using p5.js in WEBGL mode. I want to compute the position of one point rotated on the 3 axes around the origin, in order to follow the translation and rotation given to the object through p5.js (translation and rotation on the X, Y and Z axes).
The thing is that drawing a sphere in 3D space with p5.js is done by translating and rotating, since the sphere is always created at the origin, and there is no internal model that gives you its 3D coordinates.
After hours of wandering through math above my level, I understood that rotation over 3 axes is not as simple as I thought, and I ended up using Quaternion.js. But I'm still not able to match the visual position of the sphere in the 3D world with the coordinates I compute from the original point on the 2D plane (150, 0, [0]).
For example, here the sphere is rotated on 3 axes. At the beginning the coordinates are good (if I ignore the fact that Z is negated), but at a certain point it gets completely out of sync and the computed position of the sphere seems completely unrelated:
I've been trying to solve this for hours with no result. What did I miss?
Here is my code:
//font for WEBGL
var robotoFont;
var dotId = 0;
var rotating = true;
var orbits = [];
var dotsData = [];
function preload() {
robotoFont = loadFont('./assets/Roboto-Regular.ttf');
}
function setup() {
createCanvas(windowWidth, windowHeight, WEBGL);
textFont(robotoFont);
background(0);
let orbit1 = new Orbit(0, 0, 0, 0.5, 0.5, 0.5);
orbit1.obj.push(new Dot(0, 0));
orbits.push(orbit1);
// let orbit2 = new Orbit(90, 45, 0);
// orbit2.obj.push(new Dot(0, 0));
// orbits.push(orbit2);
}
function draw() {
angleMode(DEGREES);
background(0);
orbitControl();
let len = 200;
fill('white');
stroke('white');
sphere(2);
stroke('red');
line(0, 0, 0, len, 0, 0);
text('x', len, 0)
stroke('green');
line(0, 0, 0, 0, len, 0);
text('y', 0, len)
push();
rotateX(90);
stroke('yellow');
line(0, 0, 0, 0, len, 0);
text('z', 0, len)
pop();
dotsData = [];
orbits.forEach(o => o.draw());
textSize(14);
push();
for (let i = 0; i < 2; i++) {
let yPos = -(windowHeight / 2) + 15;
for (let i = 0; i < dotsData.length; i++) {
let [id, pos, pos3d] = dotsData[i];
let [x1, y1, z1] = [pos[0].toFixed(0), pos[1].toFixed(0), pos[2].toFixed(0)];
let [x2, y2, z2] = [pos3d.x.toFixed(0), pos3d.y.toFixed(0), pos3d.z.toFixed(0)];
text(`${id}: (${x1}, ${y1}, ${z1}) -> (${x2}, ${y2}, ${z2})`, -windowWidth / 2 + 5, yPos);
yPos += 18;
}
rotateX(-90);
}
pop();
}
function mouseClicked() {
// controls.mousePressed();
}
function keyPressed() {
// controls.keyPressed(keyCode);
if (keyCode === 32) {
rotating = !rotating;
}
}
class Orbit {
constructor(x, y, z, xr, yr, zr) {
this.obj = [];
this.currentRot = [
x ? x : 0,
y ? y : 0,
z ? z : 0
]
this.rot = [
xr ? xr : 0,
yr ? yr : 0,
zr ? zr : 0
]
}
draw() {
push();
if (rotating) {
this.currentRot[0] += this.rot[0];
this.currentRot[1] += this.rot[1];
this.currentRot[2] += this.rot[2];
}
rotateY(this.currentRot[1]);
rotateX(this.currentRot[0]);
rotateZ(this.currentRot[2]);
noFill();
stroke('white');
ellipse(0, 0, 300, 300);
for (let i = 0; i < this.obj.length; i++) {
let o = this.obj[i];
o.draw();
dotsData.push([o.id, o.getPosition(), this.#get3DPos(o)]);
}
pop();
}
#get3DPos(o) {
let [x, y, z] = o.getPosition();
let w = 0;
let rotX = this.currentRot[0] * PI / 180;
let rotY = this.currentRot[1] * PI / 180;
let rotZ = this.currentRot[2] * PI / 180;
let rotation = Quaternion.fromEuler(rotZ, rotX, rotY, 'ZXY').conjugate();
[x, y, z] = rotation.rotateVector([x, y, z]);
return createVector(x, y, z);
}
}
class Dot {
constructor(angle) {
this.id = ++dotId;
this.x = cos(angle) * 150;
this.y = sin(angle) * 150;
}
draw() {
push();
fill('gray');
translate(this.x, this.y);
noStroke();
sphere(15);
pop();
}
getPosition() {
return [this.x, this.y, 0];
}
}
It doesn't run as a Stack Overflow snippet because I need a local asset (the font).
Here is the working code: https://editor.p5js.org/cigno5/sketches/_ZVq0kjJL
I've finally sorted it out. I can't really explain why it works this way, but I didn't need quaternions at all, and my first intuition of using matrix multiplications to apply the rotation on the 3 axes was correct.
What I missed at first (and what made my life miserable) is that matrix multiplication is not commutative. This means that applying the rotations in x, y, z order is not equivalent to applying the same rotation angles in z, y, x order.
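Just to make the non-commutativity concrete, here is a tiny check (my own illustration, written as a Processing/Java snippet rather than p5.js) showing the same point ending up in two different places depending on the rotation order:
// Compose a 90° X rotation followed by a 90° Y rotation, and vice versa.
PMatrix3D xThenY = new PMatrix3D();
xThenY.rotateX(HALF_PI);
xThenY.rotateY(HALF_PI);
PMatrix3D yThenX = new PMatrix3D();
yThenX.rotateY(HALF_PI);
yThenX.rotateX(HALF_PI);
PVector p = new PVector(150, 0, 0);
println(xThenY.mult(p, null)); // ends up on one axis...
println(yThenX.mult(p, null)); // ...while this ends up on a different axis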
The working solution has been achieved with 3 simple steps:
Replace the quaternion with matrix multiplications using vectors (method #rotate2)
Rotate the drawing plane in Z-Y-X order
Do the rotation math in X-Y-Z order
//font for WEBGL
var robotoFont;
var dotId = 0;
var rotating = true;
var orbits = [];
var dotsData = [];
function preload() {
robotoFont = loadFont('./assets/Roboto-Regular.ttf');
}
function setup() {
createCanvas(windowWidth, windowHeight, WEBGL);
textFont(robotoFont);
background(0);
let orbit1 = new Orbit(0, 0, 0, 0.5, 0.5, 0.5);
orbit1.obj.push(new Dot(0, 0.5));
orbits.push(orbit1);
// let orbit2 = new Orbit(90, 45, 0);
// orbit2.obj.push(new Dot(0, 0));
// orbits.push(orbit2);
}
function draw() {
angleMode(DEGREES);
background(0);
orbitControl();
let len = 200;
fill('white');
stroke('white');
sphere(2);
stroke('red');
line(0, 0, 0, len, 0, 0);
text('x', len, 0)
stroke('green');
line(0, 0, 0, 0, len, 0);
text('y', 0, len)
push();
rotateX(90);
stroke('yellow');
line(0, 0, 0, 0, len, 0);
text('z', 0, len)
pop();
dotsData = [];
orbits.forEach(o => o.draw());
textSize(14);
push();
for (let i = 0; i < 2; i++) {
let yPos = -(windowHeight / 2) + 15;
for (let i = 0; i < dotsData.length; i++) {
let [id, pos, pos3d] = dotsData[i];
let [x1, y1, z1] = [pos[0].toFixed(0), pos[1].toFixed(0), pos[2].toFixed(0)];
let [x2, y2, z2] = [pos3d.x.toFixed(0), pos3d.y.toFixed(0), pos3d.z.toFixed(0)];
text(`${id}: (${x1}, ${y1}, ${z1}) -> (${x2}, ${y2}, ${z2})`, -windowWidth / 2 + 5, yPos);
yPos += 18;
}
rotateX(-90);
}
pop();
}
function mouseClicked() {
// controls.mousePressed();
}
function keyPressed() {
// controls.keyPressed(keyCode);
if (keyCode === 32) {
rotating = !rotating;
}
}
class Orbit {
constructor(x, y, z, xr, yr, zr) {
this.obj = [];
this.currentRot = [
x ? x : 0,
y ? y : 0,
z ? z : 0
]
this.rot = [
xr ? xr : 0,
yr ? yr : 0,
zr ? zr : 0
]
}
draw() {
push();
if (rotating) {
this.currentRot[0] += this.rot[0];
this.currentRot[1] += this.rot[1];
this.currentRot[2] += this.rot[2];
}
rotateZ(this.currentRot[2]);
rotateY(this.currentRot[1]);
rotateX(this.currentRot[0]);
noFill();
stroke('white');
ellipse(0, 0, 300, 300);
for (let i = 0; i < this.obj.length; i++) {
let o = this.obj[i];
o.draw();
dotsData.push([o.id, o.getPosition(), this.#get3DPos(o)]);
}
pop();
}
#get3DPos(o) {
let [x, y, z] = o.getPosition();
let pos = createVector(x, y, z);
pos = this.#rotate2(pos, createVector(1, 0, 0), this.currentRot[0]);
pos = this.#rotate2(pos, createVector(0, 1, 0), this.currentRot[1]);
pos = this.#rotate2(pos, createVector(0, 0, 1), this.currentRot[2]);
return pos;
}
//https://stackoverflow.com/questions/67458592/how-would-i-rotate-a-vector-in-3d-space-p5-js
#rotate2(vect, axis, angle) {
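// This is Rodrigues' rotation formula:
//   v' = v*cos(a) + (axis × v)*sin(a) + axis*(axis · v)*(1 - cos(a))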
// Make sure our axis is a unit vector
axis = p5.Vector.normalize(axis);
return p5.Vector.add(
p5.Vector.mult(vect, cos(angle)),
p5.Vector.add(
p5.Vector.mult(
p5.Vector.cross(axis, vect),
sin(angle)
),
p5.Vector.mult(
p5.Vector.mult(
axis,
p5.Vector.dot(axis, vect)
),
(1 - cos(angle))
)
)
);
}
}
class Dot {
constructor(angle, speed) {
this.id = ++dotId;
this.angle = angle;
this.speed = speed
}
draw() {
this.angle += this.speed;
this.x = cos(this.angle) * 150;
this.y = sin(this.angle) * 150;
push();
fill('gray');
translate(this.x, this.y);
noStroke();
sphere(15);
pop();
}
getPosition() {
return [this.x, this.y, 0];
}
}
And now it works like a charm:
https://editor.p5js.org/cigno5/sketches/PqB9CEnBp
I would like to create a 3D demo application with JavaFX to visualize movements of points in 3D space, and as a first step I need to set up a coordinate grid for visual reference. Unfortunately, I was not able to find sample code for a grid like the one in this picture:
Does anyone know the most practical way to create something like it?
There are a few solutions out there already.
The FXyz3D library has a CubeWorld class that gives you precisely that kind of reference grid.
It is quite easy to use. Just add the 'org.fxyz3d:fxyz3d:0.3.0' dependency from JCenter and use it:
CubeWorld cubeWorld = new CubeWorld(5000, 500, true);
Sphere sphere = new Sphere(100);
sphere.setMaterial(new PhongMaterial(Color.FIREBRICK));
sphere.getTransforms().add(new Translate(100, 200, 300));
Scene scene = new Scene(new Group(cubeWorld, sphere), 800, 800, true, SceneAntialiasing.BALANCED);
As you can see, the solution is based on 2D rectangles for each face, while the grid lines are created with 3D cylinders. It has very nice features (such as its own lighting, or not showing the grid on the faces facing the camera), but it is quite node-intensive (the sample above has 168 nodes).
There are other solutions that use fewer nodes. For instance, for this sample, which also happens to be related to Leap Motion, I used a TriangleMesh.
This is an easy solution with just two meshes. However, you see triangles instead of squares.
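For reference, this is roughly what a single quad looks like with a plain TriangleMesh (a generic sketch, not the code from that sample): every quad has to be split into two triangles, which is exactly why the wireframe shows the diagonals.
// Imports assumed: javafx.scene.shape.TriangleMesh, MeshView, DrawMode, CullFace
private MeshView createTriangleQuad(float size) {
    float h = size / 2f;
    TriangleMesh mesh = new TriangleMesh();
    mesh.getPoints().addAll(
        -h, -h, 0,   // 0: top-left
         h, -h, 0,   // 1: top-right
        -h,  h, 0,   // 2: bottom-left
         h,  h, 0);  // 3: bottom-right
    mesh.getTexCoords().addAll(0, 0); // single dummy texture coordinate
    // Each face is p0,t0, p1,t1, p2,t2: two triangles per quad
    mesh.getFaces().addAll(
        0, 0, 2, 0, 1, 0,
        1, 0, 2, 0, 3, 0);
    MeshView view = new MeshView(mesh);
    view.setDrawMode(DrawMode.LINE); // wireframe, with the diagonal visible
    view.setCullFace(CullFace.NONE);
    return view;
}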
So let's try to get rid of the triangles. For that I'll use a PolygonMesh, as in this other question. The 3DViewer project available in the OpenJFX repository already contains a PolygonMesh implementation that allows any number of points per face, so any polygon can be a face.
This will give you a plane grid based on square faces:
private PolygonMesh createQuadrilateralMesh(float width, float height, int subDivX, int subDivY) {
final float minX = - width / 2f;
final float minY = - height / 2f;
final float maxX = width / 2f;
final float maxY = height / 2f;
final int pointSize = 3;
final int texCoordSize = 2;
// 4 point indices and 4 texCoord indices per face
final int faceSize = 8;
int numDivX = subDivX + 1;
int numVerts = (subDivY + 1) * numDivX;
float points[] = new float[numVerts * pointSize];
float texCoords[] = new float[numVerts * texCoordSize];
int faceCount = subDivX * subDivY;
int faces[][] = new int[faceCount][faceSize];
// Create points and texCoords
for (int y = 0; y <= subDivY; y++) {
float dy = (float) y / subDivY;
double fy = (1 - dy) * minY + dy * maxY;
for (int x = 0; x <= subDivX; x++) {
float dx = (float) x / subDivX;
double fx = (1 - dx) * minX + dx * maxX;
int index = y * numDivX * pointSize + (x * pointSize);
points[index] = (float) fx;
points[index + 1] = (float) fy;
points[index + 2] = 0.0f;
index = y * numDivX * texCoordSize + (x * texCoordSize);
texCoords[index] = dx;
texCoords[index + 1] = dy;
}
}
// Create faces
int index = 0;
for (int y = 0; y < subDivY; y++) {
for (int x = 0; x < subDivX; x++) {
int p00 = y * numDivX + x;
int p01 = p00 + 1;
int p10 = p00 + numDivX;
int p11 = p10 + 1;
int tc00 = y * numDivX + x;
int tc01 = tc00 + 1;
int tc10 = tc00 + numDivX;
int tc11 = tc10 + 1;
faces[index][0] = p00;
faces[index][1] = tc00;
faces[index][2] = p10;
faces[index][3] = tc10;
faces[index][4] = p11;
faces[index][5] = tc11;
faces[index][6] = p01;
faces[index++][7] = tc01;
}
}
int[] smooth = new int[faceCount];
PolygonMesh mesh = new PolygonMesh(points, texCoords, faces);
mesh.getFaceSmoothingGroups().addAll(smooth);
return mesh;
}
So you can use 2 or 3 of them to create a coordinate system like this:
public Group createGrid(float size, float delta) {
if (delta < 1) {
delta = 1;
}
final PolygonMesh plane = createQuadrilateralMesh(size, size, (int) (size / delta), (int) (size / delta));
final PolygonMesh plane2 = createQuadrilateralMesh(size, size, (int) (size / delta / 5), (int) (size / delta / 5));
PolygonMeshView meshViewXY = new PolygonMeshView(plane);
meshViewXY.setDrawMode(DrawMode.LINE);
meshViewXY.setCullFace(CullFace.NONE);
PolygonMeshView meshViewXZ = new PolygonMeshView(plane);
meshViewXZ.setDrawMode(DrawMode.LINE);
meshViewXZ.setCullFace(CullFace.NONE);
meshViewXZ.getTransforms().add(new Rotate(90, Rotate.X_AXIS));
PolygonMeshView meshViewYZ = new PolygonMeshView(plane);
meshViewYZ.setDrawMode(DrawMode.LINE);
meshViewYZ.setCullFace(CullFace.NONE);
meshViewYZ.getTransforms().add(new Rotate(90, Rotate.Y_AXIS));
PolygonMeshView meshViewXY2 = new PolygonMeshView(plane2);
meshViewXY2.setDrawMode(DrawMode.LINE);
meshViewXY2.setCullFace(CullFace.NONE);
meshViewXY2.getTransforms().add(new Translate(size / 1000f, size / 1000f, 0));
PolygonMeshView meshViewXZ2 = new PolygonMeshView(plane2);
meshViewXZ2.setDrawMode(DrawMode.LINE);
meshViewXZ2.setCullFace(CullFace.NONE);
meshViewXZ2.getTransforms().add(new Translate(size / 1000f, size / 1000f, 0));
meshViewXZ2.getTransforms().add(new Rotate(90, Rotate.X_AXIS));
PolygonMeshView meshViewYZ2 = new PolygonMeshView(plane2);
meshViewYZ2.setDrawMode(DrawMode.LINE);
meshViewYZ2.setCullFace(CullFace.NONE);
meshViewYZ2.getTransforms().add(new Translate(size / 1000f, size / 1000f, 0));
meshViewYZ2.getTransforms().add(new Rotate(90, Rotate.Y_AXIS));
return new Group(meshViewXY, meshViewXY2, meshViewXZ, meshViewXZ2 /*, meshViewYZ, meshViewYZ2 */);
}
Note that I've duplicated the plane to mock a wider stroke every 5 lines.
Finally adding axes:
public Group getAxes(double scale) {
Cylinder axisX = new Cylinder(1, 200);
axisX.getTransforms().addAll(new Rotate(90, Rotate.Z_AXIS), new Translate(0, -100, 0));
axisX.setMaterial(new PhongMaterial(Color.RED));
Cylinder axisY = new Cylinder(1, 200);
axisY.getTransforms().add(new Translate(0, 100, 0));
axisY.setMaterial(new PhongMaterial(Color.GREEN));
Cylinder axisZ = new Cylinder(1, 200);
axisZ.setMaterial(new PhongMaterial(Color.BLUE));
axisZ.getTransforms().addAll(new Rotate(90, Rotate.X_AXIS), new Translate(0, 100, 0));
Group group = new Group(axisX, axisY, axisZ);
group.getTransforms().add(new Scale(scale, scale, scale));
return group;
}
Now you have:
final Group axes = getAxes(0.5);
final Group grid = createGrid(200, 10);
final Sphere sphere = new Sphere(5);
sphere.getTransforms().add(new Translate(20, 15, 40));
Scene scene = new Scene(new Group(axes, grid, sphere), 800, 800, true, SceneAntialiasing.BALANCED);
The total number of nodes in this sample is 14.
Of course, it can be improved to add labels and many other features.
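If it helps, here is a minimal sketch of how the pieces above could be wired into a runnable JavaFX Application (assuming createQuadrilateralMesh, createGrid and getAxes live in the same class; the camera transforms below are my own illustrative values, not part of the answer):
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.PerspectiveCamera;
import javafx.scene.Scene;
import javafx.scene.SceneAntialiasing;
import javafx.scene.paint.Color;
import javafx.scene.shape.Sphere;
import javafx.scene.transform.Rotate;
import javafx.scene.transform.Translate;
import javafx.stage.Stage;

public class GridDemo extends Application {

    @Override
    public void start(Stage stage) {
        final Group axes = getAxes(0.5);
        final Group grid = createGrid(200, 10);
        final Sphere sphere = new Sphere(5);
        sphere.getTransforms().add(new Translate(20, 15, 40));

        Scene scene = new Scene(new Group(axes, grid, sphere), 800, 800, true, SceneAntialiasing.BALANCED);
        scene.setFill(Color.WHITESMOKE);

        // Pull the camera back and tilt it a little so all three planes are visible.
        PerspectiveCamera camera = new PerspectiveCamera(true);
        camera.setNearClip(0.1);
        camera.setFarClip(2000);
        camera.getTransforms().addAll(new Rotate(-20, Rotate.X_AXIS), new Translate(0, 0, -500));
        scene.setCamera(camera);

        stage.setScene(scene);
        stage.setTitle("Coordinate grid");
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }

    // createQuadrilateralMesh(...), createGrid(...) and getAxes(...) as defined above go here.
}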
I scaled the canvas, but the actual marker is not scaled in the Android map app. The following is the relevant code:
public void onCameraChange(CameraPosition position) {
float angle = position.bearing;
float tilt = position.tilt;
for (String key:canvases.keySet()) {
Float angle2 = angle;
Float markerAngle = Float.parseFloat(markerDirection.get(key));
if (angle==0){
angle2 = markerAngle;
}
else{
angle2 = (360 -angle) + markerAngle;
}
Bitmap x = canvases.get(key);
Bitmap bmResult = Bitmap.createBitmap(x.getHeight(), x.getWidth(), Bitmap.Config.ARGB_8888);
Canvas tempCanvas = new Canvas(bmResult);
tempCanvas.rotate(angle2 , x.getHeight(), x.getWidth());
tempCanvas.drawBitmap(x, 0, 0, null);
markers.get(key).setIcon(BitmapDescriptorFactory.fromBitmap(bmResult));
}
}
I would like to make an RGB wheel in Processing as a GUI to control the color of an RGB LED connected to an Arduino board.
This is the code I have in Processing so far:
float startFill;
float startAngle;
int step;
float stepLength;
float centerX;
float centerY;
float pSize;
float bValue;
void setup()
{
size(512, 512);
colorMode(HSB, 2*PI, 100, 100);
smooth();
}
void draw()
{
background(0,0,25);
ellipseMode(CENTER);
noStroke();
step = 120;
centerX = width/2;
centerY = height/2;
startFill = 0;
startAngle = 0;
stepLength = PI/step;
pSize = 400;
bValue = 200;
// draw arcs
for(int i=0; i< 2*step; i++)
{
for(int j=0; j< step; j++)
{
fill(startFill, bValue, 100,80);
stroke(0,0,95,20);
arc(centerX, centerY, pSize, pSize, startAngle, startAngle+stepLength);
bValue = bValue - 50/step;
pSize = pSize - 50/step;
}
startFill = startFill + stepLength;
startAngle = startAngle + stepLength;
}
}
I would like to map the Red, Green and Blue values from the mouse position on the screen over the wheel above.
I found a picture that would help me as a guide for reading the RGB values at the mouse position on the wheel, but I'm not sure how to do that.
RGB WHEEL PROCESSING
I would really appreciate any help or advice.
Best regards
Note that that color wheel is not actually a color wheel; it's just "the same color, going inward". The outer ring is your standard color mix: pure R at angle ..., pure G at angle ... + 2/3*pi, and pure B at angle ... + 4/3*pi. For picking purposes, construct a color wedge object and use that:
class ColorWedge {
color c;
float[] coords;
ColorWedge(color _c, float[] _coords) {
c = _c;
coords = _coords;
}
void draw() {
fill(c);
noStroke();
triangle(coords[0],coords[1],coords[2],coords[3],coords[4],coords[5]);
stroke(0);
line(coords[2],coords[3],coords[4],coords[5]);
}
}
Then construct wedges for "all" the colors by sweeping the full circle in small angular steps:
final float PI2 = 2*PI;
ArrayList<ColorWedge> wedges;
void setup() {
size(200,200);
colorMode(HSB,PI2);
wedges = new ArrayList<ColorWedge>();
float radius = 90,
ox = width/2,
oy = height/2,
px, py, nx, ny,
step = 0.01,
overlap = step*0.6;
for(float a=0; a<PI2; a+=step) {
px = ox + radius * cos(a-overlap);
py = oy + radius * sin(a-overlap);
nx = ox + radius * cos(a+overlap);
ny = oy + radius * sin(a+overlap);
wedges.add(new ColorWedge(color(a,PI2,PI2), new float[]{ox,oy,px,py,nx,ny}));
}
}
Controlling the color is then simply a matter of figuring out where the mouse is, and what its angle to the center of the sketch is:
color wcolor = 0;
void draw() {
background(PI2,0,PI2);
pushStyle();
for(ColorWedge w: wedges) { w.draw(); }
strokeWeight(10);
stroke(wcolor);
line(0,0,width,0);
line(width,0,width,height);
line(width,height,0,height);
line(0,height,0,0);
popStyle();
}
void mouseMoved() {
float angle = atan2(mouseY-height/2,mouseX-width/2);
if(angle<0) angle+=PI2;
ColorWedge wedge = wedges.get((int)map(angle,0,PI2,0,wedges.size()));
wcolor = wedge.c;
}
That should get you well on your way, if not 100% of the way there.
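If you then need actual 0-255 Red, Green and Blue values (for example to send to the Arduino), you can read the packed channels straight from the picked color; Processing stores colors as 0xAARRGGBB ints regardless of the active colorMode. The serial part below is only a sketch and assumes the board is on the first listed port:
// 8-bit channel extraction from the picked wedge color
int r = (wcolor >> 16) & 0xFF;
int g = (wcolor >> 8) & 0xFF;
int b = wcolor & 0xFF;
// Sending them to the Arduino could then look like (requires processing.serial):
//   import processing.serial.*;
//   Serial port = new Serial(this, Serial.list()[0], 9600);
//   port.write(r); port.write(g); port.write(b);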
I'm trying to embed an existing ArcBall implementation for JOGL into my own project. It compiles and runs, but it doesn't work: I can't manipulate the view.
I took the implementation (two classes) from here:
http://www.mdimension.com/page/Software?appNum=1
and followed the instructions for embedding it into my own project. Here's the class I'm using ArcBall in:
public class GLRenderer implements GLEventListener {
private static final int MAP_SIZE = 1024;
private static final int STEP_SIZE = 16;
private static final float HEIGHT_RATIO = 1.5f;
private float[][] temperatureMap = new float[MAP_SIZE][MAP_SIZE];
private float scaleValue = 0.15f;
private GLU glu = new GLU();
private ArcBall arcBall = new ArcBall();
public void init(GLAutoDrawable drawable) {
GL gl = drawable.getGL();
gl.glShadeModel(GL.GL_SMOOTH);
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
gl.glClearDepth(1.0f);
gl.glEnable(GL.GL_DEPTH_TEST);
gl.glDepthFunc(GL.GL_LEQUAL);
gl.glHint(GL.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_NICEST);
loadValuesToMap();
arcBall.registerDrawable(drawable);
}
public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) {
GL gl = drawable.getGL();
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL.GL_PROJECTION); // Select The Projection Matrix
gl.glLoadIdentity();
glu.gluPerspective(30,(float)width/(float)height,1.0f,650.0);
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glLoadIdentity();
arcBall.reshape(width, height);
}
public void display(GLAutoDrawable drawable) {
arcBall.displayUpdateRotations();
GL gl = drawable.getGL();
gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
gl.glClear(GL.GL_COLOR_BUFFER_BIT);
gl.glClear(GL.GL_DEPTH_BUFFER_BIT); //added
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glLoadIdentity();
setLight(gl);
positionCamera(glu, gl);
drawXYZ(gl);
arcBall.displayTransform(gl);
drawMap(glu, gl);
gl.glFlush();
}
public void setVertexColor(GL gl, int x, int y) {
float fColor = -0.15f + (temperatureMap[x][y] / 256.0f);
gl.glColor3f(0.0f, 0.0f, fColor);
}
public void drawMap(GLU glu, GL gl) {
int x, z;
float y;
gl.glBegin(GL.GL_QUADS);
for(int X = 0; X <(MAP_SIZE - STEP_SIZE); X += STEP_SIZE) {
for(int Y = 0; Y < (MAP_SIZE -STEP_SIZE); Y += STEP_SIZE) {
// Get The (X, Y, Z) Value For The Bottom Left Vertex
x = X;
y = temperatureMap[X][Y];
z = Y;
// Set The Color Value Of The Current Vertex
setVertexColor(gl, x, z);
gl.glVertex3f(x, y, z);
// Get The (X, Y, Z) Value For The Top Left Vertex
x = X;
y = temperatureMap[X][Y + STEP_SIZE];
z = Y + STEP_SIZE ;
// Set The Color Value Of The Current Vertex
setVertexColor(gl, x, z);
gl.glVertex3f(x, y, z); // Send This Vertex To OpenGL To Be Rendered
// Get The (X, Y, Z) Value For The Top Right Vertex
x = X + STEP_SIZE;
y = temperatureMap[X + STEP_SIZE][Y + STEP_SIZE];
z = Y + STEP_SIZE ;
// Set The Color Value Of The Current Vertex
setVertexColor(gl, x, z);
gl.glVertex3f(x, y, z); // Send This Vertex To OpenGL To Be Rendered
// Get The (X, Y, Z) Value For The Bottom Right Vertex
x = X + STEP_SIZE;
y = temperatureMap[X + STEP_SIZE][Y];
z = Y;
// Set The Color Value Of The Current Vertex
setVertexColor(gl, x, z);
gl.glVertex3f(x, y, z); // Send This Vertex To OpenGL To Be Rendered
}
}
gl.glEnd();
gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
gl.glTranslated(0.1, 0.1, -0.5);
gl.glColor3f(0.0f, 0.0f, 1.0f);
glu.gluSphere(glu.gluNewQuadric(), 0.05f, 32, 32);
gl.glTranslated(0.1, 0.1, -0.1);
gl.glColor3f(0.0f, 1.0f, 0.0f);
glu.gluSphere(glu.gluNewQuadric(), 0.05f, 32, 32);
gl.glTranslated(0.1, -0.1, 0.1);
gl.glColor3f(1.0f, 0.0f, 0.0f);
glu.gluSphere(glu.gluNewQuadric(), 0.05f, 32, 32);
}
public void positionCamera(GLU glu, GL gl) {
glu.gluPerspective(75.0f,1.09,0.1f,500.0f);
glu.gluLookAt(194, 80, 194,
131, 55, 131,
0, 1, 0);
gl.glScalef(scaleValue, scaleValue * HEIGHT_RATIO, scaleValue);
}
public void setLight(GL gl) {
// Prepare light parameters.
float SHINE_ALL_DIRECTIONS = 1;
float[] lightPos = {0, -30, 0, SHINE_ALL_DIRECTIONS};
float[] lightColorAmbient = {0.5f, 0.5f, 0.5f, 0.5f};
float[] diffuseLight = { 0.8f, 0.8f, 0.8f, 1.0f };
float[] lightColorSpecular = {0.5f, 0.5f, 0.5f, 0.5f};
// Set light parameters.
gl.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, lightPos, 1);
gl.glLightfv(GL.GL_LIGHT1, GL.GL_AMBIENT, lightColorAmbient, 0);
gl.glLightfv(GL.GL_LIGHT1, GL.GL_DIFFUSE, diffuseLight, 0);
gl.glLightfv(GL.GL_LIGHT1, GL.GL_SPECULAR, lightColorSpecular, 0);
// Enable lighting in GL.
gl.glEnable(GL.GL_LIGHT1);
gl.glEnable(GL.GL_LIGHTING);
// Set material properties.
gl.glEnable(GL.GL_COLOR_MATERIAL);
}
public void drawXYZ(GL gl) {
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glBegin(GL.GL_LINES);
gl.glColor3d(1.0, 0.0, 0.0); //red (x)
gl.glVertex3d(-0.1, 0.0, 0.0);
gl.glVertex3d(1500.0, 0.0, 0.0);
gl.glColor3d(0.0, 1.0, 0.0); //green (y)
gl.glVertex3d(0.0, -0.1, 0.0);
gl.glVertex3d(0.0, 1500.0, 0.0);
gl.glColor3d(0.0, 0.0, 1.0); //blue (z)
gl.glVertex3d(0.0, 0.0, 0.1);
gl.glVertex3d(0.0, 0.0, 1500.0);
gl.glEnd();
}
public void displayChanged(GLAutoDrawable drawable, boolean modeChanged, boolean deviceChanged) {
init(drawable);
}
private void loadValuesToMap() {
for(int i = 0; i < MAP_SIZE; i++) {
for(int j = 0; j< MAP_SIZE; j++) {
if(i > 300 && i < 700 && j > 300 && j < 700)
temperatureMap[i][j] = 150;
else
temperatureMap[i][j] = 100;
}
}
}
}
I'm new to OpenGL so it might be a stupid mistake. I'd appreciate any help though.
Thanks
The source code is not complete: where is your frame (an AWT Frame or a Swing JFrame)? Please look at the JOGL example on Wikipedia.
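For reference, a minimal setup along the lines of that Wikipedia example could look like this (a sketch only; the package names assume JOGL 1.x, which matches the GL/getGL() usage in the question, and GLRenderer is the class posted above):
import java.awt.Frame;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import javax.media.opengl.GLCanvas;
import com.sun.opengl.util.Animator;

public class Main {
    public static void main(String[] args) {
        GLCanvas canvas = new GLCanvas();
        canvas.addGLEventListener(new GLRenderer()); // the renderer from the question
        final Animator animator = new Animator(canvas); // drives display() continuously

        Frame frame = new Frame("ArcBall test");
        frame.add(canvas);
        frame.setSize(800, 600);
        frame.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                animator.stop();
                System.exit(0);
            }
        });
        frame.setVisible(true);
        animator.start();
    }
}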