In JavaFX I can create rectangles, circles, and other 2D geometric figures.
I can add them to a Group with:
group.getChildren().add(node);
I can also check for collisions:
Bounds hero = character.localToScene(character.getBoundsInLocal());
Bounds yewTree = yew.localToScene(yew.getBoundsInLocal());
if (yewTree.intersects(hero))
{
    // collision detected
}
...and perform an action when it happens.
There is another amazing thing: AnimationTimer and TranslateTransition. They not only let me do periodic/continuous animation, but also let me check state on every frame.
(I can run several small game loops without writing one major game loop, as they work independently.)
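For example, this is roughly what one of those small loops looks like (a minimal sketch reusing the character and yew nodes from above):
AnimationTimer collisionLoop = new AnimationTimer() {
    @Override
    public void handle(long now) {
        // runs once per frame, independently of any other loop
        Bounds hero = character.localToScene(character.getBoundsInLocal());
        Bounds yewTree = yew.localToScene(yew.getBoundsInLocal());
        if (yewTree.intersects(hero)) {
            // react to the collision
        }
    }
};
collisionLoop.start();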
AND NOW THE QUESTION:
Can I add 3D models (STL, OBJ, 3DS, ...) to the group and manipulate them (scale them, change their coordinates, check collisions, use them as nodes in an AnimationTimer or a TranslateTransition)?
I'm not sure about http://www.interactivemesh.org/. I expect there is some kind of alternative, as it is confusing and lacks documentation, and the few examples I found on the Internet do not work.
Related
I've been building something in A-Frame (using the current master build from GitHub as of 23rd March 2018) and have noticed that there are two sets of rotations that look identical but that I don't think should be.
A model looks the same with its rotation attribute "270 90 90" and "270 180 0".
Similarly - setting the rotation attribute to "270 270 90" and "270 0 0" look the same.
I created a little demo to show this here - http://marcamillian.com/VR/rotationIssue.html.
Is this a bug or am I misunderstanding something?
=== Further information ===
I came across this when trying to add rotation animations to a model starting from "270 90 0", along its roll axis and its yaw axis, and not getting the same motion for each.
After checking all of my functions I started setting the attribute on the model directly, and the model looked the same for different rotation values.
This is a pretty common problem in 3D computer graphics. Welcome to the gimbal lock problem! It is basically an issue you run into when trying to set rotations using "Euler" angles, especially when rotating in multiples of 90 degrees. If you are going to rotate like this try to only rotate by two axes (e.g. x and y) instead of all 3 (x,y,z).
More information here on gimbal lock: https://www.youtube.com/watch?v=zc8b2Jo7mno&feature=youtu.be
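You can check that those pairs really are the same orientation by converting both triples to quaternions and comparing them. A quick sketch (quatFromRotation is just a helper name I made up; as far as I know A-Frame applies its rotation attribute as degrees in 'YXZ' Euler order):
function quatFromRotation(x, y, z) {
    var euler = new THREE.Euler(
        THREE.Math.degToRad(x),
        THREE.Math.degToRad(y),
        THREE.Math.degToRad(z),
        'YXZ'
    );
    return new THREE.Quaternion().setFromEuler(euler);
}
var a = quatFromRotation(270, 90, 90);
var b = quatFromRotation(270, 180, 0);
console.log(Math.abs(a.dot(b))); // ~1.0 means the two orientations are identical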
An even better practice is to rotate using quaternions instead. I will also note that it is best practice to use components to make sure A-Frame is available to modify, or, less recommended, a "loaded" event listener on a-scene (see this SO question for more info: How to detect when a scene is loaded in A-Frame?):
// Listen for scene load so we know A-Frame and three.js are around to access.
// A snippet of your code, modified slightly for some direction...
document.querySelector('a-scene').addEventListener('loaded', function () {
    // A-Frame loaded; setButton and the input* elements come from your page
    const scene = document.querySelector('a-scene');
    const arrowElem = scene.querySelector(".shape-container");
    // Now set your click listener
    setButton.addEventListener('click', function () {
        let rotationValue = `${inputPitch.valueAsNumber} ${inputYaw.valueAsNumber} ${inputRoll.valueAsNumber}`;
        // arrowElem.setAttribute('rotation', rotationValue); // your way
        // This doesn't work well either, since it is still Euler angles:
        // arrowElem.object3D.rotation.set(
        //     THREE.Math.degToRad(inputPitch.valueAsNumber),
        //     THREE.Math.degToRad(inputYaw.valueAsNumber),
        //     THREE.Math.degToRad(inputRoll.valueAsNumber)
        // );
        // Consider how you can use quaternions instead:
        let quaternion = new THREE.Quaternion();
        quaternion.setFromAxisAngle(new THREE.Vector3(0, 1, 0), Math.PI / 2); // might have to change your logic to better use this functionality...
        arrowElem.object3D.quaternion.copy(quaternion); // apply it to the entity's underlying Object3D
    });
});
For a university project I need to implement a computer graphics paper that was released a couple of years ago. At one point, I need to triangulate the results I get from my simulation. I guess it's easier to explain what I need by looking at a picture contained within the paper:
Let's say I have already got all the information it takes to reconstruct the contour lines that you can see in the second thumbnail. Using those, I need to do a triangulation that uses those silhouettes as constraints. I have searched the internet for triangulation libraries like CGAL, VTK, Triangle, Triangle++, ... but I always ended up throwing my hands up in horror. I am not a good programmer and it seems impossible to me to get into one of those APIs before the deadline of this project passes.
I would appreciate any kind of help like code snippets, tips, etc.
I know that the algorithms need segments (pairs of points) as input, so let's say I have got one std::vector containing all pairs of points defining the silhouette as well as the left and right side of the rectangle.
Can you somehow give me a code snippet for e.g. CGAL that I could use for my purpose? First of all I just want to achieve the state of the third thumbnail, as sketched below. Later on I will have to do some displacement within the "cracks" and finally write the information into a VBO for OpenGL rendering.
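To make the starting point concrete, here is a minimal sketch of the kind of CGAL usage I have in mind, based on the constrained-triangulation example linked below (the segment coordinates here are made up):
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <vector>
#include <utility>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Constrained_Delaunay_triangulation_2<K> CDT;
typedef CDT::Point Point;

int main()
{
    // Hypothetical input: each pair is one silhouette or rectangle segment.
    std::vector<std::pair<Point, Point> > segments;
    segments.push_back(std::make_pair(Point(0, 0), Point(6, 0)));
    segments.push_back(std::make_pair(Point(6, 0), Point(6, 6)));

    CDT cdt;
    for (std::size_t i = 0; i < segments.size(); ++i)
        cdt.insert_constraint(segments[i].first, segments[i].second);

    // The triangles are now available via cdt.finite_faces_begin()/end().
    return 0;
}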
I have started working it out with CGAL. One simple problem still drives me crazy:
It is possible to attach information (like ints) to points before adding them to the triangulation object. I do this since I need, on the one hand, an int flag that I use later on to define my texture coordinates, and on the other hand an index which I use to create an indexed VBO.
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2info_insert_with_pair_iterator_2_8cpp-example.html
But instead of points I only want to insert constraint edges. If I insert both, CGAL returns strange results, since the points have been fed in twice (once as a point and once as an endpoint of a constrained edge).
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2constrained_8cpp-example.html
Is it possible to attach information to "constraints" in the same way as with points, so that I only need to call cdt.insert_constraint( Point(j,0), Point(j,6) ); before I iterate over the resulting faces?
Later on, when I loop over the triangles, I need some way to access the int flags that I defined before. Like this, but not on actual points but on the "ends" defined by the constraint edges:
for (CDT::Finite_faces_iterator fit = m_cdt.finite_faces_begin(); fit != m_cdt.finite_faces_end(); ++fit, ++k) {
    int j = k * 3;
    for (int i = 0; i < 3; i++) {
        indices[j + i] = fit->vertex(i)->info().first;
    }
}
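One workaround I am considering (not sure it is the idiomatic way, so treat this as a sketch): insert each endpoint myself first (cdt.insert() returns the existing vertex handle if the point is already in the triangulation, so nothing gets duplicated), attach my info to the returned vertex handle, and then use the insert_constraint overload that takes two vertex handles instead of two points. Assuming the vertex base is Triangulation_vertex_base_with_info_2<std::pair<int,int>, K> as in the first linked example:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <CGAL/Triangulation_vertex_base_with_info_2.h>
#include <utility>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Triangulation_vertex_base_with_info_2<std::pair<int,int>, K> Vb;
typedef CGAL::Constrained_triangulation_face_base_2<K> Fb;
typedef CGAL::Triangulation_data_structure_2<Vb, Fb> Tds;
typedef CGAL::Constrained_Delaunay_triangulation_2<K, Tds> CDT;
typedef CDT::Point Point;
typedef CDT::Vertex_handle Vertex_handle;

// Insert one constraint edge and attach (flag, index) info to both endpoints.
void insert_constraint_with_info(CDT& cdt, const Point& a, const Point& b,
                                 std::pair<int,int> infoA, std::pair<int,int> infoB)
{
    Vertex_handle va = cdt.insert(a); // returns the existing vertex if a is already there
    va->info() = infoA;
    Vertex_handle vb = cdt.insert(b);
    vb->info() = infoB;
    cdt.insert_constraint(va, vb);    // overload taking vertex handles, no re-insertion
}
With that, the loop above should work unchanged, since fit->vertex(i)->info() carries the pair stored for each constraint endpoint.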
I am working on a 3D mesh manipulator using this: http://leapmotion.com. So far, I have been able to manipulate the points just fine, by 'grabbing' and moving them; however, I now want to be able to rotate the mesh and work on the opposite face. What I have done is add an extra object called 'rotatable', as shown below:
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 1, 8000);
renderer = new THREE.WebGLRenderer({ clearColor: 0x000000, clearAlpha: 1, maxLights: 5 });
// This is the 'Mesh Scene'
rotatable = new THREE.Object3D();
scene.add(rotatable);
// Mesh we are altering
var material = new THREE.MeshNormalMaterial();
material.side = 2;
var geom = new THREE.SphereGeometry(200, 10, 10);
var sphere = new THREE.Mesh(geom, material);
rotatable.add(sphere);
I am then trying to change the vertices of this sphere, but to do so I need to do a 'collision test' to see if a vertex is being 'grabbed'. This involves checking the vertex position to see if it coincides with one of your finger positions (pseudocode below):
if (finger.x == vertex.x && finger.y == vertex.y && finger.z == vertex.z) {
    vertex.grabbed = true;
}
This works fine when the rotatable's rotation is zero; however, when it starts to rotate, the collision test will still be testing against the unrotated vertex position (which makes sense). My question is how to find the vertex's 'scene/global' position. The only way I can think of doing this so far is to take the rotation of the 'rotatable' and use that vector to calculate the new vertex position.
I know nothing about math, so this may not be the way to go, and even if it is, I will have to struggle through it so hard that I won't ever know whether I'm just doing the math incorrectly or whether this isn't the way I should go about calculating it. Obviously I'm willing to go through this work, but I just want to make sure this is the way to do it, rather than some other, simpler method.
If there are any other questions about the code, please let me know, and thanks in advance for your time!
Isaac
To get the world position of a vertex specified in local coordinates, apply the object's world transform to the vertex like so:
vertex.applyMatrix4( object.matrixWorld );
(I am not familiar with leapmotion, so hopefully it does not impact this answer.)
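For example, a minimal sketch of how the grab test could use it (GRAB_RADIUS is a made-up threshold, and fingerPosition is assumed to be a THREE.Vector3; comparing floats with == will almost never match once the mesh has been transformed):
rotatable.updateMatrixWorld( true ); // make sure matrixWorld reflects the latest rotation
var worldVertex = vertex.clone().applyMatrix4( sphere.matrixWorld ); // clone() so the local vertex stays untouched
if ( worldVertex.distanceTo( fingerPosition ) < GRAB_RADIUS ) {
    vertex.grabbed = true;
}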
Tip: maxLights is no longer required. And it is best to avoid material.side = 2. Use material.side = THREE.DoubleSide instead.
You can find the constants here: https://github.com/mrdoob/three.js/blob/master/src/Three.js
three.js r.55
So basically, when I try to draw a mesh from an FBX file, its orientation is always removed and it's scaled down. I'm not sure if the issue is caused by my code or by the way I'm exporting the FBX files. I have been trying to narrow down the cause, and I am fairly sure it's not caused by the way I export the FBX (but I could be wrong), so it's either the XNA content pipeline or my drawing code.
Here are some pics I took to show my problem, where the gray background is 3ds Max as I see it and the red background is XNA:
This is how it appears in 3D Studio Max: http://i.stack.imgur.com/e0oW4.png
This is how it appears in XNA: http://i.stack.imgur.com/1vOcx.png
Both are being viewed from the same angle and direction, but at varying distances.
Now what is really odd is that if I create another mesh in Max, say a box, and export it (along with the original model), it works fine: http://i.stack.imgur.com/SIDg9.png
As long as there is more than one mesh in the FBX model, it draws properly (though I'm still suspicious about whether the scaling is right: if it is 1 unit long in Max, in XNA it becomes something like 1.27 units long). If there is only one mesh, the orientation I applied to it in 3D Studio Max is removed when I draw it.
This is how I draw the model:
model.CopyAbsoluteBoneTransformsTo(boneTransforms);
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.World = boneTransforms[mesh.ParentBone.Index];
        Vector3 cameraPosition = Camera.Get.Position; // new Vector3(0, 0, 0);
        //cameraPosition.X = -Camera.Get.PosX;
        //cameraPosition.Y = Camera.Get.PosY;
        effect.View = Camera.Get.View; // Matrix.CreateLookAt(cameraPosition, cameraPosition + Camera.Get.LookDir, Camera.Get.Up);
        effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
            BaseGame.Get.GraphicsDevice.Viewport.AspectRatio,
            0.01f, 1000000); // Matrix.CreateOrthographic(800 / 1, 480 / 1, 0, 1000000);
        //effect.TextureEnabled = true;
        effect.LightingEnabled = true;
        effect.PreferPerPixelLighting = true;
        //effect.SpecularColor = new Vector3(1, 0, 0);
    }
    mesh.Draw();
}
Obviously mesh.Draw() is called once per mesh when there is more than one mesh in the FBX file.
Generally if you are having a problem with the position or scale of the mesh while rendering, then it's likely to be related to the matrices. Not necessarily the exporting, but rather how you use them in the code.
I use Blender for modelling, but I know that Blender actually defines different spaces when you are creating meshes within the editor. For example, if you create a mesh while in 'object' mode, the position/rotation/scale of the object in the scene will not be exported (because that object will be the root of a new tree, centered around 0,0,0). So I would check for a similar situation in 3ds Max: make sure you are transforming the vertices in Max relative to 0,0,0, or else you may lose the 'initial' translation, and when you render in XNA all the objects will be rendered around your 0,0,0 (i.e. appear mixed together).
Failing that, and I can't remember exactly off the top of my head, but I think you may need to multiply the current mesh's absolute matrix transform by the parent's world matrix transform. It's been a while, though, so I'm not too sure.
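If that is the case, a rough sketch of what I mean, applied to your draw loop (worldMatrix, modelScale and modelPosition are hypothetical values you would supply):
// Keep the exported orientation from the bone transform and apply the
// model's own world transform on top of it.
Matrix worldMatrix = Matrix.CreateScale(modelScale) * Matrix.CreateTranslation(modelPosition);
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.World = boneTransforms[mesh.ParentBone.Index] * worldMatrix;
    }
    mesh.Draw();
}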
I've got a small ActionScript chart that's meant to be live-updating and also able to support more than 10,000 data points. The way it's currently set up, it doesn't need to redraw the whole chart if the new line we wish to add doesn't extend the chart's boundaries.
Yet it does: the redraw regions show the whole chart being redrawn, as opposed to just the single line I need to add.
When the chart gets a new piece of data from the JavaScript, it does the following (some stuff has been stripped for clarity):
private function registerJSCallbacks() : void
{
    ExternalInterface.addCallback( "addData", addData );
}
private function addData( val : * ) : void
{
    trace( "addData", val );
    var g : Graphics = this.graphics;
    // x1,y1 / x2,y2 are derived from val (stripped for clarity)
    g.moveTo( x1, y1 );
    g.lineTo( x2, y2 );
}
Is there a better way to do this that won't redraw the whole screen? Is my coding pattern wrong for this type of update?
I'm a novice so even vaguely relevant advice would be appreciated.
You're editing the underlying vector, so it has to redraw the whole thing. You've got a couple of options:
(easiest): Spawn a new Sprite after every X draw operations, so that each draw only recalculates a few vector lines.
(more involved): Use one Sprite to draw, and every X draw operations, write the graphics in the sprite to a backing BitmapData object (using bitmapData.draw) and clear the sprite; see the sketch below.
Option 2 probably performs better than option 1, but I haven't benchmarked this specific scenario. You might get comparable performance if you set sprite.cacheAsBitmap = true on each Sprite in option 1 as you move to a new "active" sprite.
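A rough sketch of option 2, assuming the chart is a Sprite, a Bitmap wrapping the backing BitmapData sits behind it on the display list, and chartWidth/chartHeight and FLUSH_EVERY are hypothetical values you would tune:
private var backing : BitmapData = new BitmapData( chartWidth, chartHeight, true, 0 );
private var drawCount : int = 0;
private const FLUSH_EVERY : int = 100; // hypothetical threshold; benchmark it

private function addData( val : * ) : void
{
    var g : Graphics = this.graphics;
    g.lineStyle( 1, 0x33AAFF ); // whatever line style the chart uses
    g.moveTo( x1, y1 );
    g.lineTo( x2, y2 );
    if ( ++drawCount >= FLUSH_EVERY )
    {
        backing.draw( this ); // bake the accumulated vector lines into the bitmap
        g.clear();            // the vector layer is now cheap to redraw
        drawCount = 0;
    }
}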