A-Frame: Update entity rotation from controller rotation

I am updating the y rotation of an entity from my laser controller's y rotation. The problem is that the controller's y rotation is not added to the entity's existing y rotation. For example: I rotate the entity by clicking a button on my controller, and the entity is rotated like my controller. But I would like to keep that rotation of the entity and ADD my controller's rotation the next time I decide to rotate the entity.
The current behavior is implemented like this:
tick: function () {
    this.el.object3D.rotation.y = this.laser.object3D.rotation.y;
}
What I would like is:
this.el.object3D.rotation.y = this.el.object3D.rotation.y + this.laser.object3D.rotation.y;
And it should work in the other direction as well:
this.el.object3D.rotation.y = this.el.object3D.rotation.y - this.laser.object3D.rotation.y;

You have to keep track of the initial rotation of the entity, and then you can do:
this.el.object3D.rotation.y = initialRotation.y + this.laser.object3D.rotation.y;
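For instance, a minimal component sketch might look like the following. This is an assumption about how the pieces fit together, not code from the thread: the component name, the laser selector, and the 'grabstart' trigger event are placeholders for whatever starts a rotation in your app.
AFRAME.registerComponent('rotate-with-laser', {
    init: function () {
        this.laser = document.querySelector('[laser-controls]');
        this.entityStartY = 0;
        this.laserStartY = 0;
        // Snapshot both rotations whenever a new rotation gesture begins
        // ('grabstart' is a hypothetical event name).
        this.el.addEventListener('grabstart', () => {
            this.entityStartY = this.el.object3D.rotation.y;
            this.laserStartY = this.laser.object3D.rotation.y;
        });
    },
    tick: function () {
        // Apply the controller's rotation *delta* on top of the stored rotation,
        // so each gesture adds to (or subtracts from) the previous result.
        var delta = this.laser.object3D.rotation.y - this.laserStartY;
        this.el.object3D.rotation.y = this.entityStartY + delta;
    }
});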

Related

How to get current facing direction vector 2D?

So I am developing a 2D game in Godot and I want to draw a line from the center of my sprite to its facing position. I have the sprite rotating and moving along the rotation direction, but when I try to create a vector out of that, it's very wrong. For example, the vector line goes from the center of the sprite to near the (0,0) position on the screen.
public override void _Draw()
{
    Vector2 rotationDirection = new Vector2(Mathf.Cos(sprite.GlobalRotation), Mathf.Sin(sprite.GlobalRotation)) - sprite.GlobalPosition;
    DrawLine(sprite.GlobalPosition, rotationDirection, Colors.Red, 2f);
}
EDIT:
Fixed it; it works now.
public override void _Draw()
{
    DrawLine(sprite.GlobalPosition, sprite.GlobalPosition + new Vector2(Mathf.Cos(sprite.GlobalRotation), Mathf.Sin(sprite.GlobalRotation)) * 50f, Colors.Red, 2f);
}
A simpler solution would have been to add a Line2D child node. You can see the effect right away in the editor, and there's no need for any maths.
This might also be more efficient, because custom _draw functions don't play nice with Godot's geometry batching.

On mesh click, ArcRotateCamera focus on

I'm using an ArcRotateCamera; when I click on a mesh, I need to focus the camera on it.
var camera = new BABYLON.ArcRotateCamera("Camera", -Math.PI / 2, Math.PI / 2, 300, BABYLON.Vector3.Zero(), scene);
camera.setTarget(BABYLON.Vector3.Zero());
// on mesh click, focus in
var i = 2;
var pickInfo = scene.pick(scene.pointerX, scene.pointerY);
if (pickInfo.hit) {
    pickInfo.pickedMesh.actionManager = new BABYLON.ActionManager(scene);
    pickInfo.pickedMesh.actionManager.registerAction(
        new BABYLON.ExecuteCodeAction(BABYLON.ActionManager.OnPickTrigger,
            function (event) {
                camera.position = new BABYLON.Vector3(pickInfo.pickedPoint.x, pickInfo.pickedPoint.y, camera.position.z + i);
                i += 2;
            })
    );
}
This code changes the camera's z position, but it doesn't put the mesh in the center of the screen.
There are a few things that can be changed in your code.
1st - what you are doing is executing a code action after a click, instead of simply running the code in a callback after a pick has occurred. You are registering a pick action (technically a user click) right on the first frame, but only if the mouse was found in the right location at the right moment. My guess is that it didn't work every time (unless your scene is covered with meshes :-) )
2nd - you are changing the camera's position, instead of changing the position it is looking at. Changing the camera's position won't result in what you want (to see the selected mesh); it will move the camera to a new position while still focusing on the old target.
There are a few ways to solve this. The first is this:
scene.onPointerDown = function (evt, pickInfo) {
    if (pickInfo.hit) {
        camera.focusOn([pickInfo.pickedMesh], true);
    }
}
The ArcRotateCamera provides a focusOn function that focuses on a group of meshes while keeping the camera's orientation fixed. This is very helpful. You can see a demo here:
https://playground.babylonjs.com/#A1210C#51
Another solution would be to use the setTarget function:
https://playground.babylonjs.com/#A1210C#52
which works a bit differently (notice the orientation change of the camera).
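In code, the setTarget variant would look something like this (a minimal sketch along the lines of the playground above):
scene.onPointerDown = function (evt, pickInfo) {
    if (pickInfo.hit) {
        // Re-aim the camera at the picked mesh; unlike focusOn, the
        // camera's orientation changes to face the new target.
        camera.setTarget(pickInfo.pickedMesh.position);
    }
}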
Another thing - use the pointer events integrated into Babylon, as they save you the extra call to scene.pick. onPointerDown is executed with the pickInfo already passed into the function, so you can get the picking info of the current pointer down / up / move on each frame.
**** EDIT ****
After a new comment - since you want to animate the values, all you need to do is store the current values, calculate the new ones, and animate them using the internal animation system (documentation here - https://doc.babylonjs.com/babylon101/animations#basic-animation). There are many ways to achieve this; I took an old function and modernized it :-)
Here is the demo - https://playground.babylonjs.com/#A1210C#53
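As a rough illustration of the idea (not the playground's exact code), the built-in animation helper can tween the camera's target for you; the frame rate and duration below are arbitrary placeholder choices:
scene.onPointerDown = function (evt, pickInfo) {
    if (pickInfo.hit) {
        // Tween the camera's target from its current value to the picked
        // point over 30 frames at 60 fps using the built-in animation system.
        BABYLON.Animation.CreateAndStartAnimation('camTarget', camera, 'target',
            60, 30, camera.target.clone(), pickInfo.pickedPoint.clone());
    }
}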

Position laser-controls entity after camera rotation

I have the following code for the laser controls which are perfectly positioned when the camera looks straight ahead after entering VR mode.
<a-entity position="0.25 1.25 -0.2" class="laser-controls">
    <a-entity laser-controls="hand: right" line="color: red"></a-entity>
</a-entity>
The issue is: when I rotate my head (the camera), I would like the controls to follow my head rotation smoothly (I have some code which checks whether the rotation is greater than 110 degrees). I don't want the controllers to be part of the camera, since they should keep their own independent rotation. What I'd like is the behaviour of the controller model in Oculus Home (Gear VR).
How can I achieve this in my custom component, let's say in my tick function, which is called every two seconds (that code works already)?
Thanks!
How about using getAttribute() to check the rotation of the camera component and the laser control's entity? Then you could check if the difference exceeds 110 degrees or not:
let angle = laser.getAttribute('rotation');
if (camera.getAttribute('rotation').y - laser.getAttribute('rotation').y > 110) {
    angle.y++;
    laser.setAttribute('rotation', angle);
} else if (camera.getAttribute('rotation').y - laser.getAttribute('rotation').y < -110) {
    angle.y--;
    laser.setAttribute('rotation', angle);
}
UPDATE
If you want to position your controller near your head, you can:
1. Instead of angle.y++/--, change it to your camera's rotation. You can also change its x/y position to be close to the camera (like camera.position.x + 0.5).
2. But the above is instant. If you want to make it smooth, you could use the animation component: when the delta exceeds 110 degrees, set the animation attributes to move to the camera's location/rotation, emit a beginning event, disable the rotation check, listen for the animation end event, and re-enable the check. A bit like this:
init: function () {
    this.check = true;
    // Use an arrow function so 'this' still refers to the component inside
    // the listener; assigning to a plain local copy of the flag would not
    // update the component's state.
    animationel.addEventListener('animationend', () => {
        this.check = true;
    });
},
tick: function () {
    if (this.check) {
        if (rotationCheck()) {
            this.check = false;
        }
    }
}
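For the animation itself, a hedged sketch using A-Frame's animation component might look like this; the duration, easing, and the way the target rotation is read are all placeholder choices, and depending on your A-Frame version the end event is 'animationcomplete' (built-in component) or 'animationend' (the older <a-animation> element):
// Kick off the smoothing animation once the 110-degree check trips.
var camRot = camera.getAttribute('rotation');
laser.setAttribute('animation', {
    property: 'rotation',
    to: camRot.x + ' ' + camRot.y + ' ' + camRot.z,
    dur: 500,
    easing: 'easeOutQuad'
});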

Three.js - how to make objects with different positions look at Orthographic camera using vectors?

Three.js r76
I've started using an Orthographic camera instead of a Perspective one, and ran into some trouble.
I use stemkoski's shader-glow for point animation in the scene: he creates a sphere and then uses a shader for its transparency; I just added animation to it.
function animatePoints() {
    var alphas;
    var count;
    var time;
    let j = 0;
    while ( animatedPoints[j] ) {
        let threeGlow = animatedPoints[j];
        let finishAnimation = threeGlow.meta.state.finishAnimation;
        let itFinished = 0;
        let pointGlowMat = threeGlow.material;
        let pointGlowGeom = threeGlow.geometry;
        // ########## make glow look at camera
        pointGlowMat.uniforms.viewVector.value = new THREE.Vector3().subVectors( threeGlow.position, camera.position );
        alphas = pointGlowGeom.attributes.alpha;
        count = alphas.count;
        time = 0.001 * Date.now();
        if ( finishAnimation ) {
            ....
        } else {
            ....
        }
        alphas.needsUpdate = true;
        j++;
    }
}
The main goal is to make the glow look at the camera. When the camera was perspective, I used a solution that subtracts two vectors (camera position and glow position), so the glow appears to be looking at the camera.
pointGlowMat.uniforms.viewVector.value = new THREE.Vector3().subVectors( camera.position, threeGlow.position );
But now that I use an Orthographic camera, this solution doesn't work correctly.
The problem is that the glow should not look at the camera's position point; it should look at the camera's plane.
I made a scheme of the situation; have a look, it's very useful.
So for each new glow (their positions are of course different) I must get a new red vector to make each glow look at the ortho camera.
Any ideas?
What you need to do to get the red vector in your image is project the subtraction vector onto the vector representing the direction in which the camera is looking.
I think this is what you need:
pointGlowMat.uniforms.viewVector.value = new THREE.Vector3()
    .subVectors( camera.position, threeGlow.position )
    .projectOnVector( camera.getWorldDirection() );
One way to make an object "look at" an orthographic camera is by using this pattern:
object.quaternion.copy( camera.quaternion );
The object's view direction will be perpendicular to the camera's projection plane.
This approach is appropriate if neither the object nor the camera have a rotated parent.
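For instance, in a render loop this pattern would look something like the following (a sketch; 'object' stands for any mesh or sprite that should face the camera):
function render() {
    // Match the object's orientation to the camera's, so the object
    // faces the camera's projection plane rather than its position.
    object.quaternion.copy( camera.quaternion );
    renderer.render( scene, camera );
    requestAnimationFrame( render );
}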
three.js r.84

predicted rendering of css 3d-transformed pixel

I'm working in HTML, CSS, and JavaScript with canvas elements. Sometimes these canvases get CSS3 3D transformations applied to them (including zooms, rotations, and translations), and I need to predict the 2D rendered-in-the-browser position of my 3D-transformed canvas, and I'm having some issues with it.
The transformations are actually not being applied to the canvas itself; they're being applied to two container divs around the canvas:
<div class="viewport">
    <div class="container-outer">
        <div class="container-inner">
            <canvas></canvas>
        </div>
    </div>
</div>
The reason I have two containers is so I can apply two transformations at once. I have css transitions activated on the transforms, so I can, for example, do an x-translation on one container and a y-translation on the other container, and if they have separate easing functions, the resulting animation is a curve rather than a straight line of motion.
I am attempting to predict the position of the four corners of container-inner. The methods I'm using correctly predict the result of translations and zooms, but not rotations. Here's how it works.
I start with the known original position of a corner, say [0,0,0]. For the sake of the example, we'll say that my divs and my canvas are all 500px x 500px, and I have a perspective of 500px set on the viewport div, so my perspective origin is at [250,250,500].
I take my known corner position, convert it to a homogeneous 4x1 vector [0,0,0,1], and use matrixVectorMultiply() to multiply that by the matrix3d CSS matrix corresponding to the transformation being applied to container-inner. I take the resulting vector and use matrixVectorMultiply() again to multiply it by the matrix3d CSS matrix corresponding to the transformation being applied to container-outer.
matrixVectorMultiply = function(mx, vec)
{ // Given a 4x4 transformation matrix and an xyz vector, this will return the dot product (a new vector)
    if (typeof vec.z == 'undefined')
    {
        vec.z = 0;
    }
    return {
        x: vec.x*mx.array[0] + vec.y*mx.array[1] + vec.z*mx.array[2] + mx.array[3]
        ,y: vec.x*mx.array[4] + vec.y*mx.array[5] + vec.z*mx.array[6] + mx.array[7]
        ,z: vec.x*mx.array[8] + vec.y*mx.array[9] + vec.z*mx.array[10] + mx.array[11]
    };
};
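Worth noting: this function silently assumes the bottom row of the matrix is [0, 0, 0, 1]. That holds for pure rotations, translations, and scales, but not for a matrix carrying a perspective component. If any of your matrices might carry one, a full homogeneous multiply that tracks the w component would be needed; a sketch, following the same row-primary array convention as above:
matrixVectorMultiply4 = function(mx, vec)
{ // Full 4x4 multiply of [x, y, z, 1], keeping the homogeneous w component
    if (typeof vec.z == 'undefined')
    {
        vec.z = 0;
    }
    var a = mx.array;
    // Bottom row of the matrix produces w; divide through to get back to Cartesian coordinates
    var w = vec.x*a[12] + vec.y*a[13] + vec.z*a[14] + a[15];
    return {
        x: (vec.x*a[0] + vec.y*a[1] + vec.z*a[2] + a[3]) / w
        ,y: (vec.x*a[4] + vec.y*a[5] + vec.z*a[6] + a[7]) / w
        ,z: (vec.x*a[8] + vec.y*a[9] + vec.z*a[10] + a[11]) / w
    };
};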
'vec' is a simple object with x, y, and z attributes. 'mx' is a simple object with two attributes, 'array' containing a row-primary array representation of the matrix3d transformation array, and 'string' containing a column-primary string representation of that array, ready to be plugged into a CSS transform attribute.
Now I have a 3d coordinate that should be the new position of my corner after the outer and inner transforms have both been applied. But I need to project that 3d coordinate into my 2d viewing window, with the following function:
projection3d = function(pos, origin)
{ // Given an xyz point and an xyz perspective origin point, this will return the xy projected location
  // Using the equation found here: http://en.wikipedia.org/wiki/3D_projection#Diagram
    var pos2d = {x: null, y: null, z: null},
        relPos2d = {x: null, y: null},
        relPos = {x: null, y: null, z: null};
    // First, we take our given point and locate it relative to the perspective origin, rather than the screen
    relPos.x = pos.x - origin.x;
    relPos.y = pos.y - origin.y;
    relPos.z = pos.z - origin.z;
    // Then we take this object and project it onto our 2d plane
    relPos2d.x = relPos.x * (Math.abs(origin.z) / Math.abs(relPos.z));
    relPos2d.y = relPos.y * (Math.abs(origin.z) / Math.abs(relPos.z));
    // Then we take this and locate it relative to the screen again, instead of the perspective origin
    pos2d.x = relPos2d.x + origin.x;
    pos2d.y = relPos2d.y + origin.y;
    return pos2d;
};
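For reference, the whole prediction pipeline described above would be wired together roughly like this (a sketch using the question's 500px example numbers; 'innerMatrix' and 'outerMatrix' are hypothetical names for the parsed matrix3d objects of the two containers):
// Predict where the corner at [0,0,0] ends up after both container
// transforms, then project it through the viewport's perspective origin.
var corner = { x: 0, y: 0, z: 0 };
var afterInner = matrixVectorMultiply(innerMatrix, corner);
var afterOuter = matrixVectorMultiply(outerMatrix, afterInner);
var predicted = projection3d(afterOuter, { x: 250, y: 250, z: 500 });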
I take the result of that function and visually compare it with the actual result of the CSS 3D transformations in the browser, and find that translations and zooms are predicted correctly, but rotations are not, and I can't for the life of me figure out why.
Does anyone have any insight? Is there further information I should provide?
