Predicted rendering of CSS 3D-transformed pixel

I'm working in HTML, CSS, and JavaScript with canvas elements. Sometimes these canvases get CSS3 3D transformations applied to them (including zooms, rotations, and translations), and I need to predict the 2D position at which the browser will render my 3D-transformed canvas, but I'm having some issues with it.
The transformations are actually not being applied to the canvas itself; they're being applied to two container divs around the canvas:
<div class="viewport">
<div class="container-outer">
<div class="container-inner">
<canvas></canvas>
</div>
</div>
</div>
The reason I have two containers is so I can apply two transformations at once. I have css transitions activated on the transforms, so I can, for example, do an x-translation on one container and a y-translation on the other container, and if they have separate easing functions, the resulting animation is a curve rather than a straight line of motion.
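To sketch the idea (the element lookups and values here are just illustrative, not my actual code):
var outer = document.querySelector('.container-outer'),
    inner = document.querySelector('.container-inner');
// Different easing per container: the combined motion traces a curve
outer.style.transition = 'transform 1s ease-in';
inner.style.transition = 'transform 1s ease-out';
outer.style.transform = 'translateX(200px)'; // x handled by the outer div
inner.style.transform = 'translateY(200px)'; // y handled by the inner div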
I am attempting to predict the position of the four corners of container-inner. The methods I'm using correctly predict the result of translations and zooms, but not rotations. Here's how it works. I start with the known original position of a corner, say [0,0,0]. For the sake of the example, we'll say that my divs and my canvas are all 500px x 500px, and I have a perspective of 500px set on the viewport div, so my perspective origin is at [250,250,500]. I take my known corner position, convert it to a homogeneous 4x1 column vector [0,0,0,1], and use matrixVectorMultiply() to multiply that by the matrix3d CSS matrix corresponding to the transformation being applied to container-inner. I then take the resulting vector and use matrixVectorMultiply() again to multiply it by the matrix3d CSS matrix corresponding to the transformation being applied to container-outer.
matrixVectorMultiply = function (mx, vec)
{ // Given a 4x4 transformation matrix and an xyz vector, this returns the transformed vector (a matrix-vector product)
    if (typeof vec.z == 'undefined')
    {
        vec.z = 0;
    }
    return {
        x: vec.x * mx.array[0] + vec.y * mx.array[1] + vec.z * mx.array[2]  + mx.array[3]
       ,y: vec.x * mx.array[4] + vec.y * mx.array[5] + vec.z * mx.array[6]  + mx.array[7]
       ,z: vec.x * mx.array[8] + vec.y * mx.array[9] + vec.z * mx.array[10] + mx.array[11]
    };
};
'vec' is a simple object with x, y, and z attributes. 'mx' is a simple object with two attributes: 'array', containing a row-major array representation of the matrix3d transformation, and 'string', containing a column-major string representation of that array, ready to be plugged into a CSS transform property.
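For illustration, a hypothetical helper that builds such an 'mx' object from a row-major array (since the CSS matrix3d() string is column-major, the string part is just the transpose):
var makeMx = function (rowMajor)
{ // rowMajor is a flat array of 16 values, read row by row
    var colOrder = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15],
        cols = colOrder.map(function (i) { return rowMajor[i]; });
    return {
        array: rowMajor
       ,string: 'matrix3d(' + cols.join(',') + ')'
    };
};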
Now I have a 3d coordinate that should be the new position of my corner, after the outer and inner transitions have both been performed. But I need to project that 3d coordinate into my 2d viewing window, with the following function:
projection3d = function (pos, origin)
{ // Given an xyz point and an xyz perspective origin point, this will return the xy projected location
  // Using the equation found here: http://en.wikipedia.org/wiki/3D_projection#Diagram
    var pos2d = {x: null, y: null, z: null},
        relPos2d = {x: null, y: null},
        relPos = {x: null, y: null, z: null};
    // First, we take our given point and locate it relative to the perspective origin, rather than the screen
    relPos.x = pos.x - origin.x;
    relPos.y = pos.y - origin.y;
    relPos.z = pos.z - origin.z;
    // Then we take this object and project it onto our 2d plane
    relPos2d.x = relPos.x * (Math.abs(origin.z) / Math.abs(relPos.z));
    relPos2d.y = relPos.y * (Math.abs(origin.z) / Math.abs(relPos.z));
    // Then we take this and locate it relative to the screen again, instead of the perspective origin
    pos2d.x = relPos2d.x + origin.x;
    pos2d.y = relPos2d.y + origin.y;
    return pos2d;
};
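Putting it together, the whole prediction pipeline looks roughly like this (innerMx and outerMx being the mx objects for the two containers, and using the 500px example numbers from above):
var predictCorner = function (corner, innerMx, outerMx)
{ // corner is an {x, y, z} point in untransformed page space
    var origin = {x: 250, y: 250, z: 500}, // perspective origin for the 500px example
        p = matrixVectorMultiply(innerMx, corner); // container-inner transform first
    p = matrixVectorMultiply(outerMx, p);          // then container-outer
    return projection3d(p, origin);                // project into the 2d viewport
};
// e.g. predictCorner({x: 0, y: 0, z: 0}, innerMx, outerMx);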
I take the result of that function and visually compare it with the actual result of the css 3d transformations in the browser, and find that translations and zooms are being predicted correctly, but not rotations, and I can't for the life of me figure out why not.
Does anyone have any insight? Is there further information I should provide?


compute mouse position within video with object-fit:contain

I am trying to convert a mouse event to pixel coordinates within a video. By pixel coordinates, I mean coordinates relative to the original video resolution.
My video element has object-fit: contain, which means that the top-left corner of the video content is not necessarily located at position (0,0) of the element.
If I click on the top-left corner of the white section in this video then I want to get (0,0), but in order to do this I need to discover the offset of the video content (white area) relative to the video element (black border).
How can I recover this offset?
I am already aware of width, height, videoWidth, and videoHeight, but these only let me account for the scaling, not the offset.
The offset can be deduced. I think this kind of code should do the trick:
if (videoHeight / height > videoWidth / width) {
    // the video fills the element's height, leaving bars on the left and right
    scale = videoHeight / height; // video pixels per element pixel
    offsetX = (videoWidth - width * scale) / 2;
    offsetY = 0;
} else {
    // the video fills the element's width, leaving bars on the top and bottom
    scale = videoWidth / width;
    offsetY = (videoHeight - height * scale) / 2;
    offsetX = 0;
}
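Applying it to a mouse event would then look something like this (a sketch, assuming the click coordinates are taken relative to the video element):
video.addEventListener('click', function (e) {
    // element coordinates -> original video pixel coordinates
    var pixelX = e.offsetX * scale + offsetX;
    var pixelY = e.offsetY * scale + offsetY;
    // values outside [0, videoWidth) x [0, videoHeight) fall on the letterbox bars
    console.log(pixelX, pixelY);
});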
I was also interested in getting the actual pixel positions from mouse or touch events when using object-fit, and this is the only result I found when searching. Although I suspect it is probably too late to be helpful to you, I thought I'd answer in case anybody else comes across this in future like I did.
Because I'm working on code with other people, I needed a robust solution that would work even if someone changed or removed the object-fit or object-position in the CSS.
The approach that I took was:
1. Implement the cover, contain, etc. algorithms myself: just functions doing math, not dependent on the DOM.
2. Use getComputedStyle to get the element's objectFit and objectPosition properties.
3. Use .getBoundingClientRect() to get the DOM pixel size of the element.
4. Pass the element's current objectFit, objectPosition, its DOM pixel size, and its natural pixel size to my function to figure out where the fitted rectangle sits within the element.
You then have enough information to transform the event point to a pixel location.
There's more code than would comfortably fit here, but getting the size of the fitted rectangle for cover or contain is something like:
if ( fitMode === 'cover' || fitMode === 'contain' ) {
  const wr = parent.width / child.width
  const hr = parent.height / child.height
  // cover fills the parent (may crop); contain fits inside it (may letterbox)
  const ratio = fitMode === 'cover' ? Math.max( wr, hr ) : Math.min( wr, hr )
  const width = child.width * ratio
  const height = child.height * ratio
  const size = { width, height }
  return size
}
// handle other object-fit modes here
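As a rough usage sketch (not the published code; fitSize stands for a function wrapping the snippet above, and this assumes the default object-position of 50% 50%):
const clientToPixel = ( e, video ) => {
  const rect = video.getBoundingClientRect()
  const parent = { width: rect.width, height: rect.height }
  const child = { width: video.videoWidth, height: video.videoHeight }
  const fitted = fitSize( 'contain', parent, child )
  // centered object-position: the fitted rect is centered in the element
  const left = ( parent.width - fitted.width ) / 2
  const top = ( parent.height - fitted.height ) / 2
  // scale from fitted-rect coordinates back to natural video pixels
  const x = ( e.clientX - rect.left - left ) * child.width / fitted.width
  const y = ( e.clientY - rect.top - top ) * child.height / fitted.height
  return { x, y }
}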
Hopefully this gives others a rough idea of how to solve this problem themselves. Alternatively, I have published the code at the link below; it supports all object-fit modes and includes examples showing how to get the actual pixel point that was clicked:
https://github.com/nrkn/object-fit-math

Three.js - how to make objects with different positions look at Orthographic camera using vectors?

three.js r.76
I've started using an Orthographic camera instead of a Perspective camera, and I'm having some trouble.
I use stemkoski's shader glow for point animation in the scene: he creates a sphere and then uses a shader for its transparency; I just added animation to it.
function animatePoints() {
    var alphas;
    var count;
    var time;
    let j = 0;
    while ( animatedPoints[j] ) {
        let threeGlow = animatedPoints[j];
        let finishAnimation = threeGlow.meta.state.finishAnimation;
        let itFinished = 0;
        let pointGlowMat = threeGlow.material;
        let pointGlowGeom = threeGlow.geometry;
        // ########## make glow look at camera
        pointGlowMat.uniforms.viewVector.value = new THREE.Vector3().subVectors( threeGlow.position, camera.position );
        alphas = pointGlowGeom.attributes.alpha;
        count = alphas.count;
        time = 0.001 * Date.now();
        if ( finishAnimation ) {
            ....
        } else {
            ....
        }
        alphas.needsUpdate = true;
        j++;
    }
}
The main goal is to make the glow look at the camera. When the camera was perspective, I used a solution that subtracts two vectors, the camera position and the glow position, so that the glow appears to look at the camera:
pointGlowMat.uniforms.viewVector.value = new THREE.Vector3().subVectors( camera.position, threeGlow.position );
But now, when I use an Orthographic camera, this solution doesn't work correctly.
The problem is that the glow should not look at the camera's position point; it should look at the plane of the camera.
I made a diagram of the situation: for each new glow (their positions are of course different), I must get a new red vector to make each glow look at the ortho camera.
Any ideas?
To get the red vector in your image, you need to project the subtraction vector onto the vector representing the direction in which the camera is looking.
I think this is what you need:
pointGlowMat.uniforms.viewVector.value = new THREE.Vector3()
    .subVectors( camera.position, threeGlow.position )
    .projectOnVector( camera.getWorldDirection() );
One way to make an object "look at" an orthographic camera is by using this pattern:
object.quaternion.copy( camera.quaternion );
The object's view direction will be perpendicular to the camera's projection plane.
This approach is appropriate if neither the object nor the camera have a rotated parent.
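For example, applied once per frame in the render loop (a sketch; glows stands for your array of glow meshes):
function render() {
    glows.forEach( function ( glow ) {
        glow.quaternion.copy( camera.quaternion ); // face the camera's projection plane
    } );
    renderer.render( scene, camera );
    requestAnimationFrame( render );
}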
three.js r.84

Dynamically change plane height in babylonjs

I created a plane by using
this.plane = BABYLON.MeshBuilder.CreatePlane("plane", { width: 0.5, height: 10 }, scene, true);
and then I try to modify the plane height during rendering, but the height does not change.
I think you are looking for scaling; see the Babylon.js documentation on scaling. Essentially, you do:
objectName.scaling.x = 1.5; // a scaling multiplier (you can substitute x, y, or z)
// Or, if you want to scale in more than one direction, you can use a Vector3
objectName.scaling = new BABYLON.Vector3(1.5, 0.5, 2);
This should let you dynamically change the dimensions of your object. If you want the change to be visible as it happens (instead of jumping straight to the correct size), you could also add an animation to it.
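For example, a minimal sketch that changes the plane's height every frame, assuming the plane was created with a height of 10 as in the question (so an absolute height h corresponds to scaling.y = h / 10):
const plane = BABYLON.MeshBuilder.CreatePlane("plane", { width: 0.5, height: 10 }, scene, true);
scene.registerBeforeRender(() => {
    // oscillate the height between 5 and 15 units
    const desiredHeight = 10 + 5 * Math.sin(performance.now() / 1000);
    plane.scaling.y = desiredHeight / 10; // scaling is relative to the created size
});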

Why is lookAt not looking at specified vector?

I have this three js scene: http://codepen.io/giorgiomartini/pen/ZWLWgX
The scene contains 5 things:
Camera - Not Visible
origen (3D vector) - At 0,0,0.
objOne - Green
objParent - Red
CenterOfscene - Blue
objOne is a child of objParent, and objOne looks at origen, which is a 3D vector at (0,0,0).
But instead of looking at (0,0,0), where the origen vector is, objOne looks at objParent...?
What I want is for objOne to look at (0,0,0), which is the origen vector.
Any ideas why this is misbehaving? Thanks.
THREE.SceneUtils.detach( objOne, objParent, scene );
THREE.SceneUtils.attach( objOne, scene, objParent );
var origen = new THREE.Vector3( 0, 0, 0 );
var render = function () {
    objOne.lookAt( origen );
    requestAnimationFrame( render );
    xOffset += 0.01;
    yOffset += 0.011;
    zOffset += 0.012;
    xOffsetParent += 0.0011;
    yOffsetParent += 0.0013;
    zOffsetParent += 0.0012;
    camXPos = centeredNoise( -1, 1, xOffset );
    camYPos = centeredNoise( -1, 1, yOffset );
    camZPos = centeredNoise( -1, 1, zOffset );
    objOne.position.x = camXPos * 4;
    objOne.position.y = camYPos * 4;
    objOne.position.z = camZPos * 4;
    camParentXPos = centeredNoise( -1, 1, xOffsetParent );
    camParentYPos = centeredNoise( -1, 1, yOffsetParent );
    camParentZPos = centeredNoise( -1, 1, zOffsetParent );
    objParent.position.x = camParentXPos * 10;
    objParent.position.y = camParentYPos * 10;
    objParent.position.z = camParentZPos * 10;
    renderer.render( scene, camera );
};
render();
Object3D.lookAt() does not support objects with rotated and/or translated parent(s).
Your work-around is to (1) add the child as a child of the scene, instead, and (2) replace the child object with a dummy Object3D, which, as a child of the parent object, will move with the parent.
Then, in your render loop,
child.position.setFromMatrixPosition( dummy.matrixWorld );
child.lookAt( origin );
three.js r.75
Here is the corrected codepen:
http://codepen.io/anon/pen/oxGpPQ?editors=0010
Now the green disk rides around its parent (the red sphere) all while looking at the blue disk (or the 'origen' vector).
Uncomment lines 163 and 164 to make the camera be at the green disk's location and have the camera also look at the blue disk ('origen' vector) while it orbits its parent red sphere.
How I accomplished this is:
1. make parent Red Mesh
2. make dummyChild Object3D (this is an invisible math object)
3. make child Green Mesh
4. make origen centerOfScene Blue Mesh
5. attach parent, child, and centerOfScene mesh to Scene (not dummyChild though)
6. attach dummyChild to parent like so: parent.add(dummyChild);
In the render function:
1. Move parent around with a noise function (which carries dummyChild along).
2. Move dummyChild with another noise function (it revolves around its parent's position; the center of dummyChild's world is its red parent's position).
3. Stick the green child mesh wherever the invisible dummyChild is. But since dummyChild is offset by the red parent, we need its world coordinates relative to the Scene, not its coordinates in the red parent's local space, so we use
child.position.setFromMatrixPosition( dummyChild.matrixWorld );
Notice it's matrixWorld and not matrix: matrix holds the local transform, while matrixWorld holds the coordinates relative to the Scene (world) coordinate system.
4. Use lookAt to make the green child disk 'lookAt' the blue centerOfScene mesh, which is at the origen vector, the center of the Scene.
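A condensed sketch of that setup (the noise-driven movement is elided; the names match the steps above):
var dummyChild = new THREE.Object3D(); // invisible math object
parent.add( dummyChild );              // dummyChild moves with the red parent
scene.add( parent );
scene.add( child );                    // child is a direct child of the Scene
scene.add( centerOfScene );
var render = function () {
    // ...move parent and dummyChild with the noise functions here...
    child.position.setFromMatrixPosition( dummyChild.matrixWorld ); // world position
    child.lookAt( origen ); // works, because child's parent is the Scene
    renderer.render( scene, camera );
    requestAnimationFrame( render );
};
render();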
Hope this helps! :)

QGraphicsItem : emulating an item origin which is not the top left corner

My application is using Qt.
I have a class which is inheriting QGraphicsPixmapItem.
When applying transformations to these items (for instance, rotations), the origin of the item (or the pivot point) is always the top-left corner.
I'd like to change this origin so that, for instance, setting the position of the item would actually set the position of the center of the pixmap.
Or, if I'm applying a rotation, the rotation's origin would be the center of the pixmap.
I haven't found a way to do it straight out of the box with Qt, so I thought of reimplementing itemChange() like this:
QVariant JGraphicsPixmapItem::itemChange(GraphicsItemChange Change, const QVariant& rValue)
{
    switch (Change)
    {
    case QGraphicsItem::ItemPositionHasChanged:
        // Emulate a pivot point in the center of the image
        this->translate(this->boundingRect().width() / 2,
                        this->boundingRect().height() / 2);
        break;
    case QGraphicsItem::ItemTransformHasChanged:
        break;
    }
    return QGraphicsItem::itemChange(Change, rValue);
}
I thought this would work, as Qt's documentation mentions that the position of an item and its transform matrix are two different concepts.
But it is not working.
Any idea?
You're overthinking it. QGraphicsPixmapItem already has this functionality built in. See the setOffset method.
So to set the item origin at its centre, just do setOffset( -0.5 * QPointF( pixmap().width(), pixmap().height() ) ); every time you set the pixmap.
The Qt documentation about rotating:

void QGraphicsItem::rotate ( qreal angle )

Rotates the current item transformation angle degrees clockwise around its origin. To translate around an arbitrary point (x, y), you need to combine translation and rotation with setTransform().
Example:
// Rotate an item 45 degrees around (0, 0).
item->rotate(45);
// Rotate an item 45 degrees around (x, y).
item->setTransform(QTransform().translate(x, y).rotate(45).translate(-x, -y));
You need to create a rotate function that translates the object to the parent's (0, 0) corner, performs the rotation, and then moves the object back to its original location.
