Aframe/Three fit to screen - calculate zoom - math

I want to zoom the camera in Three/Aframe so an image fits to screen.
This is the code I'm using:
this._camera = document.getElementById('camera').getAttribute('camera')
this._ratio = this._assetWidth/this._assetHeight
this._vFOV = window.THREE.Math.degToRad( this._camera?.fov || 80 )
this._height = 2 * Math.tan( this._vFOV / 2 ) * this.data.distance
this._width = this._height * this._ratio
this._zoom = this._ratio > 1 ? this._width/window.innerWidth : this._height/window.innerHeight
console.log(this._zoom, this._ratio, this._width, window.innerWidth)
I've got to the part where I need to calculate the zoom so that the object fits the screen, i.e. if it's landscape it fits to the width, if it's portrait it fits to the height.
I thought this was the answer, but it isn't: that calculates the camera position rather than the zoom value.
I'm stuck on how you work out the zoom value.
Any clues?

Fitting an object to the screen by:
changing the camera FoV
zooming the camera
repositioning the camera / object
is quite similar in all three cases once you understand where the formulas come from.
We'll use this neat image (from this SO thread) as it covers all three topics:
0. What do we want to achieve?
We want the object's longer side (its width or its height, whichever is bigger) to cover the filmHeight, so it fits the screen.
1. Recalculating the FoV
In this case we do know the focalLength (camera distance from the object) and filmHeight (object width or height). We can calculate fov / 2 thanks to our friend trigonometry:
Tan (fov / 2) = (filmHeight / 2) / focalLength
=>
fov = 2 * ATan((filmHeight / 2) / focalLength) * 180 / PI
<script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
<script>
  AFRAME.registerComponent("fit", {
    init: function() {
      const plane = document.querySelector("a-plane")
      const distance = this.el.object3D.position.distanceTo(plane.object3D.position)
      var height = plane.getAttribute("geometry").height
      var newFov = 2 * Math.atan((height / 2) / distance) * (180 / Math.PI); // in degrees
      this.el.sceneEl.camera.fov = newFov
      this.el.sceneEl.camera.updateProjectionMatrix() // apply the new fov
    }
  })
</script>
<a-scene>
  <a-plane position="0 1.6 -2" material="src: https://i.imgur.com/wjobVTN.jpg"></a-plane>
  <a-camera position="0 1.6 0" fit></a-camera>
</a-scene>
2. Repositioning the object / camera
Same triangle, different variables. Now we want to know the focalLength:
Tan (fov / 2) = (filmHeight / 2) / focalLength
=>
focalLength = (filmHeight / 2) / Tan (fov / 2)
<script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
<script>
  AFRAME.registerComponent("fit", {
    init: function() {
      const plane = document.querySelector("a-plane")
      const height = plane.getAttribute("geometry").height
      const fov = this.el.sceneEl.camera.fov * (Math.PI / 180);
      const newDistance = Math.abs((height / 2) / Math.tan(fov / 2))
      plane.object3D.position.z = -1 * newDistance;
    }
  })
</script>
<a-scene>
  <a-plane position="0 1.6 -2" material="src: https://i.imgur.com/wjobVTN.jpg"></a-plane>
  <a-camera position="0 1.6 0" fit></a-camera>
</a-scene>
3. Zooming the camera
If we know what distance the camera should be from the object for it to fill the screen, then we also know the relation between the current distance and the required one:
zoom = currentDistance / necessaryDistance
<script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
<script>
  AFRAME.registerComponent("fit", {
    init: function() {
      const plane = document.querySelector("a-plane");
      const distance = this.el.object3D.position.distanceTo(plane.object3D.position);
      const height = plane.getAttribute("geometry").height;
      const fov = this.el.sceneEl.camera.fov * (Math.PI / 180);
      const newDistance = Math.abs((height / 2) / Math.tan(fov / 2));
      this.el.sceneEl.camera.zoom = distance / newDistance;
      this.el.sceneEl.camera.updateProjectionMatrix(); // apply the new zoom
    }
  })
</script>
<a-scene>
  <a-plane position="0 1.6 -2" material="src: https://i.imgur.com/wjobVTN.jpg"></a-plane>
  <a-camera position="0 1.6 0" fit></a-camera>
</a-scene>
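Applied back to the variables in the question, a minimal sketch (assuming this._assetWidth / this._assetHeight are the plane's world-space dimensions and this.data.distance is the current camera-to-plane distance, as in the question's code) could look like this:
const vFOV = window.THREE.Math.degToRad(this._camera?.fov || 80)
const aspect = window.innerWidth / window.innerHeight
const tanV = Math.tan(vFOV / 2)          // visible half-height per unit of distance
const tanH = tanV * aspect               // visible half-width per unit of distance
// distance at which the asset exactly fills the relevant screen dimension
const fitDistance = this._ratio > 1
  ? (this._assetWidth / 2) / tanH        // landscape: fit to screen width
  : (this._assetHeight / 2) / tanV       // portrait: fit to screen height
// zoom = currentDistance / necessaryDistance (section 3 above)
this._zoom = this.data.distance / fitDistance
Compared with the question's code, the screen aspect ratio is folded into the horizontal half-extent, and zoom is taken as a ratio of distances rather than a ratio of world size to pixel size.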

Related

Three.js calculate position at edge of rectangle when I know the angle from center

I'm trying to calculate position 2 in the illustration below.
I know position 1 from
this._end = new THREE.Vector3()
this._end.copy( this._rectanglePos )
.sub( this._circlePos ).setLength( 1.1 ).add( this._circlePos )
Where the radius of the circle is 2.2
I'm now trying to work out a position on the edge of the rectangle along this intersect.
I've found an equation written in pseudo code which I turned into this function
function positionAtEdge(phi, width, height){
  function cot(aValue){
    return 1 / Math.tan(aValue);
  }
  let c = Math.cos(phi)
  let s = Math.sin(phi)
  let x = width / 2
  let y = height / 2
  if (width * Math.abs(s) < height * Math.abs(c)){
    x -= Math.sign(c) * width / 2
    y -= Math.tan(phi) * x
  } else {
    y -= Math.sign(s) * height / 2
    x -= cot(phi) * y
  }
  return {x, y, z: 0}
}
And this kind of works for the top of the rectangle, but it starts throwing crazy values after 90 degrees. Math doesn't have a cotangent function, so from a little googling I assumed they meant the cot function above.
Does anyone know an easier way of finding this position 2, or how to convert this function into something usable?
This is a general-purpose solution, independent of the relative position of the circle and the rectangle.
Live Example (JSFiddle)
function getIntersection( circle, rectangle, width, height ) {
  // offset is a utility Vector3,
  // initialized outside the function scope.
  offset.copy( circle ).sub( rectangle );
  let ratio = Math.min(
    width * 0.5 / Math.abs( offset.x ),
    height * 0.5 / Math.abs( offset.y )
  );
  offset.multiplyScalar( ratio ).add( rectangle );
  return offset;
}
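For context, a short usage sketch (the mesh names and the geometry parameters here are assumptions, not part of the original answer):
const offset = new THREE.Vector3(); // the reusable utility vector mentioned above
// circleMesh / rectMesh are placeholder names for your own objects
const edgePoint = getIntersection(
  circleMesh.position,
  rectMesh.position,
  rectMesh.geometry.parameters.width,   // a PlaneGeometry keeps its size here
  rectMesh.geometry.parameters.height
);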
You don't need any transcendental functions for this.
vsb = (sphereCenter - rectangleCenter)
P2 = rectangleCenter + ((vsb * rectangleHeight * 0.5) / vsb.y)
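In Three.js terms, a minimal sketch of that idea (the function name is mine, and since the formula divides by vsb.y it targets the edge at +rectangleHeight/2; use Math.abs(vsb.y), or the getIntersection function above, for the general case):
function positionTwo(sphereCenter, rectangleCenter, rectangleHeight) {
  // vector from the rectangle's center towards the sphere's center
  const vsb = new THREE.Vector3().subVectors(sphereCenter, rectangleCenter);
  // scale it so its y component reaches half the rectangle's height,
  // then move back into world space
  return vsb.multiplyScalar((rectangleHeight * 0.5) / vsb.y).add(rectangleCenter);
}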

Shift A-Frame Origin

When I place a block at the origin, it is very far back from my sensors. I wish to move the A-Frame origin forward 1 meter (1 in the -z direction). Additionally, I am using a component which tracks the camera's position, so I cannot just wrap everything in an <a-entity> and move that forward. How can I change the position of the origin?
Component:
AFRAME.registerComponent('info-panel', {
  tick: function() {
    var el = this.el;
    var camera = document.querySelector('a-camera');
    var cpos = camera.getAttribute('position');
    var x = cpos.x;
    var z = cpos.z;
    var angle;
    if (z === 0) {
      if (x === 0) {
        angle = 0;
      } else if (x > 0) {
        angle = 90;
      } else {
        angle = -90;
      }
    } else {
      angle = (z > 0 ? 0 : 180);
      angle += 180 / Math.PI * Math.atan(x / z);
    }
    el.setAttribute('rotation', {x: 0, y: angle, z: 0});
  }
});
Scene:
<a-scene>
    <a-camera></a-camera>
    <a-panel info-panel></a-panel>
</a-scene>
Or you can position the camera with a wrapper entity (see "How do I place the camera first position"):
<a-entity position="0 0 5">
  <a-camera></a-camera>
</a-entity>
I'm not sure if there is such functionality; I would wrap everything except the camera in an <a-entity>, so you can position 'everything' except the camera.
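For the scene in the question, a minimal sketch of the "wrap everything except the camera" suggestion (the 1 m shift in -z and keeping the panel inside the wrapper are assumptions based on the question, not the original answer):
<a-scene>
  <a-camera></a-camera>
  <!-- everything except the camera, shifted 1 m in the -z direction -->
  <a-entity position="0 0 -1">
    <a-panel info-panel></a-panel>
  </a-entity>
</a-scene>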

Undesirable lines in "random" star generation

I'm creating a star field in a three.js scene. The code to generate the random positions of the stars is below. When the stars are rendered and the camera is pulled back enough from the center of the scene, there are a couple of visible "empty" tracks in the placement of the stars.
I'm assuming it has to do with the math in the _addStars method. Can anyone help me to tighten up the placement of the stars throughout the entire canvas?
Note: The canvas I have to work with is somewhere around an 8:1 ratio height:width. So just repositioning the camera is not an option.
UPDATE: I've added a fiddle to demonstrate the issue: https://jsfiddle.net/scottwatkins/5zjoLLpx/5/
/** Method to generate the stars and place them in the particle system */
_addStars: function () {
  var starColors = [];
  var starGeometry = new THREE.Geometry();
  starGeometry.colors = starColors;
  for (var i = 0; i < this.totalStars; i++) {
    var x = 120 - Math.random() * 1040;
    var y = 480 - Math.random() * 1040;
    var z = 0 - Math.random() * 1040;
    starGeometry.vertices.push( new THREE.Vector3( x, y, z ) );
    var starColor = new THREE.Color(0xffffff);
    starColor.setRGB(
      .8 + Math.random() * .2,
      .8 + Math.random() * .2,
      .8 + Math.random() * .2);
    starColors.push(starColor)
  }
  var starMaterial = new THREE.PointsMaterial( {
    size: 2.0,
    map: this.starTexture,
    depthTest: false,
    depthWrite: false,
    blending: THREE.AdditiveBlending,
    transparent : true,
    vertexColors: true
  } );
  this.particleSystem = new THREE.Points( starGeometry, starMaterial );
  this.scene.add(this.particleSystem);
}
It appears to be caused by Math.random(); it seems to work with THREE.Math.random16():
var x = 120 - THREE.Math.random16() * 1040;
var y = 480 - THREE.Math.random16() * 1040;
var z = 0 - THREE.Math.random16() * 1040;
Here's what it says in the docs for THREE.Math.random16():
Random float from 0 to 1 with 16 bits of randomness.
Standard Math.random() creates repetitive patterns when applied over larger space.
Updated fiddle: here

D3.geo : responsive frame given a geojson object?

I use Mike Bostock's code to Center a map in d3 given a geoJSON object.
The important part of the code is this:
var width = 960,
    height = 500;

var svg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", height);

d3.json("/d/4090846/us.json", function(error, us) {
  var states = topojson.feature(us, us.objects.states),
      state = states.features.filter(function(d) { return d.id === 34; })[0];

  /* ******************* AUTOCENTERING ************************* */
  // Create a unit projection.
  var projection = d3.geo.albers()
      .scale(1)
      .translate([0, 0]);

  // Create a path generator.
  var path = d3.geo.path()
      .projection(projection);

  // Compute the bounds of a feature of interest, then derive scale & translate.
  var b = path.bounds(state),
      s = .95 / Math.max((b[1][0] - b[0][0]) / width, (b[1][1] - b[0][1]) / height),
      t = [(width - s * (b[1][0] + b[0][0])) / 2, (height - s * (b[1][1] + b[0][1])) / 2];

  // Update the projection to use computed scale & translate.
  projection
      .scale(s)
      .translate(t);
  /* ******************* END *********************************** */

  // Landmass
  svg.append("path")
      .datum(states)
      .attr("class", "feature")
      .attr("d", path);

  // Focus
  svg.append("path")
      .datum(state)
      .attr("class", "outline")
      .attr("d", path);
});
For example, bl.ocks.org/4707858 zooms in like this:
How do I center and zoom on the target topo/geojson AND adjust the svg frame dimensions so that it fits with a 5% margin on each side?
Mike's code, explained
Basically, Mike's code sets the frame dimensions up front via:
var width = 960, height = 500;
var svg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", height);
Once the frame is hard-set, you find the limiting ratio, i.e. whichever of shape width vs. frame width or shape height vs. frame height is the largest, so that the geojson shape fills the svg frame along its largest relative dimension. The scale is then recalculated as 1 / (largest ratio), so the shape is scaled down just enough to fit. It's all done via:
var b = path.bounds(state),
    s = .95 / Math.max((b[1][0] - b[0][0]) / width, (b[1][1] - b[0][1]) / height);
// b is [[x0, y0], [x1, y1]]
// (b[1][0] - b[0][0]) = the shape's width
// (b[1][1] - b[0][1]) = the shape's height
Then, by refreshing your scale and translation, you get Mike Bostock's zoom:
New framing
Framing the svg around the geojson shape is actually a simplification of Mike's code. First, set a temporary svg width:
var width = 200;
var svg = d3.select("body").append("svg")
    .attr("width", width);
Then, get the dimensions of the shape and compute the framing around it:
var b = path.bounds(state);
// b.s = b[0][1]; b.n = b[1][1]; b.w = b[0][0]; b.e = b[1][0];
b.height = Math.abs(b[1][1] - b[0][1]); b.width = Math.abs(b[1][0] - b[0][0]);
var r = ( b.height / b.width );
var s = 0.9 / (b.width / width); // dimension of reference: `width` (constant)
//var s = 1 / Math.max(b.width / width, b.height / height ); // dimension of reference: largest side.
var t = [(width - s * (b[1][0] + b[0][0])) / 2, (width*r - s * (b[1][1] + b[0][1])) / 2]; //translation
Refresh projection and svg's height:
var proj = projection
    .scale(s)
    .translate(t);
svg.attr("height", width * r);
It's done: it fits the pre-allocated width, computes the needed height, and zooms properly. See http://bl.ocks.org/hugolpz/9643738d5f79c7b594d0

Radius of projected sphere in screen space

I'm trying to find the visible size of a sphere in pixels after projection to screen space. The sphere is centered at the origin with the camera looking right at it, so the projected sphere should be a perfect circle in two dimensions. I am aware of this existing question; however, the formula given there doesn't seem to produce the result I want. It is too small by a few percent. I assume this is because it does not correctly take perspective into account: after projecting to screen space you do not see half the sphere but significantly less, due to perspective foreshortening (you see just a cap of the sphere instead of the full hemisphere).
How can I derive an exact 2D bounding circle?
Indeed, with a perspective projection you need to compute the height of the sphere "horizon" from the eye / center of the camera (this "horizon" is determined by rays from the eye tangent to the sphere).
Notations:
d: distance between the eye and the center of the sphere
r: radius of the sphere
l: distance between the eye and a point on the sphere "horizon", l = sqrt(d^2 - r^2)
h: height / radius of the sphere "horizon"
theta: (half-)angle of the "horizon" cone from the eye
phi: complementary angle of theta
h / l = cos(phi)
but:
r / d = cos(phi)
so, in the end:
h = l * r / d = sqrt(d^2 - r^2) * r / d
Then once you have h, simply apply the standard formula (the one from the question you linked) to get the projected radius pr in the normalized viewport:
pr = cot(fovy / 2) * h / z
with z the distance from the eye to the plane of the sphere "horizon":
z = l * cos(theta) = sqrt(d^2 - r^2) * h / r
so:
pr = cot(fovy / 2) * r / sqrt(d^2 - r^2)
And finally, multiply pr by height / 2 to get the actual screen radius in pixels.
What follows is a small demo done with three.js. The sphere distance, radius and the vertical field of view of the camera can be changed by using respectively the n / f, m / p and s / w pairs of keys. A yellow line segment rendered in screen-space shows the result of the computation of the radius of the sphere in screen-space. This computation is done in the function computeProjectedRadius().
projected-sphere.js:
"use strict";
function computeProjectedRadius(fovy, d, r) {
  var fov;
  fov = fovy / 2 * Math.PI / 180.0;
  //return 1.0 / Math.tan(fov) * r / d; // Wrong
  return 1.0 / Math.tan(fov) * r / Math.sqrt(d * d - r * r); // Right
}
function Demo() {
  this.width = 0;
  this.height = 0;
  this.scene = null;
  this.mesh = null;
  this.camera = null;
  this.screenLine = null;
  this.screenScene = null;
  this.screenCamera = null;
  this.renderer = null;
  this.fovy = 60.0;
  this.d = 10.0;
  this.r = 1.0;
  this.pr = computeProjectedRadius(this.fovy, this.d, this.r);
}
Demo.prototype.init = function() {
  var aspect;
  var light;
  var container;

  this.width = window.innerWidth;
  this.height = window.innerHeight;

  // World scene
  aspect = this.width / this.height;
  this.camera = new THREE.PerspectiveCamera(this.fovy, aspect, 0.1, 100.0);

  this.scene = new THREE.Scene();
  this.scene.add(new THREE.AmbientLight(0x1F1F1F));

  light = new THREE.DirectionalLight(0xFFFFFF);
  light.position.set(1.0, 1.0, 1.0).normalize();
  this.scene.add(light);

  // Screen scene
  this.screenCamera = new THREE.OrthographicCamera(-aspect, aspect,
                                                   -1.0, 1.0,
                                                   0.1, 100.0);
  this.screenScene = new THREE.Scene();

  this.updateScenes();

  this.renderer = new THREE.WebGLRenderer({
    antialias: true
  });
  this.renderer.setSize(this.width, this.height);
  this.renderer.domElement.style.position = "relative";
  this.renderer.autoClear = false;

  container = document.createElement('div');
  container.appendChild(this.renderer.domElement);
  document.body.appendChild(container);
}
Demo.prototype.render = function() {
  this.renderer.clear();
  this.renderer.setViewport(0, 0, this.width, this.height);
  this.renderer.render(this.scene, this.camera);
  this.renderer.render(this.screenScene, this.screenCamera);
}
Demo.prototype.updateScenes = function() {
  var geometry;

  this.camera.fov = this.fovy;
  this.camera.updateProjectionMatrix();

  if (this.mesh) {
    this.scene.remove(this.mesh);
  }
  this.mesh = new THREE.Mesh(
    new THREE.SphereGeometry(this.r, 16, 16),
    new THREE.MeshLambertMaterial({
      color: 0xFF0000
    })
  );
  this.mesh.position.z = -this.d;
  this.scene.add(this.mesh);

  this.pr = computeProjectedRadius(this.fovy, this.d, this.r);

  if (this.screenLine) {
    this.screenScene.remove(this.screenLine);
  }
  geometry = new THREE.Geometry();
  geometry.vertices.push(new THREE.Vector3(0.0, 0.0, -1.0));
  geometry.vertices.push(new THREE.Vector3(0.0, -this.pr, -1.0));
  this.screenLine = new THREE.Line(
    geometry,
    new THREE.LineBasicMaterial({
      color: 0xFFFF00
    })
  );
  this.screenScene = new THREE.Scene();
  this.screenScene.add(this.screenLine);
}
Demo.prototype.onKeyDown = function(event) {
  console.log(event.keyCode)
  switch (event.keyCode) {
    case 78: // 'n'
      this.d /= 1.1;
      this.updateScenes();
      break;
    case 70: // 'f'
      this.d *= 1.1;
      this.updateScenes();
      break;
    case 77: // 'm'
      this.r /= 1.1;
      this.updateScenes();
      break;
    case 80: // 'p'
      this.r *= 1.1;
      this.updateScenes();
      break;
    case 83: // 's'
      this.fovy /= 1.1;
      this.updateScenes();
      break;
    case 87: // 'w'
      this.fovy *= 1.1;
      this.updateScenes();
      break;
  }
}
Demo.prototype.onResize = function(event) {
  var aspect;

  this.width = window.innerWidth;
  this.height = window.innerHeight;
  this.renderer.setSize(this.width, this.height);

  aspect = this.width / this.height;

  this.camera.aspect = aspect;
  this.camera.updateProjectionMatrix();

  this.screenCamera.left = -aspect;
  this.screenCamera.right = aspect;
  this.screenCamera.updateProjectionMatrix();
}
function onLoad() {
  var demo;

  demo = new Demo();
  demo.init();

  function animationLoop() {
    demo.render();
    window.requestAnimationFrame(animationLoop);
  }

  function onResizeHandler(event) {
    demo.onResize(event);
  }

  function onKeyDownHandler(event) {
    demo.onKeyDown(event);
  }

  window.addEventListener('resize', onResizeHandler, false);
  window.addEventListener('keydown', onKeyDownHandler, false);

  window.requestAnimationFrame(animationLoop);
}
index.html:
<!DOCTYPE html>
<html>
  <head>
    <title>Projected sphere</title>
    <style>
      body {
        background-color: #000000;
      }
    </style>
    <script src="http://cdnjs.cloudflare.com/ajax/libs/three.js/r61/three.min.js"></script>
    <script src="projected-sphere.js"></script>
  </head>
  <body onLoad="onLoad()">
    <div id="container"></div>
  </body>
</html>
Let the sphere have radius r and be seen at a distance d from the observer. The projection plane is at distance f from the observer.
The sphere is seen under the half-angle asin(r/d), so the apparent radius is f * tan(asin(r/d)), which can be written as f * r / sqrt(d^2 - r^2). [The wrong formula being f * r / d.]
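The same formula as a small code sketch (the function name is mine; f is expressed via the vertical field of view so the result comes out in pixels, complementing the computeProjectedRadius function above):
function apparentRadiusPx(fovyDeg, d, r, screenHeightPx) {
  // with a normalized viewport, the projection-plane distance is f = cot(fovy / 2)
  const f = 1 / Math.tan((fovyDeg / 2) * Math.PI / 180);
  // f * r / sqrt(d^2 - r^2), scaled from normalized coordinates to pixels
  return (screenHeightPx / 2) * f * r / Math.sqrt(d * d - r * r);
}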
The illustrated accepted answer above is excellent, but I needed a solution without knowing the field of view, just a matrix to transform between world and screen space, so I had to adapt the solution.
Reusing some variable names from the other answer, calculate the start point of the spherical cap (the point where line h meets line d):
capOffset = cos(asin(l / d)) * r
capCenter = sphereCenter + ( sphereNormal * capOffset )
where capCenter and sphereCenter are points in world space, and sphereNormal is a normalized vector pointing along d, from the sphere center towards the camera.
Transform the point to screen space:
capCenter2 = matrix.transform(capCenter)
Add 1 (or any amount) to the x pixel coordinate:
capCenter2.x += 1
Transform it back to world space:
capCenter2 = matrix.inverse().transform(capCenter2)
Measure the distance between the original and new points in world space, and divide into the amount you added to get a scale factor:
scaleFactor = 1 / capCenter.distance(capCenter2)
Multiply that scale factor by the cap radius h to get the visible screen radius in pixels:
screenRadius = h * scaleFactor
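As a rough Three.js sketch of those steps (the function name is mine, and recent three.js camera project/unproject stands in for the explicit world/screen matrix; treat it as an illustration of the idea, not the original author's code):
function screenRadiusPx(sphereCenter, r, camera, screenWidth) {
  // sphereNormal points along d, from the sphere center towards the camera
  const toCamera = new THREE.Vector3().subVectors(camera.position, sphereCenter);
  const d = toCamera.length();
  const sphereNormal = toCamera.normalize();
  const l = Math.sqrt(d * d - r * r);                 // eye to a "horizon" point
  const h = l * r / d;                                // radius of the "horizon" cap
  const capOffset = Math.cos(Math.asin(l / d)) * r;   // as in the answer above
  const capCenter = sphereCenter.clone().add(sphereNormal.clone().multiplyScalar(capOffset));
  // world -> normalized device coordinates (plays the role of matrix.transform)
  const ndc = capCenter.clone().project(camera);
  // move one pixel in x (1 px = 2 / screenWidth in NDC), then transform back to world
  const shifted = new THREE.Vector3(ndc.x + 2 / screenWidth, ndc.y, ndc.z).unproject(camera);
  // one pixel corresponds to this much world-space distance at the cap's depth
  const scaleFactor = 1 / capCenter.distanceTo(shifted);
  return h * scaleFactor;
}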
