After reading through the documentation, I'm still a bit confused about how to execute an animation after another one has completed. I have a timeline like so:
timeline {
keyframe(Duration.seconds(0.5)) {
keyvalue(firstImg.scaleXProperty(), 1.0, interpolator = Interpolator.EASE_BOTH)
keyvalue(firstImg.scaleYProperty(), 1.0, interpolator = Interpolator.EASE_BOTH)
keyvalue(firstImg.rotateProperty(), 0.0, interpolator = Interpolator.EASE_BOTH)
}
keyframe(Duration.seconds(0.5)) {
keyvalue(secondImg.scaleXProperty(), 1.0, interpolator = Interpolator.EASE_BOTH)
keyvalue(secondImg.scaleYProperty(), 1.0, interpolator = Interpolator.EASE_BOTH)
keyvalue(secondImg.rotateProperty(), 0.0, interpolator = Interpolator.EASE_BOTH)
}
keyframe(Duration.seconds(0.5)) {
keyvalue(thirdImg.scaleXProperty(), 1.0, interpolator = Interpolator.EASE_BOTH)
keyvalue(thirdImg.scaleYProperty(), 1.0, interpolator = Interpolator.EASE_BOTH)
keyvalue(thirdImg.rotateProperty(), 0.0, interpolator = Interpolator.EASE_BOTH)
}
keyframe(Duration.seconds(0.5)) {
keyvalue(fourthImg.scaleXProperty(), 1.0, interpolator = Interpolator.EASE_BOTH)
keyvalue(fourthImg.scaleYProperty(), 1.0, interpolator = Interpolator.EASE_BOTH)
keyvalue(fourthImg.rotateProperty(), 0.0, interpolator = Interpolator.EASE_BOTH)
}
}
This runs them all at once, but I would like each animation to run after the previous one has finished! I can't quite figure out how to do this (sorry if this is obvious; I am very new to Kotlin and Java in general!).
I see that the keyframe has an onFinished property, but I can't quite figure out what I'm actually supposed to set it to. Is there a better way to do this? Thank you!
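The closest I've come is chaining them by hand with setOnFinished on the whole Timeline rather than on the keyframe. A rough sketch of what I mean, using a timeline(false) form so the timelines don't auto-play and only showing the first two images:
val first = timeline(false) {
    keyframe(Duration.seconds(0.5)) {
        keyvalue(firstImg.scaleXProperty(), 1.0, interpolator = Interpolator.EASE_BOTH)
    }
}
val second = timeline(false) {
    keyframe(Duration.seconds(0.5)) {
        keyvalue(secondImg.scaleXProperty(), 1.0, interpolator = Interpolator.EASE_BOTH)
    }
}
first.setOnFinished { second.play() }
first.play()
That feels clunky for four images, though, so I suspect there is a cleaner way.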
Based on the structure proposed by @tornadofx-fan, I've added builders for sequentialTransition and parallelTransition, so starting with TornadoFX 1.7.9 you can do the same like this:
class TransitionViews: View() {
val r1 = Rectangle(20.0, 20.0, Color.RED)
val r2 = Rectangle(20.0, 20.0, Color.YELLOW)
val r3 = Rectangle(20.0, 20.0, Color.GREEN)
val r4 = Rectangle(20.0, 20.0, Color.BLUE)
override val root = vbox {
button("Animate").action {
sequentialTransition {
timeline {
keyframe(0.5.seconds) {
keyvalue(r1.translateXProperty(), 50.0, interpolator = Interpolator.EASE_BOTH)
}
}
timeline {
keyframe(0.5.seconds) {
keyvalue(r2.translateXProperty(), 100.0, interpolator = Interpolator.EASE_BOTH)
}
}
timeline {
keyframe(0.5.seconds) {
keyvalue(r3.translateXProperty(), 150.0, interpolator = Interpolator.EASE_BOTH)
}
}
timeline {
keyframe(0.5.seconds) {
keyvalue(r4.translateXProperty(), 200.0, interpolator = Interpolator.EASE_BOTH)
}
}
}
}
pane {
add(r1)
add(r2)
add(r3)
add(r4)
}
}
}
The timeline builders inside these transitions don't automatically play, while the transition itself plays automatically when the builder completes. You can pass play = false to the transition builder to disable autoplay.
Also note the usage of 0.5.seconds to generate the Duration objects :)
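If you want to trigger the transition yourself instead of relying on autoplay, keep the result around and start it later. A small sketch, assuming the flag is the play parameter and that the builder returns the SequentialTransition instance:
val st = sequentialTransition(play = false) {
    timeline {
        keyframe(0.5.seconds) {
            keyvalue(r1.translateXProperty(), 50.0, interpolator = Interpolator.EASE_BOTH)
        }
    }
    // ...the remaining timelines as in the example above
}
// start it later, e.g. from another event handler
st.play()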
There's a JavaFX class "SequentialTransition" that will run your timelines in sequence. You'll need to disable the TornadoFX autoplay with a flag passed into the timeline builder. Check out ParallelTransition if you want to run these all at once using a similar coding pattern.
class STTest : View("My View") {
val r1 = Rectangle(20.0, 20.0, Color.RED)
val r2 = Rectangle(20.0, 20.0, Color.YELLOW)
val r3 = Rectangle(20.0, 20.0, Color.GREEN)
val r4 = Rectangle(20.0, 20.0, Color.BLUE)
override val root = vbox {
button("Animate") {
setOnAction {
val t1 = timeline(false) {
keyframe(Duration.seconds(0.5)) {
keyvalue(r1.translateXProperty(), 50.0, interpolator = Interpolator.EASE_BOTH)
}
}
val t2 = timeline(false) {
keyframe(Duration.seconds(0.5)) {
keyvalue(r2.translateXProperty(), 100.0, interpolator = Interpolator.EASE_BOTH)
}
}
val t3 = timeline(false) {
keyframe(Duration.seconds(0.5)) {
keyvalue(r3.translateXProperty(), 150.0, interpolator = Interpolator.EASE_BOTH)
}
}
val t4 = timeline(false) {
keyframe(Duration.seconds(0.5)) {
keyvalue(r4.translateXProperty(), 200.0, interpolator = Interpolator.EASE_BOTH)
}
}
/* functions look better
val st = SequentialTransition()
st.children += t1
st.children += t2
st.children += t3
st.children += t4
st.play()
*/
t1.then(t2).then(t3).then(t4).play()
}
}
pane {
add(r1)
add(r2)
add(r3)
add(r4)
}
}
}
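For the parallel case mentioned above, the same non-playing timelines can be handed to ParallelTransition. A minimal sketch reusing t1..t4 from the code above:
ParallelTransition(t1, t2, t3, t4).play()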
In this case where you're just setting scales and rotations, there are some nice helpers already in the library. This should work for you:
val time = 0.5.seconds
firstImg.scale(time, Point2D(1.0, 1.0), play = false).and(firstImg.rotate(time, 0.0, play = false))
.then(secondImg.scale(time, Point2D(1.0, 1.0), play = false).and(secondImg.rotate(time, 0.0, play = false)))
.then(thirdImg.scale(time, Point2D(1.0, 1.0), play = false).and(thirdImg.rotate(time, 0.0, play = false)))
.then(fourthImg.scale(time, Point2D(1.0, 1.0), play = false).and(fourthImg.rotate(time, 0.0, play = false)))
.play()
The play = false everywhere is required since these helpers were designed for simple one-off auto-playing animations.
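For comparison, a single one-off call without the flag plays immediately on its own; a sketch using the same helper:
firstImg.scale(0.5.seconds, Point2D(1.0, 1.0))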
Edit
After a discussion in Slack, these may be simplified in a future release, so the above may eventually be as easy as
val time = 0.5.seconds
listOf(
firstImg.scale(time, 1 p 1) and firstImg.rotate(time, 0),
secondImg.scale(time, 1 p 1) and secondImg.rotate(time, 0),
thirdImg.scale(time, 1 p 1) and thirdImg.rotate(time, 0),
fourthImg.scale(time, 1 p 1) and fourthImg.rotate(time, 0)
).playSequential()
Watch the release notes for more info!
Another Edit
Looks like I was overcomplicating things a bit. You can just use this if you like it more:
val time = 0.5.seconds
SequentialTransition(
firstImg.scale(time, Point2D(1.0, 1.0), play = false).and(firstImg.rotate(time, 0.0, play = false)),
secondImg.scale(time, Point2D(1.0, 1.0), play = false).and(secondImg.rotate(time, 0.0, play = false)),
thirdImg.scale(time, Point2D(1.0, 1.0), play = false).and(thirdImg.rotate(time, 0.0, play = false)),
fourthImg.scale(time, Point2D(1.0, 1.0), play = false).and(fourthImg.rotate(time, 0.0, play = false))
).play()
Related
There is a very rough Three.js sketch with a cube at Vector3(0.0, 0.0, 0.0), rotated with one edge towards the viewer. The code takes some screen points from the left/right edges, transforms them to 3D world coordinates, and transposes them further to their projections on the cube. For now I have set them by hand, but it could be done with THREE.Raycaster and the result is the same.
let m0 = new THREE.Vector3(0.0, edges.wtl.y, 100.0);
let m1 = new THREE.Vector3(0.0, edges.wtl.y, -100.0);
let ray0 = new THREE.Raycaster();
let dir = m1.clone().sub(m0.clone()).normalize();
ray0.set(m0, dir);
The initial setup looks fine, but if you rotate the scene with OrbitControls you will notice that the straight white lines don't match the red ones. Even though the red lines are built correctly, because of the camera FOV distortion I need to tweak the red dots in the way illustrated below.
Any ideas? Maybe I need to find the screen coordinates of the cube's left/right edges, find their intersections with the ones I am using at the beginning of calculateEdges(), and transform them back to world coordinates? That would be a very clumsy solution and could be used as a last resort only.
THREE.OrbitControls = function ( object, domElement ) {
this.object = object;
this.domElement = ( domElement !== undefined ) ? domElement : document;
// API
this.enabled = true;
this.center = new THREE.Vector3();
this.userZoom = true;
this.userZoomSpeed = 1.0;
this.userRotate = true;
this.userRotateSpeed = 1.0;
this.userPan = true;
this.userPanSpeed = 2.0;
this.autoRotate = false;
this.autoRotateSpeed = 2.0; // 30 seconds per round when fps is 60
this.minPolarAngle = 0; // radians
this.maxPolarAngle = Math.PI; // radians
this.minDistance = 0;
this.maxDistance = Infinity;
// 65 /*A*/, 83 /*S*/, 68 /*D*/
this.keys = { LEFT: 37, UP: 38, RIGHT: 39, BOTTOM: 40, ROTATE: 65, ZOOM: 83, PAN: 68 };
// internals
var scope = this;
var EPS = 0.000001;
var PIXELS_PER_ROUND = 1800;
var rotateStart = new THREE.Vector2();
var rotateEnd = new THREE.Vector2();
var rotateDelta = new THREE.Vector2();
var zoomStart = new THREE.Vector2();
var zoomEnd = new THREE.Vector2();
var zoomDelta = new THREE.Vector2();
var phiDelta = 0;
var thetaDelta = 0;
var scale = 1;
var lastPosition = new THREE.Vector3();
var STATE = { NONE: -1, ROTATE: 0, ZOOM: 1, PAN: 2 };
var state = STATE.NONE;
// events
var changeEvent = { type: 'change' };
this.rotateLeft = function ( angle ) {
if ( angle === undefined ) {
angle = getAutoRotationAngle();
}
thetaDelta -= angle;
};
this.rotateRight = function ( angle ) {
if ( angle === undefined ) {
angle = getAutoRotationAngle();
}
thetaDelta += angle;
};
this.rotateUp = function ( angle ) {
if ( angle === undefined ) {
angle = getAutoRotationAngle();
}
phiDelta -= angle;
};
this.rotateDown = function ( angle ) {
if ( angle === undefined ) {
angle = getAutoRotationAngle();
}
phiDelta += angle;
};
this.zoomIn = function ( zoomScale ) {
if ( zoomScale === undefined ) {
zoomScale = getZoomScale();
}
scale /= zoomScale;
};
this.zoomOut = function ( zoomScale ) {
if ( zoomScale === undefined ) {
zoomScale = getZoomScale();
}
scale *= zoomScale;
};
this.pan = function ( distance ) {
distance.transformDirection( this.object.matrix );
distance.multiplyScalar( scope.userPanSpeed );
this.object.position.add( distance );
this.center.add( distance );
};
this.update = function () {
var position = this.object.position;
var offset = position.clone().sub( this.center );
// angle from z-axis around y-axis
var theta = Math.atan2( offset.x, offset.z );
// angle from y-axis
var phi = Math.atan2( Math.sqrt( offset.x * offset.x + offset.z * offset.z ), offset.y );
if ( this.autoRotate ) {
this.rotateLeft( getAutoRotationAngle() );
}
theta += thetaDelta;
phi += phiDelta;
// restrict phi to be between desired limits
phi = Math.max( this.minPolarAngle, Math.min( this.maxPolarAngle, phi ) );
// restrict phi to be between EPS and PI-EPS
phi = Math.max( EPS, Math.min( Math.PI - EPS, phi ) );
var radius = offset.length() * scale;
// restrict radius to be between desired limits
radius = Math.max( this.minDistance, Math.min( this.maxDistance, radius ) );
offset.x = radius * Math.sin( phi ) * Math.sin( theta );
offset.y = radius * Math.cos( phi );
offset.z = radius * Math.sin( phi ) * Math.cos( theta );
position.copy( this.center ).add( offset );
this.object.lookAt( this.center );
thetaDelta = 0;
phiDelta = 0;
scale = 1;
if ( lastPosition.distanceTo( this.object.position ) > 0 ) {
this.dispatchEvent( changeEvent );
lastPosition.copy( this.object.position );
}
};
function getAutoRotationAngle() {
return 2 * Math.PI / 60 / 60 * scope.autoRotateSpeed;
}
function getZoomScale() {
return Math.pow( 0.95, scope.userZoomSpeed );
}
function onMouseDown( event ) {
if ( scope.enabled === false ) return;
if ( scope.userRotate === false ) return;
event.preventDefault();
if ( state === STATE.NONE )
{
if ( event.button === 0 )
state = STATE.ROTATE;
if ( event.button === 1 )
state = STATE.ZOOM;
if ( event.button === 2 )
state = STATE.PAN;
}
if ( state === STATE.ROTATE ) {
//state = STATE.ROTATE;
rotateStart.set( event.clientX, event.clientY );
} else if ( state === STATE.ZOOM ) {
//state = STATE.ZOOM;
zoomStart.set( event.clientX, event.clientY );
} else if ( state === STATE.PAN ) {
//state = STATE.PAN;
}
document.addEventListener( 'mousemove', onMouseMove, false );
document.addEventListener( 'mouseup', onMouseUp, false );
}
function onMouseMove( event ) {
if ( scope.enabled === false ) return;
event.preventDefault();
if ( state === STATE.ROTATE ) {
rotateEnd.set( event.clientX, event.clientY );
rotateDelta.subVectors( rotateEnd, rotateStart );
scope.rotateLeft( 2 * Math.PI * rotateDelta.x / PIXELS_PER_ROUND * scope.userRotateSpeed );
scope.rotateUp( 2 * Math.PI * rotateDelta.y / PIXELS_PER_ROUND * scope.userRotateSpeed );
rotateStart.copy( rotateEnd );
} else if ( state === STATE.ZOOM ) {
zoomEnd.set( event.clientX, event.clientY );
zoomDelta.subVectors( zoomEnd, zoomStart );
if ( zoomDelta.y > 0 ) {
scope.zoomIn();
} else {
scope.zoomOut();
}
zoomStart.copy( zoomEnd );
} else if ( state === STATE.PAN ) {
var movementX = event.movementX || event.mozMovementX || event.webkitMovementX || 0;
var movementY = event.movementY || event.mozMovementY || event.webkitMovementY || 0;
scope.pan( new THREE.Vector3( - movementX, movementY, 0 ) );
}
}
function onMouseUp( event ) {
if ( scope.enabled === false ) return;
if ( scope.userRotate === false ) return;
document.removeEventListener( 'mousemove', onMouseMove, false );
document.removeEventListener( 'mouseup', onMouseUp, false );
state = STATE.NONE;
}
function onMouseWheel( event ) {
if ( scope.enabled === false ) return;
if ( scope.userZoom === false ) return;
var delta = 0;
if ( event.wheelDelta ) { // WebKit / Opera / Explorer 9
delta = event.wheelDelta;
} else if ( event.detail ) { // Firefox
delta = - event.detail;
}
if ( delta > 0 ) {
scope.zoomOut();
} else {
scope.zoomIn();
}
}
function onKeyDown( event ) {
if ( scope.enabled === false ) return;
if ( scope.userPan === false ) return;
switch ( event.keyCode ) {
/*case scope.keys.UP:
scope.pan( new THREE.Vector3( 0, 1, 0 ) );
break;
case scope.keys.BOTTOM:
scope.pan( new THREE.Vector3( 0, - 1, 0 ) );
break;
case scope.keys.LEFT:
scope.pan( new THREE.Vector3( - 1, 0, 0 ) );
break;
case scope.keys.RIGHT:
scope.pan( new THREE.Vector3( 1, 0, 0 ) );
break;
*/
case scope.keys.ROTATE:
state = STATE.ROTATE;
break;
case scope.keys.ZOOM:
state = STATE.ZOOM;
break;
case scope.keys.PAN:
state = STATE.PAN;
break;
}
}
function onKeyUp( event ) {
switch ( event.keyCode ) {
case scope.keys.ROTATE:
case scope.keys.ZOOM:
case scope.keys.PAN:
state = STATE.NONE;
break;
}
}
this.domElement.addEventListener( 'contextmenu', function ( event ) { event.preventDefault(); }, false );
this.domElement.addEventListener( 'mousedown', onMouseDown, false );
this.domElement.addEventListener( 'mousewheel', onMouseWheel, false );
this.domElement.addEventListener( 'DOMMouseScroll', onMouseWheel, false ); // firefox
window.addEventListener( 'keydown', onKeyDown, false );
window.addEventListener( 'keyup', onKeyUp, false );
};
THREE.OrbitControls.prototype = Object.create( THREE.EventDispatcher.prototype );
let camera, scene, renderer, raycaster, controls, edges = {}, line0, line1, plane;
let windowHalfX = window.innerWidth / 2;
let windowHalfY = window.innerHeight / 2;
init();
animate();
function init() {
const container = document.createElement('div');
document.body.appendChild(container);
camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 1, 1024);
camera.position.x = 0;
camera.position.y = 0;
camera.position.z = 64;
scene = new THREE.Scene();
edges.tl = new THREE.Vector3(0.0, 0.0, 0.0);
edges.tr = new THREE.Vector3(0.0, 0.0, 0.0);
edges.bl = new THREE.Vector3(0.0, 0.0, 0.0);
edges.br = new THREE.Vector3(0.0, 0.0, 0.0);
edges.wtl = new THREE.Vector3(0.0, 0.0, 0.0);
edges.wtr = new THREE.Vector3(0.0, 0.0, 0.0);
edges.wbl = new THREE.Vector3(0.0, 0.0, 0.0);
edges.wbr = new THREE.Vector3(0.0, 0.0, 0.0);
edges.width = new THREE.Vector3(0.0, 0.0, 0.0);
edges.wwidth = new THREE.Vector3(0.0, 0.0, 0.0);
const ambientLight = new THREE.AmbientLight(0xCCCCCC, 0.4);
scene.add(ambientLight);
const pointLight = new THREE.PointLight(0xFFFFFF, 0.8);
camera.add(pointLight);
scene.add(camera);
renderer = new THREE.WebGLRenderer();
renderer.outputEncoding = THREE.sRGBEncoding;
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(window.innerWidth, window.innerHeight);
container.appendChild(renderer.domElement);
controls = new THREE.OrbitControls(camera, renderer.domElement);
controls.minPolarAngle = Math.PI / 2.0 -0.15;
controls.maxPolarAngle = Math.PI / 2.0 + 0.15;
controls.minAzimuthAngle = -0.15;
controls.maxAzimuthAngle = 0.15;
controls.minDistance = 42.0; //.75;
controls.maxDistance = 69.0;
//cube
let geometry = new THREE.BoxGeometry(32, 32, 32);
let material = new THREE.MeshPhongMaterial( {color: 0x00FFFF} );
const cube = new THREE.Mesh(geometry, material);
cube.rotation.set(0.0, -Math.PI / 4.0, 0.0);
cube.name = "cube";
cube.updateMatrixWorld();
scene.add(cube);
//window.addEventListener('resize', onWindowResize);
}
function animate() {
requestAnimationFrame(animate);
render();
}
function render() {
controls.update();
calculateEdges()
renderer.render(scene, camera);
}
function calculateEdges(){
let toRemove = ["line0", "line1", "topLine", "bottomLine", "frame", "pointTM", "pointBM", "point00", "point01", "point10", "point11", "point20","point21", "point30", "point31", "point40", "point41"];
toRemove.forEach((name_) => { if(scene.getObjectByName(name_) != undefined) { scene.remove(scene.getObjectByName(name_)); } })
let distance = 0.0, w = 50;
edges.tl.x = -1.0;
edges.tl.y = -((windowHalfY - w) / window.innerHeight) * 2 + 1;
edges.tl.z = 0.0;
edges.width.x = ((2.0 * w) / window.innerWidth) * 2 - 1;
edges.width.y = -((windowHalfY - w) / window.innerHeight) * 2 + 1;
edges.width.z = 0.0;
edges.tr.x = (windowHalfX * 2.0 / window.innerWidth) * 2 - 1;
edges.tr.y = -((windowHalfY - w) / window.innerHeight) * 2 + 1;
edges.tr.z = 0.0;
edges.bl.x = -1.0;
edges.bl.y = -((windowHalfY + w) / window.innerHeight) * 2 + 1;
edges.bl.z = 0.0;
edges.br.x = (windowHalfX * 2.0 / window.innerWidth) * 2 - 1;
edges.br.y = -((windowHalfY + w) / window.innerHeight) * 2 + 1;
edges.br.z = 0.0;
edges.tl.unproject(camera);
edges.tl.sub(camera.position).normalize();
distance = -camera.position.z / edges.tl.z;
edges.wtl = edges.wtl.copy(camera.position).add(edges.tl.multiplyScalar(distance));
edges.width.unproject(camera);
edges.width.sub(camera.position).normalize();
distance = -camera.position.z / edges.width.z;
edges.wwidth = edges.wwidth.copy(camera.position).add(edges.width.multiplyScalar(distance));
edges.tr.unproject(camera);
edges.tr.sub(camera.position).normalize();
distance = -camera.position.z / edges.tr.z;
edges.wtr = edges.wtr.copy(camera.position).add(edges.tr.multiplyScalar(distance));
edges.bl.unproject(camera);
edges.bl.sub(camera.position).normalize();
distance = -camera.position.z / edges.bl.z;
edges.wbl = edges.wbl.copy(camera.position).add(edges.bl.multiplyScalar(distance));
edges.br.unproject(camera);
edges.br.sub(camera.position).normalize();
distance = -camera.position.z / edges.br.z;
edges.wbr = edges.wbr.copy(camera.position).add(edges.br.multiplyScalar(distance));
const material = new THREE.LineBasicMaterial({ color: 0x0FFFFFF });
const points0 = [edges.wtl, edges.wtr];
let geometry = new THREE.BufferGeometry().setFromPoints(points0);
line0 = new THREE.Line(geometry, material);
line0.name = "line0";
scene.add(line0);
const points1 = [edges.wbl, edges.wbr];
geometry = new THREE.BufferGeometry().setFromPoints(points1);
line1 = new THREE.Line(geometry, material);
line1.name = "line1";
scene.add(line1);
const sphereGeometry = new THREE.SphereGeometry(1.0, 8, 8);
const sphereMaterial = new THREE.MeshBasicMaterial( { color: 0xFFFFFF } );
const sphereMaterial2 = new THREE.MeshBasicMaterial( { color: 0xFF0000 } );
let p00 = new THREE.Vector3(edges.wtl.x, edges.wtl.y, 0.0);
let p01 = new THREE.Vector3(edges.wbl.x, edges.wbl.y, 0.0);
let p10 = new THREE.Vector3(-Math.sqrt(2.0) * 16.0, edges.wtl.y, 0.0);
let p11 = new THREE.Vector3(-Math.sqrt(2.0) * 16.0, edges.wbl.y, 0.0);
let p20 = new THREE.Vector3(0.0, edges.wtl.y, Math.sqrt(2.0) * 16.0);
let p21 = new THREE.Vector3(0.0, edges.wbl.y, Math.sqrt(2.0) * 16.0);
let p30 = new THREE.Vector3(Math.sqrt(2.0) * 16.0, edges.wtl.y, 0.0);
let p31 = new THREE.Vector3(Math.sqrt(2.0) * 16.0, edges.wbl.y, 0.0);
let p40 = new THREE.Vector3(edges.wtr.x, edges.wtr.y, 0.0);
let p41 = new THREE.Vector3(edges.wbr.x, edges.wbr.y, 0.0);
let sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
sphere.position.set(p00.x, p00.y, p00.z);
sphere.name = "point00";
scene.add(sphere);
sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
sphere.position.set(p01.x, p01.y, p01.z);
sphere.name = "point01";
scene.add(sphere);
sphere = new THREE.Mesh(sphereGeometry, sphereMaterial2);
sphere.position.set(p10.x, p10.y, p10.z);
sphere.name = "point10";
scene.add(sphere);
sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
sphere.position.set(p20.x, p20.y, p20.z);
sphere.name = "point20";
scene.add(sphere);
sphere = new THREE.Mesh(sphereGeometry, sphereMaterial2);
sphere.position.set(p30.x, p30.y, p30.z);
sphere.name = "point30";
scene.add(sphere);
sphere = new THREE.Mesh(sphereGeometry, sphereMaterial2);
sphere.position.set(p11.x, p11.y, p11.z);
sphere.name = "point11";
scene.add(sphere);
sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
sphere.position.set(p21.x, p21.y, p21.z);
sphere.name = "point21";
scene.add(sphere);
sphere = new THREE.Mesh(sphereGeometry, sphereMaterial2);
sphere.position.set(p31.x, p31.y, p31.z);
sphere.name = "point31";
scene.add(sphere);
sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
sphere.position.set(p40.x, p40.y, p40.z);
sphere.name = "point40";
scene.add(sphere);
sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
sphere.position.set(p41.x, p41.y, p41.z);
sphere.name = "point41";
scene.add(sphere);
const material2 = new THREE.LineBasicMaterial({ color: 0x0FF0000 });
let points = [p00, p10, p20, p30, p40];
geometry = new THREE.BufferGeometry().setFromPoints(points);
let topLine = new THREE.Line(geometry, material2);
topLine.name = "topLine";
scene.add(topLine);
points = [p01, p11, p21, p31, p41];
geometry = new THREE.BufferGeometry().setFromPoints(points);
let bottomLine = new THREE.Line(geometry, material2);
bottomLine.name = "bottomLine";
scene.add(bottomLine);
let pf0 = new THREE.Vector3(edges.wtl.x + edges.wtl.distanceTo(edges.wwidth), p00.y, p00.z);
let pf1 = new THREE.Vector3(edges.wbl.x + edges.wtl.distanceTo(edges.wwidth), p01.y, p01.z);
//let pf1 = new THREE.Vector3(edges.wwidth.x * 2, p01.y, p01.z);
points = [p00, pf0, pf1, p01, p00];
geometry = new THREE.BufferGeometry().setFromPoints(points);
let frameLine = new THREE.Line(geometry, material);
frameLine.name = "frame";
scene.add(frameLine);
}
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/106/three.min.js"></script>
As in the figure below, consider a plane defined by camera.position and both end points (p00 and p40), and move the points (p10 and p30) to the intersection points of that plane and the edges.
Then the points will align in a straight line on the screen, as expected.
For example, using ray.intersectPlane():
let p00 = new THREE.Vector3(edges.wtl.x, edges.wtl.y, 0.0);
let p01 = new THREE.Vector3(edges.wbl.x, edges.wbl.y, 0.0);
//let p10 = new THREE.Vector3(-Math.sqrt(2.0) * 16.0, edges.wtl.y, 0.0);
//let p11 = new THREE.Vector3(-Math.sqrt(2.0) * 16.0, edges.wbl.y, 0.0);
let p10 = new THREE.Vector3(-Math.sqrt(2.0) * 16.0, 16.0, 0.0);
let p11 = new THREE.Vector3(-Math.sqrt(2.0) * 16.0, -16.0, 0.0);
let p20 = new THREE.Vector3(0.0, edges.wtl.y, Math.sqrt(2.0) * 16.0);
let p21 = new THREE.Vector3(0.0, edges.wbl.y, Math.sqrt(2.0) * 16.0);
//let p30 = new THREE.Vector3(Math.sqrt(2.0) * 16.0, edges.wtl.y, 0.0);
//let p31 = new THREE.Vector3(Math.sqrt(2.0) * 16.0, edges.wbl.y, 0.0);
let p30 = new THREE.Vector3(Math.sqrt(2.0) * 16.0, 16.0, 0.0);
let p31 = new THREE.Vector3(Math.sqrt(2.0) * 16.0, -16.0, 0.0);
let p40 = new THREE.Vector3(edges.wtr.x, edges.wtr.y, 0.0);
let p41 = new THREE.Vector3(edges.wbr.x, edges.wbr.y, 0.0);
let nwt = p00.clone().sub(camera.position).cross(p40.clone().sub(camera.position)).normalize();
let nwb = p01.clone().sub(camera.position).cross(p41.clone().sub(camera.position)).normalize();
let planewt = new THREE.Plane(nwt, -nwt.dot(camera.position));
let planewb = new THREE.Plane(nwb, -nwb.dot(camera.position));
let r10 = new THREE.Ray(p10.clone(), p11.clone().sub(p10).normalize());
let r11 = new THREE.Ray(p11.clone(), p10.clone().sub(p11).normalize());
r10.intersectPlane(planewt, p10);
r11.intersectPlane(planewb, p11);
let r30 = new THREE.Ray(p30.clone(), p31.clone().sub(p30).normalize());
let r31 = new THREE.Ray(p31.clone(), p30.clone().sub(p31).normalize());
r30.intersectPlane(planewt, p30);
r31.intersectPlane(planewb, p31);
A Map in Dart is a hash table, am I right?
Map starsData = {
'stars':{
'star1': {'x': 0, 'y': 10},
'star2': {'x': 0, 'y': 10}
}
};
The object below in JavaScript can be accessed as a hash table, which is fast! I just want to do the same in Dart, but I am not sure if the best way is to use Map.
const starsData = {
stars:{
'star1': {'x': 0, 'y': 10},
'star2': {'x': 0, 'y': 10}
}
};
I have rewritten your JavaScript implementation (based on the project you linked: https://github.com/ToniCalfim/fallingstars/blob/master/index.js) in Dart:
Can also be tested with:
https://dartpad.dev/900989f4e35e5a61200e4ad04ecd399a
import 'dart:html';
import 'dart:math' as math;
const starsDiameter = 1.25;
const colorPallete = [
'white',
'yellow',
'blue',
'red',
'orange',
'turquoise',
'purple',
'green',
'lightblue',
'lightyellow',
'lightgreen',
'darkred',
'darkblue',
'darkorange',
'darkturquoise',
'darkgreen'
];
final math.Random _rnd = math.Random();
int getRandomNumber(int min, int max) => min + _rnd.nextInt(max - min);
class Star {
int x, y;
int positionX = getRandomNumber(2, 650);
int positionY = getRandomNumber(3, 125);
double diameter = starsDiameter;
int pulsing = 0;
int blinking = 0;
int timeToFall = getRandomNumber(0, 7500);
int velocityToFall = getRandomNumber(1, 5);
int directionToFall = getRandomNumber(-1, 1);
String color = colorPallete[getRandomNumber(0, colorPallete.length)];
Star() {
x = positionX;
y = positionY;
}
}
final List<Star> stars = List.generate(175, (_) => Star());
void update() {
for (final currentStar in stars) {
final currentTimeToFall = currentStar.timeToFall;
if (currentTimeToFall != 0) {
currentStar.timeToFall = currentTimeToFall - 1;
} else {
final currentVelocityToFall = currentStar.velocityToFall;
final currentAngleToFall = currentStar.directionToFall;
final currentPositionX = currentStar.x;
final currentPositionY = currentStar.y;
currentStar.x = currentPositionX + 1 * currentAngleToFall;
currentStar.y = currentPositionY + currentVelocityToFall;
}
}
}
final CanvasElement canvas = querySelector('#canvas') as CanvasElement;
final CanvasRenderingContext2D context2D = canvas.context2D;
void drawStars() {
context2D.clearRect(
0, 0, context2D.canvas.width, context2D.canvas.height); // Clear canvas
for (final currentStar in stars) {
context2D.beginPath();
context2D.fillStyle = currentStar.color;
context2D.arc(currentStar.x, currentStar.y, starsDiameter, 0, 2 * math.pi);
context2D.fill();
context2D.closePath();
}
}
void animateLoop([num highResTime]) {
update();
drawStars();
window.requestAnimationFrame(animateLoop);
}
void main() {
animateLoop();
}
Looking at your code, I could not see any reason why the stars should be stored in a Map or another hash-table-like structure. You use the stars in two ways, drawing and updating, and in both cases you are just going through all the stars, which can be done with a simple list by iterating over all its elements.
I should add that I am not a front-end programmer and cannot really judge whether the way you are drawing on the 2D canvas is the most efficient approach. My converted code is only an attempt to show how the data could be structured in Dart.
val playTexture = assetManager?.get(A.image.gamePlayBtn, Texture::class.java)
val skin: Skin = Skin()
skin.add("up", TextureRegion(playTexture, 0, 0, 296, 96))
skin.add("down", TextureRegion(playTexture, 0, 96, 296, 96))
val playButton: ImageTextButton = ImageTextButton("PLAY THE GAME", with(ImageTextButton.ImageTextButtonStyle()) {
up = skin.getDrawable("up")
down = skin.getDrawable("down")
font = BitmapFont()
font.data.setScale(3f)
font.color = Color.ORANGE
this
})
Click events work OK, but there is no button background change for the clicked (down) state. What am I doing wrong?
I've tested your code and it's working fine. Here is the complete test code:
class Main : ApplicationAdapter(){
internal lateinit var stage: Stage
override fun create() {
stage= Stage()
stage.setDebugAll(true)
Gdx.input.setInputProcessor(stage)
var skin= Skin()
var tex1 =Texture("badlogic.jpg")
skin.add("up", TextureRegion(tex1, 0, 0, 100, 50))
skin.add("down", TextureRegion(tex1, 0, 96, 100, 50))
var playButton: ImageTextButton = ImageTextButton("Play The Game", with(ImageTextButton.ImageTextButtonStyle()){
up = skin.getDrawable("up")
down = skin.getDrawable("down")
font = BitmapFont()
font.data.setScale(3f)
font.color = Color.ORANGE
this
});
playButton.setPosition(0F,100F)
playButton.addListener(object : ClickListener(){
override fun clicked(event: InputEvent?, x: Float, y: Float) {
print("clicked")
super.clicked(event, x, y)
}
})
stage.addActor(playButton)
}
override fun render() {
Gdx.gl.glClearColor(1f, 0f, 0f, 1f)
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT)
stage.draw()
stage.act()
}
}
You have to update your button.
There is an updateImage() function, but it's protected, so you have to do it via a Stage.
So here is what you do:
Create stage.
Add your button to stage.
Call stage.draw() in render function.
...
Stage stage = new Stage();
//create button
stage.addActor(playButton);
...
#Override
public void render() {
stage.draw();
}
I have the following test code with a ClothMesh (from the FXyz lib) that I can drag and rotate, and whose circle handles I can drag. All works well; FXyz is great. Now I want to use SegmentedSphereMesh, and it mostly works, except that my circle handles are 2D and not wrapped around the sphere. I know the possible problems of mixing 2D and 3D. However, it is so close to working; how can I make my handles work with the sphere, or what would be another way to achieve the same function?
Note, I do not want to control the shape/mesh by moving the camera around.
import org.fxyz.shapes.complex.cloth.ClothMesh
import org.fxyz.shapes.primitives.SegmentedSphereMesh
import scalafx.Includes._
import scalafx.application.JFXApp
import scalafx.application.JFXApp.PrimaryStage
import scalafx.beans.property.DoubleProperty
import scalafx.collections.ObservableFloatArray
import scalafx.scene.image.Image
import scalafx.scene.input.{MouseButton, MouseEvent}
import scalafx.scene.paint.PhongMaterial
import scalafx.scene.shape._
import scalafx.scene.transform.Rotate
import scalafx.scene._
import scalafx.scene.paint.Color
/**
* left mouse to drag the meshView and also to drag the handles
* right mouse drag + ctrl to rotate about X axis
* right mouse drag + alt to rotate about Y axis
* right mouse drag + shift to rotate about Z axis
*/
object ClothTest2 extends JFXApp {
private var dx = 0.0
private var dy = 0.0
stage = new PrimaryStage {
scene = new Scene(600, 600, true, SceneAntialiasing.Balanced) {
fill = Color.LightGray
val testImg = "https://upload.wikimedia.org/wikipedia/commons/c/c4/PM5544_with_non-PAL_signals.png"
val img = new Image(testImg, 400, 400, false, true)
val meshView = new SegmentedSphereMesh(20, 4, 2, 200d)
// val meshView = new ClothMesh(4, 4, 200, 200, 0.5, 0.5, 1.0)
meshView.setDrawMode(DrawMode.Fill)
meshView.setCullFace(CullFace.None)
meshView.style = "-fx-background-color: #00000000"
meshView.setMaterial(new PhongMaterial(Color.White, img, null, null, null))
val controller = new MeshController(meshView.getMesh().asInstanceOf[javafx.scene.shape.TriangleMesh].points)
val viewGroup = new Group(meshView, controller)
root = new Group(new AmbientLight(Color.White), viewGroup) { translateX = 70; translateY = 70 }
camera = new PerspectiveCamera(true) {
nearClip = 0.0
farClip = 100000.0
fieldOfView = 42
verticalFieldOfView = true
translateZ = -900
}
val rotHandler = new RotHandler(viewGroup)
onMouseDragged = (event: MouseEvent) => {
rotHandler.onMouseDragged(event)
if (event.button == MouseButton.PRIMARY) {
viewGroup.layoutX = event.sceneX + dx
viewGroup.layoutY = event.sceneY + dy
event.consume()
}
}
onMousePressed = (event: MouseEvent) => {
rotHandler.onMousePressed(event)
dx = viewGroup.layoutX.value - event.sceneX
dy = viewGroup.layoutY.value - event.sceneY
event.consume()
}
}
}
}
class CircleHandle(color: Color) extends Circle {
radius = 8
var dx = 0.0
var dy = 0.0
fill <== when(hover) choose Color.Red.deriveColor(1, 1, 1, 0.4) otherwise color.deriveColor(1, 1, 1, 0.4)
strokeWidth <== when(hover) choose 3 otherwise 2
stroke = color
onMousePressed = (event: MouseEvent) => {
dx = centerX.value - event.x
dy = centerY.value - event.y
event.consume()
}
onMouseDragged = (event: MouseEvent) => {
centerX = event.x + dx
centerY = event.y + dy
event.consume()
}
}
class MeshController(thePoints: ObservableFloatArray) extends Group {
children = for (i <- 0 until thePoints.size by 3) yield new CircleHandle(Color.Yellow) {
centerX() = thePoints.get(i)
centerX.onChange { (obs, oldVal, newVal) => thePoints.set(i, newVal.floatValue()) }
centerY() = thePoints.get(i + 1)
centerY.onChange { (obs, oldVal, newVal) => thePoints.set(i + 1, newVal.floatValue()) }
}
}
class RotHandler(val viewer: Group) {
private val angleX = DoubleProperty(0)
private val angleY = DoubleProperty(0)
private val angleZ = DoubleProperty(0)
private var anchorX = 0d
private var anchorY = 0d
private val rotX = new Rotate { angle <== angleX; axis = Rotate.XAxis }
private val rotY = new Rotate { angle <== angleY; axis = Rotate.YAxis }
private val rotZ = new Rotate { angle <== angleZ; axis = Rotate.ZAxis }
viewer.transforms = Seq(rotX, rotY, rotZ)
def onMousePressed(event: MouseEvent) = {
anchorX = event.sceneX
anchorY = event.sceneY
event.consume()
}
def onMouseDragged(event: MouseEvent) = {
// right mouse only
if (event.button == MouseButton.SECONDARY) {
event match {
// rotation about the Y axis, dragging the mouse in the x direction
case ev if ev.altDown => angleY() = anchorX - event.sceneX
// rotation about the X axis, dragging the mouse in the y direction
case ev if ev.controlDown => angleX() = anchorY - event.sceneY
// rotation about the Z axis, dragging the mouse in the x direction
case ev if ev.shiftDown => angleZ() = anchorX - event.sceneX
case _ => // ignore everything else
}
}
event.consume()
}
}
So I am in dire need of a code read-through, if someone would be so kind. I have no idea where the fault can come from. I've been comparing this code to the code from the tutorial here:
http://learningwebgl.com/lessons/lesson05/index.html
and have looked through both about 10 times, and I just don't know... I need some help from the pros. I'm just trying to texture a square, without any of the 3D stuff that I don't care about at the moment.
<script id="shader-fs" type="x-shader/x-fragment">
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
void main(void) {
gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
// gl_FragColor= vec4(0.0, 1.0, 0.0, 1.0);
}
</script>
<script id="shader-vs" type="x-shader/x-vertex">
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
varying vec2 vTextureCoord;
void main(void) {
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
vTextureCoord = aTextureCoord;
}
</script>
<script type="text/javascript">
var gl;
function initGL(canvas) {
try {
gl = canvas.getContext("experimental-webgl");
gl.viewportWidth = canvas.width;
gl.viewportHeight = canvas.height;
} catch (e) {
}
if (!gl) {
alert("Could not initialise WebGL, sorry :-(");
}
}
function getShader(gl, id) {
var shaderScript = document.getElementById(id);
if (!shaderScript) {
return null;
}
var str = "";
var k = shaderScript.firstChild;
while (k) {
if (k.nodeType == 3) {
str += k.textContent;
}
k = k.nextSibling;
}
var shader;
if (shaderScript.type == "x-shader/x-fragment") {
shader = gl.createShader(gl.FRAGMENT_SHADER);
} else if (shaderScript.type == "x-shader/x-vertex") {
shader = gl.createShader(gl.VERTEX_SHADER);
} else {
return null;
}
gl.shaderSource(shader, str);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
alert(gl.getShaderInfoLog(shader));
return null;
}
return shader;
}
var shaderProgram;
function initShaders() {
var fragmentShader = getShader(gl, "shader-fs");
var vertexShader = getShader(gl, "shader-vs");
shaderProgram = gl.createProgram();
gl.attachShader(shaderProgram, vertexShader);
gl.attachShader(shaderProgram, fragmentShader);
gl.linkProgram(shaderProgram);
if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
alert("Could not initialise shaders");
}
gl.useProgram(shaderProgram);
shaderProgram.vertexPositionAttribute = gl.getAttribLocation(shaderProgram, "aVertexPosition");
gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);
shaderProgram.textureCoordAttribute=gl.getAttribLocation(shaderProgram, "aTextureCoord");
gl.enableVertexAttribArray(shaderProgram.textureCoordAttribute); //**
shaderProgram.samplerUniform= gl.getUniformLocation(shaderProgram, "uSampler");
shaderProgram.pMatrixUniform = gl.getUniformLocation(shaderProgram, "uPMatrix");
shaderProgram.mvMatrixUniform = gl.getUniformLocation(shaderProgram, "uMVMatrix");
}
var mvMatrix = mat4.create();
var pMatrix = mat4.create();
function setMatrixUniforms() {
gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, pMatrix);
gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, mvMatrix);
}
var triangleVertexPositionBuffer;
var squareVertexPositionBuffer;
function initBuffers() {
squareVertexPositionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
vertices = [
1.0, 1.0, 0.0,
-1.0, 1.0, 0.0,
1.0, -1.0, 0.0,
-1.0, -1.0, 0.0
];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
squareVertexPositionBuffer.itemSize = 3;
squareVertexPositionBuffer.numItems = 4;
squareTexPositionBuffer=gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, squareTexPositionBuffer);
texvert= [1.0, 0.0,
0.0, 0.0,
1.0, 1.0,
0.0, 1.0];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(texvert), gl.STATIC_DRAW);
squareTexPositionBuffer.itemSize=2;
squareTexPositionBuffer.numItems=4;
}
function drawScene() {
gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 100.0, pMatrix);
mat4.identity(mvMatrix);
mat4.translate(mvMatrix, [-2.0, 0.0, -7.0]);
mat4.translate(mvMatrix, [3.0, 0.0, 0.0]);
gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, squareVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, squareTexPositionBuffer);
gl.vertexAttribPointer(shaderProgram.textureCoordAttribute, squareTexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, neheTexture);
gl.uniform1i(shaderProgram.samplerUniform, 0);
setMatrixUniforms();
gl.drawArrays(gl.TRIANGLE_STRIP, 0, squareVertexPositionBuffer.numItems);
}
function handleLoadedTexture(texture) {
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, texture.image);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.bindTexture(gl.TEXTURE_2D, null);
}
var neheTexture;
function initTexture(){
neheTexture = gl.createTexture();
neheTexture.image = new Image();
neheTexture.image.onload = function() {
handleLoadedTexture(neheTexture)
}
neheTexture.image.src = "nehe.gif";
}
function webGLStart() {
var canvas = document.getElementById("lesson01-canvas");
initGL(canvas);
initShaders();
initBuffers();
initTexture();
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.enable(gl.DEPTH_TEST);
drawScene();
}
</script>
The texture won't be loaded immediately.
This is not a problem when you have an animation, because the scene is rendered with a blank texture while the image isn't fully loaded, and once it is, the objects become textured. You don't have an animation, only one draw call, which executes before the image has finished loading.
So after the image is loaded you should make another draw call, so that the scene is drawn with the texture.
So something like:
neheTexture.image.onload = function() {
handleLoadedTexture(neheTexture);
drawScene(); // <- now draw scene again, once I got my texture
}
Hope this helps. :)
Here's kind of a follow-up question: let's say I wanted to have two different shapes, one with a color buffer and another with a texture buffer. How would I write that code in the shaders, since the tutorials make it look like you could only have one or the other, but not both?
So, as in the following code, I have something for the texture and something to output a flat color in another line of code. How would I make that distinction in this language? I tried using ints to represent the choice between the two, but it didn't work out very well...
<script id="shader-fs" type="x-shader/x-fragment">
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
void main(void) {
gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
// gl_FragColor= vec4(0.0, 1.0, 0.0, 1.0);
}
</script>