I'm making a little model solar system, and trying to learn the finer points of lighting. The sun is modeled as a sphere with a diffuse map texture I found, and I added a PointLight to its center. It illuminates the other planets very nicely, but the sun itself is dark. What's the right way to make an object appear radiant and not just reflective?
Of course I found an answer RIGHT after posting. The key is the setSelfIlluminationMap method in PhongMaterial:
private static Sphere buildGlowingPlanet(double radius, Image diffuseMap, Image selfIlluminationMap) {
Sphere planet = new Sphere(radius);
PhongMaterial planetMaterial = new PhongMaterial();
planetMaterial.setDiffuseMap(diffuseMap);
planetMaterial.setSelfIlluminationMap(selfIlluminationMap);
planet.setMaterial(planetMaterial);
return planet;
}
It would be nice if there were a way to just set the illumination to a solid color, but a blank white image achieves the same effect.
Related
I am trying to implement a 2D selection window that selects the 3D vertices inside it (indicated by the dashed cyan rectangle). Each of my 3D models is currently a composite Group of MeshViews, one MeshView per face. My plan is to iterate over each face (MeshView) and check whether its 2D bounds intersect the selection box bounds. (I plan to update this later using an atlas texture to reduce the number of meshes, but for now I just want the selection mechanism working.)
Currently I have the following, but this isn't correct.
val selectionBounds = selectionRectangle.boundsInParent
val localBounds = meshView.localToScene(meshView.boundsInLocal, true)
if (selectionBounds.intersects(localBounds))
// do something with the mesh in meshView
My SubScene contains a PerspectiveCamera. I found two useful posts:
Convert coordinates from 3D scene to 2D overlay
How to get 2D coordinates on window for 3D object in javafx
I think I first have to project the MeshView's bounds properly using my perspective camera, but I am unsure how to proceed. Do I have to project every 3D point of the local bounds to 2D, as is done in the second referenced question above? I'm not very familiar with the math or the related concepts, so any help would be appreciated.
(wink, wink MVP José)
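For reference, the projection step the second linked question performs boils down to similar triangles: a minimal sketch, assuming a symmetric frustum with a vertical field of view (the class and method names here are illustrative, not JavaFX API):

```java
// Sketch of perspective projection: maps a camera-space point
// (x right, y up, z forward into the screen) to 2D pixel coordinates.
// Assumes a symmetric frustum with a vertical field of view.
public class ProjectSketch {
    static double[] project(double x, double y, double z,
                            double fovDegrees, double width, double height) {
        // Distance from the eye to the projection plane, derived from
        // the vertical field of view and the viewport height.
        double focal = (height / 2.0) / Math.tan(Math.toRadians(fovDegrees) / 2.0);
        double sx = width / 2.0 + x * focal / z;   // x grows to the right
        double sy = height / 2.0 - y * focal / z;  // flip y: screen y grows downward
        return new double[] { sx, sy };
    }
}
```

Projecting all eight corners of the 3D bounds this way and taking the min/max of the results would give a conservative 2D box to test against the selection rectangle.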
EDIT 1:
After José's suggestion I added red bounding boxes for each meshview which gives the following result:
Apparently it adds some offset, which appears to be the same regardless of the camera rotation. Here the red boxes are drawn around each MeshView. Will investigate further...
EDIT 2:
I use a Pane which contains the SubScene and another Node. This is done to control the sizing of the SubScene and to reposition/resize the other Node accordingly by overriding the layoutChildren method, like so (this is Kotlin):
override fun layoutChildren() {
val subScene = subSceneProperty.get()
if (subScene != null){
subScene.width = width
subScene.height = height
}
overlayRectangleGroup.resize(width, height)
val nodeWidth = snapSize(overlayMiscGroup.prefWidth(-1.0))
val nodeHeight = snapSize(overlayMiscGroup.prefHeight(-1.0))
overlayMiscGroup.resizeRelocate(width - nodeWidth, 0.0, nodeWidth, nodeHeight)
}
I am trying to highlight the area where two Circles intersect:
Example 1.:
The yellow dots are given random values for testing purposes. Each one is used to draw a circle around it, and an ellipse is stored in the background. When there is no intersection, the GUI behaves correctly and displays this:
After new random values, the shapes intersect. Since I can't seem to add the new Shape created through Shape.intersect() to the canvas, I just did a quick sp.setContent() and got this image:
This basically shows me the intersected space and colors it blue.
Everything is drawn on a Canvas, which basically does the following:
Canvas canvas = new Canvas(250, 250);
....
gc = canvas.getGraphicsContext2D();
canvas.setHeight(imgTemp.getHeight());
canvas.setWidth(imgTemp.getWidth());
gc.drawImage(imgTemp, 0, 0);
There are also some other loops to draw the shapes and the circles.
Now, the code for the intersect is the following:
if (!(e.equals(eT))) {
    if (e.getBoundsInParent().intersects(eT.getBoundsInParent())) {
        System.out.println("Collision detected!");
        Shape inter = Shape.intersect(e, eT);
        if (inter.getBoundsInLocal().getWidth() > 0 && inter.getBoundsInLocal().getHeight() > 0) {
            inter.setFill(BLUE);
            inter.setStrokeWidth(3);
            sp.setContent(inter);
        }
    }
}
I'm not that used to JavaFX; I only really began working with it this weekend for a small project. I'm guessing that I might need to switch from the Canvas to something else to make use of the shapes? Or is there a way to "transform" the Shape from the intersect into something drawable by the GraphicsContext2D?
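As an aside on the check above: getBoundsInParent compares axis-aligned boxes, which over-reports collisions near the corners of round shapes. For two circles an exact test is plain center-distance math; a minimal sketch (class and names are illustrative):

```java
// Exact overlap test for two circles: they intersect when the distance
// between their centers is no greater than the sum of their radii.
// Comparing squared distances avoids the square root.
public class CircleOverlap {
    static boolean intersects(double x1, double y1, double r1,
                              double x2, double y2, double r2) {
        double dx = x2 - x1, dy = y2 - y1;
        double rSum = r1 + r2;
        return dx * dx + dy * dy <= rSum * rSum;
    }
}
```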
Why don't you just put your Canvas into a Group and then add your shapes to that same Group? Why insist on drawing everything into the Canvas? A Canvas is just a Node like all the other Shapes, and you can mix them freely in the scene graph.
Another question is why you are using the Canvas at all, given that you have already realized it leads to problems in your case.
I have been trying to create a simple application whereby a 360 image is shown on a Sphere. The next step is to dynamically add 'hotspots' onto this image as the user pans around.
I've tried many different approaches (including delving deep into Three.js), but it doesn't seem to be that easy (UVs, positions, vertices, etc.).
A-Frame is awesome at making it super simple for developers to get up and running. I managed to find an example with two 'hotspots' on a sphere which, when hovered/focused, changed the sphere background.
I'm looking to create a new 'hotspot' in the camera's direction; however, this is proving to be quite difficult because the camera position never changes, only its rotation does.
For example, after panning, I see the following:
<a-camera position="0 2 4" camera="" look-controls="" wasd-controls="" rotation="29.793805346802827 -15.699043586584567 0">
<a-cursor color="#4CC3D9" fuse="true" timeout="10" material="color:#4CC3D9;shader:flat;opacity:0.8" cursor="maxDistance:1000;fuse:true;timeout:10" geometry="primitive:ring;radiusOuter:0.016;radiusInner:0.01;segmentsTheta:64" position="0 0 -1" raycaster=""></a-cursor>
</a-camera>
Ideally, to create a new hotspot in this position, I assumed all I'd need to do is add the rotation values to the position value and that would be the correct place for the hotspot. Unfortunately, this doesn't work: either the hotspot is in the wrong position, it isn't facing the camera, or it's no longer in view.
How can I create a 'hotspot' in the camera's viewpoint (in the middle of the view)?
I've set up a CodePen which should help show the problem better. Ideally, when I pan around and click 'create hotspot', a 'hotspot' should be created directly in the middle (just like the red and yellow ones).
Thanks!
UPDATE: An idea I've had is to use a secondary sphere (smaller radius) with no background. If it is easy to find the XYZ coordinates where the camera's view intersects it, this becomes super simple. Is there a way of finding this out?
Without any intersection, when you know the position of the camera, that position is the vector which needs to be negated, normalized, and multiplied by a scalar equal to the radius of the sphere. The result is the position of your new hotspot.
var pos = camera.position.clone().negate().normalize().multiplyScalar(sphereRadius);
jsfiddle example
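The same recipe in plain vector math, assuming the sphere is centered at the origin (class and method names are illustrative):

```java
// The answer's recipe: negate the camera position, normalize it, and
// scale by the sphere radius to get a point on the sphere surface on
// the side the camera is facing (opposite its offset from the center).
public class HotspotMath {
    static double[] hotspot(double cx, double cy, double cz, double radius) {
        double len = Math.sqrt(cx * cx + cy * cy + cz * cz);
        double s = -radius / len; // negate and normalize in one step
        return new double[] { cx * s, cy * s, cz * s };
    }
}
```

The result always lies exactly on the sphere, so no ray intersection is needed for this special case.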
Yes, I think your second idea is right: you can make a new sphere which has the same position as your camera. If you want to make a new 'hotspot' at the center of the screen, you can cast a ray from the camera toward the 'hotspot' and it will intersect the sphere. However, you should set the material's side property to 'doubleside'; otherwise the ray won't intersect the sphere. Then you can read intersects[0].point.
var pos = new THREE.Vector3().copy(cameraEl.getComputedAttribute("position"));
cameraEl.object3D.localToWorld(pos);
var vector = new THREE.Vector3(( event.clientX / window.innerWidth ) * 2 - 1, -( event.clientY / window.innerHeight ) * 2 + 1, 0.5);
vector = vector.unproject(camera);
var raycaster = new THREE.Raycaster(pos, vector.sub(camera.position).normalize());
var intersects = raycaster.intersectObjects([sphere],true);
if(intersects.length > 0)
{
console.log(intersects[0].point);
}
I want to have a right-handed Cartesian coordinate system in JavaFX, so (0,0) at lower left corner of window, x increasing to the right and y increasing upwards. I can't figure out how to do that with transforms. If I apply a rotation transform, the buttons will be upside down. All I want is to be able to use this coordinate system instead of the default one.
As mentioned in the JavaFX documentation (see the chapter "Y-down versus Y-up"), Y-down is used by many 2D graphics libraries, which is where JavaFX started.
To force Y up and correct drawing, you could put all your content in a rotated parent node:
// Rotate camera to use Y up.
camera.setRotationAxis(Rotate.Z_AXIS);
camera.setRotate(180.0);
// Rotate scene content for correct drawing.
Group yUp = new Group();
yUp.setRotationAxis(Rotate.Z_AXIS);
yUp.setRotate(180.0);
Scene scene = new Scene(yUp);
scene.setCamera(camera);
Now add everything to yUp to use those nodes as in a Y-up environment.
Bear in mind that this is fine in 2D space. If you add 3D features later, make sure your models grow in the negative Y direction; otherwise you would have to use another container.
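If the double rotation feels heavy for simple cases, the Y flip can also be done by hand when positioning nodes: measure y from the bottom edge instead of the top. A minimal sketch, assuming you track the scene height yourself (the helper name is mine):

```java
// Convert a y-up coordinate to JavaFX's y-down scene coordinate.
// x is unchanged; only the vertical axis is mirrored.
public class YUp {
    static double toFxY(double yUp, double sceneHeight) {
        return sceneHeight - yUp;
    }
}
```

The drawback is that every coordinate you set (and every mouse coordinate you read) must go through the conversion, which is why the rotated-parent approach is usually less error-prone.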
JavaFX's Prism renderer eventually uses a 3D camera transform to render its shapes.
There are two cameras that can be set to the scene, Parallel and Perspective.
If you look in the JavaFX source for ParallelCamera here, you will find some math that computes the transform.
If you override that method and implement the proper maths, you should be able to invert the coordinate system.
The kind of math you would use is something like this.
You would have to look in the source to see what ortho does exactly. But this should get you on the right track.
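Roughly, an ortho-style mapping takes the viewport rectangle into normalized device coordinates, and negating the y term is what inverts the vertical axis. A sketch of the idea, not the actual Prism code (names are illustrative):

```java
// Sketch of an ortho-style mapping: x in [0, w] -> [-1, 1] and
// y in [0, h] -> [-1, 1]. Negating the y term is the "flip" that
// turns a y-down window coordinate into a y-up one (or vice versa).
public class OrthoSketch {
    static double[] ndc(double x, double y, double w, double h, boolean flipY) {
        double nx = 2.0 * x / w - 1.0;
        double ny = 2.0 * y / h - 1.0;
        return new double[] { nx, flipY ? -ny : ny };
    }
}
```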
Flex 3, ActionScript 3, Flash player 9.
I have a picture in a BitmapData object and an array of points. I need to erase the part of the picture inside the polygon specified by the points; in other words, draw the polygon specified by the points and fill it with transparency.
Any ideas on how it can be done?
Got it working with the following code:
var shape:Shape = new Shape();
shape.graphics.beginFill(0x000000, 1); // solid black
shape.graphics.moveTo(points[0].x, points[0].y);
points.forEach(function (p:Point, i:int, a:Array):void {
shape.graphics.lineTo(p.x, p.y);
});
shape.graphics.endFill();
data.draw(shape, null, null, "erase");
For a rectangle, you can use fillRect. For a polygon, you would have to draw the polygon in a totally different color (than the other colors in the bitmap) and use floodFill; but I don't know how to draw the polygon, since the BitmapData class has no method to draw lines. Another option is to write your own logic to find the pixels inside the polygon and use the setPixel32 method to set their alphas to zero.
This Wikipedia page describes algorithms to determine whether a point is inside a given polygon. You might find it useful.
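The point-in-polygon test those algorithms describe can be sketched as an even-odd (ray casting) check, shown here in Java rather than ActionScript, with illustrative names:

```java
// Even-odd (ray casting) point-in-polygon test: shoot a horizontal ray
// from the point and count how many polygon edges it crosses; an odd
// count means the point is inside. xs/ys hold the vertex coordinates.
public class PointInPolygon {
    static boolean contains(double[] xs, double[] ys, double px, double py) {
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            // Does edge (j -> i) straddle the horizontal line y = py?
            boolean crosses = (ys[i] > py) != (ys[j] > py);
            if (crosses) {
                // x coordinate where the edge meets that horizontal line.
                double xAtY = xs[j] + (py - ys[j]) * (xs[i] - xs[j]) / (ys[i] - ys[j]);
                if (px < xAtY) inside = !inside;
            }
        }
        return inside;
    }
}
```

Combined with setPixel32, a loop over the polygon's bounding box could then clear exactly the pixels for which this test returns true.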