I’m working with the Autodesk Forge Viewer and Three.js, trying to render 2D text that the user can interact with (specifically select, rotate, and move).
To do this I am working with meshes (MeshBasicMaterial, Mesh and TextGeometry), but the text does not look perfectly sharp: it shows aliasing, and according to the API reference, antialiasing is not applicable to 2D.
Here are some examples of the problem. As you can see, the farther I move away from the plane, the worse the text looks (and even up close it doesn't look perfect):
I have also tried representing the text with a Sprite (even though that would mean reworking the entire mesh-based implementation of the other features), but apart from the fact that I cannot get it to display, the example images I have seen do not look good either: aliasing is visible from a distance and the text looks really blurry up close. Here are some examples:
Is there a way to correct this problem, or is this the best I can get in 2D? I've tried searching for information on this but can't find anything helpful. What has puzzled me the most is realizing that antialiasing is not applicable in 2D, as if to make it clear that nothing can be done to fix it.
I would be very grateful if you could clear up my doubts. Thank you very much in advance for your help.
An easier alternative is to just use a higher pixel ratio for the renderer:
window.devicePixelRatio=2;
viewer.resize();
For example, using the custom text geometry from Joao's demo, you can see the same aliasing issue at DPR=0.5 and DPR=1.0:
https://joaomartins-forge.github.io/textgeometry-sample/
But when I set the DPR to 2.0, the text looks clean. The trade-off is rendering performance, but your 2D drawings may be simple enough that it won't matter. You can use a 'mouse up' camera-settle trick to switch between a DPR of 1 and 2 if you want a better user experience, as in the sketch below.
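Here is a rough sketch of that settle trick, assuming viewer is your initialized Forge Viewer instance and that resize() re-reads window.devicePixelRatio as in the snippet above (the event wiring on viewer.container is just one way to detect the interaction):

// Render at DPR 1 while the user is dragging the camera,
// then bump back to DPR 2 once the interaction ends.
function setPixelRatio(viewer, dpr) {
  window.devicePixelRatio = dpr; // same override as above
  viewer.resize();               // forces the renderer to rebuild its buffers
}

viewer.container.addEventListener('pointerdown', () => setPixelRatio(viewer, 1));
viewer.container.addEventListener('pointerup',   () => setPixelRatio(viewer, 2));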
There are a few ways to solve this aliasing issue for 2D (and 3D) text.
The way I would recommend for your use case is to use DIV elements (THREE.CSS3DRenderer) instead of text converted into three.js tessellated triangle geometry, as shown in this blog post:
https://forge.autodesk.com/blog/how-do-you-add-labels-forge-viewer
You can find out more information about THREE.CSS3DRenderer here:
https://threejs.org/docs/#examples/en/renderers/CSS3DRenderer
and an Example here: https://threejs.org/examples/#css3d_periodictable
Using CSS3DRenderer instead of CSS2DRenderer means you get correct scaling (and rotation) of the div element as you zoom into your 2D drawing, and the mathematics inside the matrix-transform calculation has fewer edge cases.
Once you are using DIV elements for your text, you will notice that the text is sharper and has no aliasing issues. That's because it is not rasterized by the WebGL pipeline, but by the Skia library that Chrome/Firefox/Opera/etc. use for rasterizing text.
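For reference, here is a minimal sketch of a CSS3D text label in plain three.js (not the Forge-specific wiring; the blog post above covers that). The import path depends on your three.js version, and the label text, position and scale values are just placeholders:

import * as THREE from 'three';
import { CSS3DRenderer, CSS3DObject } from 'three/examples/jsm/renderers/CSS3DRenderer.js';

const cssRenderer = new CSS3DRenderer();
cssRenderer.setSize(window.innerWidth, window.innerHeight);
cssRenderer.domElement.style.position = 'absolute';
cssRenderer.domElement.style.top = '0';
cssRenderer.domElement.style.pointerEvents = 'none'; // keep the mouse on the WebGL canvas
document.body.appendChild(cssRenderer.domElement);

const div = document.createElement('div');
div.textContent = 'Room 101';          // the text stays crisp at any zoom,
div.style.font = '16px sans-serif';    // because the browser rasterizes it

const label = new CSS3DObject(div);
label.position.set(10, 5, 0);          // place it on the 2D sheet
label.scale.set(0.05, 0.05, 0.05);     // CSS pixels are large; scale down to model units
scene.add(label);                      // scene and camera are your existing ones

// Render the CSS layer alongside the WebGL layer each frame:
// cssRenderer.render(scene, camera);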
There is one final option that uses signed distance fields, but it's probably overkill for what you need.
Let me know if you want some example code.
I am trying to implement a tool in my application that would allow the user to plot a triangle mesh using the mouse. I have looked everywhere for a way to do this (tutorials, examples, etc.) and have not been successful. I have seen the FXyz library, but it does not really do what I am trying to accomplish. The goal of the program looks as follows:
Use Case Sequence:
The user adds a PNG image to the 3D scene or drags it into the scene using the mouse.
Once the image is displayed, the user can then plot a mesh around the image.
Once the user has finished plotting the mesh, there would have to be a way to apply the image being overlaid by the mesh as a texture to that mesh. The image should look the same after being added as a texture. Is this too hard or beyond the scope of what can be achieved in JavaFX?
Theoretically it would then be possible to drag the vertices of the mesh at draggable points and apply transformations to the texture. Would this be possible?
Images showing what I am trying to achieve
As you can see, after plotting the mesh, the points connecting the vertices could be dragged in order to manipulate or transform the shape of the mesh. If the mesh has a texture over it, would the image also transform?
Would this be possible with the TriangleMesh class that JavaFX has? By the way, there is very little out there explaining how to use it and how the points, faces and texture coordinates work. Very confusing =(.
Target End Result
My question is: would the type of manipulation shown in the image above be possible in JavaFX? Can this sort of functionality be achieved with the TriangleMesh class or another similar JavaFX class? As you can see, what I am trying to achieve is really image manipulation, so I would appreciate knowing if there is a better way to do this.
Unfortunately I do not have any code I can share; I just can't seem to produce any for this task. I am not asking to be given code or for someone to solve the problem for me. I just want to see examples, be guided in the right direction, and know whether this is even possible or whether I should give up on it.
If you have read this far, thank you so much for your time. I really appreciate it.
I have read your question to the end :-) In contrast to many people here, you at least provide a clear description of what you want to achieve. From my own experience I would say that what you want to do is easily possible with JavaFX, and MeshView is the way to go here.
You can use your image as the texture for this mesh, and you can distort the image by manipulating the vertices of the mesh. I have implemented part of your functionality myself for a project, so I know that it works. A rough sketch of the idea is below.
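This is a minimal sketch, not production code: a textured quad built from two triangles, where dragging moves one corner vertex and the image stretches with it. The file name overlay.png is just a placeholder and the sizes are arbitrary.

import javafx.application.Application;
import javafx.scene.*;
import javafx.scene.image.Image;
import javafx.scene.paint.PhongMaterial;
import javafx.scene.shape.*;
import javafx.stage.Stage;

public class TexturedQuad extends Application {
    @Override
    public void start(Stage stage) {
        float w = 300, h = 200;
        TriangleMesh mesh = new TriangleMesh();
        mesh.getPoints().addAll(        // four corners, z = 0
                0, 0, 0,   w, 0, 0,   w, h, 0,   0, h, 0);
        mesh.getTexCoords().addAll(     // texture coordinates map the corners to the image
                0, 0,   1, 0,   1, 1,   0, 1);
        mesh.getFaces().addAll(         // two triangles as point/texCoord index pairs
                0, 0, 1, 1, 2, 2,
                0, 0, 2, 2, 3, 3);

        MeshView view = new MeshView(mesh);
        view.setCullFace(CullFace.NONE);          // ignore winding order for this sketch
        PhongMaterial material = new PhongMaterial();
        material.setDiffuseMap(new Image("file:overlay.png"));  // placeholder path
        view.setMaterial(material);

        // Dragging moves the top-right corner (point index 2). The texture
        // follows automatically, because only the geometry changes, not the
        // texture coordinates.
        view.setOnMouseDragged(e -> {
            mesh.getPoints().set(2 * 3,     (float) e.getX());
            mesh.getPoints().set(2 * 3 + 1, (float) e.getY());
        });

        stage.setScene(new Scene(new Group(view), 500, 400, true,
                SceneAntialiasing.BALANCED));
        stage.show();
    }

    public static void main(String[] args) { launch(args); }
}

In a real tool you would create one draggable handle per vertex instead of hijacking the mesh's own drag events, but the principle is the same: update the entries in getPoints() and the textured surface deforms with them.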
A basic rotation question: how can you couple two figures (a box/cube with a sphere ANYWHERE inside the cube, BUT not in the center) so that the two are coupled ROTATIONALLY and IN PERSPECTIVE (which is why I don't want the sphere in the center of the cube)?
In other words, when I rotate the cube with the mouse and "bring" the sphere closer to the front (say, make a 180-degree rotation), the perspective should change accordingly and the sphere should get visually bigger (compared to its position at the back).
I asked a couple of ScalaFX experts; they both said it was a very good question and recommended posting it here.
Cheers:
Zar
I'm not entirely sure what you're trying to do, but you can rotate multiple objects by applying a Rotate transform to the Group that contains all of those objects. If you only want to rotate some of the objects, but not all of them, you have to structure the scene so that the objects being rotated have a common parent Group, with none of the non-rotating objects belonging to it. Applying Rotate transforms to that parent Group will rotate all of its child objects too. Rotation will be about the origin of that parent Group.
Update: I forgot to mention how to address the issue of perspective. The 3D objects in a scene aren't directly affected by perspective, since perspective is a property of how the scene is rendered. This rendering is performed by Camera objects. To render the scene using perspective (as opposed to using orthogonal, or parallel, as it's referred to in JavaFX/ScalaFX), add a PerspectiveCamera to the scene and view the scene using that camera. For further information on this, refer to the following: Getting Started with JavaFX 3D Graphics: Camera
Update 2: I've created a gist on GitHub with a complete program for doing this.
Update 3: Made the box transparent and moved the sphere inside the box. Now the left/primary mouse button rotates box + sphere when dragged; the right/secondary mouse button moves the camera dolly towards/away from the box, changing perspective accordingly.
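For reference, here is a rough sketch of the idea in Updates 1-3, written in plain JavaFX terms (ScalaFX wraps these classes one-to-one). It is not the actual gist; the sizes, colors and drag handling are arbitrary.

import javafx.application.Application;
import javafx.scene.*;
import javafx.scene.paint.Color;
import javafx.scene.paint.PhongMaterial;
import javafx.scene.shape.Box;
import javafx.scene.shape.Sphere;
import javafx.scene.transform.Rotate;
import javafx.stage.Stage;

public class BoxWithSphere extends Application {
    private double lastX;

    @Override
    public void start(Stage stage) {
        Box box = new Box(200, 200, 200);
        // Translucent diffuse color; note that transparency support in
        // JavaFX 3D is limited, as discussed later in this thread.
        box.setMaterial(new PhongMaterial(Color.web("#8080ff40")));

        Sphere sphere = new Sphere(30);
        sphere.setTranslateX(70);   // deliberately OFF-center inside the box
        sphere.setTranslateZ(60);

        Group model = new Group(box, sphere);        // common parent
        Rotate spin = new Rotate(0, Rotate.Y_AXIS);  // one shared rotation
        model.getTransforms().add(spin);

        Group root = new Group(model);
        Scene scene = new Scene(root, 600, 600, true, SceneAntialiasing.BALANCED);

        PerspectiveCamera camera = new PerspectiveCamera(true);
        camera.setTranslateZ(-800);   // pull back so the box is in view
        camera.setFarClip(2000);      // the default far clip (100) is too close
        root.getChildren().add(camera);
        scene.setCamera(camera);      // the perspective comes from this camera

        // Dragging rotates box + sphere together; the sphere looks bigger as it
        // swings towards the camera purely because of the perspective projection.
        scene.setOnMousePressed(e -> lastX = e.getSceneX());
        scene.setOnMouseDragged(e -> {
            spin.setAngle(spin.getAngle() + (e.getSceneX() - lastX));
            lastX = e.getSceneX();
        });

        stage.setScene(scene);
        stage.show();
    }

    public static void main(String[] args) { launch(args); }
}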
Update 4: So, if I understand you right, you want to transform the shapes in your 3D scene so that they look as though perspective has been applied to them. Do I have that right?
If so, the reason that this is not a "built-in" capability is for the reasons outlined below. Please forgive me if you already know all of this, incidentally - I'm just trying to provide a comprehensive answer. :-)
Scene graphs (as typically used by retained-mode 3D systems such as JavaFX) capture the geometry, location, rotation, color, etc. of a 3D scene in a hierarchical tree structure. The idea is that the modeler only needs to worry about the content of the scene - ensuring dimensions, alignments, etc. are correct - and does not need to worry about how the scene is rendered.
Perspective can be applied when the scene is rendered as it would appear from a specific viewpoint; i.e. when the scene is translated into a 2D projection such as a GUI window. (The process of determining what the scene looks like in perspective is a part of the rendering algorithm - but does not require modification, deformation, etc. of the scene.) If perspective is not enabled, then the scene is typically rendered orthogonally, without any vanishing points, apparent scaling, etc. The key point here is that the scene itself is unaffected by how it is viewed.
With this arrangement, it's possible to have multiple views of the same scene. Not only can they each have a different viewpoint, but some can be orthogonal and some can use perspective - yet each can render the scene correctly without any confusing artifacts. If it worked the way you seem to think it does, then you could only ever have a single view of the scene at a time, as the scene would need to be deformed during rendering to look right from that sole viewpoint. When editing the scene, you'd need to remove those deformations to prevent mind-blowing confusion for the modeler.
In short, it's a very unusual requirement that the scene itself be deformed to show what it would look like in perspective. That's why there's no built-in capability to do this in any 3D system that I know of.
Assuming that you wish to proceed - using JavaFX - here are some points to bear in mind:
I don't believe that the regular 3D primitives (namely Box, Sphere & Cylinder) can be deformed to represent a perspective view of them. You will have to construct the shapes using the TriangleMesh and MeshView objects (the former captures the geometry of the shape, the latter allows it to be treated as a 3D shape).
To apply perspective, you would have to reposition the vertices in the TriangleMesh instances to deform the scene appropriately. If you need to be able to change the viewpoint, or rotate the box & sphere, then these changes would need to be dynamic, so that the calculated vertex coordinates react to the changing viewpoint and/or rotation. Because of fish-eye effects at high levels of perspective dilation, you might need more vertices than you might expect.
Given your requirements, you still need a camera to view the scene. Clearly, you cannot use the PerspectiveCamera to render the scene, or it will treat the scene as unadjusted and will apply a second level of perspective, ruining your carefully calculated deformations. You will then need to use ParallelCamera to produce orthogonal views of your scene.
Unfortunately, JavaFX's support for using ParallelCamera with 3D scenes is still very immature. (The ParallelCamera is primarily used to render 2D scenes, such as dialogs, buttons, menus, sliders, etc.) You might find it difficult to use in practice. (You can approximate an orthogonal projection with the PerspectiveCamera by using a very narrow field of view and moving the camera away from the scene by some distance. You would also need to adjust the clipping planes to avoid the image disappearing. A sketch of this trick follows these points.)
Finally, at some point, you will need to be able to position the camera at the same location as the viewpoint being used for the perspective deformation. When the camera is synchronized with that viewpoint, then your scene - although rendered orthogonally - will appear as a correct perspective projection of the intended scene. Whenever the camera and the viewpoint are separated, the scene will appear unnatural and distorted, which - I understand - is your intention.
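To illustrate the narrow field-of-view workaround mentioned above, here is a small fragment (the numbers are purely illustrative, and scene is assumed to be your existing 3D Scene):

PerspectiveCamera nearlyParallel = new PerspectiveCamera(true);
nearlyParallel.setFieldOfView(1);      // degrees; very narrow
nearlyParallel.setTranslateZ(-20000);  // move far back to compensate
nearlyParallel.setNearClip(15000);     // widen the clip range so the model,
nearlyParallel.setFarClip(25000);      // sitting near the origin, stays visible
scene.setCamera(nearlyParallel);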
In summary, I would say that what you intend to do is far from trivial, and the implementation is way beyond the scope of a StackOverflow answer. Good luck!
Mike:
Sorry for the delay; I am finishing something for a client in a totally different application area and got pulled away from my "teaching perspective" toy...
I just got the notification for the new answers and had a quick look. One thing I noticed right away was that the sphere is outside rather than inside the transparent box (I haven't looked at the code yet).
What I actually expected was a "built-in" perspective "argument" (either in the rotation transformation, in the scene definition, or as a stand-alone function - one way or another) that allows a different perspective to be rendered depending on the angle between the two (initially) parallel opposing edges at the bottom, for example. I understand of course that in reality it depends on the viewpoint position and you are not "allowed" to forcefully change this angle, but the goal here is simply a "cause-and-effect" toy in a 3D scene.
Controlling the camera will not allow for that, since it imposes the perspective very smoothly (as it would be in real life), rather than letting the child directly control the edges and immediately see how her action changes the perspective, instead of playing with the viewpoint.
As mentioned in the original question, I'd expect a function like the one sketched below. I would also expect it to be BUILT IN to a sensible 3D product, since it is so basic, rather than forcing me or you to manually craft code for something that should have been there from the get-go - perspective is simply a basic fundamental, and hopefully it will be covered by the rendering in some form in a future release:
def doPerspective(myBox: MyBoxContainer, angle: Int, viewPoint: Point): Int = {
// Presents the perspective look in a way defined by the "angle" between the
// (initially) parallel edges of myBox, as seen from the 3D viewPoint.
// Rotates everything within the boundaries of MyBoxContainer. The
// bigger the angle, the smaller the sphere at the back, of course.
// Returns the rotated angle after the mouse event to enable auto-replay later, so
// the kid can examine what her actions were and see the effect of those actions.
}
Tnx again for your entry; I will certainly have a look at the code and reply, but you can see the general picture above. Sorry again for the delay:
Z
Mike:
This is more like the original intention, although I find merit in the first attempt too, actually.
I made the sphere (in the first version) transparent (via diffuseColor = Color.web("#ffff0080")), so now she can play with both versions, and they are pretty much SIMILAR from a child's perspective (meaning one of the objects is transparent in each, just a different object in each version).
Now, I tried to make the BOX transparent (in the version where the sphere is outside) and I failed. Is there a reason for that? In other words, I am trying to make the object passing "behind" it visible: one transparent object passing behind another transparent object, so to say.
In the second version I ALSO cannot see "behind the object", meaning I can NOT see the EDGE of the box passing behind the sphere. Not only that, I can NOT see the back edge EVEN when it is not behind the sphere (but only behind the front side of the box)!
My question, in a sense, is "CAN both objects be made transparent?" I guess this is the closest to what I am trying to ask. Maybe with different "transparency %", but still transparent…
Tnx again:
Zar
Yes, Mike, your answer was completely relevant and I accept it, with the thoughtfully explained shortcomings of the current ScalaFX implementation. If I need to "click" somewhere to formally TAG this, please let me know; I am new to this group and don't really know the formalities.
Tnx again:
Zar
I expected that there was a parameter that controls the perspective during a rotational transformation, but cannot find one. The sample problem is clearly defined: you have a BOX/CUBE with a smaller sphere inside it; now when you rotate the BOX, the sphere rotates WITH it but "in perspective", meaning that if you bring the sphere to the front, it looks (is drawn) correspondingly bigger because of the "perspective".
Zar
> BTW, if it were possible to add the Sphere as a child of the Box, then you ...
That is not possible, but CAN I add the sphere AT RUN TIME rather than at compile time? In other words, is there an "addObject" that adds the sphere after the kid has played with the box for 1 minute, so that 1 minute into the run the sphere appears? I cannot see anything like that here:
http://www.scalafx.org/api/8.0/index.html#package
Maybe I am missing something?
Zar
I would like to add a sphere with a 2D gradient as a texture to create a skydome. I read that in OpenGL this is often solved by rendering the skybox without depth testing in an additional pass.
I disabled depthTest on my sphere so everything else is drawn in front of it. That kind of gives me the desired effect, but depending on the camera angle it clips through other objects in my scene.
I was looking at several examples that use THREE.EffectComposer and a second scene. I may be completely after the wrong thing here, but I think that could solve this. The thing is, I haven't ever touched the EffectComposer and have no idea how to work with it or which parts exactly I need.
I would appreciate any input on this; maybe I'm after the wrong approach entirely.
Here are two three.js examples in which a skydome with a gradient is created. They involve neither EffectComposer nor disabling the depth test.
http://mrdoob.github.com/three.js/examples/webgl_lights_hemisphere.html
http://mrdoob.github.com/three.js/examples/webgl_materials_lightmap.html
three.js r.55
You don't have to use a cone or other 3D geometry to simulate a gradient sky.
I solved it using a canvas (with 3 gradient stops: light blue -> white at the horizon -> dark blue) and drew it as a sprite in front of my camera, at the right distance from it (the fog distance).
You only have to manage that distance when moving/rotating your camera.
Tip: use mesh.scale.set(xx, xx, 1) to zoom the canvas texture to the needed size.
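A rough sketch of that approach follows (the colors, canvas size, scale and distance are placeholders; in older three.js releases such as the r55 mentioned above, use new THREE.Texture(canvas) with needsUpdate = true instead of CanvasTexture):

// Draw the three-stop gradient into a small canvas.
const canvas = document.createElement('canvas');
canvas.width = 32;
canvas.height = 256;
const ctx = canvas.getContext('2d');
const grad = ctx.createLinearGradient(0, 0, 0, canvas.height);
grad.addColorStop(0.0, '#a1c4fd'); // light blue at the top
grad.addColorStop(0.5, '#ffffff'); // white at the horizon
grad.addColorStop(1.0, '#1a2a6c'); // dark blue below
ctx.fillStyle = grad;
ctx.fillRect(0, 0, canvas.width, canvas.height);

// Use the canvas as a sprite texture and scale it up to cover the view.
const texture = new THREE.CanvasTexture(canvas);
const sky = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
sky.scale.set(4000, 4000, 1);
scene.add(sky);

// Each frame, keep the sprite in front of the camera at roughly the fog distance.
function updateSky(camera, distance) {
  const dir = camera.getWorldDirection(new THREE.Vector3());
  sky.position.copy(camera.position).add(dir.multiplyScalar(distance));
}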
I have a very complicated site built with CSS3 that has HTML elements 3D-transformed, rotated, flipped, flopped and just generally distorted.
I'm trying to figure out the on-screen location of one of these elements and don't see any way to do so. I was wondering if anyone has any ingenious ideas.
Alternatively, if anyone can explain the math behind -webkit-perspective, I can figure out the position as that's the only thing I'm not sure how to model.
Have you tried using getBoundingClientRect()?
I've used it successfully in the past to calculate the dimensions of elements that have been transformed with the transform property.
The problem is that the CSS3 transformations don't actually change the position of the elements in any way. Of course the browser "knows" that they are repositioned, because it renders them, but this information is not provided back through the DOM API.
The only thing I can think of is to calculate the positions yourself, based on the transformations, since these are "simple" matrix transformations.
Unfortunately algebra class was too long ago for me to tell you exactly how to do it - only that it is possible.
Using getBoundingClientRect is a good idea, but it will only give you the axis-aligned rectangle that contains your shape, not the exact coordinates of the four transformed corners (top-left, top-right, bottom-right, bottom-left).
You'd only be able to get those by taking each of the non-transformed corner coordinates and applying the transform yourself via JavaScript, as in the sketch below.
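Here is a sketch of that idea using DOMMatrix (the standardized successor to WebKitCSSMatrix). It only reads the element's own computed transform, assumes the default transform-origin at the element's center, and ignores any perspective inherited from ancestors, which you would have to multiply in separately:

function transformedCorners(el) {
  const t = getComputedStyle(el).transform;
  const matrix = (t && t !== 'none') ? new DOMMatrix(t) : new DOMMatrix();

  // Untransformed corners, relative to the element's center (the default origin).
  const w = el.offsetWidth, h = el.offsetHeight;
  const corners = [
    new DOMPoint(-w / 2, -h / 2),
    new DOMPoint( w / 2, -h / 2),
    new DOMPoint( w / 2,  h / 2),
    new DOMPoint(-w / 2,  h / 2),
  ];

  // Map each corner through the matrix, then shift back to layout coordinates.
  // offsetLeft/offsetTop are relative to the offsetParent; adjust for your layout.
  const cx = el.offsetLeft + w / 2;
  const cy = el.offsetTop + h / 2;
  return corners.map(p => {
    const q = matrix.transformPoint(p);
    return { x: cx + q.x, y: cy + q.y };
  });
}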