A basic rotation question - how can you couple two figures (a box/cube with a sphere inside it, placed ANYWHERE in the cube EXCEPT the center) so that the two are coupled ROTATIONALLY IN PERSPECTIVE (that is why I don't want the sphere at the center of the cube)?
In other words, when I rotate the cube with the mouse and "bring" the sphere closer to the front (say, make a 180-degree rotation), the perspective should change accordingly and the sphere should look visually bigger (compared to when it was at the back)?
Asked a couple of ScalaFX experts - they both said it was a very good question and recommended posting it here.
Cheers:
Zar
>
I'm not entirely sure what you're trying to do, but you can rotate multiple objects by applying a Rotate transform to the Group that contains all of those objects. If you only want to rotate some of the objects, but not all of them, you have to structure the scene so that the objects being rotated have a common parent Group - with none of the non-rotating objects belonging to it. Applying Rotate transforms to that parent Group will rotate all of its child objects too. Rotation will be about the origin of that parent Group.
Update: I forgot to mention how to address the issue of perspective. The 3D objects in a scene aren't directly affected by perspective, since perspective is a property of how the scene is rendered. This rendering is performed by Camera objects. To render the scene using perspective (as opposed to using orthogonal, or parallel, as it's referred to in JavaFX/ScalaFX), add a PerspectiveCamera to the scene and view the scene using that camera. For further information on this, refer to the following: Getting Started with JavaFX 3D Graphics: Camera
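Purely as an illustration (this is not the gist mentioned in the next update, and names such as RotationDemo and dolly are my own), a minimal ScalaFX sketch combining both points - a shared parent Group carrying the Rotate transform, viewed through a PerspectiveCamera - might look like this:

import scalafx.Includes._
import scalafx.application.JFXApp
import scalafx.application.JFXApp.PrimaryStage
import scalafx.scene.{Group, PerspectiveCamera, Scene, SceneAntialiasing}
import scalafx.scene.paint.Color
import scalafx.scene.shape.{Box, Sphere}
import scalafx.scene.transform.Rotate

object RotationDemo extends JFXApp {

  // The box and an off-centre sphere share one parent Group, so a single Rotate
  // on the group turns both of them together about the group's origin.
  val box    = new Box(200, 200, 200)
  val sphere = new Sphere(40) { translateX = 60; translateZ = 60 }
  val pivot  = new Group(box, sphere)
  pivot.transforms = Seq(new Rotate(30, Rotate.YAxis))

  // The PerspectiveCamera is what makes the sphere render larger whenever the
  // rotation brings it nearer to the viewer.
  val dolly = new PerspectiveCamera(true) { translateZ = -800 }

  stage = new PrimaryStage {
    title = "Coupled rotation in perspective"
    scene = new Scene(pivot, 600, 600, true, SceneAntialiasing.Balanced) {
      fill   = Color.LightGray
      camera = dolly
    }
  }
}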
Update 2: I've created a gist on GitHub with a complete program for doing this.
Update 3: Made the box transparent & moved the sphere inside the box. Now the left/primary mouse button rotates box + sphere when dragged; the right/secondary mouse button moves the camera dolly towards/away from the box, changing perspective accordingly.
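Again, this is not the actual gist code - just a sketch of the kind of mouse handling described, reusing the illustrative pivot and dolly names from the sketch above:

import scalafx.Includes._
import scalafx.scene.Scene
import scalafx.scene.input.MouseEvent
import scalafx.scene.transform.Rotate

// Two rotation transforms on the parent group; dragging updates their angles.
val rotY = new Rotate(0, Rotate.YAxis)
val rotX = new Rotate(0, Rotate.XAxis)
pivot.transforms = Seq(rotY, rotX)

var lastX = 0.0
var lastY = 0.0

def wireMouse(scene3d: Scene): Unit = {
  scene3d.onMousePressed = (e: MouseEvent) => { lastX = e.sceneX; lastY = e.sceneY }
  scene3d.onMouseDragged = (e: MouseEvent) => {
    if (e.primaryButtonDown) {          // left button: rotate box + sphere together
      rotY.angle = rotY.angle() + (e.sceneX - lastX)
      rotX.angle = rotX.angle() - (e.sceneY - lastY)
    } else if (e.secondaryButtonDown) { // right button: dolly the camera, changing perspective
      dolly.translateZ = dolly.translateZ() + (e.sceneY - lastY)
    }
    lastX = e.sceneX
    lastY = e.sceneY
  }
}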
Update 4: So, if I understand you right, you want to transform the shapes in your 3D scene so that they look as though perspective has been applied to them. Do I have that right?
If so, the reasons that this is not a "built-in" capability are outlined below. Please forgive me if you already know all of this, incidentally - I'm just trying to provide a comprehensive answer. :-)
Scene graphs (as typically used by retained mode 3D systems, such as JavaFX) capture the geometry, location, rotation, color, etc. of a 3D scene in a hierarchical tree structure. The idea is that the modeler only needs to worry about the content of the scene - ensuring dimensions, alignments, etc. are correct - and does not need to worry about how the scene is rendered.
Perspective can be applied when the scene is rendered as it would appear from a specific viewpoint; i.e. when the scene is translated into a 2D projection such as a GUI window. (The process of determining what the scene looks like in perspective is a part of the rendering algorithm - but does not require modification, deformation, etc. of the scene.) If perspective is not enabled, then the scene is typically rendered orthogonally, without any vanishing points, apparent scaling, etc. The key point here is that the scene itself is unaffected by how it is viewed.
With this arrangement, it's possible to have multiple views of the same scene. Not only can they each have a different viewpoint, but some can be orthogonal and some can use perspective - yet each can render the scene correctly without any confusing artifacts. If it worked the way you seem to think it does, then you could only ever have a single view of the scene at a time, as the scene would need to be deformed during rendering to look right from that sole viewpoint. When editing the scene, you'd need to remove those deformations to prevent mind-blowing confusion for the modeler.
In short, it's a very unusual requirement that the scene itself be deformed to show what it would look like in perspective. That's why there's no built-in capability to do this in any 3D system that I know of.
Assuming that you wish to proceed - using JavaFX - here are some points to bear in mind:
I don't believe that the regular 3D primitives (namely Box, Sphere & Cylinder) can be deformed to represent a perspective view of them. You will have to construct the shapes using the TriangleMesh and MeshView objects (the former captures the geometry of the shape, the latter allows it to be treated as a 3D shape).
To apply perspective, you would have to reposition the vertices in the TriangleMesh instances to deform the scene appropriately (there is a rough sketch of this after these points). If you need to be able to change the viewpoint, or rotate the box & sphere, then these changes would need to be dynamic, so that the calculated vertex coordinates react to the changing viewpoint and/or rotation. Because of fish-eye effects at high levels of perspective dilation, you might also need more vertices than you would expect.
Given your requirements, you still need a camera to view the scene. Clearly, you cannot use the PerspectiveCamera to render the scene, or it will treat the scene as unadjusted and apply a second level of perspective, ruining your carefully calculated deformations. You will instead need to use a ParallelCamera to produce orthogonal views of your scene.
Unfortunately, JavaFX's support for using ParallelCamera with 3D scenes is still very immature. (The ParallelCamera is primarily used to render 2D scenes, such as dialogs, buttons, menus, sliders, etc.) You might find it difficult to use in practice. (You can approximate an orthogonal projection using the PerspectiveCamera by using a very narrow field of view and moving the camera some distance away from the scene. You would also need to adjust the clipping planes to avoid the image disappearing.)
Finally, at some point, you will need to be able to position the camera at the same location as the viewpoint being used for the perspective deformation. When the camera is synchronized with that viewpoint, then your scene - although rendered orthogonally - will appear as a correct perspective projection of the intended scene. Whenever the camera and the viewpoint are separated, the scene will appear unnatural and distorted, which - I understand - is your intention.
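For the TriangleMesh/MeshView point above, here is a rough ScalaFX sketch (illustrative only - a simple quad rather than the box/sphere scene) of a shape whose vertices can be rewritten at run time, plus the narrow-field-of-view trick for approximating an orthogonal view with a PerspectiveCamera:

import scalafx.scene.PerspectiveCamera
import scalafx.scene.shape.{DrawMode, MeshView, TriangleMesh}

// A quad (two triangles) whose vertices live in a TriangleMesh that can be rewritten later.
val mesh = new TriangleMesh {
  // Four corner vertices of a square in the XY plane, as (x, y, z) triples.
  points = Array(-100f, -100f, 0f,
                  100f, -100f, 0f,
                  100f,  100f, 0f,
                 -100f,  100f, 0f)
  // A single dummy texture coordinate, referenced by every vertex.
  texCoords = Array(0f, 0f)
  // Two triangles; each face vertex is a (point index, texCoord index) pair.
  faces = Array(0, 0, 1, 0, 2, 0,
                0, 0, 2, 0, 3, 0)
}
val quad = new MeshView(mesh) { drawMode = DrawMode.Fill }

// "Deforming" the shape for a pseudo-perspective look is then just a matter of
// overwriting the points array, e.g. pulling one corner towards the viewer:
mesh.points = Array(-100f, -100f, -50f,
                     100f, -100f,   0f,
                     100f,  100f,   0f,
                    -100f,  100f,   0f)

// And, per the last two points: a PerspectiveCamera with a very narrow field of
// view, placed far away with widened clipping planes, behaves almost orthographically.
val almostOrtho = new PerspectiveCamera(true) {
  fieldOfView = 1
  translateZ  = -20000
  nearClip    = 100
  farClip     = 40000
}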
In summary, I would say that what you intend to do is far from trivial, and the implementation is way beyond the scope of a StackOverflow answer. Good luck!
Mike:
Sorry for the delay, I am finishing something for a client in a totally different application area and got pulled away from my "teaching perspective" toy ...
Just got the notification for the new answers and had a quick look - one thing I noticed right away was that the sphere is outside rather than inside the transparent box (haven’t looked at the code yet).
What I actually expected was a "built-in" perspective "argument" (either in the rotation transformation, or the scene definition, or a stand-alone function - in one way or another), which allows a different perspective to be rendered depending on the angle between the two (initially) parallel opposing edges at the bottom, for example. I understand of course that in reality it depends on the viewpoint position and you are not "allowed" to forcefully change this angle, but the goal here is simply a "cause-and-effect" toy in a 3D scene.
Controlling the camera will not allow for that, since it imposes the perspective very smoothly (as it would be in real life) rather than letting the child directly control the edges and immediately see how her action changes the perspective, as opposed to merely playing with the viewpoint.
As mentioned in the original question, I'd expect a function like the one sketched below (and I would expect it to be BUILT-IN in a sensible 3D product, since it is so basic, rather than forcing me or you to manually craft code for something that should have been there from the get-go - perspective is such a fundamental that hopefully the rendering will cover it in some form in the next release):
def doPerspective(myBox: MyBoxContainer, angle: Int, viewPoint: Point): Int = {
  // Presents the perspective look in a way defined by the "angle" between the
  // (initially) parallel edges of myBox, as seen from the 3D point viewPoint.
  // Rotates everything within the boundaries of the MyBoxContainer. The
  // bigger the angle, the smaller the sphere at the back, of course.
  // Returns the rotated angle after the mouse event to enable auto-replay later, so
  // the kid can examine what her actions were and see the effect of those actions.
}
Tnx again for your entry, I will certainly have a look at the code, and reply - but you see the general picture above. Sorry for the delay again:
Z
>
Mike:
This is more like the original intention - although I find merit in the first attempt too, actually.
I made the sphere (in the first version) transparent (via diffuseColor = Color.web("#ffff0080")), so now she can play with both versions, and both are pretty much SIMILAR from a child's perspective (meaning that in each version one of the objects is transparent, just a different object in each).
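(For reference, that material assignment looks roughly like this - the val names are just illustrative:)

import scalafx.scene.paint.{Color, PhongMaterial}
import scalafx.scene.shape.Sphere

// A diffuse colour whose alpha component ("80", i.e. roughly 50%) makes the shape translucent.
val translucentYellow = new PhongMaterial { diffuseColor = Color.web("#ffff0080") }
val playSphere = new Sphere(40) { material = translucentYellow }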
Now - I tried to make the BOX transparent (in the version where the sphere is outside) and I failed - is there a reason for that? In other words, I am trying to make the object passing "behind" visible - one transparent object passing behind another transparent object, so to speak.
In the second version I ALSO cannot see "behind the object", meaning I can NOT see the EDGE of the box passing behind the sphere. Not only that – I can NOT see the back edge EVEN when it is not behind the sphere (but only behind the front side of the box)!
My question in a sense is "CAN both objects be made transparent" - I guess this is the closest to what I am trying to ask. Maybe with different "transparency %", but still transparent…
Tnx again:
Zar
>
Yes, Mike - your answer was completely relevant and I do accept it with the thoughtfully-explained shortcomings of the current ScalaFX implementation. If I need to "click" somewhere to formally TAG this, please let me know - I am new to this group and don't know the formalities really.
Tnx again:
Zar
I expected that there was a parameter that controls the perspective during a rotational transformation, but I cannot find one. The sample problem is clearly defined - you have a BOX/CUBE and a smaller sphere inside; now when you rotate the BOX, the sphere rotates WITH it but "in perspective", meaning that if you bring the sphere to the front, it looks (is drawn) correspondingly bigger, in keeping with the "perspective".
Zar
>
>
BTW, if it were possible to add the Sphere as a child of the Box, then you ...
<
That is not possible, but CAN I add the sphere AT RUN TIME rather than at compile time? In other words, is there an "addObject" that adds the sphere after the kid has played with the box for 1 minute - i.e., 1 minute after running the program, the sphere appears. I cannot see anything like that here:
http://www.scalafx.org/api/8.0/index.html#package
Maybe I am missing something?
Zar
>
I don't know how to explain this but the objects I make in ELEMENT 3D aren't 3D but more like 2.5D.
I made a video so you can see the problem.
https://sendvid.com/s1hv1ay3
My recording software didn't record the Element interface at 0:24, but I was trying to show in that interface that you could rotate it without problems.
You're not understanding how Element 3D (and aspects of the AE interface) work. Think of the layer you apply it to as being its own window "into a 3D world". You don't rotate the layer itself; you rotate the objects within the Element 3D world by changing the parameters in the timeline or the Effect Controls pane. You were using the rotate tool in the video with the Element 3D-affected layer selected. Don't do that. Use the individual parameters within Element.
Another way to rotate around the 3D object(s) is to use the camera. I suspect this is what you were attempting after seeing a tutorial or something. What you do is make a two-node camera and use the camera tool by cycling through the tool with the "c" key until you get camera rotate, which looks a bit similar to the rotate layer tool. With a two-node camera, rotating the camera rotates around its point of interest, so it rotates around the object in 3D space.
I suggest you get more familiar with how 3D works in AE (which is not "true 3D" and not 2.5D, but "planar 3D"; the Element 3D plugin is one of the best 3D-integrated plugins working within this model).
Don't ever move the layer the Element 3D model is on.
Use the Element effect controls (F3), usually found in Group 1.
I am trying to build a 2D QGraphicsScene with several 2D picture items in the scene, and then create an animation like a zoom-in, where the camera moves onto an item. During the animation, these items should be shown in perspective, just like 3D.
How can I make 2D objects look like they have a 3D coordinate?
Changing the sign should work for the rotation. You can get an effect of depth through correct drawing order and size: smaller objects must be behind larger ones and, if there is any scrolling, should scroll more slowly (google "parallax effect" and "parallax scrolling" - video games 30 years ago already used that technique).
All of this - the layering, the size, and a sensible movement speed - simulates the existence of a depth coordinate.
If you make the "distant" objects darker the depth effect will be even stronger.
And you need continuous movement to get the proper effect. As long as the scene moves it will have the depth effect. A static scene will always be flat.
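A minimal sketch of that idea (the types and names here are purely illustrative, not Qt API - the same arithmetic applies whatever framework actually draws the items):

// Each drawable item carries a depth factor >= 1; larger depth = "further away".
case class Item2D(x: Double, y: Double, baseSize: Double, shade: Double, depth: Double)

// Parallax scrolling: distant items move less for the same camera movement.
def scrollBy(items: Seq[Item2D], cameraDx: Double): Seq[Item2D] =
  items.map(i => i.copy(x = i.x - cameraDx / i.depth))

// Distant items are drawn smaller and darker, and behind nearer ones.
def apparentSize(i: Item2D): Double  = i.baseSize / i.depth
def apparentShade(i: Item2D): Double = i.shade / i.depth
def drawOrder(items: Seq[Item2D]): Seq[Item2D] = items.sortBy(i => -i.depth)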
I'm kind of a newbie in both OSG and Qt, but I'm trying to make a Qt HUD on top of my OSG window. What I want is Qt interface elements fixed inside the OSG scene, not spinning with the model. The thing is, I need Qt elements INSIDE the OSG scene, not an OSG scene inside a Qt window (as in the OSGviewerQt example).
What I've got so far is the osgQtWidgets example with the --useWidgetImage --fullscreen arguments, which shows fixed Qt controls on top of the OSG model. The thing is, it creates a new (FIXED) camera for the Qt element on top of the OSG model - because of that, the user cannot spin and move the OSG model, because the camera is not transparent.
So the question is: is there a way to make transparent camera with useable Qt elements in it? Or is there some other way to achieve my goals?
Thank you in advance!
I am unable to try this out for myself first, but you can try setting a property on the Orthogonal "HUD" camera to let events pass through to the regular OSG viewer:
camera->setAllowEventFocus(false);
If you're using a recent version of osgQtWidgets.cpp, you'd want to add this around line 414.
I'm developing a Flex 2 application, and I noticed that part of the library which is responsible for moving GUI windows (TitleWindows) around when you drag them with the mouse gets confused if there is a clickable (buttonMode = true) sprite beneath them. When I say confused, I mean that the window is moved around normally for a while, but then at some point "jumps" into the upper left corner of the flash app, and makes very minor movement there. Then at some other point it jumps back. It is more difficult to explain than to experience, so please go and see for yourself. Here's how to reproduce the problem:
Go to http://www.panocast.com
In the left sidebar, choose "Real Estate"
Just below the bottom right corner of the flash window, choose "high res" by clicking on the rightmost icon.
When (part of) the video loads, click on the staircase. A TitleWindow will pop up.
Try dragging it around the screen. When the mouse cursor is moved above one of the clickable areas (like the staircase), the window is misplaced.
(Sorry, but can't give you a direct link, part of the page is generated dynamically.)
(What makes the problem even more interesting is that, for me, in "low res" mode the problem does not occur! There is very little difference between the various modes.) I would really appreciate it if someone told me what is going on here and how it can be fixed.
I'm not sure if it matters, but the underlying sprite is actually not just a plain sprite; rather, it is a Papervision3D renderer object with some 3D elements in it. I mention this because it is possible that the incorrect mouse coordinates somehow come from the texture UV mapped onto the clickable objects.
I've managed to replicate this on the low res mode as well, so I don't think it's related to the resolution.
This looks to be because the MouseEvent is being handled by the TitleWindow AND the Papervision3D window. Perhaps you need to force stopImmediatePropagation() on one or the other? Or maybe switch off the MouseEvent handling for the Pv3D window when the TitleWindow pops up?
That's a tough one to debug without some source; something's apparently calling move(), or setting x and y properties on that TitleWindow, and scheduling it to be moved.
When I first read the post, it "smelled" like maybe a rotation miscalculation somewhere (using Math.atan vs. Math.atan2 can sometimes have that kind of effect), so you're right, it could have something to do with PaperVision, assuming you're not using Math.atan or setting rotation properties yourself anywhere. Just thought I'd mention it, though it's probably not happening in your case. You never know, though. ;)
More likely the LayoutManager is moving the component in response to a property change on the component. The Flex docs explain that in addition to setting its x and y properties, and explicit calls to move(), a UIComponent's move event can also be triggered when any of the following other properties change:
minWidth
minHeight
maxWidth
maxHeight
explicitWidth
explicitHeight
PaperVision or no, maybe that info might help you isolate the source of the move. Good luck.
I got this figured out. Apparently, this is a Papervision3D problem. There is a class deep inside Papervision3D called VirtualMouse, which is supposed to generate MouseEvents programmatically. This happens, for example, when the user interacts with any of the interactive objects on stage, e.g., a Plane with an interactive material on it (as in my case).
The problem is that the x and y coordinates of the generated event represent texture UV coordinates (just as I suspected) and not real-world screen coordinates. When a TitleWindow (or any Panel object) is dragged, a "mouseMove" handler (among others) is added to the SystemManager, which then uses the stageX and stageY properties of the event object to determine the new position of the window. Unfortunately, for VirtualMouse's mouse events these are invalid, since the original x, y coordinates - which are presumably used to determine the global stage coordinates - are, as I said, not screen coordinates.
Honestly, I'm still unsure whether the events dispatched by VirtualMouse are used anywhere within Papervision3D itself, or they are just offered for convenience, but they sure make it difficult to integrate a viewport into a Flex program. Assuming that such events aren't necessary for PV3D itself, there is a one-liner fix for my problem, which must be added right after the creation of the viewport:
viewport.interactiveSceneManager.virtualMouse.disableEvent(MouseEvent.MOUSE_MOVE);
BTW, there was a very similar (or rather, as it turns out, the same) bug with dragging sliders, also fixed by this line.