Re-render one layer only in paper.js

Is there a way to re-render a single layer in paper.js? I know that you can re-render the entire scene, but a canvas with thousands of shapes will invariably affect performance, so re-rendering only the layer that changed would be ideal.

Unfortunately, there is no way to do such a thing.
I suggest you use bitmap caching: save all your shapes (the layer) in a raster and draw this raster instead of the layer (just hide the layer). You would have to remove the raster and redraw everything if any of the shapes change (a sketch follows below).
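A minimal sketch of that caching idea, assuming paper.js's Item#rasterize(); the helper names and the targetLayer argument (a visible layer that hosts the bitmap) are made up for illustration:

var cachedRaster = null;

function cacheLayer(layer, targetLayer) {
    if (cachedRaster) {
        cachedRaster.remove();            // drop the stale cache
    }
    cachedRaster = layer.rasterize();     // one bitmap for the whole layer
    targetLayer.addChild(cachedRaster);   // show the bitmap on a visible layer
    layer.visible = false;                // hide the vector originals
}

function invalidateLayer(layer, targetLayer) {
    layer.visible = true;                 // rasterize needs the items visible
    cacheLayer(layer, targetLayer);       // rebuild after any shape changes
}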
There could be other ways to implement this for really specific cases.
Another way is to render with WebGL, but it is quite complex to draw thick lines in WebGL.

You can simulate each layer as a different PaperScope.
Since each PaperScope is tied to a specific <canvas> element, changes to one scope (a layer, in your case) will cause only its respective canvas to be updated.
Keep in mind that if you do follow this approach, you'll have to manually call paperScope.activate() on your scope before working with it, since Paper.js tends to do some "magic" when it comes to scope activation.
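A rough sketch of the one-scope-per-canvas setup (the canvas ids are made up for illustration):

var bg = new paper.PaperScope();
bg.setup(document.getElementById('bg-canvas'));

var fg = new paper.PaperScope();
fg.setup(document.getElementById('fg-canvas'));

bg.activate();   // new items now go into bg's project
new paper.Path.Circle({ center: [100, 100], radius: 50, fillColor: 'red' });

fg.activate();   // new items now go into fg's project
new paper.Path.Circle({ center: [120, 120], radius: 30, fillColor: 'blue' });
// Changing items in one scope redraws only that scope's canvas.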

You can set the visible property on the layers that you don't want to render to false, call draw(), and then set visible back to true. It increases performance quite a bit! ;-)
var tNow = (new Date()).getTime();
paper.project.layers[1].visible = false;   // hide the layer you don't want rendered
paper.view.draw();                         // redraw without it
paper.project.layers[1].visible = true;    // restore visibility for later frames
console.log('Render time: ' + ((new Date()).getTime() - tNow));

Related

ScalaFX: How to couple 2 figures rotationally in perspective

A basic rotation question: how can you couple 2 figures (a box/cube with a sphere in it, ANYWHERE in the cube BUT the center) so that these 2 are coupled ROTATIONALLY (that is why I don't want the sphere to be in the center of the cube) IN PERSPECTIVE?
In other words, when I rotate the cube with the mouse and "bring" the sphere closer to the front (say, make a 180-degree rotation), the perspective changes accordingly and the sphere gets visually bigger (compared to when it is at the back)?
I asked a couple of ScalaFX experts - they both said it was a very good question and recommended posting it here.
Cheers:
Zar
I'm not entirely sure what you're trying to do, but you can rotate multiple objects by applying a Rotate transform to the Group that contains all of those objects. If you only want to rotate some of the objects, but not all of them, you have to structure the scene so that the objects being rotated have a common parent Group - with none of the non-rotating objects belonging to it. Applying Rotate transforms to that parent Group will rotate all of its child objects too. Rotation will be about the origin of that parent Group.
Update: I forgot to mention how to address the issue of perspective. The 3D objects in a scene aren't directly affected by perspective, since perspective is a property of how the scene is rendered. This rendering is performed by Camera objects. To render the scene using perspective (as opposed to using orthogonal, or parallel, as it's referred to in JavaFX/ScalaFX), add a PerspectiveCamera to the scene and view the scene using that camera. For further information on this, refer to the following: Getting Started with JavaFX 3D Graphics: Camera
Update 2: I've created a gist on GitHub with a complete program for doing this.
Update 3: Made box transparent & moved sphere inside the box. Now left/primary mouse button rotates box + sphere when dragged; right/secondary mouse button moves camera dolly towards/away from boxes, changing perspective accordingly.
Update 4: So, if I understand you right, you want to transform the shapes in your 3D scene so that they look as though perspective has been applied to them. Do I have that right?
If so, the reason that this is not a "built-in" capability is for the reasons outlined below. Please forgive me if you already know all of this, incidentally - I'm just trying to provide a comprehensive answer. :-)
Scene graphs (as typically used by retained-mode 3D systems, such as JavaFX) capture the geometry, location, rotation, color, etc. of a 3D scene in a hierarchical tree structure. The idea is that the modeler only needs to worry about the content of the scene - ensuring dimensions, alignments, etc. are correct - and does not need to worry about how the scene is rendered.
Perspective can be applied when the scene is rendered as it would appear from a specific viewpoint; i.e. when the scene is translated into a 2D projection such as a GUI window. (The process of determining what the scene looks like in perspective is a part of the rendering algorithm - but does not require modification, deformation, etc. of the scene.) If perspective is not enabled, then the scene is typically rendered orthogonally, without any vanishing points, apparent scaling, etc. The key point here is that the scene itself is unaffected by how it is viewed.
With this arrangement, it's possible to have multiple views of the same scene. Not only can they each have a different viewpoint, but some can be orthogonal and some can use perspective - yet each can render the scene correctly without any confusing artifacts. If it worked the way you seem to think it does, then you could only ever have a single view of the scene at a time, as the scene would need to be deformed during rendering to look right from that sole viewpoint. When editing the scene, you'd need to remove those deformations to prevent mind-blowing confusion for the modeler.
In short, it's a very unusual requirement that the scene itself be deformed to show what it would look like in perspective. That's why there's no built-in capability to do this in any 3D system that I know of.
Assuming that you wish to proceed - using JavaFX - here are some points to bear in mind:
I don't believe that the regular 3D primitives (namely Box, Sphere & Cylinder) can be deformed to represent a perspective view of them. You will have to construct the shapes using the TriangleMesh and MeshView objects (the former captures the geometry of the shape, the latter allows it to be treated as a 3D shape).
To apply perspective, you would have to reposition the vertices in the TriangleMesh instances to deform the scene appropriately. If you need to be able to change the viewpoint, or rotate the box & sphere, then these changes would need to be dynamic, so that the calculated vertex coordinates react to the changing viewpoint and/or rotation. Because of fish-eye effects at high levels of perspective dilation, you might also need more vertices than you would expect.
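As a rough sketch of the math involved (assuming the viewpoint sits on the z-axis, a distance d in front of the scene, looking along +z), a vertex at (x, y, z) would be moved to

x' = x * d / (d + z)
y' = y * d / (d + z)
z' = z

so that an orthogonal (parallel) projection of the deformed vertices matches what a perspective camera at that viewpoint would have produced: points farther away (larger z) are pulled toward the axis, which is exactly the apparent shrinking the perspective look requires.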
Given your requirements, you still need a camera to view the scene. Clearly, you cannot use the PerspectiveCamera to render the scene, or it will treat the scene as unadjusted and apply a second level of perspective, ruining your carefully calculated deformations. You will instead need to use ParallelCamera to produce orthogonal views of your scene.
Unfortunately, JavaFX's support for using ParallelCamera with 3D scenes is still very immature. (The ParallelCamera is primarily used to render 2D scenes, such as dialogs, buttons, menus, sliders, etc.) You might find it difficult to use in practice. (You can approximate an orthogonal projection using the PerspectiveCamera by utilizing a very narrow field of view and moving the camera away from the scene by some distance. You would also need to adjust the clipping planes to avoid the image disappearing.)
Finally, at some point, you will need to be able to position the camera at the same location as the viewpoint being used for the perspective deformation. When the camera is synchronized with that viewpoint, then your scene - although rendered orthogonally - will appear as a correct perspective projection of the intended scene. Whenever the camera and the viewpoint are separated, the scene will appear unnatural and distorted, which - I understand - is your intention.
In summary, I would say that what you intend to do is far from trivial, and the implementation is way beyond the scope of a StackOverflow answer. Good luck!
Mike:
Sorry for the delay, I am finishing something for a client in a totally different application area and have been pulled away from my "teaching perspective" toy ...
Just got the notification for the new answers and had a quick look - one thing I noticed right away was that the sphere is outside rather than inside the transparent box (haven’t looked at the code yet).
What I actually expected was a "built-in" perspective "argument" (either in the rotation transformation, or the scene definition, or a stand-alone function - in one way or another), which allows a different perspective to be rendered depending on the angle between the 2 (initially) parallel opposing edges at the bottom, for example. I understand of course that in reality it depends on the ViewPoint Position and you are not "allowed" to forcefully change this angle, but the goal here is simply a "cause-and-effect" toy in a 3D scene.
Controlling the camera will not allow for that, since it imposes the perspective very smoothly (as it would be in real life); the point is to let the child directly control the edges and immediately see how her action changes the perspective, rather than playing with the viewpoint.
As mentioned in the original question, I'd expect a function like the one sketched below (and I would expect it to be BUILT-IN in a sensible 3D product, since it is so basic, rather than forcing me or you to manually craft code for something that should have been there from the get-go - perspective is simply a basic fundamental and hopefully will be covered by the rendering in some form in a future release):
def doPerspective(myBox: MyBoxContainer, angle: Int, viewPoint: Point): Int = {
// Presents the perspective look in a way defined by the "angle" between the
// (initially) parallel edges of myBox, as seen from the 3D viewPoint.
// Rotates everything within the boundaries of myBox. The
// bigger the angle, the smaller the sphere at the back, of course.
// Returns the rotated angle after the mouse event to enable auto-replay later, so
// the kid can examine what her actions were and see the effect of those actions.
}
Tnx again for your entry, I will certainly have a look at the code, and reply - but you see the general picture above. Sorry for the delay again:
Z
Mike:
This is more like the original intention - although I find merits in the first attempt too, actually.
I made the sphere (in the first version) transparent (via diffuseColor = Color.web("#ffff0080")), so now she can play with both versions, and both are pretty much SIMILAR from a child's perspective (meaning one of the objects is transparent, albeit a different object in each version).
Now - I tried to make the BOX transparent (where the sphere is outside) and I failed - is there a reason for that? In other words, can the object passing "behind" be made visible? One transparent object passing behind another transparent object, so to say?
In the second version I ALSO cannot see "behind the object", meaning I can NOT see the EDGE of the box passing behind the sphere. Not only that - I can NOT see the back edge EVEN when it is not behind the sphere (but only behind the front side of the box)!
My question in a sense is "CAN both objects be made transparent" - I guess this is the closest to what I am trying to ask. Maybe with different "transparency %", but still transparent...
Tnx again:
Zar
Yes, Mike - your answer was completely relevant and I do accept it with the thoughtfully-explained shortcomings of the current ScalaFX implementation. If I need to "click" somewhere to formally TAG this, please let me know - I am new to this group and don't know the formalities really.
Tnx again:
Zar
I expected that there was a parameter that controls the Perspective during a Rotational Transformation, but cannot find one. The sample problem is clearly defined - you have a BOX/CUBE and a smaller sphere inside; now when you rotate the BOX, the sphere rotates WITH it but "in perspective", meaning that if you bring the sphere to the front, it looks (is drawn) correspondingly bigger, in keeping with the "perspective".
Zar
> BTW, if it were possible to add the Sphere as a child of the Box, then you ...
That is not possible, but CAN I add the sphere AT RUN TIME rather than at compile time? In other words, is there an "addObject" that adds the sphere after the kid has played with the box for 1 min, so that 1 min after running the program the sphere appears? I cannot see anything like that here:
http://www.scalafx.org/api/8.0/index.html#package
Maybe I am missing something?
Zar

Practical differences between SVG and Canvas within a ggvis & Shiny Context

I have already read
What is the difference between SVG and HTML5 Canvas?
&&
https://en.wikipedia.org/wiki/Canvas_element#Canvas_versus_Scalable_Vector_Graphics_.28SVG.29
So I am aware of the basic differences, but I was wondering if anyone had encountered any practical differences between the two within the context of ggvis and shiny, apart from SVG's inability to deal with NAs in the data.
The short answer:
SVG would be easier for you, since selection and moving things around are already built in. SVG objects are DOM objects, so they have "click" handlers, etc.
DIVs are okay but clunky, and their loading performance is awful at large numbers.
Canvas has the best performance hands-down, but you have to implement all concepts of managed state (object selection, etc) yourself, or use a library.
The long answer:
HTML5 Canvas is simply a drawing surface for a bitmap. You set up a draw (say, with a color and line thickness), draw that thing, and then the canvas has no knowledge of it: it doesn't know where it is or what it is that you've just drawn; it's just pixels. If you want to draw rectangles and have them move around or be selectable, then you have to code all of that from scratch, including the code to remember that you drew them.
SVG, on the other hand, must maintain references to each object that it renders. Every SVG/VML element you create is a real element in the DOM. This allows you to keep much better track of the elements you create and makes dealing with things like mouse events easier by default, but it slows down significantly when there are a large number of objects.
Those SVG DOM references mean that some of the footwork of dealing with the things you draw is done for you. And SVG is faster when rendering really large objects, but slower when rendering many objects.
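As a tiny illustration of that footwork being done for you (plain DOM APIs; the page is assumed to already contain an <svg> element):

var svgNS = 'http://www.w3.org/2000/svg';
var circle = document.createElementNS(svgNS, 'circle');
circle.setAttribute('cx', 50);
circle.setAttribute('cy', 50);
circle.setAttribute('r', 20);
circle.setAttribute('fill', 'steelblue');
circle.addEventListener('click', function () {
    circle.setAttribute('fill', 'tomato');   // the browser knows what was clicked
});
document.querySelector('svg').appendChild(circle);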
A game would probably be faster in Canvas. A huge map program would probably be faster in SVG. If you do want to use Canvas, I have some tutorials on getting movable objects up and running here.
Canvas would be better for faster things and heavy bitmap manipulation (like animation), but will take more code if you want lots of interactivity.
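By contrast, here is a minimal sketch of the extra code Canvas forces on you (all names are illustrative): you keep your own list of shapes, redraw on every change, and hit-test by hand.

var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');
var shapes = [{ x: 50, y: 50, r: 20, color: 'steelblue' }];   // our own "scene graph"

function redraw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);   // the bitmap remembers nothing
    shapes.forEach(function (s) {
        ctx.beginPath();
        ctx.arc(s.x, s.y, s.r, 0, Math.PI * 2);
        ctx.fillStyle = s.color;
        ctx.fill();
    });
}

canvas.addEventListener('click', function (e) {
    shapes.forEach(function (s) {   // manual hit-testing
        var dx = e.offsetX - s.x, dy = e.offsetY - s.y;
        if (dx * dx + dy * dy <= s.r * s.r) {
            s.color = 'tomato';
        }
    });
    redraw();
});

redraw();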
I've run a bunch of numbers on HTML DIV-made drawing versus Canvas-made drawing. I could make a huge post about the benefits of each, but I will give some of the relevant results of my tests to consider for your specific application:
I made Canvas and HTML DIV test pages, both had movable "nodes." Canvas nodes were objects I created and kept track of in Javascript. HTML nodes were movable Divs.
I added 100,000 nodes to each of my two tests. They performed quite differently:
The HTML test tab took forever to load (timed at slightly under 5 minutes; Chrome asked to kill the page the first time). Chrome's task manager says that tab takes up 168MB. It takes up 12-13% CPU time when I am looking at it, 0% when I am not.
The Canvas tab loaded in one second and takes up 30MB. It also takes up 13% of CPU time all of the time, regardless of whether or not one is looking at it. (2013 edit: They've mostly fixed that)
Dragging on the HTML page is smoother, which is expected by the design, since the current setup is to redraw EVERYTHING every 30 milliseconds in the Canvas test. There are plenty of optimizations to be had for Canvas here (canvas invalidation being the easiest, then clipping regions, selective redrawing, etc.; see the sketch below - it just depends on how much you feel like implementing).
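For example, a rough sketch of selective redrawing (a "dirty rectangle"), reusing the illustrative ctx and shapes from the sketch above - only the region around the shape that changed is cleared and repainted:

function redrawDirtyRect(movedShape) {
    var pad = 2;   // small margin for anti-aliased edges
    var x = movedShape.x - movedShape.r - pad;
    var y = movedShape.y - movedShape.r - pad;
    var size = (movedShape.r + pad) * 2;
    ctx.save();
    ctx.beginPath();
    ctx.rect(x, y, size, size);
    ctx.clip();                       // nothing outside the dirty rect is touched
    ctx.clearRect(x, y, size, size);
    shapes.forEach(function (s) {     // repaint whatever overlaps the dirty rect
        ctx.beginPath();
        ctx.arc(s.x, s.y, s.r, 0, Math.PI * 2);
        ctx.fillStyle = s.color;
        ctx.fill();
    });
    ctx.restore();
}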
There is no doubt you could get Canvas to be faster at object manipulation than the divs in that simple test, and of course far faster in load time. Drawing/loading is faster in Canvas, and it has far more room for optimizations, too (excluding things that are off-screen is very easy, for example).
Conclusion:
SVG is probably better for applications and apps with few items (less than 1000? Depends really)
Canvas is better for thousands of objects and careful manipulation, but a lot more code (or a library) is needed to get it off the ground.
HTML Divs are clunky and do not scale, making a circle is only possible with rounded corners, making complex shapes is possible but involves hundreds of tiny tiny pixel-wide divs. Madness ensues.
I have pasted content from the following link; please see it for more details:
HTML5 Canvas vs. SVG vs. div

Rendering an invisible occluder

I'm currently upgrading from a DirectDraw system (yeah I know, it's very old) to DirectX10. It's a 2D system but simulates the real world, as each object has a range/depth in meters. There is a background image that is rendered and kept at the farthest z-order. All other objects are drawn on top of it and scaled according to what their range/depth would be. However, there is a certain type of object that is defined as a polygon and renders a bit differently. It acts as an invisible occluder. For instance, an occluder is at a range/depth of 40 (my units are meters) and is defined by 5 vertices (a pentagon) in the middle of the viewport. There is a sprite object at the same viewport position but at a range/depth of 50. The desired output is to have the sprite object not rendered, but the background should be seen through both of them. So in essence these are invisible occluders, except that they do not occlude the background.
As a note, the occluders and the sprites all derive from the same base object type and are mixed together in a depth-sorted container.
My idea was to override the occluders' Render method so they draw to a render target, writing their range/depth values. I would then render the sprites as normal, but in the vertex or pixel shader compare the range value of the sprite with the range values in the render target. However, it seems to me that I'd potentially have to read from and write to the render target in the same pass before Present is called, and that's undefined. If I were to render the occluders, unbind the render target, and pass the texture in for a lookup by the other objects, I'd have to convert the sprite positions into that texture space, which may be non-trivial. Are either of these methods possible?
After thinking some more about it, one other idea came to mind. I could take the occluders and set their texture coordinates in reference to the background texture. In this way they would draw the same color values as the background, and because of the sorting if a sprite was behind it the user would still see the "background" but really it's the occluder looking like it.
Sorry if this is less a question and more thinking out loud, but I wanted to get impressions and ideas on the best way to go about this. It seems to me I have options, but I wasn't certain which is most efficient and which is easiest. Thanks in advance for any responses.
As stated in my comments, I went with setting the texture coordinates in reference to the background image, and then making sure the occluder, which was a simple polygon, was triangulated properly to make use of those texture coordinates.
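As a sketch of the mapping (assuming the background image exactly fills the viewport): an occluder vertex that lands at screen position (sx, sy) in a viewport of size (w, h) gets texture coordinates u = sx / w and v = sy / h, so sampling the background texture at (u, v) reproduces exactly the background pixels the occluder covers, making it invisible against the background while still occluding sprites behind it.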

How to render a map defined by a matrix of cells using a Qt GUI?

I have to render, using Qt, a map defined by a matrix of cells in overview mode (think of a GPS map). I'd like to be able to zoom in on it; each cell is defined by its color and some properties (bitmaps to put on the cell). I also need, as in a GPS, to be able to move around the map (using the directional arrows).
Right now I'm thinking about drawing a matrix of QImages on the screen and loading each one of them with the info of the cells I need, but that doesn't seem like a very good solution.
Thank you for any possibilities you might suggest.
Your initial idea is a reasonable one, but put all the QImages and info you need into a custom QGraphicsItem and add them to a QGraphicsScene (and fix their position) - then you only need a QGraphicsView to visualise everything. This way you get the BSP painting and selection optimisations, view transformations, and nice animations (if required!) for free.

Moving a Flex GUI window confused by underlying Papervision3D viewport

I'm developing a Flex 2 application, and I noticed that the part of the library which is responsible for moving GUI windows (TitleWindows) around when you drag them with the mouse gets confused if there is a clickable (buttonMode = true) sprite beneath them. When I say confused, I mean that the window moves around normally for a while, but then at some point "jumps" into the upper left corner of the Flash app and makes only very minor movements there. Then at some other point it jumps back. It is more difficult to explain than to experience, so please go and see for yourself. Here's how to reproduce the problem:
Go to http://www.panocast.com
In the left sidebar, choose "Real Estate"
Just below the bottom right corner of the flash window, choose "high res" by clicking on the rightmost icon.
When (part of) the video loads, click on the staircase. A TitleWindow will pop up.
Try dragging it around the screen. When the mouse cursor is moved above one of the clickable areas (like the staircase), the window is misplaced.
(Sorry, but can't give you a direct link, part of the page is generated dynamically.)
(What makes the problem even more interesting is that for me, in "low res" mode, the problem does not occur! There is very little difference between the various modes.) I would really appreciate it if someone told me what is going on here and how it can be fixed.
I'm not sure if it matters, but the underlying sprite is actually not just plain sprite, rather it is a Papervision3D renderer object with some 3D elements in it. I'm telling this because it is possible that the incorrect mouse coordinates somehow come from the texture UV mapped on the clickable objects.
I've managed to replicate this on the low res mode as well, so I don't think it's related to the resolution.
This looks to be because the MouseEvent is being handled by the TitleWindow AND the Papervision3D window. Perhaps you need to force stopImmediatePropagation() on one or the other? Or maybe switch off the MouseEvent handling for the Pv3D window when the TitleWindow pops up?
That's a tough one to debug without some source; something's apparently calling either move() or setting x and y properties on that TitleWindow and scheduling it be moved.
When I first read the post, it "smelled" like maybe a rotation miscalculation somewhere (using Math.atan vs. Math.atan2 can sometimes have that kind of effect), so you're right, it could have something to do with PaperVision, assuming you're not using Math.atan or setting rotation properties yourself anywhere. Just thought I'd mention it, though it's probably not happening in your case. You never know, though. ;)
More likely the LayoutManager is moving the component in response to a property change on the component. The Flex docs explain that in addition to setting its x and y properties, and explicit calls to move(), a UIComponent's move event can also be triggered when any of the following other properties change:
minWidth
minHeight
maxWidth
maxHeight
explicitWidth
explicitHeight
PaperVision or no, maybe that info might help you isolate the source of the move. Good luck.
I got this figured out. Apparently, this is a Papervision3D problem. There is a class deep inside Papervision3D called VirtualMouse, which is supposed to generate MouseEvents programmatically. This happens, for example, when the user interacts with any of the interactive objects on stage, e.g., a Plane with an interactive material on it (as in my case).
The problem is that the x and y coordinates of the generated event represent texture UV coordinates (just as I suspected) and not real screen coordinates. When a TitleWindow (or any Panel object) is dragged, a "mouseMove" handler (among others) is added to the SystemManager, which then uses the stageX and stageY properties of the event object to determine the new position of the window. Unfortunately, for VirtualMouse's mouse events these are invalid, since the original x, y coordinates, which are probably used to determine the global stage coordinates, are, as I said, not screen coordinates.
Honestly, I'm still unsure whether the events dispatched by VirtualMouse are used anywhere within Papervision3D itself, or they are just offered for convenience, but they sure make it difficult to integrate a viewport into a Flex program. Assuming that such events aren't necessary for PV3D itself, there is a one-liner fix for my problem, which must be added right after the creation of the viewport:
viewport.interactiveSceneManager.virtualMouse.disableEvent(MouseEvent.MOUSE_MOVE);
BTW, there was a very similar (or rather, as it turns out, the same) bug with dragging sliders, which was also fixed by this line.
