What is the best approach to displaying drawings on different-sized Paper JS views?

Context
I'm using Paper.js to build a multi-player drawing game. At any given point, a single user is drawing to his/her canvas, and the data gets sent to the server to be broadcast to the other users. Each user's canvas may be a different size, and it resizes as the window resizes while maintaining the same aspect ratio.
The goal is for each user to see a scaled representation of the drawing (i.e. everything fits inside the different-sized canvases and the content doesn't get distorted). This should hold whether a drawing transfers from a larger canvas to a smaller canvas or vice-versa. The project supports a drawing tool as well as an eraser tool.
Problem
Approach 1 below scales the drawings the way I want, but there is substantial lag. Approach 2 deals with the lag, but doesn't scale the drawings the way I want.
My understanding is that SVGs scale nicely whether they are scaled up or down, whereas rasters are pixel-based and become "blurry" when scaled up. When I test Approach 2, a drawing from a smaller canvas gets blurred on a larger canvas. The result is the same whether I use export/importJSON or export/importSVG. Is there a way to get both good performance and properly scaled drawings? See below for example implementations of the tools.
Approach 1: Paths + Symbols:
Every path/symbol placement is kept in the active layer.
The eraser tool draws a white rectangle (defined as a symbol) to mimic an "erasing" effect.
This works fine as a demo, but will start to lag very quickly as the number of items in the active layer increases. The eraser tool in particular will not function smoothly.
Relevant sketch
Approach 2: Rasterization:
After a path is drawn or a symbol is placed, the active layer is rasterized and its children are removed.
This seems to work quite well on a single canvas, and the eraser doesn't lag like in the first approach. There are only 2 items in the active layer after each rasterization.
When a drawing from a client with a smaller canvas is exported (using exportJSON or exportSVG) to a client with a larger canvas, the result is "blurry".
The above also happens when a drawing is made and then the canvas is re-sized to be larger.
Relevant sketch

You could send your objects as SVG and rasterize them once received.
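For instance, here is a minimal sketch of that idea with Paper.js. The socket object, the path variable holding the finished stroke, the receiveStroke handler, and the message shape are all made up for illustration; the real transport and protocol are whatever your game already uses. It assumes all views share a top-left origin and the same aspect ratio, as described in the question.

// Sender: after a stroke is finished, ship it as SVG along with the size
// of the view it was drawn on ("socket" is a hypothetical transport).
socket.send(JSON.stringify({
    svg: path.exportSVG({ asString: true }),
    viewWidth: paper.view.size.width
}));

// Receiver: import the vector data, scale it to this client's view size,
// and only then rasterize, so the raster is created at the destination
// resolution and never has to be blown up (which is what causes the blur).
function receiveStroke(message) {
    var data = JSON.parse(message);
    var item = paper.project.importSVG(data.svg);

    // Same aspect ratio on both ends, so one ratio is enough.
    var scale = paper.view.size.width / data.viewWidth;
    item.scale(scale, new paper.Point(0, 0)); // scale about the top-left corner

    item.rasterize();  // flatten for performance, as in Approach 2
    item.remove();     // keep only the raster in the layer
}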

Related

Is there a reason clip-path on a div with an image inside slows performance in Chrome?

I have a div that uses:
-webkit-clip-path: polygon(0 0, 100% 7%, 100% 100%, 0 100%);
clip-path: polygon(0 0, 100% 7%, 100% 100%, 0 100%);
And there is an image inside this div, inside another div. Is there a reason why this specific code causes Chrome's performance to drop? Scrolling becomes choppy too. In Firefox everything looks normal.
Strangely enough, it only affects scrolling while the view is on that element; once you scroll past it, everything looks fine again.
Clip-Path GPU Rendering
clip-path uses the GPU for rendering, so it is likely to be a graphics card/driver issue or that your system was out of resources and unable to render it effectively.
Try viewing on other machines to see if the same problem exists.
To understand the performance issues and how to debug them, these articles will help:
Debugging a Canvas Element
Chrome allows you to profile and debug canvas elements from the
Developer Tools. It can be used for both 2D and WebGL canvas projects.
To be able to do this, you need to have enabled the "Experiments" tab.
If you haven't already, navigate to chrome://flags and enable the
option marked "Enable Developer Tools experiments". You'll need to
press "Relaunch Now" button at the bottom of the page to apply your
changes. Go to the Settings panel of Chrome Developer Tools by
clicking the cog on the bottom right. Click the "Experiments" tab and
check the option "Canvas inspection".
Now visit the "Profile" tab and you will see an option called "Capture
Canvas Frame". The Developer Tools may ask you to Reload the page to
use the canvas. Pressing "Start" captures a single frame of the canvas
application. Alternatively, you can click the box below to switch to
"Consecutive Frames" which allows for capture of multiple frames.
Chrome creates a log of each call to canvas, providing a list of each
call to the context and a screenshot. You can click one of the log
items to replay the frame in the Developer Tools and see which
commands were called in the order they were called and from which
line.
Firefox has a Canvas and WebGL Shader debugger, giving you features to inspect frames, fps, modify shaders and more.
In order to enable these tools, go to Devtools settings (the cog icon
in devtools) and check "Canvas" and "Shader Editor".
Picking Your Properties
Animation performance is not about selecting a syntax; it's about designing the animation for fast rendering. The difference between a smooth, life-like animation
and a janky, stuttery one is rarely as simple as CSS versus
JavaScript. Instead, it’s often determined by which properties or
attributes you animate, on which elements.
Regardless of whether you’re changing a style property with CSS or
with SMIL or with JavaScript, the browser needs to determine which
pixels on the screen need to be updated, and how.
If the DOM and style computation steps determine that no styles or SVG
rendering attributes have changed for any elements, the browser can
stop right there.
If the changed styles don’t affect layout (only painting), or if
layout has changed for some elements but not for others, the browser
has to determine which parts it needs to repaint. This region is known
as the “dirty” rectangle of the screen. Elements elsewhere on the
screen can be skipped, their pixels unchanged for this update.
The changed element usually needs to be repainted, but also maybe
others. Did the changed element overlap another element, which is now
revealed? If so, the browser may need to redraw that background
element.
But maybe not.
It depends on whether the browser has the original pixel data for the background saved in memory. The graphics processing units (GPUs) in
most modern computers and smartphones can keep a certain number of
rendering layers in memory, not just the final version that appears on
screen. The main browser program may also save partial images in
memory.
Much of browser rendering optimization comes down to how it selects
which parts of the rendered document to divide into separately cached
(saved) layers.
GPUs can perform certain operations on the cached rendering layers,
and are highly optimized for the limited number of operations they can
do.
If browsers know that an element is going to change in a way that can
be efficiently calculated by the GPU, they can save that image’s pixel
data in a different GPU layer from its background (or foreground). The
animated changes can therefore be applied by sending new instructions
to the GPU for how to combine the saved pixels, instead of by
calculating new pixel values in the main processor.
Tip Most browser Dev Tools now have options to highlight the “dirty”
paint rectangles whenever they are updated. If your animation is being
GPU-optimized, you won’t see any colored rectangles flashing when you
run this Dev Tools mode.
Of course, all GPU-optimized pathways are conditional on having a
compatible GPU available—and on the browser knowing how to use it,
which may depend on the operating system. So browser performance, and
sometimes even browser bugs, will depend not just on the browser
version but also on the OS and hardware.
Most GPUs can adjust opacity of the saved layers, and translate them
to different relative positions before combining them. They can also
perform image scaling, usually including 3D perspective scaling—but
the scaling is calculated on a pixel level, not a vector level, and
can cause a visible loss in resolution. More advanced GPUs can
calculate some filter operations and blend modes, and masking of one
image layer with an alpha mask layer.
Some GPUs also have optimized vector rasterization, which can
calculate high-resolution vector shapes for use as clipping paths of
other vector levels. These “clipping paths” aren’t only used for
clip-path effects, though. Filling and stroking a shape is clipping
the paint image layer to the fill-region or stroke-region vector
outline. Similarly, CSS border-radius effects are vector clipping
paths on the content and background image layers.
But you currently can’t rely on your end users having these optimized
pathways.
The best performance, across a wide range of browsers and hardware,
comes from animations that can be broken into layers (of elements,
groups, or individual graphics) that are animated in the
following ways:
opacity changes
translational and rotational transformations
Warning Currently, Chrome never divides an SVG graphic into different GPU layers (although it does other optimizations).
To create a fully GPU-optimized animation in Chrome, you can sometimes position separate inline elements on top of each other, creating your own layers.
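As a rough illustration of staying on the GPU-friendly path, the sketch below animates only transform and opacity with the Web Animations API; the element id and timings are made up.

// Animating only transform and opacity lets the browser composite the
// element from a cached layer instead of repainting it every frame.
var sprite = document.getElementById('sprite'); // hypothetical element
sprite.animate(
    [
        { transform: 'translate(0px, 0px)',   opacity: 1 },
        { transform: 'translate(200px, 0px)', opacity: 0.5 }
    ],
    { duration: 1000, iterations: Infinity, direction: 'alternate' }
);

// By contrast, animating layout-affecting properties such as left, width,
// or an SVG path's d attribute forces style, layout, and paint work on the
// main thread for every frame.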
If you can’t define your animation entirely in translation and opacity
layers, consider the following guidelines:
Minimize the size of the “dirty” rectangle at each frame.
Solid-color objects are better than semi-transparent ones, since the
browser doesn’t need to calculate pixel updates for shapes that can’t
be seen behind a solid object. (Although this may not apply if the
browser is using GPU layers for optimization.)
Moving elements around is more efficient than changing what they look
like. (Although it depends on the browser whether “moving around” only
applies to transform movements or also to other absolute position
changes.)
Changing fill and stroke is more efficient than changing shapes and
sizes.
Scaling transformations are better than changing the underlying geometry; browsers may be able to use GPU image scaling for an animated scale effect, instead of recalculating the vector image at the correct resolution at each frame (see the sketch after these guidelines).
Clipping is usually more efficient than masking.
Avoid rescaling gradient and pattern layers; this could mean using
user-space effects instead of bounding-box effects, if the bounding
box is changing.
Avoid any changes that require a filter to be recalculated. That
includes any change to the filtered element or its child content.
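To illustrate the scaling guideline above, here is a hypothetical sketch with an SVG circle (the element id is made up): the first change can often reuse the cached rendering, while the second forces the vector shape to be re-rasterized.

var blob = document.getElementById('blob'); // hypothetical SVG <circle>

// Prefer this for an animated grow effect: a scale transform, which the
// browser may be able to apply as GPU image scaling of the cached layer.
blob.setAttribute('transform', 'scale(2)');

// Avoid this in a per-frame animation: changing the geometry itself means
// the shape (and any gradients or filters on it) is recalculated each frame.
blob.setAttribute('r', '80');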

Practical differences between SVG and Canvas within a ggvis & Shiny Context

I have already read
What is the difference between SVG and HTML5 Canvas?
&&
https://en.wikipedia.org/wiki/Canvas_element#Canvas_versus_Scalable_Vector_Graphics_.28SVG.29
So I am aware of the basic differences, but I was wondering if anyone had encountered any practical differences between the two within the context of ggvis and shiny, apart from SVG's inability to deal with NAs in the data.
The short answer:
SVG would be easier for you, since selection and moving it around is already built in. SVG objects are DOM objects, so they have "click" handlers, etc.
DIVs are okay but clunky and have awful performance when loaded in large numbers.
Canvas has the best performance hands-down, but you have to implement all concepts of managed state (object selection, etc) yourself, or use a library.
The long answer:
HTML5 Canvas is simply a drawing surface for a bit-map. You set up to draw (say, with a color and line thickness), draw that thing, and then the Canvas has no knowledge of that thing: it doesn't know where it is or what it is that you've just drawn; it's just pixels. If you want to draw rectangles and have them move around or be selectable, then you have to code all of that from scratch, including the code to remember that you drew them.
SVG, on the other hand, must maintain references to each object that it renders. Every SVG/VML element you create is a real element in the DOM. By default this allows you to keep much better track of the elements you create and makes dealing with things like mouse events easier, but it slows down significantly when there are a large number of objects.
Those SVG DOM references mean that some of the footwork of dealing with the things you draw is done for you. And SVG is faster when rendering really large objects, but slower when rendering many objects.
A game would probably be faster in Canvas. A huge map program would probably be faster in SVG. If you do want to use Canvas, I have some tutorials on getting movable objects up and running here.
Canvas would be better for faster things and heavy bitmap manipulation (like animation), but will take more code if you want lots of interactivity.
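A toy example of that difference (the page is assumed to already contain an <svg> and a <canvas> element): the SVG circle is a DOM node with a built-in click handler, while on canvas you keep your own object list, redraw it, and hit-test it yourself.

// SVG: the shape is a DOM element, so selection/events come for free.
var svgNS = 'http://www.w3.org/2000/svg';
var circle = document.createElementNS(svgNS, 'circle');
circle.setAttribute('cx', 50);
circle.setAttribute('cy', 50);
circle.setAttribute('r', 20);
circle.addEventListener('click', function () { console.log('SVG circle clicked'); });
document.querySelector('svg').appendChild(circle);

// Canvas: just pixels. You keep your own model, redraw it, and hit-test it.
var ctx = document.querySelector('canvas').getContext('2d');
var nodes = [{ x: 50, y: 50, r: 20 }]; // your own "scene graph"
nodes.forEach(function (n) {
    ctx.beginPath();
    ctx.arc(n.x, n.y, n.r, 0, Math.PI * 2);
    ctx.fill();
});
document.querySelector('canvas').addEventListener('click', function (e) {
    var hit = nodes.find(function (n) {
        var dx = e.offsetX - n.x, dy = e.offsetY - n.y;
        return dx * dx + dy * dy <= n.r * n.r;
    });
    if (hit) console.log('canvas node clicked');
});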
I've run a bunch of numbers on HTML DIV-made drawing versus Canvas-made drawing. I could make a huge post about the benefits of each, but I will give some of the relevant results of my tests to consider for your specific application:
I made Canvas and HTML DIV test pages, both of which had movable "nodes." Canvas nodes were objects I created and kept track of in JavaScript. HTML nodes were movable Divs.
I added 100,000 nodes to each of my two tests. They performed quite differently:
The HTML test tab took forever to load (timed at slightly under 5 minutes; Chrome asked to kill the page the first time). Chrome's task manager says that tab is taking up 168MB. It takes up 12-13% CPU time when I am looking at it, 0% when I am not looking.
The Canvas tab loaded in one second and takes up 30MB. It also takes up 13% of CPU time all of the time, regardless of whether or not one is looking at it. (2013 edit: They've mostly fixed that)
Dragging on the HTML page is smoother, which is expected given the design, since the current setup is to redraw EVERYTHING every 30 milliseconds in the Canvas test. There are plenty of optimizations to be had for Canvas here (canvas invalidation being the easiest, also clipping regions, selective redrawing, etc.; it just depends on how much you feel like implementing).
There is no doubt you could get Canvas to be faster at object manipulation than the divs in that simple test, and of course far faster in load time. Drawing/loading is faster in Canvas and has far more room for optimization, too (i.e. excluding things that are off-screen is very easy).
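One of the optimizations mentioned above, selective redrawing, can be sketched roughly as follows: instead of repainting the whole canvas every 30 ms, clear and repaint only a "dirty" rectangle around the node that moved (drawNode and the node shape are hypothetical).

// Repaint only the dirty rectangle covering a node's old and new positions.
function moveNode(ctx, node, nodes, dx, dy) {
    var pad = node.r + 2;
    var dirty = {
        x: Math.min(node.x, node.x + dx) - pad,
        y: Math.min(node.y, node.y + dy) - pad,
        w: Math.abs(dx) + 2 * pad,
        h: Math.abs(dy) + 2 * pad
    };

    node.x += dx;
    node.y += dy;

    ctx.save();
    ctx.beginPath();
    ctx.rect(dirty.x, dirty.y, dirty.w, dirty.h);
    ctx.clip();                                        // confine painting to the dirty rect
    ctx.clearRect(dirty.x, dirty.y, dirty.w, dirty.h);
    nodes.forEach(function (n) { drawNode(ctx, n); }); // only pixels inside the clip change
    ctx.restore();
}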
Conclusion:
SVG is probably better for applications and apps with few items (fewer than 1,000? It depends, really).
Canvas is better for thousands of objects and careful manipulation, but a lot more code (or a library) is needed to get it off the ground.
HTML Divs are clunky and do not scale: making a circle is only possible with rounded corners, and making complex shapes is possible but involves hundreds of tiny, pixel-wide divs. Madness ensues.
I have pasted content from the following link; please see it for more details:
HTML5 Canvas vs. SVG vs. div

How to draw a filled polygon in the Google Maps SDK for iOS

I would like to draw a filled polygon on iPhone with Google Maps (version 1.1.1, the latest one).
Does anyone know how to do something like this on iOS?
(My code on Android)
mMap.addPolygon(new PolygonOptions()
        .addAll(latLngList)
        .fillColor(Color.BLUE)
        .strokeColor(Color.RED)
        .strokeWidth(3));
Regards,
PS: If you have several solutions, keep in mind that I have many polygons to draw.
The SDK currently doesn't support filled polygons, however there is a feature request to add them here:
https://code.google.com/p/gmaps-api-issues/issues/detail?id=5070
In the meantime, one option could be to draw your polygons into an image, and then add them as a ground overlay. This would be very limiting, but might work as a temporary workaround.
Another option is to add another view over the top of the map view and draw the polygons into it, and then update them whenever the map view moves. It isn't possible to perfectly synchronize another view with the map view, so your polygons will lag behind a bit as you pan/zoom around, but this might also be okay for you as a temporary workaround.
UPDATE
These are just some random ideas to try for the ground overlay approach, I'm not sure if they would work, but they might get you started:
I would suggest converting the lat/lon corners of the rectangle into MKMapPoint (using MKMapPointForCoordinate). These are equivalent to Google's coordinate system at zoom level 20.
You can then use the aspect ratio of the width/height of the rectangle in MKMapPoint coordinates to determine the aspect ratio of your ground overlay UIImage. Once you have the aspect ratio, you'll just need to experiment with actual sizes (ie guess a width, calculate the height from the aspect ratio) to find one which looks okay. The bigger it is, the finer the detail of your rectangle will be, but the more memory it will use, and probably the slower the performance will be. Also you might hit a hard limit at some size - I'm guessing the UIImage gets converted by the Google Maps SDK into a texture, and textures have a max size of 2048x2048 on iPhone 3GS+.
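Just to sketch that size calculation (the numbers are guesses, and the MKMapPoint conversion itself stays in your iOS code): keep the overlay image's aspect ratio equal to the rectangle's aspect ratio in map points, and clamp both dimensions to a safe texture size.

// Pick an overlay image size with the rectangle's aspect ratio, clamped
// to a maximum texture dimension (2048 is a common GPU limit).
function overlayImageSize(mapPointWidth, mapPointHeight, maxTexture) {
    maxTexture = maxTexture || 2048;
    var aspect = mapPointWidth / mapPointHeight;

    var width = Math.min(1024, maxTexture);   // guess a width...
    var height = Math.round(width / aspect);  // ...derive the height from the aspect ratio

    if (height > maxTexture) {                // tall, thin rectangle: clamp the other way
        height = maxTexture;
        width = Math.round(height * aspect);
    }
    return { width: width, height: height };
}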
Then, use something similar to How to setRegion with google maps sdk for iOS? to calculate a zoom level and centre lat/lon. Instead of the map view width/height you would use your UIImage width/height, and you'd use the bounds of your rectangle instead of the bounds of the desired view. You also wouldn't need to calculate the scale from both the width and height (as the scale should be the same) - so just use one of them. Instead of creating a camera with the zoom level and centre lat/lon, set them on the GMSGroundOverlayOptions. Also set the ground overlay's anchor to the centre of the image (ie 0.5, 0.5).
The above describes how to add one GroundOverlay per rectangle. If you have lots of overlapping or nearby rectangles you could probably combine them into a single UIImage, but that would be a bit more complicated.

Rendering an invisible occluder

I'm currently upgrading from a DirectDraw system (yeah I know, it's very old) to DirectX10. It's a 2D system but simulates the real world, as each object has a range/depth in meters. There is a background image that is rendered and kept at the farthest z-order. All other objects are drawn on top of it and scaled according to what their range/depth would be. However, there is a certain type of object I have that is defined as a polygon and renders a bit differently. It acts as an invisible occluder. For instance, an occluder is at a range/depth of 40 (my units are meters) and is defined by 5 vertices (a pentagon) in the middle of the viewport. There is a sprite object at the same viewport position but at a range/depth of 50. The desired output is to have the sprite object not rendered, but the background should be seen through both of them. So in essence these are invisible occluders, except that they do not occlude the background.
As a note, the occluders and the sprites all derive from the same base object type and are mixed together in a depth-sorted container.
My idea was to override the occluders' Render method so they draw to a render target, writing their range/depth values. I would then render the sprites as normal, but in the vertex or pixel shader compare the range value of the sprite with the range values in the render target. However, it seems to me that I'd have to potentially read and write the same render target in one pass before Present is called, and that's undefined. If I were to render the occluders, unbind the render target, and pass the texture in for a lookup by the other objects, I'd have to convert the sprite positions into that texture space, which may be non-trivial. Are either of these methods possible?
After thinking about it some more, one other idea came to mind: I could take the occluders and set their texture coordinates in reference to the background texture. That way they would draw the same color values as the background, and because of the sorting, if a sprite were behind one, the user would still see the "background" when really it's the occluder looking like it.
Sorry if this is less a question and more thinking out loud, but I wanted to get impressions and ideas on the best way to go about this. Seems to me I have options but wasn't certain which was most efficient and which is easiest. Thanks in advance for any responses.
As stated in my comments I went with setting the texture coordinates in reference to the background image and then making sure the occluder, which was a simple polygon, was triangulated properly to make use of those texture coordinates.
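For what it's worth, the same trick has a straightforward 2D-canvas analogue (this is not the DirectX 10 code, just an illustration of the idea): clip to the occluder's polygon and redraw the background inside it, so any sprite already drawn behind the occluder ends up covered with background pixels.

// 2D analogue: the occluder repaints the background within its own outline.
// "background" is the farthest-layer image, "polygon" an array of {x, y}.
function drawOccluder(ctx, background, polygon) {
    ctx.save();
    ctx.beginPath();
    polygon.forEach(function (p, i) {
        if (i === 0) { ctx.moveTo(p.x, p.y); } else { ctx.lineTo(p.x, p.y); }
    });
    ctx.closePath();
    ctx.clip();                               // restrict drawing to the occluder shape
    ctx.drawImage(background, 0, 0,           // same screen-space placement as the
        ctx.canvas.width, ctx.canvas.height); // real background layer
    ctx.restore();
}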

Wrapping image around objects in web app

I'm creating a web app in ASP.NET like this one:
http://www.zazzle.com/cr/design/pt-mug
I know how to do everything except wrapping an image around an object.
It would be a simple task if I only had to stack one image on top of another and they were flat, but since it is a round object, as this mug is, it's kinda tricky.
My first guess was to create some sort of algorithm for GDI+ that would simulate "wrapping" an image around an object (actually it wouldn't be a 3D object, it would just be a screenshot of one).
I figured that would be too raw an approach and would result in very bad quality, if I could ever make it work.
So, my second guess was to implement some kind of 3D renderer to which I would give an image map for some object; it would render that image onto the object and return the rendered image in real time. Is that possible?
Is there any other way? Where do I start?
If you are willing to try a commercial product, my company makes a raster processing SDK for .NET called DotImage. If you try it, take a look at PolygonTransform. You supply a polygon as a list of points, and the class warps the image to fit inside the polygon. If you need sample code for it, let me know.
It might be done with some sort of OpenGL 3D rendering, but an image could easily be morphed in a purely 2D way for this effect. Horizontally, it would need to be squished where it goes off the side of the cup. Each column of pixels needs to be shifted vertically by a varying amount depending on which column it is, such that a horizontal line in the image becomes a "U" shape. With the right parameters, such a morph could mimic the proper 3D shape. Lighting effects could be applied too, by brightening/darkening the image a bit in the right places.
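A rough sketch of that column-by-column morph on a 2D canvas (the mapping and the sag amount are guesses to be tuned by eye, not a physically exact projection):

// Draw the flat artwork one column at a time: columns are squished toward
// the cup's edges and shifted down in a "U" curve toward the middle.
// img is the flat artwork, ctx the destination canvas context.
function wrapOntoMug(ctx, img, destX, destY, destW, destH, sag) {
    sag = sag || 20;                                    // max vertical shift in px
    for (var x = 0; x < destW; x++) {
        var t = (2 * x) / destW - 1;                    // -1 .. 1 across the cup
        var theta = Math.asin(t);                       // angle around the cylinder
        var srcX = (theta / Math.PI + 0.5) * img.width; // arc length -> source column
        var dy = Math.cos(theta) * sag;                 // deepest shift at the front

        ctx.drawImage(img,
            srcX, 0, 1, img.height,                     // 1px-wide source slice
            destX + x, destY + dy, 1, destH);           // squished and shifted destination
    }
}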
