Drawing a grid efficiently with EaselJS StageGL

I would like to draw a grid on a canvas using EaselJS. I am using the new WebGL stage, StageGL.
A grid is basically N horizontal lines plus M vertical lines.
I see multiple options:
1) Draw N+M lines as separate shapes (I am talking about EaselJS "Shape" instances), cache each of them (as WebGL needs rasters), and add them all to the stage.
2) Draw 1 horizontal and 1 vertical line, cache them (as WebGL needs rasters), and somehow draw the same cached images repeatedly on the stage.
3) Draw a single shape consisting of N+M paths, cache it, and add it to the stage.
Option #1 seems naive to me. The lines are all the same image, so why draw them to the cache N+M times?
Option #2 would solve the problem in option #1, but I don't know how to do it.
Option #3 results in a very large image. For N=50, M=50 and gridSpacing=50px, it would result in a 2500x2500 px image. I don't know if this is ideal.
Which one is the best approach?
Are there any other approaches? I don't think I am the first person who draws a grid :)

You can pretty easily cache a shape, and use the resulting cache (canvas) as the source for a Bitmap.
var shape = new createjs.Shape();
shape.graphics.drawStuff();
// Since shapes have no bounds, you will have to know the bounds based on what you draw:
shape.cache(x, y, w, h);
var bmp = new createjs.Bitmap(shape.cacheCanvas);
You can draw as many of these Bitmaps as you like without any additional cost, since they all share the same source canvas/image. EaselJS StageGL (in the latest NEXT build, hopefully released soon) renders this in WebGL without a problem.
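For the grid question specifically, here is a minimal sketch of option #2 under that approach (hedged: N, M, gridSpacing, the thickness, and the color are hypothetical placeholders, not part of the original answer):

var stage = new createjs.StageGL("canvas");
var N = 50, M = 50, gridSpacing = 50, thickness = 1;

// Cache one horizontal line once...
var hLine = new createjs.Shape();
hLine.graphics.beginFill("#999").drawRect(0, 0, M * gridSpacing, thickness);
hLine.cache(0, 0, M * gridSpacing, thickness);

// ...then reuse the cached canvas for every horizontal grid line.
for (var i = 0; i <= N; i++) {
    var bmp = new createjs.Bitmap(hLine.cacheCanvas);
    bmp.y = i * gridSpacing;
    stage.addChild(bmp);
}
// Vertical lines work the same way with a second cached shape.
stage.update();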
Check out the SpriteSheetBuilder demo and docs on GitHub if you want to draw content to a SpriteSheet/Sprite instead of a Bitmap.
Cheers.

Related

Why are the transparent pixels not blending correctly in WebGL

Basically, the issue is that the transparent parts of my image are not blending correctly with what was drawn before them. I know I can do a
if(alpha<=0){discard;}
in the fragment shader; the only issue is that I plan on having a ton of fragments and don't want that if statement running for every fragment on mobile devices.
Here is my code related to alpha, and depth testing:
var gl = canvas.getContext("webgl2", {
    antialias: false,
    alpha: false,
    premultipliedAlpha: false,
});
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.GREATER);
Also, these are textured gl.POINTS I am drawing. If I change the order in which the two images are drawn in the buffer, the problem doesn't occur, but they will be rotating dynamically at runtime, so fixing the draw order is not an option.
It's not clear what your issue is without more code, but it looks like a depth test issue.
Assuming I understand correctly, you're drawing 2 rectangles? If you draw the red one before the blue one then, depending on how your depth test is set up, the blue one will fail the depth test where the two overlap.
You generally solve this by sorting what you draw, making sure to draw things further away first.
For a grid of "tiles" you can generally sort by walking the grid itself in the correct direction instead of actually "sorting".
On the other hand, if all of your transparency is either fully opaque or fully transparent (draw or don't draw), then discard has its advantages and you can draw front to back. The reason is that, in that case, a pixel drawn (not discarded) by the red quad will be rejected by the depth test when the blue quad is drawn. The depth test is usually optimized to happen before the fragment shader runs for a given pixel; if the depth test says the pixel will not be drawn, there is no reason to even run the fragment shader, so time is saved. Unfortunately, as soon as you have any transparency that is not 100% opaque or 100% transparent, you need to sort and draw back to front. Some of these issues are covered in this article.
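As a minimal sketch of that sorted, back-to-front path (hedged: drawables, its z values, and drawQuad are hypothetical stand-ins for your own data and draw call, not anything from the question):

// Sort transparent objects far-to-near; which end counts as "far"
// depends on how your depthFunc and projection are set up.
drawables.sort(function (a, b) { return a.z - b.z; });

gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.depthMask(false); // keep the depth test, but skip depth writes for transparency
for (var i = 0; i < drawables.length; i++) {
    drawQuad(drawables[i]); // hypothetical helper issuing one draw call
}
gl.depthMask(true);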
A few notes:
You mentioned mobile devices, and your code sample uses WebGL2. There is no WebGL2 on iOS.
You said you're drawing with POINTS. The spec only requires POINTS of 1 pixel in size to be supported. It looks like you're safe up to points of size 60, but to be safe it's generally best to draw with triangles, as there are other issues with points.
You might also be interested in sprites with depth.

How to Minimize the saved points from drawn points using free-flow drawing tool

Currently I'm using the Douglas-Peucker algorithm.
My problem is that while I'm drawing, the previously drawn lines also change, which of course is not realistic. Is there an alternative algorithm that minimizes the saved points without altering the previously drawn points, or some way to alter Douglas-Peucker to fit my need?
Give your pencil drawing tool 2 optional methods for drawing:
Draw a new point on the path using mousemove (which is your current freeform method). This option will let the user add many points which will allow them to be very detailed in their drawing.
Draw a new point on the path only upon mousedown. This option simply connects the previous point on the path to the newly clicked point. This option will let the user add just a few very straight lines which will allow them to outline figures with long running straight edges.
If you are concerned about the freeform path changing while the user is drawing, you can apply the simplifying algorithm just once, after they have stopped moving the mouse for 1 second (see the sketch below).
If you give the Douglas-Peucker algorithm a high bias toward accuracy, the simplified path will remain quite true to the unsimplified path.
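A minimal sketch of that idle-delay approach (hedged: strokes, addPointToCurrentStroke, simplifyDouglasPeucker, redraw, and the tolerance value are hypothetical pieces of your drawing tool, not part of the original answer):

var idleTimer = null;

canvas.addEventListener("mousemove", function (e) {
    addPointToCurrentStroke(e); // hypothetical: appends to the live stroke only
    clearTimeout(idleTimer);
    idleTimer = setTimeout(function () {
        // Simplify only the most recent stroke; earlier strokes stay untouched.
        var last = strokes.length - 1;
        strokes[last] = simplifyDouglasPeucker(strokes[last], 2); // 2px tolerance
        redraw(); // hypothetical: repaint all strokes
    }, 1000);
});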
BTW, if you want to draw splines through your points then check out this nice previous post: how to draw smooth curve through N points using javascript HTML5 canvas?

Antialiasing in Qt's QGraphicsScene make overlapping lines darker

When using anti-aliasing rendering in Qt's QGraphicsScene, there is a behavior that makes drawings appear not as expected: overlapping lines become darker. I could not see any description of this behavior in the documentation, and I cannot find a way to disable it.
Take, for example, a polygon with many points.
Because of the number of points, it is impossible not to have overlapping lines - fine. But because anti-aliasing is activated, some borders appear 'thicker' than others.
Is there any way to avoid this and have anti-aliased lines that can overlap and yet at the same time be rendered without getting darker?
I know, of course, that I can override the paint() function and manually draw individual lines that do not overlap, but this is what I want to avoid. I am using PySide, and this would significantly slow down the application due to the high frequency at which paint() is called.
EDIT: Fixed by defining the object's shape using QPainterPath / QGraphicsPathItem instead of QPolygon / QGraphicsPolygonItem. The moveTo function then makes it possible to avoid overlapping lines.
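A minimal sketch of that fix in PySide (hedged: the coordinates are placeholders, and scene is assumed to be an existing QGraphicsScene):

from PySide.QtGui import QPainterPath, QGraphicsPathItem

path = QPainterPath()
# Draw a run of edges once...
path.moveTo(0, 0)
path.lineTo(100, 0)
path.lineTo(100, 100)
# ...then lift the pen with moveTo and start a new subpath
# instead of retracing an edge you already stroked.
path.moveTo(0, 0)
path.lineTo(0, 100)
path.lineTo(100, 100)

item = QGraphicsPathItem(path)
scene.addItem(item)  # assumes an existing QGraphicsScene named `scene`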
Another thing you could try is adding half a pixel to your coordinates (not dimensions). This fixed the anti-aliasing issue for me.
XCoord = int(XValue) + 0.5
YCoord = int(YValue) + 0.5
Also make sure that your pixel values are integers before doing this.

How to make Qt QGraphicsView scale without affecting the stipple pattern?

I draw a few rectangles inside the QGraphicsView; I use a custom stipple pattern for these by creating a QBrush from my QPixmap. They are displayed at the default zoom level as expected.
When I call view->scale(), the rectangles show up bigger or smaller, as I expected. However, Qt has also scaled the individual bits of the stipple pattern, which is not expected; I expected it to redraw the larger or smaller rectangle with the original brush.
E.g., if I had used a stipple pattern with a one-pixel dot and one-pixel space, then after zooming in I want to see a larger rectangle, but with the same stipple pattern and the same pixel gaps. Is this achievable somehow? Thanks.
I ran into the same problem while developing an EDA tool companion in Qt.
After some experimenting, what I did (and it seems to work for me) is create a custom graphics item. In its paint method, I do:
QBrush newBrush = brush_with_pattern;
newBrush.setTransform(QTransform(painter->worldTransform().inverted()));
painter->setBrush(newBrush);
That applies the inverse of the item's transformation to the brush, so the pattern does not scale.
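In context, a minimal sketch of such a paint() method (hedged: PatternItem and its brush_with_pattern member are hypothetical names, not from the original answer):

void PatternItem::paint(QPainter *painter,
                        const QStyleOptionGraphicsItem *option,
                        QWidget *widget)
{
    QBrush newBrush = brush_with_pattern;
    // Cancel the accumulated view/item scaling for the brush only,
    // so the pattern stays at device resolution while the rect scales.
    newBrush.setTransform(QTransform(painter->worldTransform().inverted()));
    painter->setBrush(newBrush);
    painter->drawRect(boundingRect());
}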
I think setDashOffset only affects the border of shapes (not the fill).
You may use QPen::setDashOffset:
http://harmattan-dev.nokia.com/docs/library/html/qt4/qpen.html#setDashOffset
You'll need to set the offset based on the scene's zoom/scale level. You can grab a pointer to the scene in your item by calling scene(); don't forget to check for NULL, though, since it will be NULL when the item is not added to a scene (although in theory you shouldn't get a paint() call when not in a scene).
The other option is to use:
http://doc.qt.digia.com/qt/qpainter.html#scale
to undo the view's scaling on your painter.
In case anyone is still looking at this: a related question here regarding scaling of standard fill patterns (instead of pixmap fill patterns) may help. Basically, it may not be possible to modify the scaling of standard fill patterns (a few workaround ideas are listed there), but working with alpha values instead gives the desired effect if you are looking for varying colors, especially gray levels, and is much less convoluted.

Matlab Bwareaopen equivalent function in OpenCV

I'm trying to find a similar or equivalent function to MATLAB's bwareaopen in OpenCV.
In MATLAB, bwareaopen(image, P) removes from a binary image all connected components (objects) that have fewer than P pixels.
In my 1-channel image I simply want to remove small regions that are not part of bigger ones. Is there any trivial way to solve this?
Take a look at the cvBlobsLib, it has functions to do what you want. In fact, the code example on the front page of that link does exactly what you want, I think.
Essentially, you can use CBlobResult to perform connected-component labeling on your binary image, and then call Filter to exclude blobs according to your criteria.
There is no such function, but you can:
1) find the contours
2) compute each contour's area
3) filter out all external contours with an area smaller than your threshold
4) create a new black image
5) draw the remaining contours on it
6) mask the original image with it
A sketch of these steps follows below.
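A minimal sketch of those steps (hedged: img is assumed to be a binary uint8 image, min_area is a hypothetical threshold, and the findContours return signature shown is the OpenCV 4.x one):

import cv2
import numpy as np

def remove_small_regions(img, min_area):
    # Steps 1-3: find external contours and keep only the large ones.
    contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    big = [c for c in contours if cv2.contourArea(c) >= min_area]
    # Steps 4-5: draw the surviving contours, filled, on a new black image.
    mask = np.zeros_like(img)
    cv2.drawContours(mask, big, -1, 255, cv2.FILLED)
    # Step 6: mask the original image with it.
    return cv2.bitwise_and(img, mask)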
I had the same problem and came up with a function that uses connectedComponentsWithStats():
import cv2

def bwareaopen(img, min_size, connectivity=8):
    """Remove small objects from a binary image (approximation of
    bwareaopen in Matlab for 2D images).

    Args:
        img: a binary image (dtype=uint8) to remove small objects from
        min_size: minimum size (in pixels) for an object to remain in the image
        connectivity: pixel connectivity; either 4 (connected via edges) or
            8 (connected via edges and corners)

    Returns:
        the binary image with small objects removed
    """
    # Find all connected components (called here "labels")
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
        img, connectivity=connectivity)

    # Check the size of all connected components (area in pixels)
    for i in range(num_labels):
        label_size = stats[i, cv2.CC_STAT_AREA]

        # Remove connected components smaller than min_size
        if label_size < min_size:
            img[labels == i] = 0

    return img
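Hypothetical usage, assuming binary_img is a thresholded uint8 image: cleaned = bwareaopen(binary_img, min_size=64).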
For clarification regarding connectedComponentsWithStats(), see:
How to remove small connected objects using OpenCV
https://www.programcreek.com/python/example/89340/cv2.connectedComponentsWithStats
https://python.hotexamples.com/de/examples/cv2/-/connectedComponentsWithStats/python-connectedcomponentswithstats-function-examples.html
The closest OpenCV solution to your question is the morphological closing or opening.
Say you have white regions in your image that you need to remove. You can use morphological opening. Opening is erosion + dilation, in that order. Erosion is when the white regions in your image are shrunk. Dilation is (the opposite) where white regions in your image are enlarged. When you perform an opening operation, your small white region is eroded until it vanishes. Larger white features will not vanish but will be eroded from the boundary. The subsequent dilation step restores their original size. However, since the small element(s) vanished during the erosion step, they will not appear in the final image after dilation.
For example, consider this image where we want to remove the small white regions but retain the 3 large white ellipses. Running the following code removes the small white regions and displays the clean image:
import cv2
import numpy as np

im = cv2.imread('sample.png')
clean = cv2.morphologyEx(im, cv2.MORPH_OPEN, np.ones((10, 10), np.uint8))
cv2.imshow("Clean image", clean)
cv2.waitKey(0)
The output is a clean image with only the large regions remaining.
The command above uses a square block of size 10 as the kernel. You can modify this to suit your requirement. You can even generate a more advanced kernel using the function getStructuringElement().
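For instance, a hedged one-liner swapping in an elliptical kernel of the same size (this specific choice is illustrative, not from the original answer):

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10, 10))
clean = cv2.morphologyEx(im, cv2.MORPH_OPEN, kernel)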
Note that if your image is inverted, i.e., with black noise on a white background, you simply need to use the morphological closing operation (cv2.MORPH_CLOSE method) instead of opening. This reverses the order of operations: the image is first dilated and then eroded.
