magenta.js Visualizer() renders blurry notes - css

I've noticed that whenever I use magenta.js's built-in Visualizer method, it renders the notes ever so slightly blurry (perhaps an anti-aliasing issue?). I've attached an image:
I can see this with varying intensity across many of the documentation's examples as well, such as https://piano-scribe.glitch.me/. Is there a way I can get sharp edges, or at least minimize the blurriness? I'm not sure whether this issue has already been addressed or is suitable for the Magenta GitHub, so I'm posting here.
Edit: with image-rendering: pixelated on the canvas element, zoomed in.

This is a bug (if you can call it that) in magenta-js's visualizer. Taking a look at the redraw method in their source reveals that the x position and w(idth) of each note are determined by the following lines.
const x = (this.getNoteStartTime(note) * this.config.pixelsPerTimeStep) +
    offset;
const w = (this.getNoteEndTime(note) - this.getNoteStartTime(note)) *
    this.config.pixelsPerTimeStep;
Now, when drawing on a canvas, if you don't draw at integer coordinates, the browser will interpolate and draw a close approximation, resulting in the miscolored pixels you noticed.
All that's left to do is confirm that x and/or w are not integers. I loaded the demo page, opened the relevant JS file in the Sources tab, searched for this line and set a breakpoint.
Sure enough: x = 13.8 and w = 15.35999. I've submitted magenta-js#238 with a fix.
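In the meantime, a minimal sketch of a workaround (this is illustrative, not the actual patch in the PR) is to snap those values to whole pixels before the rect is drawn:
// Hedged sketch: round the computed position and width so the canvas never has
// to interpolate between pixels. Identifiers are taken from the snippet above.
const x = Math.round(
    (this.getNoteStartTime(note) * this.config.pixelsPerTimeStep) + offset);
const w = Math.round(
    (this.getNoteEndTime(note) - this.getNoteStartTime(note)) *
    this.config.pixelsPerTimeStep);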

Related

Why are the transparent pixels not blending correctly in WebGL

Result of my code:
Basically, the issue is that the transparent parts of my image are not blending correctly with what is drawn before them. I know I can do a
if (alpha <= 0.0) { discard; }
in the fragment shader; the only issue is that I plan on having a ton of fragments, and I don't want that branch running for every fragment on mobile devices.
Here is my code related to alpha, and depth testing:
var gl = canvas.getContext("webgl2", {
    antialias: false,
    alpha: false,
    premultipliedAlpha: false,
});
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.GREATER);
Also, these are textured gl.POINTS I am drawing. If I change the order in which the two images are drawn in the buffer, the problem doesn't exist, but they will be dynamically rotating during the program's runtime, so this is not an option.
It's not clear what your issue is without more code, but it looks like a depth test issue.
Assuming I understand correctly, you're drawing 2 rectangles? If you draw the red one before the blue one, then depending on how you have the depth test set up, the blue one will fail the depth test where the X area is drawn.
You generally solve this by sorting what you draw, making sure to draw things further away first.
For a grid of "tiles" you can generally sort by walking the grid itself in the correct direction instead of actually "sorting".
On the other hand, if all of your transparency is either fully opaque or fully transparent (draw or don't draw), then discard has its advantages and you can draw front to back. In that case, a pixel drawn (not discarded) by the red quad will cause the corresponding pixel of the blue quad to be rejected by the depth test. The depth test is usually optimized to happen before the fragment shader runs for a given pixel; if the depth test says the pixel will not be drawn, there is no reason to run the fragment shader at all, so time is saved. Unfortunately, as soon as you have any transparency that is not 100% opaque or 100% transparent, you need to sort and draw back to front. Some of these issues are covered in this article.
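As a rough illustration of the back-to-front case (sprites, distanceFromCamera and drawSprite are placeholder names, not part of your code):
// Sketch: sort blended sprites by depth and draw the furthest ones first,
// so each sprite blends over everything already drawn behind it.
sprites.sort((a, b) => b.distanceFromCamera - a.distanceFromCamera); // furthest first
for (const sprite of sprites) {
    drawSprite(gl, sprite); // placeholder for whatever issues your draw call
}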
A few notes:
you mentioned mobile devices, and your code sample uses WebGL2. There is no WebGL2 on iOS.
you said you're drawing with POINTS. The spec only requires POINTS of 1 pixel in size to be supported. It looks like you're safe up to points of size 60, but to be safe it's generally best to draw with triangles, as there are other issues with points.
you might also be interested in sprites with depth

Drawing a grid efficiently with EaselJS StageGL

I would like to draw a grid on a canvas using EaselJS. I am using the new WebGL stage, StageGL.
A grid is basically N horizontal lines and M vertical lines.
I see multiple options:
Draw N+M lines as all different shapes (I am talking about EaselJS "Shape" instances), cache them (as WebGL needs rasters) and add them to the stage.
Draw 1 horizontal and 1 vertical line, cache them (as WebGL needs rasters) and somehow draw the same image in the stage
Draw a single shape which consists of N+M paths, cache it and add it to the stage.
Option #1 seems naive to me. They're all the same image, so why draw them to the cache N+M times?
Option #2 would solve the problem in option #1, but I don't know how to do it.
Option #3 results in a very large image. For N=50, M=50 and gridSpacing=50px, it would result in a 2500x2500 px image. I don't know if this is ideal.
Which one is the best approach?
Are there any other approaches? I don't think I am the first person who draws a grid :)
You can pretty easily cache a shape, and use the resulting cache (canvas) as the source for a Bitmap.
var shape = new createjs.Shape();
shape.graphics.drawStuff();
// Since shapes have no bounds, you will have to know the bounds based on what you draw:
shape.cache(x, y, w, h);
var bmp = new createjs.Bitmap(shape.cacheCanvas);
You can draw as many of these Bitmaps as you like without any additional cost, since it's the same source canvas/image. EaselJS StageGL (latest NEXT, hopefully released shortly) renders this in WebGL no problem.
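For example, here is a rough sketch of option #2 for the vertical lines (gridSpacing, numCols, gridHeight and stage are assumed names, not from the original answer):
// Cache one vertical line once, then reuse its cacheCanvas for every column.
var lineShape = new createjs.Shape();
lineShape.graphics.setStrokeStyle(1).beginStroke("#888").moveTo(0.5, 0).lineTo(0.5, gridHeight);
lineShape.cache(0, 0, 1, gridHeight); // bounds must cover what was drawn

for (var i = 0; i < numCols; i++) {
    var bmp = new createjs.Bitmap(lineShape.cacheCanvas); // same source canvas every time
    bmp.x = i * gridSpacing;
    stage.addChild(bmp);
}
stage.update();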
Check out the SpriteSheetBuilder demo and docs in GitHub to draw content to a SpriteSheet/Sprite instead of a Bitmap.
Cheers.

QPainter::drawImage prints different size than QImage::save and print from Photoshop

I'm scaling a QImage, currently like so (I understand there may be more elegant ways):
img.setDotsPerMeterX(img.dotsPerMeterX() * 2);
img.setDotsPerMeterY(img.dotsPerMeterY() * 2);
When I save:
img.save("c:\\users\\me\\desktop\\test.jpg");
and subsequently open and print the image from Photoshop, it is, as expected, half of the physical size of the same image without the scaling applied.
However, when I simply print the scaled QImage, directly from code:
myQPainter.drawImage(0,0,img);
the image prints at the original physical size - not scaled to half the physical size.
I'm using the same printer in each case; and, as far as I can tell, the settings are consistent between both print cases.
Am I misunderstanding something? The end goal is to successfully scale and print the scaled image directly from code.
If we look at the documentation for setDotsPerMeterX, it states:
Together with dotsPerMeterY(), this number defines the intended scale and aspect ratio of the image, and determines the scale at which QPainter will draw graphics on the image. It does not change the scale or aspect ratio of the image when it is rendered on other paint devices.
I expect that the reason the latter case prints at the original size is that the image has already been drawn before the calls that set the dots per meter.
In contrast, when saving, it appears that the device you save to copies the dots-per-meter values you have set on the image and then draws to that device.
I would expect that creating a second QImage, setting its dots per meter, and then copying from the original to that second image would achieve the result you're looking for. Alternatively, you may just be able to set the dots per meter on the original QImage before loading its content.

Antialiasing in Qt's QGraphicsScene makes overlapping lines darker

When using anti-aliasing rendering in Qt's QGraphicsScene, there is a behavior that makes drawings appear not as expected: overlapping lines become darker. I could not see any description of this behavior in the documentation, and I cannot find a way to disable it.
For example if I want to draw such a polygon:
Because of the number of points, it is impossible not to have overlapping lines - fine. But because anti-aliasing is activated, some borders appear 'thicker' than others.
Is there any way to avoid this and have anti-aliased lines that can overlap and yet at the same time be rendered without getting darker?
I know of course that I can redefine the paint() function and draw manually individual lines that do not overlap, but this is what I want to avoid. I am using Pyside and this would significantly slow down the application, due to the high frequency at which paint() is being called.
EDIT: Fixed by defining the object shape using QPainterPath / QGraphicsPathItem instead of QPolygon / QGraphicsPolygonItem. In that case the moveTo function makes it possible to avoid overlapping lines.
Another thing you could try is adding half a pixel to your coordinates (not dimensions). This fixed the anti-aliasing issue for me.
XCoord = int(XValue) + 0.5
YCoord = int(YValue) + 0.5
Also, make sure you are starting from integer pixel values before adding the offset.

Where can I find information on line growing algorithms?

I'm doing some image processing, and I need to find some information on line growing algorithms - not sure if I'm using the right terminology here, so please call me out on it if need be.
Imagine my input image is simply a circle on a black background. I'd basically like to extract the coordinates, so that I may draw this circle elsewhere based on them.
Note: I am already using edge detection image filters, but I thought it best to explain with a simple example.
Basically, what I'm looking to do is detect lines in an image and store the result in a data type whereby I have, say, a class called Line, and various Point objects (containing X/Y coordinates).
class Line
{
Point points[];
}
class Point
{
int X, Y;
}
And this is how I'd like to use it...
Line line;
for each pixel in image
{
if pixel should be added to line
{
add pixel coordinates to line;
}
}
I have no idea how to approach this, as you can probably tell, so pointers to any subject matter would be greatly appreciated.
I'm not sure if I'm interpreting you right, but the standard way is to use a Hough transform. It's a two-step process:
From the given image, determine whether each pixel is an edge pixel (this process creates a new "binary" image). A standard way to do this is Canny edge-detection.
Using the binary image of edge pixels, apply the Hough transform. The basic idea is: for each edge pixel, compute all lines through it, and then take the lines that went through the most edge pixels.
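As a rough sketch of the voting idea in step 2 (edges, width and height are assumed inputs describing the binary edge image; this is illustrative, not a tuned implementation):
// Sketch: line Hough transform. Each edge pixel votes for every (theta, rho)
// line passing through it; peaks in the accumulator are the strongest lines.
function houghLines(edges, width, height, thetaSteps = 180) {
    const maxRho = Math.ceil(Math.hypot(width, height));
    const acc = Array.from({ length: thetaSteps }, () => new Int32Array(2 * maxRho + 1));
    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            if (!edges[y][x]) continue; // only edge pixels vote
            for (let t = 0; t < thetaSteps; t++) {
                const theta = (t * Math.PI) / thetaSteps;
                const rho = Math.round(x * Math.cos(theta) + y * Math.sin(theta));
                acc[t][rho + maxRho]++; // vote for the line (theta, rho)
            }
        }
    }
    return acc; // acc[t][r] = number of edge pixels on that line
}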
Edit: apparently you're looking for the boundary. Here's how you do that.
Recall that the Canny edge detector actually gives you a gradient also (not just the magnitude). So if you pick an edge pixel and follow along (or against) that vector, you'll find the next edge pixel. Keep going until you don't hit an edge pixel anymore, and there's your boundary.
What you are talking about is not an easy problem! I have found that this website is very helpful in image processing: http://homepages.inf.ed.ac.uk/rbf/HIPR2/wksheets.htm
One thing to try is the Hough Transform, which detects shapes in an image. Mind you, it's not easy to figure out.
For edge detection, the best is Canny edge detection, also a non-trivial task to implement.
Assuming the following is true:
Your image contains a single shape on a background
You can determine which pixels are background and which pixels are the shape
You only want to grab the boundary of the outside of the shape (this excludes donut-like shapes where you want to trace the inside circle)
You can use a contour tracing algorithm such as the Moore-neighbour algorithm.
Steps:
Find an initial boundary pixel. To do this, start from the bottom-left corner of the image and travel all the way up; if you reach the top, start over at the bottom, one pixel to the right, and repeat until you find a shape pixel. Make sure you keep track of the location of the pixel you were at just before you found the shape pixel.
Find the next boundary pixel. Travel clockwise around the last visited boundary pixel, starting from the background pixel you last visited before finding the current boundary pixel.
Repeat step 2 until you revisit the first boundary pixel. Once you visit the first boundary pixel a second time, you've traced the entire boundary of the shape and can stop.
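A rough sketch of those three steps (isShape, width and height are assumed inputs describing the binary image; this is illustrative, not a hardened implementation):
// Sketch: Moore-neighbour boundary tracing over a binary image.
function traceBoundary(isShape, width, height) {
    const inside = (x, y) => x >= 0 && x < width && y >= 0 && y < height && isShape(x, y);

    // Step 1: scan each column bottom-to-top, moving right, until a shape pixel is found.
    let start = null;
    let backtrack = null;
    outer:
    for (let x = 0; x < width; x++) {
        for (let y = height - 1; y >= 0; y--) {
            if (inside(x, y)) {
                start = [x, y];
                backtrack = [x, y + 1]; // the pixel visited just before the shape pixel
                break outer;
            }
        }
    }
    if (!start) return []; // no shape pixels at all

    // The Moore neighbourhood in clockwise order (x grows right, y grows down).
    const nbrs = [[-1, 0], [-1, -1], [0, -1], [1, -1], [1, 0], [1, 1], [0, 1], [-1, 1]];
    const idxOf = (from, to) =>
        nbrs.findIndex(([dx, dy]) => from[0] + dx === to[0] && from[1] + dy === to[1]);

    const boundary = [start];
    let current = start;
    let back = backtrack;
    while (true) {
        // Step 2: walk clockwise around `current`, starting from the backtrack pixel.
        const i = idxOf(current, back);
        let next = null;
        for (let k = 1; k <= 8; k++) {
            const j = (i + k) % 8;
            const cand = [current[0] + nbrs[j][0], current[1] + nbrs[j][1]];
            if (inside(cand[0], cand[1])) {
                next = cand;
                back = [current[0] + nbrs[(j + 7) % 8][0], current[1] + nbrs[(j + 7) % 8][1]];
                break;
            }
        }
        if (!next) break; // isolated single pixel
        if (next[0] === start[0] && next[1] === start[1]) break; // Step 3: back at the start
        boundary.push(next);
        current = next;
    }
    return boundary;
}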
You could take a look at http://processing.org/. The project was created to teach the fundamentals of computer programming within a visual context. There is the language, based on Java, and an IDE to make 'sketches' in. It is a very good package for quickly working with visual objects and has good examples of things like edge detection that would be useful to you.
Just to echo the answers above: you want to do edge detection and a Hough transform.
Note that a Hough transform for a circle is slightly tricky (you are solving for 3 parameters: x, y, radius), so you might want to just use a library like OpenCV.
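For instance, a hedged sketch using opencv.js in the browser (this assumes the opencv.js build is loaded, "inputCanvas" is a placeholder canvas id, and the numeric parameters are untuned placeholders):
// Sketch: circle Hough transform via opencv.js.
let src = cv.imread("inputCanvas");
let gray = new cv.Mat();
cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);
let circles = new cv.Mat();
cv.HoughCircles(gray, circles, cv.HOUGH_GRADIENT, 1, 45, 75, 40, 0, 0);
for (let i = 0; i < circles.cols; i++) {
    const x = circles.data32F[i * 3];
    const y = circles.data32F[i * 3 + 1];
    const r = circles.data32F[i * 3 + 2];
    console.log("circle at", x, y, "radius", r);
}
src.delete(); gray.delete(); circles.delete();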
