I want to increase the pixel density per unit area on every zoom operation in a QPixmap.
To increase the pixel density, I create a pixmap on every zoom according to the rectangle obtained from sceneBoundingRect(), but this does not seem to increase the pixel density.
A QPixmap is a raster image, which means a finite number of pixels; making it bigger will not make it clearer (as it does on CSI).
You will need an image with a considerably higher resolution to begin with. Then you downsample it when rendering "un-zoomed", and the more you zoom in, the closer you render it to its original size.
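A minimal sketch of that approach (PyQt5 used here for illustration; the file name and sizes are hypothetical):

from PyQt5.QtCore import Qt
from PyQt5.QtGui import QPixmap

# Hypothetical high-resolution original; never upscale a small pixmap.
source = QPixmap("photo_4000x3000.png")

def pixmap_for_zoom(zoom, view_w, view_h):
    # Re-derive the displayed pixmap from the original on every zoom,
    # so zooming in reveals real detail instead of interpolated pixels.
    return source.scaled(int(view_w * zoom), int(view_h * zoom),
                         Qt.KeepAspectRatio, Qt.SmoothTransformation)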
What is a point? Whereas a pixel is clear enough (it is a physical unit on the screen), the nature of a point is not so explicit.
A point is a measure that equals 1/72 of an inch. The main difference between pointSize and pixelSize is that pointSize is density independent, which means the size is physically fixed whatever screen you use.
High-DPI scaling may or may not affect point size, depending on the setup: https://doc.qt.io/qt-5/highdpi.html.
If I am correct, point size is based on an abstraction layer provided by Qt to do high-DPI scaling.
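As a rough back-of-the-envelope illustration of the relationship (the DPI values below are just examples):

def points_to_pixels(point_size, dpi):
    # A point is 1/72 inch, so pixels = points * dpi / 72.
    return point_size * dpi / 72.0

print(points_to_pixels(12, 96))   # 16.0 px on a 96-DPI screen
print(points_to_pixels(12, 192))  # 32.0 px on a 192-DPI (high-DPI) screen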
Please follow: https://doc.qt.io/qt-6/highdpi.html
Some of the folks on my team, including myself, find it pretty disorienting that in a Bokeh scatter plot, say one using the circle method, we can dial in a reasonable glyph size for an initial autoscale fit of the data on the figure, using for example something like plot.circle(x, y, size=3).
However, when we interactively zoom into our data, the glyph sizes as displayed are invariant to the zoom. Is there a way to have them scale proportionally to the zoom we've dialed into? Something akin to a vector-graphics interaction (e.g. SVG). If memory serves me right, MATLAB and matplotlib figures maintain this zoom proportionality. To demonstrate the behavior we're seeing, consider the first image and the red box I approximately zoom into in the second image.
Just as a quick demo, using PowerPoint to illustrate the sort of desired behavior...
For circles, set the radius kwarg instead of the size value. (There are similar, glyph-specific values for the other glyph types.)
i.e.:
plot.circle(x=[1,2,3], y=[1,2,3], radius=0.5)
size is always rendered in screen coordinates (pixels), but radius and the related properties are computed in data coordinates and should change in magnitude with zooming.
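A minimal side-by-side sketch of the difference (data values are illustrative; uses the same circle API as above):

from bokeh.layouts import row
from bokeh.plotting import figure, show

x, y = [1, 2, 3], [1, 2, 3]

left = figure(title="size=20: screen pixels, zoom-invariant")
left.circle(x, y, size=20)

right = figure(title="radius=0.3: data units, scales with zoom")
right.circle(x, y, radius=0.3)

# Zoom either figure: only the radius-based circles change apparent size.
show(row(left, right))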
Here's a good demo by Bryan Van de Ven, from this conference talk, showing the difference between pixel coordinates (size) and data coordinates (radius):
Intro to Data Visualization with Bokeh - Part 2 - Strata Hadoop San Jose 2016
... the point is, all of these attributes can be vectorized. We could for instance say size equals, you know, 2, 4, 6, 8, 10, and now the size is modulated, right? So we have one that has size 2 and one that has size 4. Size is usually in pixels; radius is usually in data-dimension units. But all the other ones here as well, all the colors, all the visual attributes, can be vectorized in this way. You can either give them a single value, as we've done for instance with the line fill color, or you can give them a vector of values, in which case all of the things are different.
So the next exercise, in that second notebook, "02 - plotting", is to try to create the same example but now set the radius instead of the size, and sort of see what the difference is if you set radius instead of size.
I'm scaling a QImage, currently like so (I understand there may be more elegant ways):
img.setDotsPerMeterX(img.dotsPerMeterX() * 2);
img.setDotsPerMeterY(img.dotsPerMeterY() * 2);
When I save:
img.save("c:\\users\\me\\desktop\\test.jpg");
and subsequently open and print the image from Photoshop, it is, as expected, half of the physical size of the same image without the scaling applied.
However, when I simply print the scaled QImage, directly from code:
myQPainter.drawImage(0,0,img);
the image prints at the original physical size - not scaled to half the physical size.
I'm using the same printer in each case, and as far as I can tell the settings are consistent between both print cases.
Am I misunderstanding something? The end goal is to successfully scale and print the scaled image directly from code.
If we look at the documentation for setDotsPerMeterX, it states:
Together with dotsPerMeterY(), this number defines the intended scale and aspect ratio of the image, and determines the scale at which QPainter will draw graphics on the image. It does not change the scale or aspect ratio of the image when it is rendered on other paint devices.
I expect that the reason the latter case prints at the original size is that the image content was already drawn before the calls that set the dots per meter.
In contrast, when saving, it appears that the dots-per-meter values you set on the image are copied into the saved file, which is why Photoshop honours the scaling when printing.
I would expect that creating a second QImage, setting its dots per meter, and then copying the original into it would achieve the result you're looking for. Alternatively, you may just be able to set the dots per meter before loading the content into the original QImage.
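A sketch of that second-image suggestion (PyQt5 used for illustration; untested, with the doubling factor and path taken from the question):

from PyQt5.QtGui import QImage, QPainter

src = QImage("c:/users/me/desktop/test.jpg")

# Set the dots per meter on the destination *before* drawing into it.
dst = QImage(src.size(), src.format())
dst.setDotsPerMeterX(src.dotsPerMeterX() * 2)
dst.setDotsPerMeterY(src.dotsPerMeterY() * 2)

painter = QPainter(dst)
painter.drawImage(0, 0, src)
painter.end()
# 'dst' now carries the doubled density for the subsequent print.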
What is the maximum field of view that can be achieved via a projection matrix without distortion? There is a hard limit of < 180 degrees before the math completely breaks down, but experimenting with 170-180 degrees leads me to believe that distortion and deviation from reality begin before that hard limit. At what point does the projection matrix begin to distort the view?
EDIT: Maybe some clarification is in order. As I increased the FOV angle toward 180 with a fixed render size, I observed objects getting smaller much faster than they should in reality. With a fixed render size and an identical scene/camera, the diameter of objects should be inversely proportional to the field of view, if I'm not mistaken. Yet I observed them shrinking far faster than that, down to zero size at 180 degrees. This is undoubtedly because the X and Y scaling in a projection matrix is proportional to cot(FOV / 2). What I'm wondering is when exactly this distortion effect begins.
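A quick numeric check of that cot(FOV / 2) term (illustrative values only):

import math

for fov_deg in (60, 90, 120, 150, 170, 179):
    scale = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    print(f"fov={fov_deg:3d}  cot(fov/2)={scale:.4f}")  # -> 0 as fov -> 180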
Short answer: There is no deviation from reality and there is always distortion.
Long answer: Common perspective projection matrices project a 3D scene onto a 2D plane with respect to a camera position. If you consider a fixed distance of the plane from the camera, then the field of view defines the plane's size. Larger angles define larger planes. If you fix the size, then the field of view defines the distance. Larger angles define a smaller distance.
Viewed from the camera, the image does not change whether it sees the original scene or the plane with the projected scene (i.e. there is no deviation from reality).
Problems occur when you look at the plane from a different view point. E.g. when the projected plane is displayed on the screen (fixed size), there is only one position of the camera (your eye) from which the image is realistic. For very large field of view angles, you'll need to be very close to the screen to find that position. All other positions will not result in the correct image. For small field of view angles, the resulting distortion is very small and users will mostly consider it a realistic projection. That's because for small angles, the projected image does not change significantly if you change the distance slightly (changing the distance from 1 meter to 1.1 meters (10%) with a small fov is less problematic than changing the distance from 0.1 meters to 0.2 meters (100%) with a large fov). The most extreme case is an orthographic projection with virtually zero fov. Then, the projection does not depend on the distance at all.
And there is always distortion for objects off the projection axis (i.e. for any FOV greater than zero): spheres do not project to perfect circles. This effect also happens with small FOVs, but there it is less obvious.
I've written some code which lets an arbitrarily sized rectangle collide with grid-based terrain (for a platformer game). The way I do it is something like this (roughly sketched in code after the list):
For each tile the rectangle intersects with, do:
Calculate the primary axis that this tile is on with respect to the rectangle
Calculate the interpenetration of this tile into the rectangle along the primary axis (factoring in previous position offsets from other tiles)
If this tile is solid, add that interpenetration to a total collision resolution vector
Adjust the rectangle's position by the total calculated collision resolution vector
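Approximately, in code (field names are hypothetical; axis-aligned boxes, y growing downward):

def resolution_vector(rect, solid_tiles):
    # Accumulate a total resolution vector over all intersecting tiles.
    total_dx = total_dy = 0.0
    for tile in solid_tiles:
        # Interpenetration depth along each axis.
        dx = min(rect.right, tile.right) - max(rect.left, tile.left)
        dy = min(rect.bottom, tile.bottom) - max(rect.top, tile.top)
        if dx <= 0 or dy <= 0:
            continue  # no intersection with this tile
        if dx < dy:   # primary axis = axis of least interpenetration
            total_dx += -dx if rect.left < tile.left else dx
        else:
            total_dy += -dy if rect.top < tile.top else dy
    return total_dx, total_dy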
This works just fine, except that I run into random "hang-ups": as my rectangle gets pulled into the ground just over the border of two tiles, my code decides that it needs to resolve the collision with the new tile by pushing along the X axis, which stops the rectangle's motion unless it is manually pushed out of the terrain.
I've tried resolving the collision on only one axis at a time (ignoring any X-axis collision resolution if the Y-axis resolution is the largest, and vice versa), but that results in jittering when the rectangle is pressed into a corner (a situation that genuinely needs both axes resolved at once).
In short, what method can I use to fix both of these problems at once?
This is a very hard problem because in some cases it involves an effectively unbounded number of interactions.
To trade between speed and accuracy:
1. Add an interaction counter to every object (rectangle).
2. Before collision detection, reset all counters to zero.
3. Whenever a collision is detected, increment the counters of all objects involved in it.
4. If an object's counter exceeds a limit value, stop computing interactions for that object.
Beware that this approach can still create some hiccups when the limit is forced, but it will not hang up.
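A minimal sketch of that counter idea (names and the limit are hypothetical; find_collisions is assumed to return a list of colliding pairs):

MAX_INTERACTIONS = 8  # tune for your game; higher = more accurate, slower

def resolve_all(rects, find_collisions, resolve_pair):
    counters = {id(r): 0 for r in rects}      # steps 1-2: reset counters
    pairs = find_collisions(rects)
    while pairs:
        a, b = pairs.pop()
        if (counters[id(a)] >= MAX_INTERACTIONS or
                counters[id(b)] >= MAX_INTERACTIONS):
            continue                          # step 4: give up on this object
        resolve_pair(a, b)                    # push the pair apart
        counters[id(a)] += 1                  # step 3: count the interaction
        counters[id(b)] += 1
        pairs = find_collisions(rects)        # re-check after each adjustment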