Openlayers 3 - What does resolution represent as a style function parameter? - scale

I am trying to use the resolution passed into the StyleFunction to work out the size of my image icons. In tests, at a zoom where the scale line shows 100m, the resolution reported to the styling function is 2.3886.
I've taken screenshots of the scale line and measured its length in pixels. A 100m scale line is 68 pixels, or 1.4705 metres per pixel.
1.4705 !== 2.3886, so what is the resolution unit? The API documentation does not explain it and just says it is a number, but without an idea of the units it is difficult to work out.
This is to accurately scale the icon to a real-world length, BTW.
Using this jsfiddle.net/dz9gL0g0/ I find that a 200m scale line reports 2.38, but 100m returns less (1.19). Is the resolution I'm getting from the previous zoom level? Whether I use the resolution passed in or call the getResolution function directly, a 100m scale line always returns 2.83 for me, not the 1.19 I think it should, although 1.19 * 84 is mostly correct (the scale line is bigger in the example than in my app, which gives me a 68 pixel scale line for 100m).
Moving the window alters the resolution - resize the jsfiddle and the resolution changes. My 100m scale line still fits the geographical feature I use for testing, but now reports 2.38.

The resolution is the size of 1 pixel in map units. Say your map projection is EPSG:3857, whose unit is meters: if the current resolution is 20000 m/px, then 1 px represents 20000 m.
Maybe it also helps to take a look at this example.
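As a quick illustration of how that can be used for icon sizing (just the arithmetic, not a specific OpenLayers API; the numbers are taken from the question above):
# resolution is in projection units (metres here) per screen pixel
resolution = 1.4705        # metres per pixel, from the scale-line measurement above
real_length_m = 100.0      # real-world length the icon should cover
length_px = real_length_m / resolution
print(length_px)           # ~68 px, the measured width of the 100 m scale line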

After posting a bug report on GitHub (https://github.com/openlayers/ol3/issues/3770) it was suggested that ol.proj.Projection.getPointResolution() be used to obtain the correct resolution for the projection (which I had assumed was being passed into the styling function at run-time - assumptions, eh?).
var view = map.getView();
var coords = view.getCenter();           // reference point: the current view centre
var resolution = view.getResolution();   // nominal resolution in projection units per pixel
var projection = view.getProjection();
var resolutionAtCoords = projection.getPointResolution(resolution, coords);  // true ground resolution at that point
This results in a much closer 1.50XXXX resolution being returned, which is dependent on the latitude. It remains to be seen whether my tiled map source is scaled appropriately to the projection, but it means my pixels-per-metre figure is now correct to within a margin of error.
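For a Web Mercator (EPSG:3857) view, the point resolution is approximately the nominal resolution multiplied by the cosine of the latitude, which is why the returned value depends on where the view centre is. A rough sanity check (the latitude is an assumption, chosen so the numbers from this thread roughly line up):
import math
nominal = 2.3886                                   # m/px from view.getResolution()
latitude_deg = 51.0                                # assumed latitude of the view centre
point_resolution = nominal * math.cos(math.radians(latitude_deg))
print(point_resolution)                            # ~1.50 m/px, close to getPointResolution()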

AFAIK the resolution is in meters per pixel. Here is a feature from my application that is styled as an ol.style.Circle with radius 3 / resolution.
Is your scale line working correctly? Using a non-standard projection I had to use ol.proj.addProjection(...) and ol.proj.addCoordinateTransforms(...) in addition to just defining the projection via proj4.defs(...) to get it working.

Related

How to change the dimensions of a JPG image in R?

I always resized images in CorelDraw from 1219,20 x 914,40 mm to 78.36 x 51.00 to make photo boards. But now it turns out I have a lot of images in different folders, and I needed to create an automated loop to do this for me. If I were to do this in Corel like I used to, it would take a lot of time.
I have used the resize function from the magick package, but did not have any success.
resize(Image, "78.36x51.00!")
Error in resize("78,36x51,00!") :
Not compatible with requested type: [type=character; target=integer].
So I also tried the image_scale function; in this case the dimensions changed, but the size and resolution of the image were much smaller than expected.
image_scale(Image, "78.36x51.00!")
Demonstration with the generated image after the resize (photo) and the expected size (white square)
Documentation indicates that magick's scaling functions are based on pixels. Scaling based on X and Y dimensions expressed as pixels is shown below.
I'm not sure if this directly addresses the issue because the units in the question seem to change and dots per inch (dpi) is not defined. 1219,20 x 914,40 is in mm, but the units for 78.36 x 51.00 are undefined. The first pair of numbers has an aspect ratio of 1.3 while the second is 1.5.
Scaling by X or Y in pixels will retain the original aspect ratio. Getting the right size involves knowing the desired dpi.
install.packages("magic")
library(magick)
Image <- image_read("https://i.stack.imgur.com/42fvN.png")
print(Image)
# Increase 300 in X dimension
Image_300x <- image_scale(Image, "300")
print(Image_300x)
# Increase 300 in y
Image_300y <- image_scale(Image, "x300")
print(Image_300y)
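For completeness, the millimetre-to-pixel arithmetic behind that dpi remark looks like the sketch below (shown in Python purely for illustration; the 300 dpi value is an assumption, pick whatever your photo boards need):
dpi = 300                                           # assumed target print resolution
mm_per_inch = 25.4
width_mm, height_mm = 78.36, 51.00
width_px = round(width_mm / mm_per_inch * dpi)      # ~926 px
height_px = round(height_mm / mm_per_inch * dpi)    # ~602 px
print(f"{width_px}x{height_px}!")                   # geometry string usable with image_scale()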

How to get Bokeh to scale scatter plot size according to zoom

Some of the folks on my team, including myself, find it pretty disorienting that in a Bokeh scatter plot, say using the circle method, we can dial in a reasonable glyph size for the initial autoscaled fit of the data, using for example something like plot.circle(x, y, size=3).
However, when we interactively zoom into our data, the glyph sizes as displayed are invariant to the zoom. Is there a way to have them scale proportionally to the zoom we've dialed into? Something akin to a vector-graphics interaction (e.g. SVG). If memory serves me right, MATLAB and matplotlib figures maintain this zoom-proportional behavior. To demonstrate the behavior we're seeing, consider the first image and the red box I approximately zoom into on the second image.
Just as a quick demo using Powerpoint to illustrate the sort of desired behavior...
For circles, set the radius kwarg instead of the size value. (There are similar glyph-specific values for the other glyph types.)
i.e.:
plot.circle(x=[1,2,3], y=[1,2,3], radius=0.5)
size is always rendered in screen coordinates (pixels), but radius and the related properties are computed in data coordinates and should change in magnitude with zooming.
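A minimal sketch of the difference (the data is made up; the point is only the size vs. radius keyword):
from bokeh.plotting import figure, show
p = figure(match_aspect=True)
# size is in screen pixels: these circles look the same at every zoom level
p.circle([1, 2, 3], [1, 2, 3], size=10, color="navy", legend_label="size=10 px")
# radius is in data units: these circles grow and shrink as you zoom
p.circle([1, 2, 3], [3, 2, 1], radius=0.2, color="firebrick", legend_label="radius=0.2")
show(p)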
Here's a good demo by Bryan Van de Ven showing the difference between pixel coordinates (size) and data coordinates (radius) given in this conference talk:
Intro to Data Visualization with Bokeh - Part 2 - Strata Hadoop San Jose 2016
... the point is all of these attributes can be vectorized. We could for instance say size equals, you know, 2, 4, 6, 8, 10, and now the size is modulated, right. So we have one that has size 2 and one that has size 4. Size is usually in pixels, radius is usually in data-dimension units. But all the other ones here as well, all the colors, all the visual attributes, can be vectorized in this way. You can either give them a single value, as we've done for instance with the line fill color, or you can give them a vector of values, in which case all of the things are different.
So the next exercise, here you go to this notebook, this is that second notebook "02 - plotting", it is to try to create the same example but now set the radius instead of the size, and sort of see what's the difference if you set radius instead of size.

Dicom - normalization and standardization

I am new to the field of medical imaging and am trying to solve this (potentially basic) problem. For a machine learning purpose, I am trying to standardize and normalize a library of DICOM images, to ensure that all images have the same rotation and are at the same scale (e.g. in mm). I have been playing around with the Mango viewer, and understand that one can create transformation matrices that might be helpful in this regard. I have, however, the following basic questions:
I would have thought that a scaling of the image would have changed the pixel spacing in the image header. Does this tag not provide the distance between pixels, and should this not change as a result of scaling?
What is the easiest way to standardize a library of images (ideally in Python)? Is it possible, and should one, extract a mean pixel spacing across all images and then scale all images to match that mean? Or is there a smarter way to ensure consistency in scaling and rotation?
Many thanks in advance, W
Does this tag not provide the distance between pixels, and should this not change as a result of scaling?
Think of the image voxels as fixed units of space which are sampling your image. When you apply your transform, you are translating/rotating/scaling your image around within these fixed units of space. That is, the size and shape of the voxels don't change; they just sample different parts of your image.
You can resample your image by making your voxels bigger or smaller or changing their shape (pixel spacing), but this can be independent of the transform you are applying to the image.
What is the easiest way to standardize a library of images (ideally in Python)?
One option is FSL-FLIRT, although it only accepts data in NIFTI format, so you'd have to convert your DICOMs to NIFTI. There is also this Python interface to FSL.
Is it possible, and should one, extract a mean pixel spacing across all images and then scale all images to match that mean? Or is there a smarter way to ensure consistency in scaling and rotation?
I think you'd just have to pick a reference image to register all your other images to. There's no right answer: picking the highest-resolution image/voxel dimensions, or an average, or resampling into some other set of dimensions all sound reasonable.
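If all you need is a common pixel spacing (rather than full registration), a minimal sketch with pydicom and SciPy could look like this; the file name and the 1 mm target spacing are assumptions:
import pydicom
import numpy as np
from scipy.ndimage import zoom
target_spacing = (1.0, 1.0)                      # desired row/column spacing in mm (assumed)
ds = pydicom.dcmread("image.dcm")                # hypothetical input file
spacing = [float(v) for v in ds.PixelSpacing]    # (0028,0030) Pixel Spacing, mm between pixel centres
# zoom factors > 1 upsample, < 1 downsample, keeping the physical extent the same
factors = (spacing[0] / target_spacing[0], spacing[1] / target_spacing[1])
resampled = zoom(ds.pixel_array.astype(np.float32), factors, order=1)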

How to resize an existing point cloud file?

I am trying to enlarge a point cloud data set. Suppose I have a point cloud data set consisting of 100 points and I want to enlarge it, say, 5 times. Actually I am studying a specific structure which is very small, so I want to zoom in and do some computations. I want something like imresize() in Matlab.
Is there any function to do this? What does the resize() function do in PCL? Any idea how I can do it?
Why would you need this? Points are just numbers, regardless of whether there are 1 or 100 of them, as long as all of them are on the same scale and in the same coordinate system. Their size on the screen is just a visual representation; you can zoom in and out as you wish.
You want them to be a thousandth of their original value (e.g. a millimetres -> metres change)? Divide them by 1000.
You want them spread out in a 5 times larger space in that particular coordinate system? Multiply their coordinates by 5. But even so, their visual representation will look exactly the same on the screen. The data remains basically the same; the points are not resized per se, their numeric representation just changes a bit. It is the simplest affine transform, just a single multiplication.
You want a finer or coarser resolution of your numeric representation, or a different range? Change your data type accordingly.
That is, if you deal with a single set.
If you deal with different sets, say, recorded with different kinds of sensors, where the numeric representations differ a bit (there are angles between the coordinate systems, mm vs cm scale, etc.), you just have to find the transformation from one coordinate system to the other and apply it to the first one.
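A minimal sketch of that single multiplication on a plain array of XYZ coordinates (NumPy here for brevity; with PCL you could build an Eigen scaling transform and call pcl::transformPointCloud instead):
import numpy as np
points = np.loadtxt("cloud.xyz")            # hypothetical N x 3 array of XYZ coordinates
scaled = points * 5.0                       # spread the cloud out 5x around the origin
# to scale about the cloud's own centroid instead of the origin:
centroid = points.mean(axis=0)
scaled_centered = (points - centroid) * 5.0 + centroid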
Since you want to increase the number of points while preserving shape/structure of the cloud, I think you want to do something like 'upsampling'.
Here is another SO question on this.
The PCL offers a class for bilateral upsampling.
And as always google gives you a lot of hints on this topic.
Besides increasing allocated memory (what Ziker mentioned; that's not what you want, right?) or zooming in in the visualization, you could just rescale your point cloud.
This can be done by multiplying each point's coordinates by a constant factor or by applying an affine transformation. So you can, e.g., switch from mm to m.
If I understand your question correctly:
If you have defined your cloud like this
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
then you can in fact call resize
cloud->points.resize (cloud->width * cloud->height);
Note that resize does nothing more than allocate more memory for the variable, so after resizing the original data remain in the cloud. If you want an empty resized cloud, don't forget to call cloud->clear();
If you just want to zoom into some PCD for visual purposes (i.e. you can't see the shape of the cloud because it's too small), why don't you use PCL Visualization and zoom by scrolling up/down?

PyGame: Load tile image information just before blitting?

My project is a large map that can be panned around, containing "info spots" that can be clicked. For now I use four large images, each spanning 5000x5000 pixels (so the total map size is 20'000x20'000 pixels). On my AMD Phenom 9950 Quad-Core with 8GB RAM and an NVIDIA GeForce 610 this takes quite a while to load, although panning is quite fast afterwards. I tried tiling it up, but there was no visible improvement in loading speed, as the image still has to be loaded completely before it's separated into tiles.
The only way to have some real improvement on speed and memory usage would be, to only load those parts of the map image that are actually shown.
Does PyGame offer any way of doing so? I'm thinking of a "theoretical" tile map which contains the needed x and y values of each tile (I group them a little, so there is less to compute each frame) and theoretical image information (like: which image and which position therein). Only when a tile comes near the visible part of the screen is its image information loaded; otherwise it remains a number and a string value.
Would this make any sense? Is there any way to achieve this?
The only way to accomplish this with Pygame would be to break the images themselves into smaller squares (say 250x250), and then, as the user pans, just get the current topleft x,y coordinates as well as the screen size, and load any tiles that fit into that screen or around the buffer edge into memory, clearing out any others that are outside that range. The math will be fairly straightforward unless you add support for rotation and/or zooming.
I would name the tiles after their location as a multiple of the square size (for example the tile at 500, 500 would be named 2-2.png). This makes it trivial to generate the tile name that you need to load at each location: take the current x/y coordinates, integer-divide by 250, subtract the buffer tile amount, and then loop by your screen width integer-divided by 250, plus 1, plus the buffer tile amount for each row. Do that loop for each column.
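A rough sketch of that bookkeeping (the 250-pixel tile size, the buffer of one tile, and the col-row.png naming are the assumptions from the paragraph above, not a fixed Pygame API):
import pygame
TILE = 250      # tile edge length in pixels
BUFFER = 1      # extra ring of tiles kept loaded around the visible area
def visible_tiles(cam_x, cam_y, screen_w, screen_h):
    """Return the (col, row) indices of tiles that should currently be in memory."""
    first_col = cam_x // TILE - BUFFER
    first_row = cam_y // TILE - BUFFER
    last_col = (cam_x + screen_w) // TILE + BUFFER
    last_row = (cam_y + screen_h) // TILE + BUFFER
    # indices outside the map should be clamped or skipped before loading
    return [(c, r) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]
tile_cache = {}
def get_tile(col, row):
    # load lazily and cache; call after pygame.display.set_mode() so convert() works
    key = (col, row)
    if key not in tile_cache:
        tile_cache[key] = pygame.image.load("%d-%d.png" % (col, row)).convert()
    return tile_cache[key]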
After reading lukevp's reply, I was interested and tried this:
http://imgur.com/Q1N2UtU
Go get this image and create a folder named 'test_data'. Now place this image and the code outside of the test_data folder and run it. The output will be cropped images named in order (it's a bit off at the edges, as the image is 1920 x 1080). You can try it with your own sizes, though. Also note that I am on Ubuntu, so adjust the paths as appropriate.
OUTPUT: http://imgur.com/2v4ucGI Final link: http://imgur.com/a/GHc9l
import pygame, os
pygame.init()
original_image = pygame.image.load('test_pic.jpg')
x_max = 1920
y_max = 1080
current_x = 0
current_y = 0
count = 1
begin_surf = pygame.Surface((x_max, y_max), flags=pygame.SRCALPHA)
begin_surf.blit(original_image, (0, 0))
cropped_surf = pygame.Surface((100, 100), flags=pygame.SRCALPHA)
# walk the image in 100x100 steps and save each crop as its own numbered tile
while current_y + 100 < y_max:
    while current_x + 100 < x_max:
        cropped_surf.blit(begin_surf, (0, 0), (current_x, current_y, 100, 100))
        pygame.image.save(cropped_surf, os.path.join("test_data", str(count) + '.jpg'))
        current_x += 100
        count += 1
    current_x = 0
    current_y += 100
The next step would be to actually load those tiles and span them across the map as lukevp described.
