gdal_grid linear producing odd results

I have a large point shapefile (XYZ, about 65,000 points) from a LiDAR LAS file and am trying to interpolate it onto a grid using gdal_grid:
gdal_grid -ot Float64 \
  -txe 422306.5970 422343.9970 \
  -tye 4037022.9899 4036967.3399 \
  -outsize 747 1112 \
  -a linear:radius=0:nodata=0 \
  in.shp out.tif
This runs without error and produces a map that looks like the first image below. You'll notice the triangular pattern, as if most of the points were ignored. The values in this odd image are within <1 of what they should be, so gdal_grid appears to be reading the Z field correctly; it just seems to miss most of the points. If I try invdist or average, keeping all else the same, the issue goes away and the grid looks as it should (see the second image, with the same color scale). In this example, the values vary between 1091 and 1093. I tried scaling the Z values in the shapefile to make the variation larger and still saw the same issue. I also tried the -z_multiply and -z_increase options to no avail. Unfortunately I need linear interpolation, so I'm at an impasse. Any ideas?
I get the output below by only changing the interpolation method to invdist:
gdal_grid -ot Float64 \
  -txe 422306.5970 422343.9970 \
  -tye 4037022.9899 4036967.3399 \
  -outsize 747 1112 \
  -a invdist:radius1=1:radius2=1 \
  in.shp out.tif

Sorry to be late, but I faced the same issue and couldn't find any explanation.
Then I noticed that both the OP's dataset and mine had very large coordinate values.
I don't know anything about this specific implementation of the Delaunay algorithm, but my guess is that small changes in X,Y (small compared to the absolute position of each point; I'm not sure what the ratio is) are ignored and lumped under a bigger triangle, thus losing detail and precision.
You can clearly see the difference between an interpolation made with a point cloud in absolute coordinates and one in relative coordinates:
I solved it by subtracting a fixed offset from the original points, interpolating, and then adding the same offset back.
I hope this can help.
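For reference, here is a minimal Python sketch of that workaround using scipy's griddata, which also Delaunay-triangulates the points before linear interpolation; the point arrays below are synthetic stand-ins for the real shapefile data:

import numpy as np
from scipy.interpolate import griddata

# Synthetic stand-ins for the ~65,000 points; in practice these would be
# read from in.shp (e.g. with ogr or fiona).
rng = np.random.default_rng(0)
x = 422306.5970 + rng.uniform(0.0, 37.4, 65000)
y = 4036967.3399 + rng.uniform(0.0, 55.65, 65000)
z = 1091.0 + 2.0 * rng.random(65000)

# Remove a fixed offset so the coordinates become small relative to their spread.
x0, y0 = x.min(), y.min()

# Target grid, expressed in the same shifted frame as the points.
gx, gy = np.meshgrid(np.linspace(422306.5970, 422343.9970, 747) - x0,
                     np.linspace(4036967.3399, 4037022.9899, 1112) - y0)

# Linear (Delaunay) interpolation in the shifted frame. Only X and Y were
# offset, so the interpolated Z values need no correction afterwards.
grid = griddata((x - x0, y - y0), z, (gx, gy), method='linear')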

Related

Convert curved line between two points to straight connection

I have the two points p1 and p2 and the line l (black). The line is made of 100+ internal points arranged in an array starting from p1 and ending in p2.
Now, I would like to convert the curved line to a "straight" line like the red line in the illustration above. I am, however, a little unsure how to do this.
So far, my idea is to iterate over the line with a fixed distance (e.g. take all points from the start and 100 pixels forward), calculate the curvature of that part, and, if it exceeds a threshold, make the straight line change direction, then iterate over the next part, and so on. I'm not sure this would work as intended.
Another idea would be to use a greedy algorithm trying to minimize the distance between the black and red lines. This could, however, result in small steps, which I would like to avoid. The steps might be avoided by making turns costly.
Are there any algorithms about this particular problem, or how would you solve it?
Search for the phrase polygonal chain simplification and you'll see there is quite a literature on this topic.
Here is one reference that could lead you to others:
Buzer, Lilian. "Optimal simplification of polygonal chains for subpixel-accurate rendering." Computational Geometry 42.1 (2009): 45-59.
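As a concrete entry point into that literature, here is a minimal Python sketch of Ramer-Douglas-Peucker, probably the best-known polygonal chain simplification algorithm (note this is the classic method, not the subpixel-accurate one from the Buzer paper):

import numpy as np

def rdp(points, eps):
    # Keep the endpoints; if the point farthest from the p1-p2 chord is
    # farther than eps, recurse on both halves, otherwise replace the
    # whole run with the single segment p1-p2.
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    chord = pts[-1] - pts[0]
    rel = pts - pts[0]
    norm = np.hypot(chord[0], chord[1])
    if norm == 0.0:  # degenerate (closed) run: use distance to p1 instead
        dists = np.hypot(rel[:, 0], rel[:, 1])
    else:            # perpendicular distance of every point to the chord
        dists = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0]) / norm
    i = int(np.argmax(dists))
    if dists[i] > eps:
        return np.vstack([rdp(pts[:i + 1], eps)[:-1], rdp(pts[i:], eps)])
    return np.vstack([pts[0], pts[-1]])

Called as, say, simplified = rdp(line_points, eps=2.0), it returns the corner points of the red line, with eps controlling how closely the result follows the black one.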

Handle "Division by Zero" in Image Processing (or PRNU estimation)

I have the following equation, which I am trying to implement. The upcoming question is not necessarily about this equation, but more generally about how to deal with divisions by zero in image processing:

K = sum_{i=1..d}(W_i * I_i) / sum_{i=1..d}(I_i^2)

Here, I is an image, W is the difference between the image and its denoised version (so W expresses the noise in the image), and K is an estimated fingerprint, obtained from d images of the same camera. All calculations are done pixel-wise, so the equation does not involve any matrix multiplication. For more on the idea of estimating digital fingerprints, consult the corresponding literature, such as the general Wikipedia article or scientific papers.
However, my problem arises when an image has a pixel with value zero, i.e. perfect black (let's say we only have one image, d = 1, so the zero does not happen to be outweighed by a nonzero pixel value from another image). Then I have a division by zero, which is not defined.
How can I overcome this problem? One option I came up with was adding +1 to all pixels right before starting the calculations. However, this shifts the range of pixel values from [0, 255] to [1, 256], which then makes it impossible to keep working with the data type uint8.
Other authors of papers I have read on this topic often do not consider values close to the range borders; for example, they only evaluate the equation for pixel values in [5, 250]. They justify this not with the numerical problem, but by arguing that if an image is totally saturated or totally black, the fingerprint cannot be estimated properly in that area anyway.
But again, my main concern is not how this algorithm performs best, but rather the general question: how do you deal with divisions by zero in image processing?
One solution is to use subtraction instead of division; however, subtraction is not scale-invariant but translation-invariant.
[E.g. the ratio will always be a normalized value between 0 and 1, and if it exceeds 1 you can invert it; you can get the same normalization with subtraction, but you need to find the maximum values attained by the variables.]
Eventually you will have to deal with division. Dividing a black image by itself is a legitimate case - you can translate the values to some other range and then transform them back.
However, 5/8 is not the same as 55/58, so you can only interpret such shifted ratios in a relative way. If you want the exact ratios, you are better off sticking with the original interval and handling the zeros as special cases: e.g. if denom == 0, do something with it; if num == 0 and denom == 0, then 0/0 means we have an identity - it is exactly as if we had 1/1.
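A minimal numpy sketch of that special-case handling (the array names and the choice of NaN for x/0 are illustrative assumptions, not part of the answer):

import numpy as np

num = np.random.rand(4, 4)
den = np.random.randint(0, 3, (4, 4)).astype(float)

# Do the division with the zero warnings silenced, then patch the special
# cases afterwards instead of shifting the whole value range.
with np.errstate(divide='ignore', invalid='ignore'):
    ratio = num / den

ratio[(num == 0) & (den == 0)] = 1.0     # 0/0: treat as identity, like 1/1
ratio[(num != 0) & (den == 0)] = np.nan  # x/0: undefined, handle downstream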
In PRNU and fingerprint estimation, if you check the MATLAB implementation on Jessica Fridrich's webpage, they basically create a mask to get rid of saturated and low-intensity pixels, as you mentioned. Then they convert the image matrix with single(I), which makes the image 32-bit floating point, add 1 to the image, and divide.
As for your general question: in image processing, I like to create a mask and add one only to the zero-valued pixels.
img = imread('my gray img');
a_mat = rand(size(img));
mask = uint8(img == 0);              % 1 where the pixel is zero, 0 elsewhere
div = a_mat ./ double(img + mask);   % element-wise ./ on doubles, so no
                                     % matrix division or integer saturation
This will prevent the division-by-zero error. (Not tested, but it should work.)

How to deal with arbitrary size for Laplacian Pyramid?

Recently I had a lot of fun with the Laplacian pyramid algorithm (http://persci.mit.edu/pub_pdfs/pyramid83.pdf). But one big problem is that the original paper is limited to images of size (2^m+1) x (2^n+1). My question is: what is the best way to deal with arbitrary w x h instead? I can think of a couple of options:
Upsample the input to the next (2^m+1) x (2^n+1) up front.
Pad even lines. How exactly? Wouldn't it shift the signal?
Shift even lines by half a sample? Wouldn't it lose half a sample?
Does anybody have experience with this? What is the most practical and efficient approach? Also, any pointers to papers dealing with this would be very welcome.
One approach is to create an image with a width and height equal to the next (2^m+1) x (2^n+1), but instead of up-sampling the image to fill the expanded dimensions, just place it in the top-left corner and fill the empty space to the right and below with a constant value (the average value of the image is a good choice for this). Then encode in the normal way, storing the original image dimensions along with the pyramid. When decoding, decode and then crop to the original size.
This won't introduce any visual artifacts or degradation because you aren't stretching or offsetting the image in any way.
Because the empty space to the right and below the original image is a constant value, the high-pass bands at each level in the image pyramid will be all zero in this area. So if you are using a compression scheme like run-length encoding to store each level, this will be taken care of automatically and these areas will compress to almost nothing. If not, you can simply store the top-left (potentially non-zero) area of each level and fill out the rest with zeros when decoding.
You could find the min and max x and y bounding rectangle of the non-zero values for each level and store this along with the level, cropped to include only non-zero values. The decoder could also be optimized so that areas of the image that are going to be cropped away are not actually decoded in the first place, by only processing the top-left of each level.
Here's an illustration of the technique:
Instead of just filling the lower-right area with a flat color, you could fill it with horizontally and vertically mirrored copies of the image to the right and below, and a copy mirrored in both directions to the bottom-right, like this:
This will avoid the discontinuities of the first technique, although there will be a discontinuity in dx (e.g. if the value was gradually increasing from left to right it will suddenly be decreasing). Choosing a mirror that keeps dx constant and ddx zero will avoid this second-order discontinuity by linearly extrapolating the values.
Another technique, which is similar to what some JPEG encoders do to pad out an image to a whole number of MCU blocks, is to take the last pixel value of each row and repeat it, and likewise for columns, with the bottom-right-most pixel of the image used to fill the bottom-right area:
This last technique could easily be modified to extrapolate the gradient of values or even the gradient of gradients instead of just repeating the same value for the remainder of the row or column.
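Here is a minimal numpy sketch of the three fill strategies for a single-channel image; the function name and the power-of-two rounding are my own choices:

import numpy as np

def pad_for_pyramid(img, mode='edge'):
    # Pad an arbitrary h x w image up to (2^m + 1) x (2^n + 1).
    # mode='constant' (filled with the mean) -> the flat-fill technique,
    # mode='symmetric'                       -> the mirrored-copies technique,
    # mode='edge'                            -> the JPEG-style repeat-last-pixel technique.
    h, w = img.shape
    H = 2 ** int(np.ceil(np.log2(max(h - 1, 1)))) + 1
    W = 2 ** int(np.ceil(np.log2(max(w - 1, 1)))) + 1
    pad = ((0, H - h), (0, W - w))
    if mode == 'constant':
        return np.pad(img, pad, mode='constant', constant_values=img.mean()), (h, w)
    return np.pad(img, pad, mode=mode), (h, w)

padded, (h, w) = pad_for_pyramid(np.zeros((480, 640)))
# ... build and collapse the pyramid on `padded`, then crop: result[:h, :w]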

Detecting a small curve

Suppose you have a contour made of lines, arcs, etc. It can be of any size, from 1e-6 to 1e+6. How can I detect tiny useless curves inside it? At the moment we are taking the diagonal of the contour's bounding rectangle * 1e-9 as the threshold, and for very distorted contours (where the width is, for example, many times bigger than the height) it fails.
Does any scientific approach exist to eliminate these tiny useless curves?
Thanks.
By the phrasing of your question, I assume your problem is using floating point for geometry. It is a common mistake. Use integers instead, and it will become very clear at what point a curve is really a line, or when two points are equal. You need to normalize all your data and work with a fixed precision from there.
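A minimal sketch of that normalization, assuming the contour is given as an array of 2-D points; the function name and the 31-bit grid are my own choices:

import numpy as np

def quantize(points, lo, hi, bits=31):
    # Snap float coordinates onto a fixed integer grid spanning [lo, hi].
    # After snapping, tiny curves collapse into repeated points, which can
    # then be dropped by removing consecutive duplicates.
    scale = (2 ** bits - 1) / (hi - lo)
    q = np.round((np.asarray(points, dtype=float) - lo) * scale).astype(np.int64)
    keep = np.ones(len(q), dtype=bool)
    keep[1:] = np.any(q[1:] != q[:-1], axis=1)  # drop consecutive duplicates
    return q[keep]

pts = np.array([[0.0, 0.0], [1e-6, 2e-6], [1e6, 1e6]])  # spans the full range
print(quantize(pts, lo=0.0, hi=1e6))  # the 1e-6-scale wiggle collapses away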

Scaling connected lines

I have some kind of shape consisting of vertical, horizontal and diagonal lines. For each line I have the starting X,Y and ending X,Y (this is my input - just 2 points defining a line), and I would like to make the whole shape scalable (just by changing the value of a scale-ratio variable) while preserving the connections between the lines as well as the proportions. To give a better idea of what I mean: it would be as if I had the same lines in a vector editor.
Would that be possible with an algorithm, and if there is no such algorithm, could you please suggest another possible solution?
Thank you very much in advance!
What point do you want it to scale about? You could scale relative to the first point, the center, or some other arbitrary location. Typically, you subtract out an offset (for instance the first point in your input), multiply by a scale factor, and then add back the offset.
A more systematic approach in computer graphics would be to use a transformation matrix, although that's probably overkill in your case.
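A minimal sketch of that subtract-scale-add recipe, assuming the lines are stored as endpoint pairs (the names are mine):

import numpy as np

def scale_about(points, ratio, origin):
    # Scale 2-D points about `origin`: subtract the offset, multiply by the
    # scale ratio, add the offset back. Shared endpoints stay shared, so the
    # connections between the lines are preserved.
    pts = np.asarray(points, dtype=float)
    return (pts - np.asarray(origin)) * ratio + np.asarray(origin)

# Each line is [[x1, y1], [x2, y2]]; scale the whole shape by 1.5 about its centroid.
lines = np.array([[[0, 0], [10, 0]], [[10, 0], [10, 5]]], dtype=float)
center = lines.reshape(-1, 2).mean(axis=0)
scaled = scale_about(lines, 1.5, center)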
