Mismatched pixel values when using atScale() or reduceResolution() - projection

I am comparing three exports of a Sentinel-1 image: at its native 10 m pixel size, with a modified projection at a lower resolution (20 m), and after applying reduceResolution followed by reprojection. In my mind, the last two exports should give me exactly the same pixel values, namely the average of the 10 m pixel values within each larger output pixel (image_S1 is the input Sentinel-1 image; it can be any image).
var image_S1 = ...
var projection_S1 = image_S1.projection().getInfo()
Export.image.toDrive({
  image: image_S1,
  description: 'S1_10meter',
  crs: projection_S1.crs,
  crsTransform: projection_S1.transform,
  region: geometry
})
var projection_S1_2X = image_S1.projection().atScale(20).getInfo()
Export.image.toDrive({
  image: image_S1,
  description: 'S1_20meter_S1_2xproj',
  crs: projection_S1_2X.crs,
  crsTransform: projection_S1_2X.transform,
  region: geometry
})
var image_S1_RR = image_S1
  .reduceResolution({
    reducer: ee.Reducer.mean(),
    maxPixels: 1024
  })
  .reproject({
    crs: projection_S1_2X.crs,
    crsTransform: projection_S1_2X.transform
  });
Export.image.toDrive({
  image: image_S1_RR,
  description: 'S1_20meter_RR_S1_2xproj',
  crs: projection_S1_2X.crs,
  crsTransform: projection_S1_2X.transform,
  region: geometry
})
However, while the third export gives me exactly the average of the four 10 m pixels within each 20 m x 20 m output pixel, the pixel values of the second export differ slightly from that. The difference is not large, but I do not understand where it comes from, and I do not think they should differ, because the atScale() function should do exactly the same thing as reduceResolution(). Where is the problem?
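One way to reproduce the discrepancy inside the Code Editor (a sketch reusing the variables above; it assumes a single-band image, otherwise select a band first) is to build the 20 m image without reduceResolution, i.e. relying on Earth Engine's default pyramiding and nearest-neighbor resampling instead of an explicit mean, and difference it against image_S1_RR:
// Reproject without aggregating; this follows the same default path as the plain export.
var image_S1_NN = image_S1.reproject({
  crs: projection_S1_2X.crs,
  crsTransform: projection_S1_2X.transform
});
// Pixel-wise difference between the default-resampled and the mean-aggregated 20 m images.
var diff = image_S1_NN.subtract(image_S1_RR).abs();
print('Max absolute difference on the 20 m grid', diff.reduceRegion({
  reducer: ee.Reducer.max(),
  geometry: geometry,
  crs: projection_S1_2X.crs,
  crsTransform: projection_S1_2X.transform,
  maxPixels: 1e9
}));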

Related

Distributing A Set Number Of Points At Random Locations Within Acres On Map

I am trying to distribute a number of points per acre within a square mile using Azure Maps. Currently, I have been looking into the haversine formula, trig, basic division, etc., but I believe I may be overthinking it.
Any ideas?
Let's say I want 3 points per acre within a square mile, in randomized but appropriate lat/lng locations within each acre.
Right now it seems I need to divide up the X and Y by feet or yards and then divide that into the lat/lng to get appropriate locations.
I am a bit ignorant of lat/lng distances. The information I have found says that a degree of latitude, for example, is worth about 69 miles, and is then apparently divided into "seconds," etc. A bit confusing.
Ideas?
A square with sides of about 63.61 meters has an area of one acre. To calculate random points within that square, start with a latitude and longitude coordinate for one corner, calculate the opposite corner's coordinates, then calculate the latitude/longitude widths and use them to compute random offsets from the starting coordinate. For example, take coordinate 45, -110 and assume this is the top-left corner of the square. The opposite corner would be at a heading of 135 degrees, at a distance of sqrt(a^2 + b^2) = sqrt(63.61^2 + 63.61^2) = 89.9581247 meters. Here is code that calculates three random points within that square acre (note that Azure Maps positions are in [longitude, latitude] order).
var lat = 45;
var lon = -110;
var cornerDistance = 89.9581247; // meters, to the opposite corner
var cornerHeading = 135;         // degrees, toward the bottom-right corner

// Bottom-right corner. Azure Maps positions are [longitude, latitude].
var cornerPosition = atlas.math.getDestination([lon, lat], cornerHeading, cornerDistance);

var latWidth = lat - cornerPosition[1]; // Corner is lower.
var lonWidth = cornerPosition[0] - lon; // Corner is to the right.

var randomPositions = [];
for (var i = 0; i < 3; i++) {
    randomPositions.push([lon + Math.random() * lonWidth, lat - Math.random() * latWidth]);
}
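To cover the whole square mile rather than a single acre, one option (a sketch, not from the original answer) is to step an acre-sized grid east and south from a starting corner with the same atlas.math.getDestination call and drop three random points in each cell. The 25-cells-per-side grid only approximates the 640 acres in a square mile, and the starting corner and counts below are illustrative values:
var acreSide = 63.61;       // meters
var cellsPerSide = 25;      // roughly 1 mile / 63.61 m, rounded down
var start = [-110, 45];     // [lon, lat] of the square mile's top-left corner (example)
var pointsPerAcre = 3;
var allPoints = [];

for (var row = 0; row < cellsPerSide; row++) {
    // Top-left corner of the first cell in this row (move south from the start).
    var rowOrigin = atlas.math.getDestination(start, 180, row * acreSide);
    for (var col = 0; col < cellsPerSide; col++) {
        // Top-left corner of this cell (move east along the row).
        var cellOrigin = atlas.math.getDestination(rowOrigin, 90, col * acreSide);
        // Opposite corner of the cell, used to get the lat/lon extents.
        var cellCorner = atlas.math.getDestination(cellOrigin, 135, acreSide * Math.SQRT2);
        var cellLonWidth = cellCorner[0] - cellOrigin[0];
        var cellLatWidth = cellOrigin[1] - cellCorner[1];
        for (var p = 0; p < pointsPerAcre; p++) {
            allPoints.push([
                cellOrigin[0] + Math.random() * cellLonWidth,
                cellOrigin[1] - Math.random() * cellLatWidth
            ]);
        }
    }
}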

Calculating loss area of elevation classes in Google Earth Engine

I would like to calculate the loss area in a certain elevation class in GEE. When I run code 1 below, it gives the same amount as the total loss area in my study region (code 2). I also switched arealoss and class3 in code 1, and that didn't work either. Besides, elevation classes (1), (2), ... all give the same result. How can I calculate the loss area for each elevation class?
code 1:
var class3 = elevation.eq(3).selfMask();
var stats1 = arealoss.reduceRegion({
  reducer: ee.Reducer.sum(),
  geometry: class3.geometry(),
  scale: 30,
  maxPixels: 1e9,
  bestEffort: true
});
code 2:
var stats2 = arealoss.reduceRegion({
  reducer: ee.Reducer.sum(),
  geometry: peru.geometry(),
  scale: 30,
  maxPixels: 1e9,
  bestEffort: true
});
Besides, I want to repeat this calculation for 7 different elevation classes. Is it possible to write a function for this calculation in GEE?
class3.geometry() just gives you the footprint of the image — the region in which it is known to have data. It doesn't care at all about the mask, or the values of the pixels.
What you need is to mask the image that you're reducing by your classification. When you do that, the reducer (which works on a per-pixel basis) will ignore all masked pixels.
var class3 = elevation.eq(3);
var stats1 = arealoss
  .updateMask(class3) // This hides all non-class-3 pixels in arealoss
  .reduceRegion({
    reducer: ee.Reducer.sum(),
    geometry: peru.geometry(), // This only needs to be big enough, not exact
    scale: 30,
    maxPixels: 1e9,
    bestEffort: true
  });
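As for repeating this for the 7 elevation classes: one possible pattern (a sketch in the spirit of the answer above, not code from it; the class codes 1 to 7 are assumed) is to wrap the masked reduction in a function and map it over a list of class values:
// Assumes the same elevation, arealoss and peru objects as above.
var classValues = ee.List.sequence(1, 7);
var lossByClass = classValues.map(function (value) {
  var mask = elevation.eq(ee.Number(value));
  var sum = arealoss.updateMask(mask).reduceRegion({
    reducer: ee.Reducer.sum(),
    geometry: peru.geometry(),
    scale: 30,
    maxPixels: 1e9,
    bestEffort: true
  });
  return ee.Dictionary({'class': value, 'loss': sum});
});
print(lossByClass);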

Is there a function that gives the size of a pixel (not in meters) of an "ee.Image" in Google Earth Engine?

I have an ee.Image that I export to TFRecord. I follow this tutorial (https://developers.google.com/earth-engine/guides/tfrecord).
I use this function:
ee.batch.Export.image.toDrive(
    image = image,
    description = name,
    folder = folder,
    fileNamePrefix = name,
    region = region,
    scale = 30,
    fileFormat = 'TFRecord',
    formatOptions = {
        'patchDimensions': [128, 128],
        'kernelSize': [1, 1],
        'compressed': True,
    }
)
After classifying my image, I want to convert it to KML. For that, I need the geodesic coordinates of my image's corners.
Normally, I would get them using ee.image.geometry().bounds(). However, when converting the ee.Image to TFRecord, the patch dimensions (128, 128) do not evenly divide the bounding box, so the border tiles along the greatest x/y edges are dropped. Hence, the coordinates of the four corners of my image change (except for the top-left corner).
So, given the coordinates of the top-left corner of my image, and knowing the number of pixels (128, 128), I want to recover the geodesic coordinates of the four corners.
How do I get the geodesic size of my pixel?
i.e.:
x2 = x1 + size*128
y2 = y1 + size*128
Note: I know that my pixel is 30 meters!
Can anyone help? Thanks
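One rough way to recover approximate corner coordinates from the known 30 m pixel size (a sketch, not an authoritative Earth Engine answer; it uses a spherical approximation for meters-per-degree and assumes x grows east and y grows south):
// Rough degree size of a 30 m pixel near latitude y1. x1, y1 are the known
// top-left corner in degrees; all values here are placeholders.
var pixelSizeMeters = 30;
var patchPixels = 128;
var metersPerDegLat = 111320;                                           // roughly constant
function patchCorner(x1, y1) {
  var metersPerDegLon = metersPerDegLat * Math.cos(y1 * Math.PI / 180); // shrinks with latitude
  var x2 = x1 + pixelSizeMeters * patchPixels / metersPerDegLon;        // east edge
  var y2 = y1 - pixelSizeMeters * patchPixels / metersPerDegLat;        // south edge
  return [x2, y2];                                                      // bottom-right corner
}
The exact values depend on the export projection, so treat this as an approximation.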

Kitti Velodyne point to pixel coordinate

From a Velodyne point, how do I get the pixel coordinates for each camera?
Using pykitti
point_cam0 = data.calib.T_cam0_velo.dot(point_velo)
We can get the projection onto the image, which is equation 7 of the KITTI dataset paper:
y = P_rect^(i) * R_rect^(0) * T_velo^cam * x
But from there, how do I get the actual pixel coordinates on each image?
Update: pykitti version 0.2.1 exposes projection matrices for all cameras.
I recently faced the same problem. For me, the problem was that pykitti didn't expose the Prect and Rrect matrices for all cameras.
For pykitti > 0.2.1, use Prect and Rrect from the calibration data.
For previous versions, you have two options:
Enter the matrices by hand (data is in the .xml calibration file for each sequence).
Use this fork of pykitti: https://github.com/Mi-lo/pykitti/
Then, you can use equation 7 to project a velodyne point into an image. Note that:
You will need 3D points as a 4xN array in homogeneous coordinates. Points returned by pykitti are a Nx4 numpy array, with the reflectance in the 4th column. You can prepare the points with the prepare_velo_points function below, which keeps only points with reflectance > 0, then replaces reflectance values with 1 to get homogeneous coordinates.
The velodyne is 360°. Equation 7 will give you a result even for points that are behind the camera (they will get projected as if they were in front, but vertically mirrored). To avoid this, you should project only points that are in front of the camera. For this, you can use the function project_velo_points_in_img below. It returns 2d points in homogeneous coordinates so you should discard the 3rd row.
Here are the functions I used:
def prepare_velo_points(pts3d_raw):
    '''Replaces the reflectance value by 1 and transposes the array, so
    points can be directly multiplied by the camera projection matrix.'''
    pts3d = pts3d_raw
    # Keep only points with reflectance > 0.
    pts3d = pts3d[pts3d[:, 3] > 0, :]
    pts3d[:, 3] = 1
    return pts3d.transpose()
def project_velo_points_in_img(pts3d, T_cam_velo, Rrect, Prect):
    '''Project 3D points into a 2D image. Expects pts3d as a 4xN
    numpy array. Returns the 2D projection of only the points that
    are in front of the camera, and the corresponding 3D points.'''
    # 3D points in the camera reference frame.
    pts3d_cam = Rrect.dot(T_cam_velo.dot(pts3d))
    # Before projecting, keep only points with z >= 0
    # (points that are in front of the camera).
    idx = (pts3d_cam[2, :] >= 0)
    pts2d_cam = Prect.dot(pts3d_cam[:, idx])
    return pts3d[:, idx], pts2d_cam / pts2d_cam[2, :]
Hope this helps!

Cylindrical Projection - how to get y image coordinates from tangens(latitude)

I need to implement a transformation from geographic coordinates with height data to an image.
Like described in http://mathworld.wolfram.com/CylindricalProjection.html.
I simply don't know how to compute a suitable y-value...
For example:
double longitude = -180; // (λ)
double latitude = 80;    // (φ)
int mapHeight = 360;
int mapWidth = 720;
int x = (int)((longitude + 180.0) * (mapWidth / 360.0));
How do I use the result of
Math.tan(Math.toRadians(latitude))
Thank you!
You cannot represent the poles using a cylindrical projection, so you need to set a maximum latitude and store it (say double maxlat = 85, in degrees, strictly below 90). To get y, first test that the point is not too close to the poles, then compute:
y = (int)(mapHeight/2 + mapHeight/2 * Math.tan(Math.toRadians(latitude)) / Math.tan(Math.toRadians(maxlat)));
If you are looking for a beautiful map, consider using the Mercator projection:
y = (int)(mapHeight/2 + mapHeight/2 * Math.log(Math.tan(Math.toRadians(latitude)/2.0 + Math.PI/4.0)) / Math.log(Math.tan(Math.toRadians(maxlat)/2.0 + Math.PI/4.0)));
Store Math.tan(Math.toRadians(maxlat)) or Math.log(Math.tan(Math.toRadians(maxlat)/2.0 + Math.PI/4.0)) if you need to compute many points.
