Calculating loss area of elevation classes in Google Earth Engine

I would like to calculate the loss area within a certain elevation class in GEE. When I run code 1 below, it returns the same amount as the total loss area for my whole study region (code 2). I also tried switching arealoss and class3 in code 1, but that didn't work either. Moreover, elevation classes 1, 2, ... all give the same result. How can I calculate the loss area for each elevation class?
code 1:
var class3 = elevation.eq(3).selfMask();
var stats1 = arealoss.reduceRegion({
  reducer: ee.Reducer.sum(),
  geometry: class3.geometry(),
  scale: 30,
  maxPixels: 1e9,
  bestEffort: true
});
code 2:
var stats2 = arealoss.reduceRegion({
  reducer: ee.Reducer.sum(),
  geometry: peru.geometry(),
  scale: 30,
  maxPixels: 1e9,
  bestEffort: true
});
Also, I want to repeat this calculation for 7 different elevation classes. Is it possible to write a function for this calculation in GEE?

class3.geometry() just gives you the footprint of the image — the region in which it is known to have data. It doesn't care at all about the mask, or the values of the pixels.
What you need is to mask the image that you're reducing by your classification. When you do that, the reducer (which works on a per-pixel basis) will ignore all masked pixels.
var class3 = elevation.eq(3);
var stats1 = arealoss
  .updateMask(class3)  // This hides all non-class-3 pixels in arealoss
  .reduceRegion({
    reducer: ee.Reducer.sum(),
    geometry: peru.geometry(),  // This only needs to be big enough, not exact
    scale: 30,
    maxPixels: 1e9,
    bestEffort: true
  });
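To repeat this for all 7 classes, you can wrap the masked reduction in a function and map it over the class values. A minimal sketch, assuming elevation holds integer class codes 1 through 7, arealoss and peru are as above, and the property names 'class' and 'loss' are my own:

var lossForClass = function(classValue) {
  classValue = ee.Number(classValue);
  var sum = arealoss
    .updateMask(elevation.eq(classValue))
    .reduceRegion({
      reducer: ee.Reducer.sum(),
      geometry: peru.geometry(),
      scale: 30,
      maxPixels: 1e9,
      bestEffort: true
    });
  // Wrap each result in a feature so the classes collect into a table.
  return ee.Feature(null, {'class': classValue, 'loss': sum.values().get(0)});
};

var lossByClass = ee.FeatureCollection(ee.List.sequence(1, 7).map(lossForClass));
print(lossByClass);

A single grouped reduction (ee.Reducer.sum().group() over arealoss.addBands(elevation)) would compute all seven sums in one call, which scales better than mapping.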

Related

Unmatching pixel values when using atScale() or reduceResolution()

I am comparing three exports of a Sentinel-1 image: at the native 10 m pixel size, at a modified lower-resolution (20 m) projection, and after applying reduceResolution() and reprojection. In my mind, the last two exports should give exactly the same pixel values, namely the average of the 10 m pixel values within each larger output pixel (image_S1 is the input Sentinel-1 image; it can be any image).
var image_S1 = ...
var projection_S1 = image_S1.projection().getInfo()
Export.image.toDrive({
  image: image_S1,
  description: 'S1_10meter',
  crs: projection_S1.crs,
  crsTransform: projection_S1.transform,
  region: geometry
})
var projection_S1_2X = image_S1.projection().atScale(20).getInfo()
Export.image.toDrive({
  image: image_S1,
  description: 'S1_20meter_S1_2xproj',
  crs: projection_S1_2X.crs,
  crsTransform: projection_S1_2X.transform,
  region: geometry
})
var image_S1_RR = image_S1.reduceResolution({
  reducer: ee.Reducer.mean(),
  maxPixels: 1024
})
.reproject({
  crs: projection_S1_2X.crs,
  crsTransform: projection_S1_2X.transform
});
Export.image.toDrive({
  image: image_S1_RR,
  description: 'S1_20meter_RR_S1_2xproj',
  crs: projection_S1_2X.crs,
  crsTransform: projection_S1_2X.transform,
  region: geometry
})
However, while the third export gives me exactly the average of the 10 m pixel values within each 20 m × 20 m output pixel, the pixel values of the second export differ a little from that. The difference is not large, but I do not understand where it comes from, and I do not think they should differ, because atScale() should do exactly the same thing as reduceResolution(). Where is the problem?
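No answer is recorded here, but a likely explanation (my assumption, not from the thread): when the output projection differs from the image's, Earth Engine resamples with nearest neighbor by default, so the second export picks single 10 m values rather than averaging them, whereas reduceResolution() aggregates explicitly. A sketch of forcing aggregation on the 20 m export:

// Assumed fix: aggregate explicitly before exporting at 20 m, so the
// export does not fall back to nearest-neighbor resampling.
var image_S1_mean = image_S1.reduceResolution({
  reducer: ee.Reducer.mean(),
  maxPixels: 1024
});
Export.image.toDrive({
  image: image_S1_mean,
  description: 'S1_20meter_mean',
  crs: projection_S1_2X.crs,
  crsTransform: projection_S1_2X.transform,
  region: geometry
})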

Is there a function that gives the size of a pixel (not in meters) of an "ee.Image" in google earth engine?

I have an ee.Image that I export to TFRecord. I follow this tutorial (https://developers.google.com/earth-engine/guides/tfrecord).
I use this function:
ee.batch.Export.image.toDrive(
    image=image,
    description=name,
    folder=folder,
    fileNamePrefix=name,
    region=region,
    scale=30,
    fileFormat='TFRecord',
    formatOptions={
        'patchDimensions': [128, 128],
        'kernelSize': [1, 1],
        'compressed': True,
    }
)
After classifying my image, I want to convert it to KML. For that, I need the geodesic coordinates of my image's corners.
Normally, I would get them using ee.Image.geometry().bounds(). However, when converting an ee.Image to TFRecord, the patch dimensions (128, 128) do not evenly divide the bounding box, so the border tiles along the greatest x/y edges are dropped. Hence, the coordinates of the four corners of my image change (except for the top-left corner).
So, given the coordinates of the top-left corner of my image, and knowing the number of pixels (128,128), I want to recover the coordinates (geodesic) of the four corners.
How do I get the geodesic size of my pixel?
i.e.:
x2 = x1 + size*128
y2 = y1 + size*128
Note: I know that my pixel is 30 meters!
Can anyone help? Thanks
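A back-of-the-envelope sketch (my arithmetic, under the assumption that the export used the tutorial's default EPSG:4326, so the pixel step is expressed in degrees): Earth Engine converts a scale in meters into degrees using roughly 111,319.49 m per degree at the equator, giving

size ≈ 30 / 111319.49 ≈ 0.00026949 degrees
x2 = x1 + 0.00026949 * 128
y2 = y1 - 0.00026949 * 128   (y decreases as you move down the rows)

The authoritative step is the first entry of the transform of whatever projection the export actually used, e.g. image.projection().getInfo()['transform'][0] for the image's native projection, so it is worth reading that value rather than trusting the constant.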

Bokeh: enable hover tool on image glyphs

Is it possible to enable the hover tool on an image (the glyph created by image(), image_rgba() or image_url()) so that it displays context data when hovering over points of the image? In the documentation I found only references and examples of the hover tool for glyphs like lines or markers.
Possible workaround solution:
I think it's possible to convert the 2D signal data into a columnar DataFrame with columns for x, y, and value, and to use the rect glyph instead of image. But this also requires proper handling of color mapping, particularly the case where the values are real numbers rather than integers that can be passed to a color palette.
Update for Bokeh version 0.12.16
Bokeh 0.12.16 supports HoverTool for image glyphs. See:
bokeh release 0.12.16
For earlier Bokeh versions:
Here is the approach I've been using: plot the image with bokeh.plotting.image, and add on top of it an invisible (alpha=0) bokeh.plotting.quad that has hover capabilities for the data coordinates. I'm using it for images with approximately 1,500 rows and 40,000 columns.
# This is used for hover and taptool
imquad = p.quad(top=[y1], bottom=[y0], left=[x0], right=[x1],alpha=0)
A complete example of an image with controls for selecting the minimum and maximum values of the colorbar, as well as the color_mapper, is presented here: Utilities for interactive scientific plots using python, bokeh and javascript. Update: recent Bokeh versions support matplotlib cmap palettes directly, but when I created this code I needed to generate them from matplotlib.cm.get_cmap.
In the examples shown there, I decided not to show the tooltip on the image itself (tooltips=None in bokeh.models.HoverTool); instead I display the values in a separate bokeh.models.Div.
Okay, after digging more deeply into the docs and examples, I'll answer this question myself.
The hover effect on image (2D signal) data makes no sense in the way this functionality is designed in Bokeh. If you need extra information attached to each data point, you have to put the data into the proper data model: a flat one.
tidying the data
Basically, you need to tidy the data into a tabular format with x, y, and value columns (see the Tidy Data article by H. Wickham). Every row then represents one data point, and you can naturally add any contextual information as additional columns.
For example, the following code will do the work:
from typing import Optional, Tuple

import numpy as np
import pandas as pd

def flatten(matrix: np.ndarray,
            extent: Optional[Tuple[float, float, float, float]] = None,
            round_digits: Optional[int] = 0) -> pd.DataFrame:
    if extent is None:
        extent = (0, matrix.shape[1], 0, matrix.shape[0])
    x_min, x_max, y_min, y_max = extent
    df = pd.DataFrame(data=matrix)\
        .stack()\
        .reset_index()\
        .rename(columns={'level_0': 'y', 'level_1': 'x', 0: 'value'})
    df.x = df.x / df.x.max() * (x_max - x_min) + x_min
    df.y = df.y / df.y.max() * (y_max - y_min) + y_min
    if round_digits is not None:
        df = df.round({'x': round_digits, 'y': round_digits})
    return df
rect glyph and ColumnDataSource
Then use the rect glyph instead of image, with x and y mapped accordingly and the value column color-mapped to the color aesthetic of the glyph.
color mapping for values
Here you can min-max normalize the values, multiply by the number of colors you want to use, and round; then use Bokeh's built-in palettes to map each resulting integer to a particular color.
With all being said, here's an example chart function:
from typing import List, Optional, Tuple

import bokeh.palettes
import pandas as pd
from bokeh.models import ColumnDataSource, HoverTool
from bokeh.plotting import Figure, figure

def InteractiveImage(img: pd.DataFrame,
                     x: str,
                     y: str,
                     value: str,
                     width: Optional[int] = None,
                     height: Optional[int] = None,
                     color_pallete: Optional[List[str]] = None,
                     tooltips: Optional[List[Tuple[str, str]]] = None) -> Figure:
    """
    Notes
    -----
    Both x and y should be sampled at a constant rate.

    Parameters
    ----------
    img
        Tidy DataFrame with x, y and value columns (e.g. from flatten above)
    x
        Column name to map on x axis coordinates
    y
        Column name to map on y axis coordinates
    value
        Column name to map color on
    width
        Image width
    height
        Image height
    color_pallete
        Optional. Color map to use for values
    tooltips
        Optional.

    Returns
    -------
    bokeh figure
    """
    if tooltips is None:
        tooltips = [
            (value, '@' + value),
            (x, '@' + x),
            (y, '@' + y)
        ]
    if color_pallete is None:
        color_pallete = bokeh.palettes.viridis(50)
    x_min, x_max = img[x].min(), img[x].max()
    y_min, y_max = img[y].min(), img[y].max()
    if width is None:
        width = 500 if height is None else int(round((x_max - x_min) / (y_max - y_min) * height))
    if height is None:
        height = int(round((y_max - y_min) / (x_max - x_min) * width))
    # Min-max normalize the values and map them onto palette indices.
    img['color'] = (img[value] - img[value].min()) / (img[value].max() - img[value].min()) * (len(color_pallete) - 1)
    img['color'] = img['color'].round().map(lambda x: color_pallete[int(x)])
    source = ColumnDataSource(data={col: img[col] for col in img.columns})
    fig = figure(width=width,
                 height=height,
                 x_range=(x_min, x_max),
                 y_range=(y_min, y_max),
                 tools='pan,wheel_zoom,box_zoom,reset,hover,save')

    def sampling_period(values: pd.Series) -> float:
        # TODO: think about a more clever way
        return next(filter(lambda x: not pd.isnull(x) and 0 < x, values.diff().round(2).unique()))

    x_unit = sampling_period(img[x])
    y_unit = sampling_period(img[y])
    fig.rect(x=x, y=y, width=x_unit, height=y_unit, color='color', line_color='color', source=source)
    fig.select_one(HoverTool).tooltips = tooltips
    return fig
Note: however, this comes with a quite high computational price.
Building off of Alexander Reshytko's self-answer above, I've implemented a version that's mostly ready to go off the shelf, with some examples. It should be a bit more straightforward to modify to suit your own application, and doesn't rely on Pandas dataframes, which I don't really use or understand. Code and examples at Github: Bokeh - Image with HoverTool

Formula to match Street Address based on slightly different Lat/Lng values

Is there a proven formula to match up slightly different lat/lng values to a street address without submitting both of them to a geocoding service (since we've already done so and don't want to make a second API call)?
e.g. the following coordinates are for the same street address (55 Church Street, zip code 07505), but one set points to the building and the other to the street:
lat: 40.9170109, long: -74.1702248
lat: 40.9171216, long: -74.1704997
So is there a commonly used formula we can use? Perhaps something to the effect of: match the first 4 decimal places, or subtract the two lat values and the two long values, and if the difference is less than x, it is most likely the same street address. These are just my ideas based on the definitions of lat/long, but we are looking for something proven, perhaps even an industry-standard formula if such a thing exists.
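(A quick sanity check on the decimal-places idea, using my own arithmetic rather than anything from the thread: one degree of latitude is about 111,320 m, so agreement in the first 4 decimal places bounds the latitude difference by roughly 11 m, and for longitude the bound shrinks by cos(latitude), to about 8.4 m at 40.9° N. Note that your two example latitudes already differ in the 4th decimal place, so a pure decimal-matching rule would reject them even though they are the same address; a distance threshold like the one below is more robust.)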
For measuring distances in close proximity, Pythagoras or a variation of it is adequate. You may have heard of the haversine formula, but it is overkill for close proximities.
Near the equator, plain Pythagoras is adequate, but as we move away from the equator it becomes less accurate due to the convergence of the meridians (lines of longitude) toward the poles. This is compensated for in the Equirectangular() function.
As you tagged Google Maps, the following JavaScript functions can be used. Note that d is in meters.
For your example, Pyth() gives 33 meters and Equirectangular() 26 meters. For completeness, haversine() also gives 26 meters (a sketch of it follows the code below).
function toRad(Value) {
  /** Converts numeric degrees to radians */
  return Value * Math.PI / 180;
}

function Pyth(lat1, lat2, lng1, lng2) {
  var x = toRad(lng2 - lng1);
  var y = toRad(lat2 - lat1);
  var R = 6371000; // gives d in meters
  var d = Math.sqrt(x * x + y * y) * R;
  return d;
}

function Equirectangular(lat1, lat2, lng1, lng2) {
  var x = toRad(lng2 - lng1) * Math.cos(toRad(lat1 + lat2) / 2);
  var y = toRad(lat2 - lat1);
  var R = 6371000; // gives d in meters
  var d = Math.sqrt(x * x + y * y) * R;
  return d;
}
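haversine() is referenced above but its code is not shown; here is a standard implementation (my addition, matching the argument order of the functions above), followed by a usage sketch for the address-matching threshold:

function haversine(lat1, lat2, lng1, lng2) {
  var R = 6371000; // meters
  var dLat = toRad(lat2 - lat1);
  var dLng = toRad(lng2 - lng1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLng / 2) * Math.sin(dLng / 2);
  return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Usage sketch: treat two points as the same address when they fall
// within some threshold; 50 m here is an assumption to tune per area.
var d = Equirectangular(40.9170109, 40.9171216, -74.1702248, -74.1704997);
var sameAddress = d < 50; // true for the example above (d ≈ 26 m)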

Calculating the Area of a Polygon When the Polygon's Points are Lat Longs: Which Function is More Accurate?

I'm trying to find a way to calculate the area of a polygon using lat long coordinates in a Flex 3 site. Hong007 on Google Maps for Flash group was cool enough to post the following function:
private function GetPolygonArea(polygon:Polygon):Number
{
    var nVer:int = polygon.getOuterVertexCount();
    var sz:Number = 0;
    var s:Number = 0;
    var x:Number = 0;
    var y0:Number = 0;
    var y1:Number = 0;
    var Maplatlng:LatLng;
    if (nVer >= 3) {
        for (var i:int = 0; i < nVer; i++) {
            Maplatlng = polygon.getOuterVertex(i);
            x = Maplatlng.lng();
            if (i > 0) {
                Maplatlng = polygon.getOuterVertex(i - 1);
                y0 = Maplatlng.lat();
            } else {
                Maplatlng = polygon.getOuterVertex(nVer - 1);
                y0 = Maplatlng.lat();
            }
            if (i < (nVer - 1)) {
                Maplatlng = polygon.getOuterVertex(i + 1);
                y1 = Maplatlng.lat();
            } else {
                Maplatlng = polygon.getOuterVertex(0);
                y1 = Maplatlng.lat();
            }
            s = x * (y0 - y1);
            sz += s;
        }
        // Multiply by the value in meters of 1 degree of lat/long
        // (approximate area conversion).
        Maplatlng = polygon.getOuterVertex(0);
        var Maplatlng1:LatLng = new com.google.maps.LatLng(Maplatlng.lat() + 1, Maplatlng.lng() + 1);
        var TempDISTANCE:Number = Maplatlng.distanceFrom(Maplatlng1) / Math.sqrt(2);
        return Math.abs((sz / 2.0) * Math.pow(TempDISTANCE, 2));
    }
    return 0.0;
}
I was also playing around with the area calculator at http://www.freemaptools.com/area-calculator.htm.
These functions produce slightly different results. I'm trying to figure out which one is more accurate. It seems that hong007's function produces results that are on average slightly larger than freemaptools' function. However, I don't know which one is more accurate. Any advice?
I added a correction to this algorithm.
For Google Maps, I experimentally found these numbers:
area = area - (area * 0.2187);
and it works for me from the maximum (scale = 5 meters) to the minimum (500 km) zoom levels.
The method implemented here is pretty quick and dirty. It makes a couple of assumptions that can lead to incorrect results.
The first thing to know is that Lat/Long space is non-uniformly scaled with respect to measured distance on the ground. This means that a vector of length one meter has a different length in lat/long space depending on if the vector is pointing roughly east-west or north-south. Also, the magnitude of the difference between how the lat/long axes map to ground units changes depending on where you are on the globe (it's much more different at the poles than at the equator.)
The algorithm above does a very quick and dirty workaround for this, which is to generate a scale value based on the distance calculated for the hypotenuse of a unit right triangle. This tries to basically average the scales of the two lat/long axes for that point on the globe.
There are a couple of problems with this. If the polygon is very large (multiple geocells), then this average scale value will be off because it's only calculated for the local geocell around the 0 vertex. Secondly, the average scale approximation is really very coarse and will break down significantly if you have polygons that vary greatly in one dimension but not the other (a long skinny polygon oriented along one of the axes). This is because the scale will be computed as an average of the scale of the two axes but one of the axes should have very little influence because of the distribution of the vertices.
I didn't look at the other area calculator, but if you're seeing discrepancies, I would guess that this code is the less accurate version.
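For reference, here is a sketch of a variant that treats the lat/long anisotropy explicitly: project each vertex with an equirectangular projection about the polygon's mean latitude, then apply the shoelace formula. This is my illustration of the critique above, not code from the thread, and it is still a planar approximation intended for modest-sized polygons:

// Approximate polygon area in square meters from {lat, lng} vertices.
function polygonAreaSqMeters(latLngs) {
  var R = 6371000; // mean Earth radius in meters
  function toRad(d) { return d * Math.PI / 180; }

  // Longitude scale factor taken at the polygon's mean latitude, so
  // only the east-west axis is shrunk, unlike the averaged scale above.
  var meanLat = 0;
  for (var i = 0; i < latLngs.length; i++) meanLat += latLngs[i].lat;
  meanLat = toRad(meanLat / latLngs.length);

  var area = 0;
  for (var j = 0; j < latLngs.length; j++) {
    var p = latLngs[j];
    var q = latLngs[(j + 1) % latLngs.length];
    var x1 = toRad(p.lng) * Math.cos(meanLat) * R;
    var y1 = toRad(p.lat) * R;
    var x2 = toRad(q.lng) * Math.cos(meanLat) * R;
    var y2 = toRad(q.lat) * R;
    area += x1 * y2 - x2 * y1; // shoelace cross term
  }
  return Math.abs(area / 2);
}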
