I'm working on processing some geospatial raster data, JOG-A raster to be exact, and I know the imagery is supposed to be at a scale of 1:250000. However, I would like to calculate this in code, but I am not coming up with the correct values, so I thought I would ask for some help.
Here are the bounds of a single image and other values needed for calculations.
(coordinates are in degrees, EPSG:4326)
Meters Per Degree: 111319.49079327358
Image Size: x: 1536 y: 1536
DPI: 90
Upper Left:
Longitude: -89.54314720812182 Latitude: 33.6529242569511
Upper Right:
Longitude: -88.8578680230458 Latitude: 33.6529242569511
Bottom Right:
Longitude: -88.8578680230458 Latitude: 33.13518696069031
Bottom Left:
Longitude: -89.54314720812182 Latitude: 33.13518696069031
I thought I could say:
(degLatA - degLatB) * meterPerDeg / imageSizeY * dpi / 0.0254
33.6529242569511 deg - 33.13518696069031 deg = 0.51773729626079 deg
0.51773729626079 deg * 111319.49079327358 m/deg = 57634.2521844374 m
57634.2521844374 m / 1536 px = 37.52 m/px
37.52 m/px * (90 px/in * 1 in / 0.0254 m) = 132953.03
which gives a scale of 1:132953... not even close!
The units all cancel nicely, but as you can see from the calculation, the result is nowhere near 250k.
Can anyone explain where I'm wrong?
EDIT
In case anyone else stumbles upon this: the answer was hidden in the spec documents for CADRG. The DPI to use for CADRG is 169. Plug that into the above equation and it works great for CADRG. You just need to know the correct DPI for the imagery whose scale you are trying to find, and the above will work.
You also need the longitude, because otherwise the map is distorted. IMO, try the same formula with the longitude.
When dealing with CADRG, the correct DPI to use is 169. This can be found on page 10 of the MIL-SPEC documentation. The math in the question is correct; just plug in 169 instead of 90 for the DPI.
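For reference, here is that calculation as a small JavaScript sketch (the function and constant names are made up for illustration; the values are the ones from the question):

// Scale denominator = (ground metres per image pixel) * (display pixels per metre).
var METERS_PER_DEGREE = 111319.49079327358;

function scaleDenominator(latTop, latBottom, imageSizeY, dpi) {
    var groundMeters = (latTop - latBottom) * METERS_PER_DEGREE; // ground distance spanned by the image
    var metersPerPixel = groundMeters / imageSizeY;              // ground metres per image pixel
    var pixelsPerMeter = dpi / 0.0254;                           // display pixels per metre
    return metersPerPixel * pixelsPerMeter;                      // dimensionless scale denominator
}

console.log(scaleDenominator(33.6529242569511, 33.13518696069031, 1536, 90));  // ~132953
console.log(scaleDenominator(33.6529242569511, 33.13518696069031, 1536, 169)); // ~249656, i.e. roughly 1:250000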
I would like to reclassify Global Forest Data values, e.g.:
0 - 20 % --> 1
21 - 49 % --> 0.5
50 - 100 % --> 0
However, I wasn't able to find how to do this for ranges in GEE. An explanation for reclassifying individual numbers can be found here:
https://sites.google.com/site/globalsnowobservatory/home/Presentations-and-Tutorials/short-tutorial/remap
but a simple procedure for ranges (without decision trees) is hard to find.
Could someone provide a simple solution for this?
// Example from https://developers.google.com/earth-engine/resample
// Load a MODIS EVI image.
var modis = ee.Image(ee.ImageCollection('MODIS/006/MOD13A1').first())
    .select('EVI');

// Get information about the MODIS projection.
var modisProjection = modis.projection();

// Load and display forest cover data at 30 meters resolution.
var forest = ee.Image('UMD/hansen/global_forest_change_2015')
    .select('treecover2000');

// Get the forest cover data at MODIS scale and projection.
var forestMean = forest
    // Force the next reprojection to aggregate instead of resampling.
    .reduceResolution({
      reducer: ee.Reducer.mean(),
      maxPixels: 1024,
      bestEffort: true
    })
    // Request the data at the scale and projection of the MODIS image.
    .reproject({
      crs: modisProjection
    });
If you want to make binary decisions about pixel values, you can use the ee.Image.where() algorithm. It takes an image of boolean values to specify where in an image to replace pixels with another image. The tidiest way to use it for this application is to use the ee.Image.expression() syntax (rather than specifying several boolean and constant images):
var reclassified = forestMean.expression('b(0) <= 20 ? 1 : b(0) < 50 ? 0.5 : 0');
b(0) refers to the value of the first band of the input image, and ? ... : is the ?: conditional operator which returns the part between ? and : if the condition to the left is true, and the part to the right of the : if the condition is false. So, you can use a series of ? ... : to write out several conditions concisely.
Runnable example with this line.
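For comparison, a minimal sketch of the same reclassification using ee.Image.where() directly (assuming the forestMean image from the code above): start from a constant image and overwrite pixels by condition.

// Start with 0 everywhere, then overwrite where the cover falls below each threshold.
var reclassifiedWhere = ee.Image(0)
    .where(forestMean.lt(50), 0.5)   // cover below 50% -> 0.5
    .where(forestMean.lte(20), 1);   // cover at or below 20% -> 1 (overrides the previous step)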
I have a 3D model in a coordinate system that is defined in metres. The coordinates have been transformed to have the centre of the bounding box of the model as the origin. A vertex with the coordinates (1, 0, 0) would thus lie 1 metre from the origin.
When trying to add the geometries to the map, with the actual latitude/longitude of the origin as geoPosition, they don't get placed at the exact location and appear smaller than they are. How could I solve this?
Thanks.
You can center the map at whatever point you would like in the world with this method:
map.setCameraGeolocationAndZoom(
    // Singapore coordinates and zoom level 16:
    new harp.GeoCoordinates(1.278676, 103.850216), 16
);
You can specify the projection type in the MapView's constructor.
To implement a globe projection:
const map = new harp.MapView({
    canvas,
    theme: "https://unpkg.com/@here/harp-map-theme@latest/resources/berlin_tilezen_base_globe.json",
    projection: harp.sphereProjection,
    // For tile cache optimization:
    maxVisibleDataSourceTiles: 40,
    tileCacheSize: 100
});
//And set it to a view where you can see the whole world:
map.setCameraGeolocationAndZoom(new harp.GeoCoordinates(1.278676, 103.850216), 4);
Please refer to the documentation for more details:
https://developer.here.com/tutorials/harpgl/#modify-the-map
I'd like to reduce the resolution of a binary layer of the maximum extent of water, and I'd like the resulting layer to represent the percentage of water pixels. However, when I use ee.Reducer.mean() as seen below, the resulting layer still only has binary values. How can I get a float instead?
// The image in question
var water = ee.Image('JRC/GSW1_1/GlobalSurfaceWater').clip(roi);

// Current code
var watermodis = water.select(['max_extent'])
    .reproject({
      crs: modisproj
    })
    .reduceResolution({
      reducer: ee.Reducer.mean()
    });
Turns out I just had the order of reproject and reduceResolution wrong: reduceResolution has to be applied before reproject, so that the mean aggregation happens when the data is requested at the MODIS projection. It should be:
var watermodis = water.select(['max_extent'])
    .reduceResolution({
      reducer: ee.Reducer.mean(),
      maxPixels: 1310
    })
    .reproject({
      crs: modisproj
    });
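As a quick sanity check that the output is now fractional rather than binary, you could add it to the map with a 0-1 stretch (a minimal sketch; the layer name and palette are arbitrary):

// Values should now range continuously from 0 (no water) to 1 (all water).
Map.addLayer(watermodis, {min: 0, max: 1, palette: ['white', 'blue']}, 'water fraction');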
I want to ask for help with using non-standard coordinates on a Leaflet map.
I want to use Leaflet to display custom maps with my own tile generator. Tiles are generated on the fly by a script, depending on where they are to be displayed (the {x}, {y}, {z} parameters in the URL request to the script).
The map will be zoomable (from 0 to 10), with a size of ~16000 * 16000 tiles at maximum zoom and 16 * 16 tiles at minimum, and it will display a variety of objects, each object in a separate tile.
Each 64 * 64 pixel tile is one object on the map.
For each object (a square tile) I want to display its information on mouse click, by sending an AJAX request to the server. I don't want to pre-load all information about all objects, for optimization reasons.
My main issue: I cannot figure out how to correctly get the X, Y, Z coordinates of the tile that was clicked.
Each tile, when it is loaded from the server, is bound to the {x}, {y}, {z} grid, so I want to get these {x}, {y}, {z} values from a click on the map and send them in a further AJAX request to fetch information about the object.
Currently I can get the click point as LatLng coordinates or as pixel coordinates relative to the upper-left corner of the map, but I cannot relate either of these to the tile grid.
I would also like to know whether I can get the coordinates of the click relative to the tile. For example, if the tile is 64 * 64 and the click was in the center of the tile, how can I get the relative click coordinates [32, 32]?
If we know the {x}, {y}, {z} coordinates of the tile and the relative x and y coordinates of the click inside the tile, then we have a "universal alternative coordinate grid".
Maybe this is not a problem and it can be solved easily, but I've never worked with any maps API, so I want to know the answer to this question.
Thanks in advance for your help!
Here is a working example of getting the zoom, X, and Y coordinates of a clicked tile (using OpenStreetMap): http://jsfiddle.net/84P9r/
function getTileURL(lat, lon, zoom) {
    let latRad = lat * Math.PI / 180; // convert latitude to radians
    let xtile = Math.floor((lon + 180) / 360 * (1 << zoom));
    let ytile = Math.floor((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * (1 << zoom));
    return zoom + "/" + xtile + "/" + ytile;
}
Based on this answer: https://stackoverflow.com/a/19197572
You can use the mouseEventToLayerPoint and mouseEventToContainerPoint methods in the Leaflet API to convert pixels onscreen to pixels relative to the top-left of the map, and then using a little math, you can derive the location within a tile.
This is what Leaflet does internally:
const tileSize = L.point(256, 256); // Leaflet's default tile size
let pixelPoint = map.project(e.latlng, map.getZoom()).floor();
let coords = pixelPoint.unscaleBy(tileSize).floor();
coords.z = map.getZoom(); // { x: 212, y: 387, z: 10 }
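To also get the click position relative to the tile (the [32, 32] example from the question), here is a minimal sketch along the same lines, assuming a Leaflet click handler and the 64 * 64 tile size from the question:

map.on('click', function (e) {
    const tileSize = L.point(64, 64); // tile size used by the custom tile generator
    const zoom = map.getZoom();
    const pixelPoint = map.project(e.latlng, zoom).floor(); // global pixel coordinates at this zoom
    const coords = pixelPoint.unscaleBy(tileSize).floor();  // tile {x, y}
    coords.z = zoom;                                        // tile {z}
    // Click position inside the tile, e.g. [32, 32] for a click at the tile's center.
    const inTile = pixelPoint.subtract(coords.scaleBy(tileSize));
    console.log(coords, inTile);
});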
I am making a map of the boroughs in NYC and I can't seem to get the projection right: the only thing I get is a tiny map.
I tried recreating this example, which only made the console go crazy with errors; I suspect there was something off about the equation.
When I was trying to get Albers to work, I tried the answer to this question, and I still could not get the map to work.
With a width of 960 and height of 500, I used: var projection = d3.geo.albers().center([40.71, 73.98]).parallels([29.5, 45.5]).scale(10000).translate([width / 2, height / 2]);
Right now, I am using a transverse Mercator, with the code below, and the topojson I created using one of these files.
var width = 960,
    height = 500;

var svg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", height);

d3.json("nyc-borough.json", function(error, nyb) {
  var boroughs = topojson.feature(nyb, nyb.objects["nyc-borough-boundaries-polygon"]).features;

  var projection = d3.geo.transverseMercator()
      .scale(1600)
      .rotate([73 + 58 / 60, -48 - 47 / 60]);

  var path = d3.geo.path()
      .projection(projection);

  var g = svg.append("g");

  g.append("g")
      .attr("id", "boroughs")
      .selectAll(".state")
      .data(boroughs)
      .enter().append("path")
      .attr("class", function(d) { return d.properties.name; })
      .attr("d", path);
});
Please, please halp :(
Thanks in advance!
I created a bl.ock with the NYC boroughs here. You were on the right path with WGS84/Mercator, as that is what the original data was in, as a quick check in QGIS demonstrated.
QGIS was also good for checking the centre of the data, which came out to be [-73.94, 40.70]. Note that these are the opposite way round to your co-ordinates, which were lat and long, but d3 needs long and lat, as discussed here in the API. The other thing to look out for is the negatives: north is positive, so latitude is positive in the northern hemisphere and negative in the southern hemisphere. For the east and west hemispheres, it's east that's positive. Of course this only matters for global projections; the USA Albers projection wouldn't have negatives for the USA. Oh, and no discussion of projections would be complete without a look at Jason Davies' work.
If you want to use a projection different from the one your data is in, I generally think it's better to preprocess your data into that projection; QGIS is a great tool for that, but there are many others, such as GDAL.
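As an illustrative sketch of what that implies for the projection setup (using the d3 v3-era API to match the question's code; the scale value is only a guess you would tune for a 960 * 500 SVG):

// A Mercator projection centred on the data's centre, [-73.94, 40.70] (long, lat).
var projection = d3.geo.mercator()
    .center([-73.94, 40.70])             // long, lat -- not lat, long
    .scale(50000)                        // illustrative value; adjust to fit
    .translate([width / 2, height / 2]);

var path = d3.geo.path()
    .projection(projection);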