WMS as a single tile image in Google Maps v3

I am following the code at http://www.gisdoctor.com/v3/mapserver.html to overlay a WMS as an image on Google Maps using API v3. The JS code at the above link is as follows:
"WMSGetTileUrl" : function(tile, zoom) {
var projection = map.getProjection();
var zpow = Math.pow(2, zoom);
var ul = new google.maps.Point(
tile.x * 256.0 / zpow,
(tile.y + 1) * 256.0 / zpow
);
var lr = new google.maps.Point(
(tile.x + 1) * 256.0 / zpow,
tile.y * 256.0 / zpow
);
var ulw = projection.fromPointToLatLng(ul);
var lrw = projection.fromPointToLatLng(lr);
var bbox = ulw.lng() + "," + ulw.lat() + "," + lrw.lng() + "," + lrw.lat();
return url = "http://url/to/mapserver?" +
"version=1.1.1&" +
"request=GetMap&" +
"Styles=default&" +
"SRS=EPSG:4326&" +
"Layers=wmsLayers&" +
"BBOX=" + bbox + "&" +
"width=256&" +
"height=256&" +
"format=image/png&" +
"TRANSPARENT=TRUE";
},
"addWmsLayer" : function() {
/*
Creating the WMS layer options. This code creates the Google
imagemaptype options for each wms layer. In the options the function
that calls the individual wms layer is set
*/
var wmsOptions = {
alt: "MapServer Layers",
getTileUrl: WMSGetTileUrl,
isPng: false,
maxZoom: 17,
minZoom: 1,
name: "MapServer Layer",
tileSize: new google.maps.Size(256, 256)
};
/*
Creating the object to create the ImageMapType that will call the WMS
Layer Options.
*/
wmsMapType = new google.maps.ImageMapType(wmsOptions);
map.overlayMapTypes.insertAt(0, wmsMapType);
},
Everything works fine but, of course, the WMS is returned as 256 x 256 tiles. No surprise, because that is what I requested. However, following the discussion at http://groups.google.com/group/google-maps-js-api-v3/browse_thread/thread/c22837333f9a1812/d410a2a453025b38 it seems that I might be better off requesting an untiled (single) image from MapServer. This would tax my server less. In any case, I would like to experiment with a single image, but I am unable to construct a request for one properly.
Specifically, I changed the tile size to something larger; for example, I tried 1024 x 1024 tiles. I did get fewer tiles, but the returned images didn't match the Google Maps base layer boundaries.
What I would like is to not specify the tile size at all. Instead, I would dynamically set the tile size to be, say, 256 pixels bigger than the current map size. That way, a single image would be returned no matter the map size, and the extra 256 px along the map edges would allow seamless panning.
Suggestions?

1) It works for me with 512 and 1024 as well. You just have to make the change in all the appropriate places: the corner coordinate computation, the tileSize setting, and the WMS width and height parameters (see the sketch after this list). Any tile size should be possible if you handle it correctly at all places in the code.
2) I think you are trying to optimize in the wrong place. The 256 x 256 tiles have been established for a reason. Remember that Google caches the tiles, so with larger tiles the whole solution might be much slower. The map will also load faster for the user with the standard tiles.
3) The tile concept is the right approach and should be preserved. Things like receiving the whole map as one image wouldn't work; imagine panning the map then. Don't try to reinvent the wheel; stick with the current Google tile concept, which is optimized for this type of application. If you need to receive the map as a single image, you can use the Google Static Maps API.
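For illustration, here is a minimal sketch of point 1 for 512-pixel tiles, with the three places that must agree marked; the URL and layer names are the question's placeholders:

var TILE_SIZE = 512; // must be consistent everywhere below

function WMSGetTileUrl(tile, zoom) {
    var projection = map.getProjection();
    var zpow = Math.pow(2, zoom);
    // 1) the corner computation uses TILE_SIZE instead of 256
    var ul = new google.maps.Point(tile.x * TILE_SIZE / zpow,
                                   (tile.y + 1) * TILE_SIZE / zpow);
    var lr = new google.maps.Point((tile.x + 1) * TILE_SIZE / zpow,
                                   tile.y * TILE_SIZE / zpow);
    var ulw = projection.fromPointToLatLng(ul);
    var lrw = projection.fromPointToLatLng(lr);
    var bbox = ulw.lng() + "," + ulw.lat() + "," + lrw.lng() + "," + lrw.lat();
    // 2) the WMS width and height parameters match TILE_SIZE
    return "http://url/to/mapserver?version=1.1.1&request=GetMap&Styles=default&" +
        "SRS=EPSG:4326&Layers=wmsLayers&BBOX=" + bbox +
        "&width=" + TILE_SIZE + "&height=" + TILE_SIZE +
        "&format=image/png&TRANSPARENT=TRUE";
}

// 3) the ImageMapType tileSize matches TILE_SIZE
var wmsOptions = {
    getTileUrl: WMSGetTileUrl,
    tileSize: new google.maps.Size(TILE_SIZE, TILE_SIZE)
};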

See this same question I just answered for someone else. It's likely your projection. EPSG:4326 is not the projection Google Maps uses for its tiles; the base map is in spherical Mercator (EPSG:3857, historically EPSG:900913). Changing the projection means you need to change the way coordinates are calculated and referenced.
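As a hedged sketch of what that change can look like: if your WMS server supports spherical Mercator, one option is to request SRS=EPSG:3857 and convert the tile corners from lat/lng to Mercator metres before building the BBOX. The helper below is illustrative, not part of the question's code:

// convert a google.maps.LatLng to EPSG:3857 metres (spherical Mercator)
function latLngToMercator(latLng) {
    var R = 6378137; // sphere radius used by web Mercator
    var x = R * latLng.lng() * Math.PI / 180;
    var y = R * Math.log(Math.tan(Math.PI / 4 + latLng.lat() * Math.PI / 360));
    return { x: x, y: y };
}

// inside WMSGetTileUrl, after computing ulw and lrw:
var ulm = latLngToMercator(ulw);
var lrm = latLngToMercator(lrw);
var bbox = ulm.x + "," + ulm.y + "," + lrm.x + "," + lrm.y;
// ...and send "SRS=EPSG:3857" instead of "SRS=EPSG:4326"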

Related

image.filter is not a function in google earth engine

As a newbie to the Google Earth Engine, I have been trying something (https://code.earthengine.google.com/6f45059a59b75757c88ce2d3869fc9fd) following a NASA tutorial (https://www.youtube.com/watch?v=JFvxudueT_k&ab_channel=NASAVideo). My last line (line 60) gives "image.filter is not a function", while the corresponding line in the tutorial (line 34) works. I am not sure what happened or how to sort this out.
//creating a new variable 'image' from the L8 collection data imported
var image = ee.Image (L8_tier1 //the details in the data will represent that the band resolution is 30m
    //.filterDate ("2019-07-01","2021-10-03") //for a specific date range. maybe good to remove it for the function.
    .filterBounds (ROI) //for the region of interest we are interested in
    //.sort ("CLOUD_COVER") //for sorting the data by cloud cover, the metadata property we are interested in. Another way to do this is using the function below.
    //.first() //this will make the image choose the first image with the least amount of cloud cover for the area. Another way to do this is using the function below.
);
//print ("Hague and Rotterdam", image); //printing the image in the console
//console on the right hand side will explain everything from the data
//id will show the image details and date of the image, for this case 29th July 2019
//under the properties tab cloud cover can be found, this is the least we can get for this area during this period
// //vizualisation of the data in the map with true color rendering
// var trueColour = {
// bands:["SR_B4","SR_B3","SR_B2"],
// min: 5000,
// max: 12000
// };
// Map.centerObject (ROI, 12); //for centering the area in the center of the map with the required zoom level
// Map.addLayer (image, trueColour, "Hague and Rotterdam"); //for adding the image with the variable of bands we made and naming the image
//Alternate way
//Function to cloud mask from the qa_pixel band of Landsat 8 SR data. In this case bits 3 and 4 are clouds and cloud shadow respectively. This can be different for different image sets.
function maskL8sr(image) {
    var cloudsBitMask = 1 << 3; //remember to check this with the source
    var cloudshadowBitMask = 1 << 4; //remember to check this with the source
    var qa = image.select ('qa_pixel'); //creating the new variable from the band of the source image
    var mask = qa.bitwiseAnd(cloudsBitMask).eq(0) //making the cloud equal to zero to mask them out
        .and(qa.bitwiseAnd(cloudshadowBitMask).eq(0)); //making the cloud shadow equal to zero to mask them out
    return image.updateMask(mask).divide(10000)
        .select("SR_B[0-9]*")
        .copyProperties(image, ["system:time_start"]);
}
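//note (illustrative, not in the original script): maskL8sr is defined above but
//never applied in this excerpt; one way to apply it across the collection is
//var maskedCollection = L8_tier1.filterBounds(ROI).map(maskL8sr);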
// print ("Hague and Rotterdam", image);// look into the console now. How many images the code have downloaded!!!
//filtering imagery for 2015 to 2021 summer date ranges
//creating joint filter and applying to image collection
var sum21 = ee.Filter.date ('2021-06-01','2021-09-30');
var sum20 = ee.Filter.date ('2020-06-01','2020-09-30');
var sum19 = ee.Filter.date ('2019-06-01','2019-09-30');
var sum18 = ee.Filter.date ('2018-06-01','2018-09-30');
var sum17 = ee.Filter.date ('2017-06-01','2017-09-30');
var sum16 = ee.Filter.date ('2016-06-01','2016-09-30');
var sum15 = ee.Filter.date ('2015-06-01','2015-09-30');
var SumFilter = ee.Filter.or(sum21, sum20, sum19, sum18, sum17, sum16, sum15);
var allsum = image.filter(SumFilter);
Filtering is an operation you can do on ImageCollections, not individual Images, because all filtering does is choose a subset of the images. Then, in your script, you have (with the comments removed):
var image = ee.Image (L8_tier1
    .filterBounds (ROI)
);
The result of L8_tier1.filterBounds(ROI) is indeed an ImageCollection. But in this case, you have told the Earth Engine client that it should be treated as an Image, and it believed you. So the last line
var allsum = image.filter(SumFilter);
fails with the error you saw because there is no filter() on ee.Image.
The script will successfully run if you change ee.Image(...) to ee.ImageCollection(...), or even better, remove the cast because it's not necessary — that is,
var image = L8_tier1.filterBounds(ROI);
You should probably also rename var image, since it is confusing to call an ImageCollection image. Naming things accurately helps avoid mistakes while you are working on the code, and also when others try to read it or build on it.
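Putting the pieces together, a minimal sketch of the corrected ending of the script (the variable is renamed as suggested; everything else comes from the question):

var l8Collection = L8_tier1.filterBounds(ROI); // an ImageCollection, so filter() exists
var allsum = l8Collection.filter(SumFilter);   // no longer fails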

How can I merge geometries in A-Frame without losing material information?

I have a large set of block objects using a custom geometry that I am hoping to merge into a smaller number of larger geometries, as I believe this will reduce rendering costs.
I have been following guidance here: https://aframe.io/docs/1.2.0/introduction/best-practices.html#performance which has led me to the geometry-merger component here:
https://github.com/supermedium/superframe/tree/master/components/geometry-merger/
The A-Frame docs say:
"You can use geometry-merger and then make use a three.js material with vertex colors enabled. three.js geometries keep data such as color, uvs per vertex."
The geometry-merger component also says:
"Useful if using vertex or face coloring as individual geometries' colors can still be manipulated individually since this component keeps a faceIndex and vertexIndex."
However I have a couple of problems.
If I set vertexColors on my material (as suggested by the A-Frame docs), then this ruins the appearance of my blocks.
Whether or not I set vertexColors on my material, all material information seems to be lost when the geometries are merged, and everything just ends up white.
See this glitch for a demonstration of both problems.
https://tundra-mercurial-garden.glitch.me/
My suspicion is that the A-Frame geometry-merger component just won't do what I need here, and I need to implement something myself using the underlying three.js functions.
Is that right, or is there a way that I could make this work using geometry-merger?
For the vertexColors to work, you need to have your vertices coloured :)
More specifically, the BufferGeometry expects an array of RGB values for each vertex, which will be used as the color for the material.
In this bit of code:
var geometry = new THREE.BoxGeometry();
// vertexColors is a boolean in three.js r112+; older builds used THREE.VertexColors
var mat = new THREE.MeshStandardMaterial({color: 0xffffff, vertexColors: true});
var mesh = new THREE.Mesh(geometry, mat);
The mesh will be black unless the geometry contains information about the vertex colors:
// create a color attribute in the geometry (3 floats per vertex)
geometry.setAttribute('color',
    new THREE.BufferAttribute(new Float32Array(vertices_count * 3), 3));
// grab the array
const colors = geometry.attributes.color.array;
// fill the array with rgb values
const faceColor = new THREE.Color(color_hex);
for (let i = 0; i < colors.length; i += 3) {
    colors[i + 0] = faceColor.r;
    colors[i + 1] = faceColor.g;
    colors[i + 2] = faceColor.b;
}
// tell the geometry to update the color attribute
geometry.attributes.color.needsUpdate = true;
I can't make the buffer-geometry-merger component work for some reason, but its core seems to be valid:
AFRAME.registerComponent("merger", {
    init: function() {
        // replace with an event where all child entities are ready
        setTimeout(this.mergeChildren.bind(this), 500);
    },
    mergeChildren: function() {
        const geometries = [];
        // traverse the children and store all geometries
        this.el.object3D.traverse(node => {
            if (node.type === "Mesh") {
                const geometry = node.geometry.clone();
                geometry.applyMatrix4(node.parent.matrix);
                geometries.push(geometry);
                // dispose of the merged meshes
                node.parent.remove(node);
                node.geometry.dispose();
                node.material.dispose();
            }
        });
        // create a mesh from the "merged" geometry
        const mergedGeo = THREE.BufferGeometryUtils.mergeBufferGeometries(geometries);
        const mergedMaterial = new THREE.MeshStandardMaterial({
            color: 0xffffff, roughness: 0.3, vertexColors: true // boolean in r112+
        });
        const mergedMesh = new THREE.Mesh(mergedGeo, mergedMaterial);
        this.el.object3D.add(mergedMesh);
    }
});
You can check it out in this glitch. There is also an example on using the vertex colors here (source).
I agree it sounds like you need to consider other solutions. Here are two different examples of instancing with A-Frame:
https://github.com/takahirox/aframe-instancing
https://github.com/EX3D/aframe-InstancedMesh
Neither are perfect or even fully finished, but can hopefully get you started as a guide.
Although my original question was about geometry merging, I now believe that Instanced Meshes were a better solution in this case.
Based on this suggestion I implemented this new A-Frame Component:
https://github.com/diarmidmackenzie/instanced-mesh
This glitch shows the scene from the original glitch being rendered with just 19 draw calls using this component. That compares pretty well with the more than 200 calls that would have been required if every object were rendered individually.
https://dull-stump-psychology.glitch.me/
A key limitation is that I was not able to use a single mesh for all the different block colors, but had to use one mesh per color (7 meshes total).
InstancedMesh can support different colored elements, but each element must have a single color, whereas the elements in this scene had 2 colors each (black frame + face color).
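For reference, here is a minimal three.js sketch of per-instance colors, assuming a three.js build recent enough to have InstancedMesh.setColorAt; the geometry, material, and scene names are placeholders:

// one draw call for `count` boxes, each with its own single color
const count = 100;
const mesh = new THREE.InstancedMesh(
    new THREE.BoxGeometry(), new THREE.MeshStandardMaterial(), count);
const m = new THREE.Matrix4();
for (let i = 0; i < count; i++) {
    m.setPosition(i % 10, 0, Math.floor(i / 10)); // lay the instances out in a grid
    mesh.setMatrixAt(i, m);
    mesh.setColorAt(i, new THREE.Color().setHSL(i / count, 0.7, 0.5));
}
mesh.instanceMatrix.needsUpdate = true;
scene.add(mesh);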

Automatic bezier edges in Cytoscape.js

I would like to create nice curved edges in my Cytoscape.js graph using the unbundled-bezier style. According to the database, I have to set the control-point-distance(s) automatically, so I came up with the following code:
{
    selector: 'edge',
    css: {
        'curve-style': 'unbundled-bezier',
        'target-arrow-shape': 'triangle',
        'control-point-weights': '0.25 0.75',
        'control-point-distance': function( ele ){
            console.log(ele.source().position());
            var pos1 = ele.source().position().y;
            var pos2 = ele.target().position().y;
            var str = '' + Math.abs(pos2 - pos1) + 'px -' + Math.abs(pos2 - pos1) + 'px';
            console.log(pos1, pos2, str);
            return str;
        }
    }
}
My problem is that the graph is rendered with straight lines, and the curvy line appears only when I click on an edge. Also, when I move the nodes, the curve moves nicely with the node, but the node positions (ele.source().position().y) do not change.
A style function ought to be a pure function. Yours is technically not: it depends on state outside of the edge's data.
The only way an arbitrary function could be used to specify style is if the function is continuously polled. That would be hacky and prohibitively expensive.
You must use a pure function if you want to use a custom function. Either rewrite your function to rely on only the edge's data or use a passthrough data() mapping and change the edge's data whenever you want to modify the edge.
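A minimal sketch of that second option, assuming a hypothetical edge data field named cpd (any name works):

// stylesheet: pass the style value straight through from each edge's data
{
    selector: 'edge',
    css: {
        'curve-style': 'unbundled-bezier',
        'target-arrow-shape': 'triangle',
        'control-point-weights': '0.25 0.75',
        'control-point-distance': 'data(cpd)' // passthrough mapping
    }
}

// elsewhere: recompute the distances whenever a node moves
cy.on('position', 'node', function() {
    cy.edges().forEach(function(edge) {
        var dy = Math.abs(edge.target().position().y - edge.source().position().y);
        edge.data('cpd', dy + 'px -' + dy + 'px'); // updating data restyles the edge
    });
});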

how to generate a map based on a given scale [Bing Map]?

I recently started working with the Bing Maps API in my WCF web service in C#.
I would like to retrieve a satellite image from Bing at a given scale.
For example:
Scale 1:200 (1 centimeter on the map equals 200 centimeters in the world)
Of course, I found this formula that explains how to calculate the resolution of a Bing satellite image, but it is not what I'm looking for:
map resolution = 156543.04 meters/pixel * cos(latitude) / (2 ^ zoomlevel)
Here is the function I use to generate my Bing map, but I do not know what parameter values to send to retrieve an image at a scale of 1:200.
I need :
Scale = 1:200
I am looking for:
int mapSizeHeight = ?
int mapSizeWidth = ?
int zoomLevel = ?
public string GetImageMap(double latitude, double longitude, int mapSizeHeight, int mapSizeWidth, int zoomLevel)
{
    string key = "ddsaAaasm5vwsdfsfd2ySYBxfEFsdfsdfcFh6iUO5GI4v";
    MapUriRequest mapUriRequest = new MapUriRequest();

    // Set credentials using a valid Bing Maps key
    mapUriRequest.Credentials = new ImageryService.Credentials();
    mapUriRequest.Credentials.ApplicationId = key;

    // Set the location of the requested image
    mapUriRequest.Center = new ImageryService.Location();
    mapUriRequest.Center.Latitude = latitude;
    mapUriRequest.Center.Longitude = longitude;

    // Set the map style and zoom level
    MapUriOptions mapUriOptions = new MapUriOptions();
    mapUriOptions.Style = MapStyle.Aerial;
    mapUriOptions.ZoomLevel = zoomLevel;
    mapUriOptions.PreventIconCollision = true;

    // Set the size of the requested image in pixels
    mapUriOptions.ImageSize = new ImageryService.SizeOfint();
    mapUriOptions.ImageSize.Height = mapSizeHeight;
    mapUriOptions.ImageSize.Width = mapSizeWidth;
    mapUriRequest.Options = mapUriOptions;

    // Make the request and return the URI
    ImageryServiceClient imageryService = new ImageryServiceClient();
    MapUriResponse mapUriResponse = imageryService.GetMapUri(mapUriRequest);
    return mapUriResponse.Uri;
}
If you haven't already, you might want to check out this article on the Bing Maps tile system calculations, within which you will find a section discussing ground resolution and map scale. From that article:
map scale = 1 : ground resolution * screen dpi / 0.0254 meters/inch
Depending on which implementation of Bing Maps you use, specifying the view via a precise map scale might not be possible. I think this is because you don't have precise control over the zoom level. For example, in the JavaScript AJAX version you can only specify integer zoom levels, so the ground resolution part of the above equation jumps in discrete steps. At the equator, a zoom level of 21 gives you a scale of about 1:282, and a zoom level of 22 gives you about 1:141. Since you can't specify a fractional zoom level, an exact 1:200 scale is not possible with the AJAX control. I don't have extensive experience with the .NET Bing Maps control, so you might want to investigate that API to see if it accepts an arbitrary zoom level.
If you can precisely control the zoom level and know the dpi value, then the 1:200 scale is achievable using the equation described in the above linked article.
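To make the arithmetic concrete, here is a small illustrative helper (in JavaScript for brevity; zoomForScale is a made-up name) that solves the two equations above for the zoom level a target scale needs:

// ground resolution (metres/pixel) needed for a 1:scale map at a given dpi,
// then the (generally fractional) zoom level that produces it
function zoomForScale(scale, latitudeDeg, dpi) {
    var groundResolution = scale * 0.0254 / dpi; // metres per pixel
    var latRad = latitudeDeg * Math.PI / 180;
    return Math.log2(156543.04 * Math.cos(latRad) / groundResolution);
}

// at the equator on a 96 dpi screen, 1:200 needs roughly zoom 21.5, which sits
// between the integer-only levels 21 (about 1:282) and 22 (about 1:141)
console.log(zoomForScale(200, 0, 96)); // ~21.5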

Find which tiles are currently visible in the viewport of a Google Maps v3 map

I am trying to build support for tiled vector data into some of our Google Maps v3 web maps, and I'm having a hard time figuring out which 256 x 256 tiles are visible in the current map viewport. I know the information needed to figure this out is available if you create a google.maps.ImageMapType, as in Replacing GTileLayer in Google Maps v3, with ImageMapType, Tile bounding box?, but I'm obviously not doing this to bring in traditional pre-rendered map tiles.
So, a two part question:
What is the best way to find out which tiles are visible in the current viewport?
Once I have this information, what is the best way to go about converting it into lat/lng bounding boxes that can be used to request the necessary data? I know I could store this information on the server, but if there is an easy way to convert on the client it would be nice.
Here's what I came up with, with help from the documentation (http://code.google.com/apis/maps/documentation/javascript/maptypes.html, especially the "Map Coordinates" section) and a number of different sources:
function loadData() {
    var bounds = map.getBounds(),
        boundingBoxes = [],
        boundsNeLatLng = bounds.getNorthEast(),
        boundsSwLatLng = bounds.getSouthWest(),
        boundsNwLatLng = new google.maps.LatLng(boundsNeLatLng.lat(), boundsSwLatLng.lng()),
        boundsSeLatLng = new google.maps.LatLng(boundsSwLatLng.lat(), boundsNeLatLng.lng()),
        zoom = map.getZoom(),
        tiles = [],
        tileCoordinateNw = pointToTile(boundsNwLatLng, zoom),
        tileCoordinateSe = pointToTile(boundsSeLatLng, zoom),
        tileColumns = tileCoordinateSe.x - tileCoordinateNw.x + 1,
        tileRows = tileCoordinateSe.y - tileCoordinateNw.y + 1,
        zfactor = Math.pow(2, zoom),
        minX = tileCoordinateNw.x,
        minY = tileCoordinateNw.y;
    while (tileRows--) {
        while (tileColumns--) {
            tiles.push({
                x: minX + tileColumns,
                y: minY
            });
        }
        minY++;
        tileColumns = tileCoordinateSe.x - tileCoordinateNw.x + 1;
    }
    $.each(tiles, function(i, v) {
        boundingBoxes.push({
            ne: projection.fromPointToLatLng(new google.maps.Point(v.x * 256 / zfactor, v.y * 256 / zfactor)),
            sw: projection.fromPointToLatLng(new google.maps.Point((v.x + 1) * 256 / zfactor, (v.y + 1) * 256 / zfactor))
        });
    });
    $.each(boundingBoxes, function(i, v) {
        var poly = new google.maps.Polygon({
            map: map,
            paths: [
                v.ne,
                new google.maps.LatLng(v.sw.lat(), v.ne.lng()),
                v.sw,
                new google.maps.LatLng(v.ne.lat(), v.sw.lng())
            ]
        });
        polygons.push(poly);
    });
}
function pointToTile(latLng, z) {
    var projection = new MercatorProjection();
    var worldCoordinate = projection.fromLatLngToPoint(latLng);
    var pixelCoordinate = new google.maps.Point(worldCoordinate.x * Math.pow(2, z), worldCoordinate.y * Math.pow(2, z));
    // MERCATOR_RANGE is the 256-pixel world size from Google's "Map Coordinates" example
    var tileCoordinate = new google.maps.Point(Math.floor(pixelCoordinate.x / MERCATOR_RANGE), Math.floor(pixelCoordinate.y / MERCATOR_RANGE));
    return tileCoordinate;
}
An explanation: basically, every time the map is panned or zoomed, I call the loadData function. This function calculates which tiles are in the map view, then iterates through the tiles that are already loaded and deletes the ones that are no longer in view (I took this portion of the code out, so you won't see it above). I then use the LatLngBounds stored in the boundingBoxes array to request data from the server.
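(For completeness, a hypothetical sketch of that removed pruning step; loadedTiles and removeTileData are made-up names, not part of the original code:)

// assumes loadedTiles is an object keyed by "zoom/x/y" strings
function pruneTiles(visibleTiles, zoom) {
    var keep = {};
    $.each(visibleTiles, function(i, t) {
        keep[zoom + '/' + t.x + '/' + t.y] = true;
    });
    for (var key in loadedTiles) {
        if (!keep[key]) {
            removeTileData(loadedTiles[key]); // hypothetical cleanup helper
            delete loadedTiles[key];
        }
    }
}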
Hope this helps someone else...
For more recent users: tile images can be obtained from the sample code in the Google Maps JavaScript API documentation, on the page "Showing Pixel and Tile Coordinates".
