I recently started working with the Bing Maps API in my WCF web service in C#.
I would like to retrieve a satellite image at a given scale from Bing Maps.
For example:
Scale 1:200 (1 centimeter on the map equals 200 centimeters in the real world)
I did find this formula, which explains how to calculate the resolution of a Bing satellite image, but it is not what I'm looking for:
Map resolution = 156543.04 meters/pixel * cos(latitude) / (2 ^ zoom level)
Here is the function I use to generate my Bing map, but I do not know which parameter values to send to retrieve an image at a 1:200 scale.
I need:
Scale = 1:200
I am looking for:
int mapSizeHeight = ?
int mapSizeWidth = ?
int zoomLevel = ?
public string GetImageMap(double latitude,double longitude,int mapSizeHeight, int mapSizeWidth, int zoomLevel)
{
string key = "ddsaAaasm5vwsdfsfd2ySYBxfEFsdfsdfcFh6iUO5GI4v";
MapUriRequest mapUriRequest = new MapUriRequest();
// Set credentials using a valid Bing Maps key
mapUriRequest.Credentials = new ImageryService.Credentials();
mapUriRequest.Credentials.ApplicationId = key;
// Set the location of the requested image
mapUriRequest.Center = new ImageryService.Location();
mapUriRequest.Center.Latitude = latitude;
mapUriRequest.Center.Longitude = longitude;
// Set the map style and zoom level
MapUriOptions mapUriOptions = new MapUriOptions();
mapUriOptions.Style = MapStyle.Aerial;
mapUriOptions.ZoomLevel = zoomLevel;
mapUriOptions.PreventIconCollision = true;
// Set the size of the requested image in pixels
mapUriOptions.ImageSize = new ImageryService.SizeOfint();
mapUriOptions.ImageSize.Height = mapSizeHeight;
mapUriOptions.ImageSize.Width = mapSizeWidth;
mapUriRequest.Options = mapUriOptions;
//Make the request and return the URI
ImageryServiceClient imageryService = new ImageryServiceClient();
MapUriResponse mapUriResponse = imageryService.GetMapUri(mapUriRequest);
return mapUriResponse.Uri;
}
If you haven't already, you might want to check out this article on the Bing Maps tile system calculations; within it you will find a section discussing ground resolution and map scale. From that article:
map scale = 1 : (ground resolution * screen dpi / 0.0254 meters/inch)
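As a worked example (assuming a typical 96 dpi screen, which is an assumption about your display): at the equator with zoom level 21, the ground resolution is 156543.04 / 2^21 ≈ 0.0746 meters/pixel, so the map scale is about 1 : (0.0746 * 96 / 0.0254) ≈ 1:282, matching the figures below.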
Depending on which implementation of Bing Maps you use, specifying the view via a precise map scale might not be possible, because you don't have precise control over the zoom level. For example, in the JavaScript AJAX version you can only specify zoom levels as integer values, so the ground-resolution part of the above equation jumps in discrete steps. At the equator, a zoom level of 21 gives you a scale of 1:282, and a zoom level of 22 gives you 1:141. Since you can't specify a fractional zoom level, it is not possible to get an exact 1:200 scale using the AJAX control. I don't have extensive experience with the .NET Bing Maps control, so you might want to investigate that API to see whether it accepts an arbitrary zoom level.
If you can precisely control the zoom level and know the dpi value, then the 1:200 scale is achievable using the equation described in the article linked above.
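To make the arithmetic concrete, here is a minimal C# sketch (my own helper, not part of the Imagery Service API) that inverts the equation above to find the fractional zoom level needed for a target scale; the 96 dpi value is an assumption, so measure your actual display:
public static double ZoomForScale(double latitude, double scaleDenominator, double dpi)
{
    // Ground resolution (meters/pixel) required for a 1:scaleDenominator scale.
    double groundResolution = scaleDenominator * 0.0254 / dpi;
    // Invert: resolution = 156543.04 * cos(latitude) / 2^zoom.
    double latRad = latitude * Math.PI / 180.0;
    return Math.Log(156543.04 * Math.Cos(latRad) / groundResolution, 2.0);
}
// ZoomForScale(0.0, 200.0, 96.0) returns about 21.5, i.e. between two integer
// zoom levels, which is why an integer-only zoom control cannot hit 1:200 exactly.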
I am stuck trying to figure out the magic Java.Lang.Object needed to work with Xamarin.TensorFlow.Lite's Interpreter.RunForMultipleInputsOutputs().
My object detection model (exported from Azure Custom Vision) produces the following outputs:
detected_boxes: the detected bounding boxes. Each bounding box is represented as [x1, y1, x2, y2], where (x1, y1) and (x2, y2) are the coordinates of the box corners.
detected_scores: the probability for each detected box.
detected_classes: the class index for each detected box.
I know the outputs need to fit something like the way the TensorFlow Lite docs describe object detection output (except for the number of detections).
Most of the examples I have found only work with image classification models, which have a single output: blog1 blog2. However, I did find an issue describing the exact same problem I am having: https://github.com/xamarin/XamarinComponents/issues/565
I tried following the TensorFlow API and the example code from that GitHub issue (the sharing of it was appreciated), but it still crashes with a NullReferenceException ("Object reference not set to an instance of an object").
int detectedBoxesOutputIndex = Interpreter.GetOutputIndex("detected_boxes"); // 0
int detectedClassesOutputIndex = Interpreter.GetOutputIndex("detected_classes"); // 1
int detectedScoresOutputIndex = Interpreter.GetOutputIndex("detected_scores"); // 2
int numDetections = Interpreter
.GetOutputTensor(detectedClassesOutputIndex)
.NumElements();
int batchSize = 1;
// Allocate managed jagged arrays for the three outputs; CreateJaggedArray is
// my own helper that builds float[][]... arrays of the given shape.
_OutputBoxes = CreateJaggedArray(batchSize, numDetections, 4);   // float[1][64][4]
_OutputClasses = CreateJaggedArray(batchSize, numDetections);    // float[1][64]
_OutputScores = CreateJaggedArray(batchSize, numDetections);     // float[1][64]
var mOutputBoxes = OutputBoxes;
var mOutputClasses = OutputClasses;
var mOutputScores = OutputScores;
Java.Lang.Object[] inputArray = { imageByteBuffer };
var outputMap = new Dictionary<Java.Lang.Integer, Java.Lang.Object>();
outputMap.Add(new Java.Lang.Integer(detectedBoxesOutputIndex), mOutputBoxes);
outputMap.Add(new Java.Lang.Integer(detectedClassesOutputIndex), mOutputClasses);
outputMap.Add(new Java.Lang.Integer(detectedScoresOutputIndex), mOutputScores);
// this is the call that crashes
Interpreter.RunForMultipleInputsOutputs(inputArray, outputMap);
I can get the output of the first tensor via Interpreter.Run(img, output), where output is a float[64][4], so I know the model and all the image byte buffer prep are done correctly.
My Xamarin Forms Project: https://github.com/twofingerrightclick/GardenDefenseSystem/blob/master/GardenDefenseSystem/GardenDefenseSystem.Android/TensorflowObjectDetector.cs
I created an issue on GitHub and solved the problem there in detail: https://github.com/xamarin/GooglePlayServicesComponents/issues/668
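For anyone who cannot follow the link: one pattern that sidesteps the jagged-array marshaling problem entirely is to allocate direct ByteBuffers for the outputs, since a Java.Nio.ByteBuffer is already a Java.Lang.Object. Here is a minimal sketch of that idea using the shapes assumed above ([1, 64, 4], [1, 64], [1, 64]); it is not necessarily the exact fix from the linked issue:
// Sketch: pass direct, native-ordered ByteBuffers as outputs instead of
// managed jagged arrays (requires: using Java.Nio;).
var boxesBuffer = ByteBuffer.AllocateDirect(1 * numDetections * 4 * sizeof(float)).Order(ByteOrder.NativeOrder());
var classesBuffer = ByteBuffer.AllocateDirect(1 * numDetections * sizeof(float)).Order(ByteOrder.NativeOrder());
var scoresBuffer = ByteBuffer.AllocateDirect(1 * numDetections * sizeof(float)).Order(ByteOrder.NativeOrder());
var outputs = new Dictionary<Java.Lang.Integer, Java.Lang.Object>
{
    { new Java.Lang.Integer(detectedBoxesOutputIndex), boxesBuffer },
    { new Java.Lang.Integer(detectedClassesOutputIndex), classesBuffer },
    { new Java.Lang.Integer(detectedScoresOutputIndex), scoresBuffer },
};
Interpreter.RunForMultipleInputsOutputs(new Java.Lang.Object[] { imageByteBuffer }, outputs);
// Read the results back, e.g. the score of the first detection:
scoresBuffer.Rewind();
float firstScore = scoresBuffer.GetFloat(0);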
As a newbie to Google Earth Engine, I have been trying something (https://code.earthengine.google.com/6f45059a59b75757c88ce2d3869fc9fd) following a NASA tutorial (https://www.youtube.com/watch?v=JFvxudueT_k&ab_channel=NASAVideo). My last line (line 60) reports "image.filter is not a function", while the corresponding line in the tutorial (line 34) works. I am not sure what happened or how to sort this out.
//creating a new variable 'image' from the L8 collection data imported
var image = ee.Image (L8_tier1 //the details in the data indicate that the band resolution is 30m
//.filterDate ("2019-07-01","2021-10-03") //for a specific date range; maybe good to remove it for the function
.filterBounds (ROI) //for the region of interest we are interested in
//.sort ("COLUD_COVER") //for sorting the data between the range with a cloud cover, the metadata property we are interested in. Other way to do this is using the function below.
//.first() //this will choose the first image, the one with the least cloud cover for the area. Another way to do this is using the function below.
);
//print ("Hague and Rotterdam", image); //printing the image in the console
//console on the right hand side will explain everything from the data
//id will show the image details and the date of the image, in this case 29th July 2019
//under the properties tab cloud cover can be found, this is the least we can get for this area during this period
// //visualisation of the data on the map with true color rendering
// var trueColour = {
// bands:["SR_B4","SR_B3","SR_B2"],
// min: 5000,
// max: 12000
// };
// Map.centerObject (ROI, 12); //centers the area of interest on the map with the required zoom level
// Map.addLayer (image, trueColour, "Hague and Rotterdam"); //for adding the image with the variable of bands we made and naming the image
//Alternate way
//Function to mask clouds using the qa_pixel band of Landsat 8 SR data. In this case bits 3 and 4 are cloud and cloud shadow, respectively. This can be different for different image sets.
function maskL8sr(image) {
var cloudsBitMask = 1 << 3; //remember to check this with the source
var cloudshadowBitMask = 1 << 4; //remember to check this with the source
var qa = image.select ('qa_pixel'); //creating the new variable from the band of the source image
var mask = qa.bitwiseAnd(cloudsBitMask).eq(0) //making the cloud equal to zero to mask them out
.and(qa.bitwiseAnd(cloudshadowBitMask).eq(0)); //making the cloud shadow equal to zero to mask them out
return image.updateMask(mask).divide(10000)
.select("SR_B[0-9]*")
.copyProperties(image, ["system:time_start"]);
}
// print ("Hague and Rotterdam", image);// look into the console now. How many images the code have downloaded!!!
//filtering imagery for 2015 to 2021 summer date ranges
//creating joint filter and applying to image collection
var sum21 = ee.Filter.date ('2021-06-01','2021-09-30');
var sum20 = ee.Filter.date ('2020-06-01','2020-09-30');
var sum19 = ee.Filter.date ('2019-06-01','2019-09-30');
var sum18 = ee.Filter.date ('2018-06-01','2018-09-30');
var sum17 = ee.Filter.date ('2017-06-01','2017-09-30');
var sum16 = ee.Filter.date ('2016-06-01','2016-09-30');
var sum15 = ee.Filter.date ('2015-06-01','2015-09-30');
var SumFilter = ee.Filter.or(sum21, sum20, sum19, sum18, sum17, sum16, sum15);
var allsum = image.filter(SumFilter);
Filtering is an operation you can do on ImageCollections, not individual Images, because all filtering does is choose a subset of the images. Then, in your script, you have (with the comments removed):
var image = ee.Image (L8_tier1
.filterBounds (ROI)
);
The result of L8_tier1.filterBounds(ROI) is indeed an ImageCollection. But in this case, you have told the Earth Engine client that it should be treated as an Image, and it believed you. So the last line
var allsum = image.filter(SumFilter);
fails with the error you saw because there is no filter() on ee.Image.
The script will successfully run if you change ee.Image(...) to ee.ImageCollection(...), or even better, remove the cast because it's not necessary — that is,
var image = L8_tier1.filterBounds(ROI);
You should probably also change the name of var image too, since it is confusing to call an ImageCollection by the name image. Naming things accurately helps avoid mistakes, while you are working on the code and also when others try to read it or build on it.
I am trying to do some joint tracking with the Kinect (just putting an ellipse inside my right hand). Everything works fine for the default 640x480 image; I based my work on this Channel 9 video.
My code, updated to use the new CoordinateMapper class, is here:
...
CoordinateMapper cm = new CoordinateMapper(this.KinectSensorManager.KinectSensor);
ColorImagePoint handColorPoint = cm.MapSkeletonPointToColorPoint(atualSkeleton.Joints[JointType.HandRight].Position, ColorImageFormat.RgbResolution640x480Fps30);
Canvas.SetLeft(elipseHead, (handColorPoint.X) - (elipseHead.Width / 2)); // center the ellipse on the joint
Canvas.SetTop(elipseHead, (handColorPoint.Y) - (elipseHead.Height / 2));
This works. The question is:
How do I do joint tracking on a scaled image, for example 540x380?
The solution is pretty simple; I figured it out.
What I needed to do was find a factor to apply to the position.
This factor can be found by taking the actual ColorImageFormat of the Kinect and dividing it by the desired size. For example:
Let's say I am working with the RgbResolution640x480Fps30 format and my image (ColorViewer) is 220x240. First, find the factor for X:
double factorX = 640.0 / 220.0; // the factor is 2.90909090...
And the factor for Y:
double factorY = 480.0 / 240.0; // the factor is 2...
Now I adjust the position of the ellipse using these factors:
Canvas.SetLeft(elipseHead, (handColorPoint.X / factorX) - (elipseHead.Width / 2));
Canvas.SetTop(elipseHead, (handColorPoint.Y / factorY) - (elipseHead.Height / 2));
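If the target size isn't known ahead of time, the factors can be computed from the control itself. A minimal sketch, assuming ColorViewer is the WPF Image control showing the scaled color stream:
// Derive the scale factors from the actual rendered size of the control
// instead of hardcoding 220x240.
double factorX = 640.0 / ColorViewer.ActualWidth;
double factorY = 480.0 / ColorViewer.ActualHeight;
Canvas.SetLeft(elipseHead, (handColorPoint.X / factorX) - (elipseHead.Width / 2));
Canvas.SetTop(elipseHead, (handColorPoint.Y / factorY) - (elipseHead.Height / 2));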
I've not used the CoordinateMapper yet, and I am not in front of my Kinect at the moment, so I'll toss this out first. I'll see about an update when I get working with the Kinect again.
The Coding4Fun Kinect Toolkit has a ScaleTo extension as part of the library. This adds the ability to take a joint and scale it to any display resolution.
The scaling function looks like this:
private static float Scale(int maxPixel, float maxSkeleton, float position)
{
    // Map a skeleton coordinate (about -maxSkeleton..+maxSkeleton, centered on 0)
    // to a pixel coordinate (0..maxPixel, centered on maxPixel / 2).
    float value = ((((maxPixel / maxSkeleton) / 2) * position) + (maxPixel / 2));
    // Clamp to the visible area.
    if (value > maxPixel)
        return maxPixel;
    if (value < 0)
        return 0;
    return value;
}
maxPixel = the width or height, depending on which coordinate you're scaling.
maxSkeleton = set this to 1.
position = the X or Y coordinate of the joint you want to scale.
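For example, with maxPixel = 640 and maxSkeleton = 1, a joint at X = 0 (straight ahead of the sensor) maps to pixel 320, X = 1 maps to 640, and X = -1 maps to 0. Note the negated Y in the call below: skeleton Y grows upward while screen Y grows downward.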
If you were to just include the above function you could call it like so:
Canvas.SetLeft(e, Scale(640, 1, joint.Position.X));
Canvas.SetTop(e, Scale(480, 1, -joint.Position.Y));
... replacing your 640 & 480 with a different scale.
If you include the Coding4Fun Kinect Toolkit, instead of re-writing code, you could just call it like so:
scaledJoint = rawJoint.ScaleTo(640, 480);
... then plug in what you need.
I have an application in Flex 4 with a map, a database of points, and a search tool.
When the user types something and runs the search, it returns the name, details, and coordinates of the matching objects in my database.
I have a function that, when I click one of the search results, zooms to the selected point on the map.
The question is: I want a function that zooms to all the result points at once. For example, if I search "tall trees" and it returns 10 points, I want the map to zoom to a position where I can see all 10 points at once.
Below is the code I'm using to zoom to one point at a time. I thought Flex would have some kind of "zoom to group of points" function, but I can't find anything like it.
private function ResultDG_Click(event:ListEvent):void
{
if (event.rowIndex < 0) return;
var obj:Object = ResultDG.selectedItem;
if (lastIdentifyResultGraphic != null)
{
graphicsLayer.remove(lastIdentifyResultGraphic);
}
if (obj != null)
{
lastIdentifyResultGraphic = obj.graphic as Graphic;
switch (lastIdentifyResultGraphic.geometry.type)
{
case Geometry.MAPPOINT:
lastIdentifyResultGraphic.symbol = objPointSymbol;
var point:MapPoint = lastIdentifyResultGraphic.geometry as MapPoint;
_map.extent = new Extent(point.x - 0.05, point.y - 0.05,
    point.x + 0.05, point.y + 0.05,
    new SpatialReference(29101)).expand(0.001);
break;
case Geometry.POLYLINE:
lastIdentifyResultGraphic.symbol = objPolyLineSymbol;
_map.extent = lastIdentifyResultGraphic.geometry.extent.expand(0.001);
break;
case Geometry.POLYGON:
lastIdentifyResultGraphic.symbol = objPolygonSymbol;
_map.extent = lastIdentifyResultGraphic.geometry.extent.expand(0.001);
break;
}
graphicsLayer.add(lastIdentifyResultGraphic);
}
}
See the GraphicUtil class from the com.esri.ags.utils package. You can use its getGraphicsExtent method to generate an Extent from an array of Graphics, and then use that extent to set the visible area of your map:
var graphics:ArrayCollection = graphicsLayer.graphicProvider as ArrayCollection;
var graphicsArr:Array = graphics.toArray();
// Create an extent from the currently selected graphics
var uExtent:Extent;
uExtent = GraphicUtil.getGraphicsExtent(graphicsArr);
// Zoom to extent created
if (uExtent)
{
map.extent = uExtent;
}
In this case, it will zoom to the full content of your graphics layer. You can always create an array containing only the features you want to zoom to. If you find that the zoom is too close to your data, you can also call map.zoomOut() after setting the extent.
Note: be careful if you've got TextSymbols in your graphics; they will break GraphicUtil. In that case you need to filter the graphics with TextSymbols out of the array before calling getGraphicsExtent.
Derp: I didn't see that the thread was 5 months old... I hope my answer helps other people.
I am following the code at http://www.gisdoctor.com/v3/mapserver.html to overlay a WMS as an image on Google Maps using API v3. The JS code at the above link is as follows:
"WMSGetTileUrl" : function(tile, zoom) {
var projection = map.getProjection();
var zpow = Math.pow(2, zoom);
var ul = new google.maps.Point(
tile.x * 256.0 / zpow,
(tile.y + 1) * 256.0 / zpow
);
var lr = new google.maps.Point(
(tile.x + 1) * 256.0 / zpow,
tile.y * 256.0 / zpow
);
var ulw = projection.fromPointToLatLng(ul);
var lrw = projection.fromPointToLatLng(lr);
var bbox = ulw.lng() + "," + ulw.lat() + "," + lrw.lng() + "," + lrw.lat();
return "http://url/to/mapserver?" +
"version=1.1.1&" +
"request=GetMap&" +
"Styles=default&" +
"SRS=EPSG:4326&" +
"Layers=wmsLayers&" +
"BBOX=" + bbox + "&" +
"width=256&" +
"height=256&" +
"format=image/png&" +
"TRANSPARENT=TRUE";
},
"addWmsLayer" : function() {
/*
Creating the WMS layer options. This code creates the Google
ImageMapType options for each WMS layer; the options include the
function that builds the tile request for the individual WMS layer.
*/
var wmsOptions = {
alt: "MapServer Layers",
getTileUrl: WMSGetTileUrl,
isPng: false,
maxZoom: 17,
minZoom: 1,
name: "MapServer Layer",
tileSize: new google.maps.Size(256, 256)
};
/*
Creating the object to create the ImageMapType that will call the WMS
Layer Options.
*/
wmsMapType = new google.maps.ImageMapType(wmsOptions);
map.overlayMapTypes.insertAt(0, wmsMapType);
},
Everything works fine but, of course, the WMS imagery is returned as 256x256 tiles. No surprise, since that is what I requested. However, following the discussion at http://groups.google.com/group/google-maps-js-api-v3/browse_thread/thread/c22837333f9a1812/d410a2a453025b38 it seems I might be better off requesting an untiled (single) image from MapServer. This would tax my server less. In any case, I would like to experiment with a single image, but I am unable to construct a request for one properly.
Specifically, I changed the tile size to something large, for example 1024x1024. I did get fewer tiles, but the returned images didn't line up with the Google Maps base layer boundaries.
What I would like is to not specify the tile size at all. Instead, I would dynamically set the tile size to be, say, 256 pixels bigger than the current map size. That way a single image would be returned no matter what the map size, and seamless panning could be implemented with the help of the extra 256 px along the map edges.
Suggestions?
1) It works for me with 512 and 1024 as well. You just have to make the change in all the appropriate places: the 256.0 constants in the corner coordinate computation of WMSGetTileUrl, the tileSize in wmsOptions, and the width/height WMS parameters. Any tile size should be possible if you handle it correctly everywhere in the code.
2) I think you are trying to optimize in the wrong place. The 256x256 tiles have been established for a reason: Google caches the tiles, so with larger tiles the whole solution might be much slower, and with the standard size the map loads faster for the user.
3) The tile concept is the right thing and should be preserved. Things like receiving the whole map as one image wouldn't work well; imagine panning the map. Don't try to reinvent the wheel; stick with the current Google tile concept, which is optimized for this type of application. If you need to receive the map as a single image, you can use the static Google Maps API.
See this same question I just answered for someone else. It's likely to be your projection: EPSG:4326 is not the correct projection for Google Maps, which uses spherical Mercator (EPSG:3857, historically EPSG:900913). Changing the projection means you need to change the way the corner coordinates are calculated and referenced.
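For reference, if you instead request the tiles in spherical Mercator, the BBOX must be expressed in Mercator meters. The standard conversion from WGS84 longitude/latitude (in radians) is:
x = 6378137 * lon
y = 6378137 * ln(tan(pi/4 + lat/2))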