getScaleFromZoomLevel equivalent in Android Here-lite SDK - here-api

I want to switch from the HERE Android SDK (Starter/Premium edition) to the new Lite SDK. The previous libraries provided the function
public double getScaleFromZoomLevel(double level)
How would I get such a scale value in the HERE Lite edition?

Yes, I agree with Datasun. Since in your previous version the scale gives the geo distance in cm per display inch, my proposal for a function in the Lite API would be something like:
DisplayMetrics appMetrics = new DisplayMetrics();
map.getDisplay().getMetrics(appMetrics);
// geo position of the lower-left corner of the visible area
GeoCoordinates southWest = map.getCamera().getBoundingRect().southWestCorner;
Point2D lowerLeft = map.getCamera().geoToViewCoordinates(southWest);
// number of screen pixels in one display inch
float screenPixelsPerInch = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_IN, 1, appMetrics);
// reference point one display inch above the lower-left corner, so it stays inside the view
Point2D yRef = new Point2D(lowerLeft.x, lowerLeft.y - screenPixelsPerInch);
GeoCoordinates yRefGeo = map.getCamera().viewToGeoCoordinates(yRef);
// distanceTo() returns metres, so convert to cm per display inch
return yRefGeo.distanceTo(southWest) * 100;

Related

How to create the output object for Xamarin.TensorFlow.Lite Interpreter.RunForMultipleInputsOutputs() when the model is an object detection type

I am stuck trying to figure out the magic Object for working with Xamarin.TensorFlow.Lite Interpreter.RunForMultipleInputsOutputs().
My object detection model (exported from Azure Custom Vision) outputs the following:
detected_boxes – The detected bounding boxes. Each bounding box is represented as [x1, y1, x2, y2], where (x1, y1) and (x2, y2) are the coordinates of the box corners.
detected_scores – Probability for each detected box.
detected_classes – The class index for each detected box.
I know the output needs to fit something like the format described in the TensorFlow Lite docs (except for the number of detections).
Most of the examples I have found only work with image classification models, which have just one output: blog1 blog2. However, I did find an issue describing the exact same problem I am having: https://github.com/xamarin/XamarinComponents/issues/565
I tried following the TensorFlow API and the example code from the GitHub issue (appreciated the sharing of that), but it still crashes into oblivion (Object reference not set to an instance of an object).
int detectedBoxesOutputIndex = Interpreter.GetOutputIndex("detected_boxes"); // 0
int detectedClassesOutputIndex = Interpreter.GetOutputIndex("detected_classes"); // 1
int detectedScoresOutputIndex = Interpreter.GetOutputIndex("detected_scores"); // 2
int numDetections = Interpreter
.GetOutputTensor(detectedClassesOutputIndex)
.NumElements();
var outputDict = new Dictionary<Java.Lang.Integer, Java.Lang.Object>();
int batchSize = 1;
// new float [][][]
_OutputBoxes = CreateJaggedArray(batchSize, numDetections, 4);
_OutputClasses = CreateJaggedArray(batchSize, numDetections);
_OutputScores = CreateJaggedArray(batchSize, numDetections);
var mOutputBoxes = OutputBoxes;
var mOutputClasses = OutputClasses;
var mOutputScores = OutputScores;
Java.Lang.Object[] inputArray = { imageByteBuffer };
var outputMap = new Dictionary<Java.Lang.Integer, Java.Lang.Object>();
outputMap.Add(new Java.Lang.Integer(detectedBoxesOutputIndex), mOutputBoxes);
outputMap.Add(new Java.Lang.Integer(detectedClassesOutputIndex), mOutputClasses);
outputMap.Add(new Java.Lang.Integer(detectedScoresOutputIndex), mOutputScores);
//stuck here
Interpreter.RunForMultipleInputsOutputs(inputArray, outputMap);
I can get the output of the first tensor via Interpreter.Run(img,output) where output is a float[64][4], so I know the model and all the image byte buffer prep is done correctly.
My Xamarin Forms Project: https://github.com/twofingerrightclick/GardenDefenseSystem/blob/master/GardenDefenseSystem/GardenDefenseSystem.Android/TensorflowObjectDetector.cs
I created an issue on GitHub and solved the problem there in detail: https://github.com/xamarin/GooglePlayServicesComponents/issues/668
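The detailed fix is in that issue. As a rough alternative sketch (not the code from the issue), the TensorFlow Lite interpreter also accepts a direct ByteBuffer per output tensor, which sidesteps building Java-compatible jagged arrays on the C# side entirely. The identifiers (Interpreter, the output indices, imageByteBuffer) are the ones from the snippet above; the rest is an assumption about how the Xamarin binding exposes the underlying Java API:
// Sketch only: one direct ByteBuffer per output tensor, 4 bytes per float.
// Shapes assumed from the model description: [1, N, 4], [1, N] and [1, N].
int n = Interpreter.GetOutputTensor(detectedClassesOutputIndex).NumElements();
var boxesBuffer = Java.Nio.ByteBuffer.AllocateDirect(n * 4 * sizeof(float)).Order(Java.Nio.ByteOrder.NativeOrder());
var classesBuffer = Java.Nio.ByteBuffer.AllocateDirect(n * sizeof(float)).Order(Java.Nio.ByteOrder.NativeOrder());
var scoresBuffer = Java.Nio.ByteBuffer.AllocateDirect(n * sizeof(float)).Order(Java.Nio.ByteOrder.NativeOrder());
var outputMap = new Dictionary<Java.Lang.Integer, Java.Lang.Object>
{
    { new Java.Lang.Integer(detectedBoxesOutputIndex), boxesBuffer },
    { new Java.Lang.Integer(detectedClassesOutputIndex), classesBuffer },
    { new Java.Lang.Integer(detectedScoresOutputIndex), scoresBuffer },
};
Interpreter.RunForMultipleInputsOutputs(new Java.Lang.Object[] { imageByteBuffer }, outputMap);
// The interpreter fills the buffers in place; read them back as floats.
scoresBuffer.Rewind();
var scores = new float[n];
scoresBuffer.AsFloatBuffer().Get(scores);
The box coordinates for detection i are then the four consecutive floats starting at offset i * 4 when boxesBuffer is read the same way.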

How to add a FontAwesome icon as a layer in GeoTools using JavaFX (desktop app)?

I am using Java 17 and I'm unable to add icons to the map as a layer. Please help me.
void drawTarget(double x, double y) {
SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
builder.setName("MyFeatureType");
builder.setCRS( DefaultGeographicCRS.WGS84 ); // set crs
builder.add("location", LineString.class); // add geometry
// build the type
SimpleFeatureType TYPE = builder.buildFeatureType();
// create features using the type defined
SimpleFeatureBuilder featureBuilder = new SimpleFeatureBuilder(TYPE);
// GeometryFactory geometryFactory = JTSFactoryFinder.getGeometryFactory();
// Coordinate[] coords =
// new Coordinate[] {new Coordinate(79,25.00), new Coordinate(x, y)};
// line = geometryFactory.createLineString(coords);
// ln = new javafx.scene.shape.Line();
FontAwesomeIcon faico = new FontAwesomeIcon();
faico.setIconName("FIGHTER_JET");
faico.setX(76);
faico.setY(25);
faico.setVisible(true);
// TranslateTransition trans = new TranslateTransition();
// trans.setNode(faico);
featureBuilder.add(faico);
SimpleFeature feature = featureBuilder.buildFeature("FeaturePoint");
DefaultFeatureCollection featureCollection = new DefaultFeatureCollection("external", TYPE);
featureCollection.add(feature); // Add feature 1, 2, 3, etc
Style style5 = SLD.createLineStyle(Color.YELLOW, 2f);
Layer layer5 = new FeatureLayer(featureCollection, style5);
map.addLayer(layer5);
// mapFrame.getMapPane().repaint();
}
I want to add a font-awesome icon to the map
Currently, your code is attempting to use an Icon as a Geometry in your feature. I'm guessing that's what isn't working since you don't say.
If you want to use an Icon to display the location of a Feature, then you will need two things:
1. A valid geometry in your feature, probably a point (since an Icon is normally a point).
2. A valid Style to be used by the Renderer to draw your feature(s) on the map. Currently, you are asking for the line in your feature to be drawn as a yellow line (style5 = SLD.createLineStyle(Color.YELLOW, 2f);).
I can't really help with step 1, since I don't know where your fighter jet currently is.
For step 2, I suggest you look at the SLD resources to get some clues about how the styling system works before going on to the manual to see how GeoTools implements it.
Since you are trying to add an Icon, I suspect you'd need something like:
// sf is a StyleFactory and ff a FilterFactory (e.g. obtained from CommonFactoryFinder);
// svg and png are URLs of static images, and rule1/featureTypeStyle are the Rule and
// FeatureTypeStyle that the point symbolizer is added to.
List<GraphicalSymbol> symbols = new ArrayList<>();
symbols.add(sf.externalGraphic(svg, "svg", null)); // svg preferred
symbols.add(sf.externalGraphic(png, "png", null)); // png preferred
symbols.add(sf.mark(ff.literal("circle"), fill, stroke)); // simple circle backup plan
Expression opacity = null; // use default
Expression size = ff.literal(10);
Expression rotation = null; // use default
AnchorPoint anchor = null; // use default
Displacement displacement = null; // use default
// define a point symbolizer of a small circle
Graphic city = sf.graphic(symbols, opacity, size, rotation, anchor, displacement);
PointSymbolizer pointSymbolizer =
sf.pointSymbolizer("point", ff.property("the_geom"), null, null, city);
rule1.symbolizers().add(pointSymbolizer);
featureTypeStyle.rules().add(rule1);
But that assumes that you can convert your FontAwesomeIcon into a static representation that the renderer can draw (png, svg). If it doesn't work like that (I don't use JavaFX) then you may need to add a new MarkFactory to handle them.

How to generate a map at a given scale [Bing Maps]?

I recently started working with the Bing API in my web service (WCF) in C#.
I would like to retrieve a satellite image at a given scale from Bing.
For example:
Scale 1:200 (1 centimeter on the map equals 200 centimeters in the world)
Of course, I found this formula that explains how to calculate the resolution of a Bing satellite image, but it is not quite what I'm looking for:
Map resolution = 156543.04 meters/pixel * cos(latitude) / (2 ^ zoomlevel)
Here is the function I use to generate my Bing map, but I do not know what parameters to send to retrieve an image at a scale of 1:200.
I need:
Scale = 1:200
I am looking for:
int mapSizeHeight = ?
int mapSizeWidth = ?
int zoomLevel = ?
public string GetImageMap(double latitude,double longitude,int mapSizeHeight, int mapSizeWidth, int zoomLevel)
{
string key = "ddsaAaasm5vwsdfsfd2ySYBxfEFsdfsdfcFh6iUO5GI4v";
MapUriRequest mapUriRequest = new MapUriRequest();
// Set credentials using a valid Bing Maps key
mapUriRequest.Credentials = new ImageryService.Credentials();
mapUriRequest.Credentials.ApplicationId = key;
// Set the location of the requested image
mapUriRequest.Center = new ImageryService.Location();
mapUriRequest.Center.Latitude = latitude;
mapUriRequest.Center.Longitude = longitude;
// Set the map style and zoom level
MapUriOptions mapUriOptions = new MapUriOptions();
mapUriOptions.Style = MapStyle.Aerial;
mapUriOptions.ZoomLevel = zoomLevel;
mapUriOptions.PreventIconCollision = true;
// Set the size of the requested image in pixels
mapUriOptions.ImageSize = new ImageryService.SizeOfint();
mapUriOptions.ImageSize.Height = mapSizeHeight;
mapUriOptions.ImageSize.Width = mapSizeWidth;
mapUriRequest.Options = mapUriOptions;
//Make the request and return the URI
ImageryServiceClient imageryService = new ImageryServiceClient();
MapUriResponse mapUriResponse = imageryService.GetMapUri(mapUriRequest);
return mapUriResponse.Uri;
}
If you haven't already, you might want to check out this article on the Bing Maps tile system calculations, within you will find a section discussing ground resolution and map scale. From that article:
map scale = 1 : ground resolution * screen dpi / 0.0254 meters/inch
Depending on which implementation of Bing Maps you use, specifying the view via a precise map scale might not be possible. I think this is because you don't have precise control over the zoom level. For example, in the JavaScript AJAX version you can only specify zoom levels as integer values, so the ground resolution part of the above equation jumps in discrete steps. At the equator, a zoom level of 21 gives you a scale of 1:282, and a zoom level of 22 gives you 1:141. Since you can't specify a decimal value for the zoom level, it is not possible to get an exact 1:200 scale using the AJAX control. I don't have extensive experience with the .NET Bing Maps control, so you might want to investigate that API to see if you can specify an arbitrary zoom level.
If you can precisely control the zoom level and know the dpi value, then the 1:200 scale is achievable using the equation described in the above linked article.
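To make that concrete, here is a small sketch that inverts the two formulas quoted above to find the (fractional) zoom level for a requested scale. The method name and the 96 dpi default are my own assumptions, not part of the Imagery Service API:
// ground resolution (m/pixel) = 156543.04 * cos(latitude) / 2^zoom
// scale denominator           = ground resolution * dpi / 0.0254
// => 2^zoom = 156543.04 * cos(latitude) * dpi / (0.0254 * scaleDenominator)
public static double ZoomLevelForScale(double latitudeDegrees, double scaleDenominator, double dpi = 96)
{
    double latitudeRadians = latitudeDegrees * Math.PI / 180.0;
    return Math.Log(156543.04 * Math.Cos(latitudeRadians) * dpi / (0.0254 * scaleDenominator), 2);
}
At the equator, ZoomLevelForScale(0, 200) is roughly 21.5, which is why the integer zoom levels 21 and 22 bracket the 1:200 target at about 1:282 and 1:141. If your control only accepts integer zoom levels, you will have to round this value and accept the nearest available scale.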

Missing StrokeThickness on MapPolyline of Bing.Maps Windows 8

I use a polyline to draw a circle on Bing Maps (Metro app), but it appears aliased on the map; it's not smooth.
I think it does not have a StrokeThickness.
How can I solve this?
Thanks
MapShapeLayer shapeLayer = new MapShapeLayer();
MapPolyline polyline = new MapPolyline();
polyline.Locations = DrawMapsCircle(location, 1000);
polyline.Color = Windows.UI.Colors.Red;
polyline.Width = 1;
shapeLayer.Shapes.Add(polyline);
maps.ShapeLayers.Add(shapeLayer);
Try increasing the width of the polyline:
polyline.Width = 5;

Find which tiles are currently visible in the viewport of a Google Maps v3 map

I am trying to build support for tiled vector data into some of our Google Maps v3 web maps, and I'm having a hard time figuring out how to find out which 256 x 256 tiles are visible in the current map viewport. I know that the information needed to figure this out is available if you create a google.maps.ImageMapType like here: Replacing GTileLayer in Google Maps v3, with ImageMapType, Tile bounding box?, but I'm obviously not doing this to bring in traditional pre-rendered map tiles.
So, a two part question:
What is the best way to find out which tiles are visible in the current viewport?
Once I have this information, what is the best way to go about converting it into lat/lng bounding boxes that can be used to request the necessary data? I know I could store this information on the server, but if there is an easy way to convert on the client it would be nice.
Here's what I came up with, with help from the documentation (http://code.google.com/apis/maps/documentation/javascript/maptypes.html, especially the "Map Coordinates" section) and a number of different sources:
function loadData() {
var bounds = map.getBounds(),
boundingBoxes = [],
boundsNeLatLng = bounds.getNorthEast(),
boundsSwLatLng = bounds.getSouthWest(),
boundsNwLatLng = new google.maps.LatLng(boundsNeLatLng.lat(), boundsSwLatLng.lng()),
boundsSeLatLng = new google.maps.LatLng(boundsSwLatLng.lat(), boundsNeLatLng.lng()),
zoom = map.getZoom(),
tiles = [],
tileCoordinateNw = pointToTile(boundsNwLatLng, zoom),
tileCoordinateSe = pointToTile(boundsSeLatLng, zoom),
tileColumns = tileCoordinateSe.x - tileCoordinateNw.x + 1,
tileRows = tileCoordinateSe.y - tileCoordinateNw.y + 1,
zfactor = Math.pow(2, zoom),
minX = tileCoordinateNw.x,
minY = tileCoordinateNw.y;
while (tileRows--) {
while (tileColumns--) {
tiles.push({
x: minX + tileColumns,
y: minY
});
}
minY++;
tileColumns = tileCoordinateSe.x - tileCoordinateNw.x + 1;
}
$.each(tiles, function(i, v) {
boundingBoxes.push({
ne: projection.fromPointToLatLng(new google.maps.Point(v.x * 256 / zfactor, v.y * 256 / zfactor)),
sw: projection.fromPointToLatLng(new google.maps.Point((v.x + 1) * 256 / zfactor, (v.y + 1) * 256 / zfactor))
});
});
$.each(boundingBoxes, function(i, v) {
var poly = new google.maps.Polygon({
map: map,
paths: [
v.ne,
new google.maps.LatLng(v.sw.lat(), v.ne.lng()),
v.sw,
new google.maps.LatLng(v.ne.lat(), v.sw.lng())
]
});
polygons.push(poly);
});
}
// MercatorProjection and MERCATOR_RANGE (256, the tile size in pixels) come from the
// "Showing Pixel and Tile Coordinates" example in the Google Maps documentation.
function pointToTile(latLng, z) {
var projection = new MercatorProjection();
var worldCoordinate = projection.fromLatLngToPoint(latLng);
var pixelCoordinate = new google.maps.Point(worldCoordinate.x * Math.pow(2, z), worldCoordinate.y * Math.pow(2, z));
var tileCoordinate = new google.maps.Point(Math.floor(pixelCoordinate.x / MERCATOR_RANGE), Math.floor(pixelCoordinate.y / MERCATOR_RANGE));
return tileCoordinate;
};
An explanation: basically, every time the map is panned or zoomed, I call the loadData function. This function calculates which tiles are in the map view, then iterates through the tiles that are already loaded and deletes the ones that are no longer in view (I took this portion of the code out, so you won't see it above). I then use the LatLngBounds stored in the boundingBoxes array to request data from the server.
Hope this helps someone else...
For more recent readers, you can get the tile coordinates from the sample code on this page of the Google Maps JavaScript API documentation:
Showing Pixel and Tile Coordinates
