Retrieval of Area Shapes - Limit Resolution - HERE Maps API

Is there a way to limit the size/resolution when retrieving area shapes from the here.com API?
I love the highly detailed resolution, but unfortunately the size of the shapes is killing my application performance and taking forever to load on the map.

HERE currently doesn't provide any means to limit the resolution of the WKTShapeResponse. However, you can limit the resolution at your end by writing an algorithm to smooth the points before rendering them on the map.
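For example, one common way to do that smoothing is the Ramer-Douglas-Peucker line simplification algorithm. A minimal Python sketch, where the (lat, lng) tuple format and the tolerance value are assumptions you would adapt to your data:

```python
import math

def rdp(points, tolerance):
    """Ramer-Douglas-Peucker: drop points that deviate less than `tolerance`
    from the straight line between the first and last point of the segment."""
    if len(points) < 3:
        return list(points)

    (x1, y1), (x2, y2) = points[0], points[-1]
    seg_len = math.hypot(x2 - x1, y2 - y1)

    # Find the point with the maximum perpendicular distance to the chord.
    max_dist, max_index = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        if seg_len == 0:
            dist = math.hypot(x0 - x1, y0 - y1)
        else:
            dist = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / seg_len
        if dist > max_dist:
            max_dist, max_index = dist, i

    if max_dist > tolerance:
        # Keep that point and recurse on both halves.
        left = rdp(points[:max_index + 1], tolerance)
        right = rdp(points[max_index:], tolerance)
        return left[:-1] + right
    # Everything in between is close enough to the straight line: drop it.
    return [points[0], points[-1]]

# Hypothetical usage: `shape` is the list of (lat, lng) pairs parsed from the
# WKTShapeResponse; 0.001 degrees is an arbitrary starting tolerance.
# simplified = rdp(shape, tolerance=0.001)
```

Increase the tolerance until the shape is small enough to render smoothly while still looking acceptable at your typical zoom levels.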

Related

Proximity Distortion in Depth Image

Description:
The goal of my current project is to determine the location of an "object" with just its 3D-coordinates.
To achieve that, I figured it'd be best to turn off the "Fill" mode of my camera (ZED 2 from Stereolabs), because I want some hard edges in my depth image.
The Problem:
The depth image is distorted to a major degree by the proximity of other "objects".
The following image shows the depth image from the side; it is viewing some bars in front of a smooth wooden wall. The wall is mostly plain, so everything is fine there.
I blacked out the color image and myself; do not worry about those parts.
When I put my hand or another object in front of the wooden wall, parts that are bigger than my actual hand get "pulled" towards the camera around the location of the hand or other object. These parts seem to "stick" to other elevated parts nearby, as the area between the bars and my arm gets pulled entirely.
Question(s):
Is this normal?
Is there an easy way to get rid of it?
What is the reason behind it?
My own assumption(s):
I feel like this is some sort of approximation of unknown parts.
Hopefully. I'm glad the camera was calibrated by default, as that is usually a pain to do right.
Because the new object placed in front of the wall hides more of the scene, there are more areas that the camera cannot see with both lenses; maybe it just "guesses" that the area in between is not so far off, due to some underlying algorithms that make the image smoother.
First of all, I would advise you to change the depth mode while keeping the sensing mode in STANDARD (a code sketch follows at the end of this answer):
ULTRA: offers the highest depth range and better preserves Z-accuracy along the sensing range.
QUALITY: has a strong filtering stage giving smooth surfaces.
PERFORMANCE: designed to be smooth, can miss some details.
From your description, it seems like you are using the PERFORMANCE mode.
The ZED camera uses a matching algorithm to generate the disparity/depth map, which is closed source. I recently contacted Stereolabs about it and they said: "We cannot disclose this information to you because it's internal information and proprietary to Stereolabs."
Other works on the ZED camera have shown some limitations in depth sensing, especially when there is variation in lighting and shadows (see "Depth Data Error Modeling of the ZED 3D Vision Sensor from Stereolabs").
In addition to this, the depth error increases with the distance of the object from the camera, so make sure to set your depth range properly.
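A minimal sketch of those settings with the ZED Python SDK (pyzed); the exact enum and attribute names can differ slightly between SDK versions, and the depth range values below are placeholders to adapt to your scene:

```python
import pyzed.sl as sl

# Open the camera with ULTRA depth mode and an explicit depth range.
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.ULTRA     # instead of PERFORMANCE
init_params.coordinate_units = sl.UNIT.METER
init_params.depth_minimum_distance = 0.4         # placeholder values:
init_params.depth_maximum_distance = 5.0         # match your actual working range

zed = sl.Camera()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open ZED camera")

# Keep the sensing mode STANDARD so occluded areas stay unfilled (hard edges).
runtime_params = sl.RuntimeParameters()
runtime_params.sensing_mode = sl.SENSING_MODE.STANDARD

depth = sl.Mat()
if zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)  # depth map in meters

zed.close()
```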

HERE API speed limit for trucks

We're currently using the reverse geocoder to get the speed limit on the road at a specified lat/long position. However, that speed limit is always for cars, and we have to consider trucks as well. Does anyone know a way to get the speed limit meant for trucks?
This is offered in the HERE "Platform Data Extension" service: https://developer.here.com/documentation/platform-data/topics/introduction.html
A general speed limit example for PHP can be found here (you will probably have to adapt the layer to the right one for truck speed limits instead of car speed limits):
https://github.com/seaBass3/here-pde-speed-limit/blob/master/Here_PDE_Demo.php
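For illustration only, a rough Python sketch of the same PDE tile request pattern that the PHP demo uses. The endpoint shape, the tile math, and especially the layer name (`TRUCK_SPEED_LIMITS_FC2` below) are assumptions; check the PDE layer catalog for the exact truck speed limit layer, level, and attributes before relying on this:

```python
import requests

# Assumed PDE tile endpoint and layer name; verify both against the
# HERE Platform Data Extension documentation / layer catalog.
PDE_URL = "https://pde.api.here.com/1/tile.json"
LAYER = "TRUCK_SPEED_LIMITS_FC2"   # assumed layer name, functional class 2
LEVEL = 10                         # assumed tile level for this layer

def pde_tile_for(lat, lon, level):
    """PDE tiles are a simple degree grid: 180 / 2^level degrees per tile."""
    tile_size = 180.0 / (2 ** level)
    return int((lon + 180.0) / tile_size), int((lat + 90.0) / tile_size)

def truck_speed_limit_tile(lat, lon, app_id, app_code):
    """Fetch the PDE tile covering the position; rows must still be matched
    to the correct road link afterwards."""
    tilex, tiley = pde_tile_for(lat, lon, LEVEL)
    resp = requests.get(PDE_URL, params={
        "layer": LAYER,
        "level": LEVEL,
        "tilex": tilex,
        "tiley": tiley,
        "app_id": app_id,
        "app_code": app_code,
    })
    resp.raise_for_status()
    return resp.json()
```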

MapTiler Pro Demo with 12 cores using only 12%

Using MapTiler Pro Demo. Testing zoom levels 1-21 for a Google Maps export from a TIFF image (about 21 MB, covering polygons over 2000 km).
At the moment it has been running for an hour with constant usage at 12% of 12 vCores (about 1.5 of 12), maxed at about 2.5 GHz. No tiles have been exported yet, only the associated HTML files.
Am I too quick to judge performance?
Edit: Progress bar at 0%.
Edit 2: Hour 8, still 0%. Memory usage increased from 400 MB to 2 GB.
You are trying to generate from your 21 MB input file about 350 GB of tiles (approximately 10 billion map tiles at zoom level 21) with the options you have set in the software. Is this really what you want to do?
It sounds like nonsense to render a very low-resolution image (2600 x 2000 pixels) covering a large area (such as South Africa) down to zoom level 21!
The software suggested the default maxzoom of 6. If your data are coverage maps or a similar dataset, it makes sense to render them down to maybe zoom level 12 or so, definitely not deeper than 14. For standard input data (aerial photos), the natively suggested maxzoom +1 or +2 is the maximum that really makes sense. Deeper zoom levels do not add any visual advantage.
The user can always zoom deeper, but the upper tiles can be displayed on the client side, so you don't really need to generate and save all these images at all.
MapTiler automatically provides you with a Google Maps V3 viewer, which does this client-side overzooming out of the box.
See a preview here:
http://tileserver.maptiler.com/#weather/gmapsmaptiler.embed
If you are interested in the math behind the map tiles, check:
http://tools.geofabrik.de/calc/#type=geofabrik_standard&bbox=16.44,-34.85,32.82,-22.16
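As a rough back-of-the-envelope check of that ~10 billion figure, here is a small Python sketch counting standard Google/OSM tiles per zoom level for the South Africa bounding box used in the link above (the bounding box values and the simple per-zoom tile-range count are approximations):

```python
import math

def tile_index(lat, lon, zoom):
    """Slippy-map (Google/OSM) tile x/y indices for a lat/lon at a given zoom."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tiles_for_bbox(min_lon, min_lat, max_lon, max_lat, zoom):
    x_min, y_max = tile_index(min_lat, min_lon, zoom)   # south-west corner
    x_max, y_min = tile_index(max_lat, max_lon, zoom)   # north-east corner
    return (x_max - x_min + 1) * (y_max - y_min + 1)

# Roughly the South Africa bounding box from the geofabrik link above.
bbox = (16.44, -34.85, 32.82, -22.16)

total = 0
for z in range(1, 22):
    count = tiles_for_bbox(*bbox, z)
    total += count
    print(f"zoom {z:2d}: ~{count:,} tiles")
print(f"zoom 1-21 combined: ~{total:,} tiles")
```

The count roughly quadruples with every extra zoom level, which is why the jump from maxzoom 6 to maxzoom 21 turns a quick export into billions of tiles.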
Thanks for providing the report (with http://www.maptiler.com/how-to/submit-report/) to us. Your original email to our support did not contain any technical data at all (not even the data you wrote here on Stack Overflow).
Please, before you publicly rant about the performance of a piece of software, double-check that you know what you are doing. MapTiler Pro is a powerful tool, but the user must know what they are doing.
Based on your feedback, we have decided to implement an estimated final output size in a future version of the MapTiler software, and to warn the user in the graphical user interface if they choose options which are probably unwanted.

Why is there a limitation on the number of points a polygon can have on ST_WITHIN?

We are at a crossroads where we need to decide whether we are going to store our geospatial data in DocumentDB or SQL Azure. According to this article, the polygon parameter of the ST_WITHIN function in a query can contain a maximum of 256 points. Our data will potentially contain polygons with millions of points, as we are mapping continents, countries, states/provinces, etc. We need to be able to use ST_WITHIN against all of these polygons. The article also mentions that we can adjust that limitation by contacting Azure Support.
Why is this limitation in the first place? If Support does remove the limitation, are we going to bring DocumentDB down with so many points?
If you want to do it all in DocumentDB (as opposed to adding something like SQL Azure), you can use an approach of narrowing down the list by using ST_DISTANCE to get candidates and then running the equivalent of ST_WITHIN client side (the ray-casting algorithm is simple and fast). The trick involves storing denormalized metadata about each polygon, namely a center point (the accuracy of the center point is not critical) and the maximum radius from that center point. Then, if the distance between your point and the center minus the maximum radius is less than zero, the polygon goes in the candidate list. It works like a charm and is performant with some careful index design.
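A minimal client-side sketch of that second step, assuming planar (or small-area) coordinates and a polygon stored as a list of (x, y) vertices; the prefilter mirrors the ST_DISTANCE-minus-radius test described above, and the document field names are hypothetical:

```python
import math

def is_candidate(point, center, max_radius):
    """Cheap prefilter: is the point inside the polygon's bounding circle?
    Equivalent to: distance(point, center) - max_radius < 0."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    return math.hypot(dx, dy) < max_radius

def point_in_polygon(point, vertices):
    """Ray casting (even-odd rule): count how many polygon edges a ray going
    right from the point crosses; an odd count means the point is inside."""
    x, y = point
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):              # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                   # crossing is to the right of the point
                inside = not inside
    return inside

# Hypothetical usage: `doc` holds the denormalized metadata alongside the shape.
# if is_candidate(p, doc["center"], doc["maxRadius"]) and point_in_polygon(p, doc["vertices"]):
#     ...  # the point is inside this polygon
```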
One thing to worry about is the condition where the polygon intersects itself. Do you treat the intersecting space as outside the polygon or within it? We had a nasty bug that took forever to figure out and it boiled down to a self-intersecting polygon. This problem exists whether you implement your own algorithm or use the database's native "within" function.
The short answer to your question is yes, they are worried you will bring DocumentDB down with more than 256 points. It used to be limited to just 16 points, but they raised it to 256 recently. Perhaps they will raise it again in the future. We ran into a similar problem with polygons having more than 1,000 points. In the end, we decided to use SQL Server for our polygon searches and then use the data refined from SQL Server to pull the related data from DocumentDB.
The problem is that DocumentDB resources are shared between customers, so all of the operations that you run against DocumentDB have to be governed by request units. That way, no one customer can bring the system down with massive queries. I don't know how to calculate the request units for running ST_WITHIN on millions of points, but my guess is that even on the S3 tier, it would probably push the limit of the allowable 2,500 request units. So even if they raised the limit from 256 points to one million points, your query might not be able to finish because it would be too expensive. So I suggest you go with SQL Azure. That is what we settled on and it performs great.

Scaling an Azure website

I have a Standard website in Azure on a Small instance (1 core and 1.75 GB memory). It seems to be coping fine and handling the requests smoothly, although I am expecting a lot more within the week.
It is unclear, though, under what circumstances I should be looking to scale the instance size to the next level, i.e. to Medium (besides MemoryWorkingSet of course, which is rather obvious :)).
I.e. will moving up to a Medium instance resolve high CPU time?
What other telltales should I be watching for?
I am NOT comfortable scaling the number of instances to more than one at the moment until I resolve some cache issues.
I think the key point I am trying to understand is the link between the metrics provided and the means of scaling available regardless of it being scaled horizontally or vertically.
I am trying to keep the average response time as low as possible as the number of users that interact with the website increase.
Which of the other metrics will alert me when the load on the server is getting to its limits and I will need to scale vertically?
The idea behind scaling in Azure is to scale horizontally, i.e. add more instances. Azure can do this for you automatically. If you can't add more instances, Azure can't do the scaling for you automatically.
You can move to a Medium instance; overall capacity will increase, but it is impossible to say what your application will require under heavy load. I suggest you run a profiler and load tests to find the weak parts of your app and improve them before you have an actual increase in usage.
