Custom Styling water and roads using HERE Android SDK

This seems pretty straightforward, but it isn't working. I am able to custom-style certain things using the HERE SDK (such as airports), but trying to change the color of something like water has no effect.
ZoomRange range = new ZoomRange(0.0, 100.0);
scheme.setVariableValue(CustomizableVariables.Water.COLOR_0M, Color.BLACK, range);
Thanks in advance for the help!

Summary:
Solved by setting all the DISPLAYCLASS[X] colors for all the relevant water structures.
Two comments:
CustomizableVariables.Water.COLOR_0M only applies to oceans; the display classes you mentioned are correct for everything else.
Zoom levels in the HERE Mobile SDK range from 1.0 (globe view) to 20.0 (close to the ground). Setting 0 to 100 does no harm, but it's worth knowing.
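For reference, a hedged sketch of what the resulting fix can look like. The per-displayclass constant names below are placeholders, not verified constants; check CustomizableVariables in your SDK version for the exact fields.
// Oceans are covered by COLOR_0M only.
ZoomRange range = new ZoomRange(1.0, 20.0); // HERE zoom levels run 1.0-20.0
scheme.setVariableValue(CustomizableVariables.Water.COLOR_0M, Color.BLACK, range);
// Everything else (lakes, rivers, ...) is styled per display class.
// Placeholder names - look up the real constants in CustomizableVariables:
scheme.setVariableValue(CustomizableVariables.Water.DISPLAYCLASS_1_COLOR, Color.BLACK, range); // placeholder
scheme.setVariableValue(CustomizableVariables.Water.DISPLAYCLASS_2_COLOR, Color.BLACK, range); // placeholder
// ...repeat for the remaining relevant display classes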

Related

Grafana graph jumping / flickering on refresh

I am using the latest Grafana 8.2.1 with Amazon Timestream 3.1.1 datasource plugin.
I have noticed that when I use more than one query, the graph will jump / flicker on refresh.
I have reported the details in issue 40424.
Just wondering if anyone else has experienced the same thing and if there is a workaround for this?
The panel tries to optimize the axis min/max values based on the graphed values. It is receiving those two time series at slightly different times (the executionFinishTime difference is 6 ms), and that is what causes the flickering.
I would set static Y-min and Y-max values (e.g. 20 and 140 for the values shown) to minimize that automatic axis optimization. You can also play with the soft Y-min/max settings.
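For illustration, a rough sketch of where those limits live in a Grafana 8 time-series panel's JSON model (the same values can be set in the panel editor; you would normally use either the hard min/max or the soft variants, not both):
{
  "fieldConfig": {
    "defaults": {
      "min": 20,
      "max": 140,
      "custom": {
        "axisSoftMin": 20,
        "axisSoftMax": 140
      }
    }
  }
}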
Yes, I have flickering problems too. Very bothersome. In my case I take various readings alternately every fraction of a second to avoid serial-port congestion. Maybe Grafana doesn't like the slight difference in time between the different measurement series.

Cropping YUV_420_888 images for Firebase barcode decoding

I'm using the Firebase-ML barcode decoder in a streaming (live) fashion using the Camera2 API. The way I do it is to set up an ImageReader that periodically gives me Images. The full image is the resolution of my camera, so it's big - it's a 12MP camera.
The barcode scanner takes about 0.41 seconds to process an image on a Samsung S7 Edge, so I set up the ImageReaderListener to decode one at a time and throw away any subsequent frames until the decoder is complete.
The image format I'm using is YUV_420_888 because that's what the documentation recommends, and because the ML barcode decoder complains (with a runtime message in the debug log) if you try to feed it anything else.
All this is working but I think if I could crop the image it would work better. I'd like to leave the camera resolution the same (so that I can display a wide SurfaceView to help the user align his camera to the barcode) but I want to give Firebase a cropped version (basically a center rectangle). By "work better" I mean mostly "faster" but I'd also like to eliminate distractions (especially other barcodes that might be on the edge of the image).
This got me trying to figure out the best way to crop a YUV image, and I was surprised to find very little help. Most of the examples I have found online use a multi-step process where you first convert the YUV image into a JPEG, then render the JPEG into a Bitmap, then scale that. This has a couple of problems in my mind:
It seems like that would have significant performance implications (in real time). Cropping directly would help me accomplish a few things, including reducing power consumption, improving response time, and allowing me to return Images to the ImageReader via image.close() more quickly.
This approach doesn't get you back to an Image, so you have to feed Firebase a Bitmap instead, and that doesn't seem to work as well. I don't know what Firebase is doing internally, but I suspect it's working mostly (maybe entirely) off the Y plane, and that the Image -> JPEG -> Bitmap translation muddies that up.
I've looked around for YUV libraries that might help. There is something in the wild called libyuv-android, but it doesn't work exactly in the format firebase-ml wants, and it's a bunch of JNI, which gives me cross-platform concerns.
I'm wondering if anybody else has thought about this and come up with a better solution for cropping YUV_420_888 images on Android. Am I not able to find this because it's a relatively trivial operation? There's stride and padding to be concerned with, among other things. I'm not an image/color expert, and I kind of feel like I shouldn't attempt this myself, my particular concern being that I'll figure out something that works on my device but not on others.
Update: this may actually be kind of moot. As an experiment I looked at the Image that comes back from ImageReader. It's an instance of ImageReader.SurfaceImage which is a private (to ImageReader) class. It also has a bunch of native tie-ins. So it's possible that the only choice is to do the compress/decompress method, which seems lame. The only other thing I can think of is to make the decision myself to only use the Y plane and make a bitmap from that, and see if Firebase-ML is OK with that. That approach still seems risky to me.
I tried to scale down the YUV_420_888 output image today, and I think it is quite similar to cropping the image.
In my case, I extract the three byte buffers from the Image planes to represent Y, U, and V:
yBytes = image.getPlanes()[0].getBuffer()  // Y
uBytes = image.getPlanes()[1].getBuffer()  // U
vBytes = image.getPlanes()[2].getBuffer()  // V
Then I convert them to an RGB array for Bitmap conversion. I found that if I read the YUV arrays at the original image width and height with a step of 2, I can scale the Bitmap to half size.
What should you prepare:
yBytes,
uBytes,
vBytes,
width of Image,
height of Image,
y row stride,
uv RowStride,
uv pixelStride,
output array (its length should equal the output image width * height; for me the output is 1/4 of the original width and height)
This means that if you can find the crop region's positions (the four corners) in your image, you can simply fill the new RGB array from the YUV data, as sketched below.
Hope it can help you to solve the problem.
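A hedged Java sketch of that fill loop, assuming the common YUV_420_888 layout (Y plane with pixel stride 1; U and V subsampled 2x2 and sharing row/pixel strides). The class and method names are mine, and a real implementation should also respect Image.getCropRect() and per-device stride quirks.
import android.graphics.Rect;
import android.media.Image;
import java.nio.ByteBuffer;

public final class YuvCrop {
    // Convert a crop region of a YUV_420_888 Image to ARGB pixels.
    // step = 1 keeps full resolution; step = 2 halves width and height.
    public static int[] cropToArgb(Image image, Rect crop, int step) {
        Image.Plane yPlane = image.getPlanes()[0];
        Image.Plane uPlane = image.getPlanes()[1];
        Image.Plane vPlane = image.getPlanes()[2];
        ByteBuffer yBuf = yPlane.getBuffer();
        ByteBuffer uBuf = uPlane.getBuffer();
        ByteBuffer vBuf = vPlane.getBuffer();
        int yRowStride = yPlane.getRowStride();
        int uvRowStride = uPlane.getRowStride();
        int uvPixelStride = uPlane.getPixelStride();

        int outW = crop.width() / step;
        int outH = crop.height() / step;
        int[] out = new int[outW * outH];

        for (int row = 0; row < outH; row++) {
            int srcY = crop.top + row * step;
            for (int col = 0; col < outW; col++) {
                int srcX = crop.left + col * step;
                int y = yBuf.get(srcY * yRowStride + srcX) & 0xFF;
                // Chroma planes are subsampled 2x2, so halve the coordinates.
                int uvIndex = (srcY / 2) * uvRowStride + (srcX / 2) * uvPixelStride;
                int u = (uBuf.get(uvIndex) & 0xFF) - 128;
                int v = (vBuf.get(uvIndex) & 0xFF) - 128;
                // Standard BT.601 YUV -> RGB conversion.
                int r = clamp(y + (int) (1.402f * v));
                int g = clamp(y - (int) (0.344f * u + 0.714f * v));
                int b = clamp(y + (int) (1.772f * u));
                out[row * outW + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return out;
    }

    private static int clamp(int x) {
        return x < 0 ? 0 : Math.min(x, 255);
    }
}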

Maptiler Pro Demo 12 Core using only 12%

Using the MapTiler Pro demo. Testing zoom levels 1-21 for a Google Maps export from a TIFF image (a file of about 21 MB covering polygons over 2000 km).
At the moment it has been running for an hour with constant usage at 12% of 12 vcores (about 1.5 of 12) maxed at about 2.5 GHz. No tiles have been exported yet, only the associated HTML files.
Am I too quick to judge performance?
Edit: Progress bar at 0%.
Edit 2: Hour 8, still 0%. Memory usage has increased from 400 MB to 2 GB.
With the options you have set in the software, you are trying to generate about 350 GB of tiles (approximately 10 billion map tiles at zoom level 21) from your 21 MB input file. Is this really what you want to do?
It makes no sense to render a very low-resolution image (2600 x 2000 pixels) covering a large area (such as South Africa) down to zoom level 21!
The software suggested the default maxzoom of 6. If your data are coverage maps or a similar dataset, it makes sense to render them down to maybe zoom level 12 or so, and definitely not deeper than 14. For standard input data (aerial photos), the suggested native maxzoom +1 or +2 is the deepest that really makes sense. Deeper zoom levels do not add any visual advantage.
The user can always zoom deeper, because the upper tiles can be over-zoomed on the client side, so you don't really need to generate and save all these images at all.
MapTiler automatically provides you with a Google Maps V3 viewer, which does this client-side over-zooming out of the box.
See a preview here:
http://tileserver.maptiler.com/#weather/gmapsmaptiler.embed
If you are interested in the math behind the map tiles, check:
http://tools.geofabrik.de/calc/#type=geofabrik_standard&bbox=16.44,-34.85,32.82,-22.16
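As a rough sanity check on the numbers above, here is a small sketch of the standard slippy-map (Web Mercator) tile math; the helper names are mine. For the South Africa bounding box from the geofabrik link it gives on the order of 10 billion tiles at zoom 21.
public final class TileMath {
    // Number of Web Mercator tiles covering a lon/lat bounding box at a zoom level.
    static long tileCount(double lonMin, double latMin,
                          double lonMax, double latMax, int zoom) {
        long n = 1L << zoom; // tiles per axis at this zoom level
        long xMin = lonToTileX(lonMin, n), xMax = lonToTileX(lonMax, n);
        long yMin = latToTileY(latMax, n), yMax = latToTileY(latMin, n); // tile y grows southward
        return (xMax - xMin + 1) * (yMax - yMin + 1);
    }

    static long lonToTileX(double lon, long n) {
        return (long) Math.floor((lon + 180.0) / 360.0 * n);
    }

    static long latToTileY(double lat, long n) {
        double rad = Math.toRadians(lat);
        return (long) Math.floor(
                (1 - Math.log(Math.tan(rad) + 1 / Math.cos(rad)) / Math.PI) / 2 * n);
    }

    public static void main(String[] args) {
        // South Africa bbox from the geofabrik link: roughly 8 billion tiles at z21.
        System.out.println(tileCount(16.44, -34.85, 32.82, -22.16, 21));
    }
}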
Thanks for providing the report (via http://www.maptiler.com/how-to/submit-report/) to us. Your original email to our support did not contain any technical data at all (not even the data you give here on Stack Overflow).
Please, before you publicly rant about the performance of a piece of software, double-check that you know what you are doing. MapTiler Pro is a powerful tool, but the user must know what they are doing.
Based on your feedback, we have decided to implement an estimated final output size in a future version of the MapTiler software, and to warn the user in the graphical user interface if they choose options that are probably unwanted.

What device/instrument/technology should I use for detecting objects lying on a given surface?

First of: Thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I'd like the interface to detect several (up to 40) objects lying on it, and to detect when the objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. "bottle"), or what color it has; only the shape and the placement of the object are of interest (e.g. "circle").
So far I’m using a webcam connected to my computer and Processing’s blob functionality to detect the objects on the surface of the interface (see picture 1). This has some major disadvantages to what I am trying to accomplish:
I do not want the user to see the camera or any alternative device because this is detracting the user’s attention. Actually the surface should be completely dark.
Whenever I am reaching with my hand to rearrange the objects on the interface, the blob detection gets very busy and is recognizing objects (my hand) which are not touching the canvas directly. This problem can hardly be tackled using a Kinect, because the depth functionality is not working through glass/acrylic glass – correct me if I am wrong.
It would be nice to install a few LEDs on the canvas controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
Because of the camera's focal length, the table needs to be unnecessarily high (60 cm / 23 inches).
Do you have any idea on an alternative device/technology to detect the objects? Would be nice if the device would work well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface appears dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert the frames to the HSV or YCrCb colour space; these are much better for segmenting the required area.
I recommend checking out https://github.com/atduskgreg/opencv-processing. It interfaces OpenCV with Processing, so you get a lot of OpenCV functionality inside Processing; see the sketch below.
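A minimal Processing sketch of that pipeline, assuming the gab.opencv and processing.video libraries are installed; the method names follow the library's published examples, but treat this as an illustration rather than a drop-in solution.
import gab.opencv.*;
import processing.video.*;

Capture cam;
OpenCV opencv;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  opencv.loadImage(cam);
  opencv.gray();          // for colour-based work, segment in HSV/YCrCb first
  opencv.threshold(100);  // tune for your lighting conditions
  image(opencv.getOutput(), 0, 0);
  noFill();
  stroke(0, 255, 0);
  for (Contour contour : opencv.findContours()) {
    contour.draw();       // outline each detected object
  }
}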
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera setup (with the right field of view/lens/etc. based on your physical constraints) you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers, and you'll differentiate the objects by ID; you'll also be able to get position/rotation/etc., and your hands will not be part of that.
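A minimal TUIO Processing sketch of that setup, assuming a TUIO tracker such as reacTIVision is streaming marker events on the default port; the callbacks follow the library's demo sketch.
import TUIO.*;

TuioProcessing tuioClient;

void setup() {
  size(640, 480);
  tuioClient = new TuioProcessing(this); // listens on the default TUIO port 3333
}

void draw() {
  background(0);
  fill(255);
  for (TuioObject obj : tuioClient.getTuioObjectList()) {
    pushMatrix();
    translate(obj.getScreenX(width), obj.getScreenY(height));
    rotate(obj.getAngle());
    rect(-10, -10, 20, 20);  // footprint of the tracked marker
    popMatrix();
    text(obj.getSymbolID(), obj.getScreenX(width) + 15, obj.getScreenY(height));
  }
}

// Called by the library as markers appear, move, and disappear.
void addTuioObject(TuioObject tobj) { }
void updateTuioObject(TuioObject tobj) { }
void removeTuioObject(TuioObject tobj) { }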

multinote instruments and chords

I am new to the Ableton Live software. I would like to understand why, for some instruments, I can play several notes at the same time (and create chord progressions), while for others I can hear only one note of a chord.
For example, there are two guitars: 'Power Chords Guitar' and 'Please Rise For Jimi Guitar'. Both of them are based on Operator. For the first I can press several keys on my MIDI keyboard and hear them all; for the second I can hear only one note of the chord.
I tried comparing the Operator options, but I was unable to find the setting that controls this polyphonic/monophonic behaviour.
Thank you very much for your help.
J
Ableton's Operator consists of 4 monophonic oscillators. Depending on the preset, it can be set up to play polyphonically. It seems the Glide function is responsible for that effect: make sure you have at least 2 oscillators active (the more, the more polyphony) with Glide switched on.
Check out Ableton's webpage for reference on how to use its instruments.