MapTiler Pro Demo: 12 cores using only 12% - google-maps-api-3

Using the MapTiler Pro Demo, I am testing zoom levels 1-21 for a Google Maps export from a TIFF image (a file of about 21 MB covering polygons spanning over 2000 km).
At the moment it has been running for an hour with constant usage at 12% of 12 vcores (about 1.5 of 12), maxed at about 2.5 GHz. No tiles have been exported yet, only the associated HTML files.
Am I too quick to judge performance?
Edit: Progress bar at 0%
Edit 2: Hour 8, still 0%. Memory usage has increased from 400 MB to 2 GB.

With the options you have set in the software, you are trying to generate about 350 GB of tiles (approximately 10 billion map tiles at zoom level 21) from your 21 MB input file. Is this really what you want to do?
It makes no sense to render a very low-resolution image (2600 x 2000 pixels) covering a large area (such as South Africa) down to zoom level 21!
The software suggested the default maxzoom of 6. If your data are coverage maps or a similar dataset, it makes sense to render them down to perhaps zoom level 12, and definitely no deeper than 14. For standard input data (aerial photos), the natively suggested maxzoom +1 or +2 is the deepest that really makes sense. Deeper zoom levels do not add any visual advantage.
The user can always zoom deeper, but the upper tiles can be scaled and displayed on the client side, so you don't really need to generate and save all these images at all...
MapTiler automatically provides you with a Google Maps V3 viewer, which does this client-side overzooming out of the box.
See a preview here:
http://tileserver.maptiler.com/#weather/gmapsmaptiler.embed
If you are interested in the math behind the map tiles, check:
http://tools.geofabrik.de/calc/#type=geofabrik_standard&bbox=16.44,-34.85,32.82,-22.16
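To make that estimate concrete, here is a small sketch (in Python, purely for illustration) that counts the tiles needed for the bounding box from the calculator link above - roughly South Africa - at a few zoom levels, assuming the standard Web Mercator XYZ tiling scheme:

import math

def deg2num(lat_deg, lon_deg, zoom):
    # WGS84 coordinates -> XYZ tile indices in the standard Web Mercator scheme.
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Bounding box from the calculator link above (roughly South Africa).
west, south, east, north = 16.44, -34.85, 32.82, -22.16

for zoom in (6, 12, 14, 21):
    x0, y0 = deg2num(north, west, zoom)   # top-left tile
    x1, y1 = deg2num(south, east, zoom)   # bottom-right tile
    tiles = (x1 - x0 + 1) * (y1 - y0 + 1)
    print(f"zoom {zoom:2d}: ~{tiles:,} tiles")

Tile counts roughly quadruple with every zoom level; at zoom 21 this box already needs on the order of ten billion tiles, which is where the estimate above comes from.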
Thanks for providing the report (via http://www.maptiler.com/how-to/submit-report/) to us. Your original email to our support did not contain any technical data at all (not even the data you have posted here on Stack Overflow).
Please, before you publicly rant about the performance of a piece of software, double-check that you know what you are doing. MapTiler Pro is a powerful tool, but its users must understand what they are asking it to do.
Based on your feedback, we have decided to implement an estimate of the final output size in a future version of the MapTiler software, and to warn the user in the graphical user interface if they choose options that are probably unwanted.

Related

What device/instrument/technology should I use for detecting objects lying on a given surface?

First off: thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I'd like the interface to detect several (up to 40) objects lying on it, and to detect whenever the objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. "bottle") or what color it has; only the shape and the placement of the object are of interest (e.g. "circle").
So far I'm using a webcam connected to my computer and Processing's blob functionality to detect the objects on the surface of the interface (see picture 1). This approach has some major disadvantages for what I am trying to accomplish:
I do not want the user to see the camera or any similar device, because it distracts the user's attention. In fact, the surface should be completely dark.
Whenever I reach in with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognizes objects (my hand) that are not directly touching the canvas. This problem can hardly be tackled with a Kinect, because its depth functionality does not work through glass/acrylic glass - correct me if I am wrong.
It would be nice to install a few LEDs on the canvas, controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
Because of the camera's focal length, the table needs to be unnecessarily high (60 cm / 23 inches).
Do you have any ideas for an alternative device/technology to detect the objects? It would be nice if it worked well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface appears dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert the frames to the HSV and YCrCb colour spaces; these are much better suited for segmenting the required area than raw RGB.
I recommend checking out https://github.com/atduskgreg/opencv-processing. It interfaces OpenCV with Processing, so you get a lot of OpenCV's functionality inside Processing.
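To illustrate the colour-based approach, here is a minimal sketch using OpenCV's Python bindings (chosen purely for illustration; the Processing library above exposes equivalent calls). The HSV threshold values and blob-size cutoff are placeholders you would tune for your objects:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Hue-based thresholds in HSV are more robust to lighting than raw RGB.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 80, 80]), np.array([20, 255, 255]))

    # Contour the segmented regions; keep only plausibly object-sized blobs.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
    for c in contours:
        if cv2.contourArea(c) < 500:  # noise threshold; a guess to tune
            continue
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()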
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to the existing ideas (which are great), I'd like to suggest trying TUIO with Processing.
Once you have the camera set up (with the right field of view/lens/etc. based on your physical constraints), you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers, so you can differentiate the objects by ID, but you will also be able to get position/rotation/etc., and your hands will not be part of that.

How do I generate a waypoint map in a 2D platformer without expensive jump simulations?

I'm working on a game (using Game Maker: Studio Professional v1.99.355) that needs to have both user-modifiable level geometry and AI pathfinding based on platformer physics. Because of this, I need a way to dynamically figure out which platforms can be reached from which other platforms in order to build a node graph I can feed to A*.
My current approach is, more or less, this:
1. For each platform, consider each other platform in the level.
2. For each of those platforms, if it is obviously unreachable (due to being higher than the maximum jump height, for example), do not form a link and move on to the next platform.
3. If a link seems possible, place an ai_character instance on the starting platform and (within the current step event) simulate a jump attempt.
3.a Repeat this jump attempt for each possible starting position on the starting platform.
4. If the attempt is successful, record the data necessary to replicate it in real time and move on to the next platform.
5. If not, do not form a link.
6. Repeat for all platforms.
This approach works, more or less, and produces a link structure that when visualised looks like this:
linked platforms (Hyperlink because no rep.)
In this example the mostly-concealed pink ghost in the lower right corner is trying to reach the black and white box. The light blue rectangles are just there to highlight where recognised platforms are, the actual platforms are the rows of grey boxes. Link lines are green at the origin and red at the destination.
The huge, glaring problem with this approach is that for a level of only 17 platforms (as shown above) it takes over a second to generate the node graph. The reason is obvious: the yellow text in the screen centre shows how much work it took to build the graph - over 24,000(!) simulated frames, each with attendant collision checks against every block. I literally run the character's step event in a while loop, so everything it would normally do to handle platformer movement in one frame it now does 24,000 times.
This is, clearly, unacceptable. If it scales this badly at a mere 17 platforms, it will be a joke at the hundreds I need to support. Heck, at this rate of growth it might take years.
In an effort to speed things up, I've focused on the other important debugging number, the tests counter: 239. If I simply tried every possible combination of starting and destination platforms, I would need to run 17 * 16 = 272 tests. By figuring out various ways to predict that a jump is impossible, I have managed to cut the number of expensive tests by a whopping 33 (12%!). However, the more exceptions and special cases I add to the code, the more convinced I am that the real problem is in the jump simulation itself, which brings me at long last to my question:
How would you determine, with complete reliability, whether it is possible for a character to jump from one platform to another, preferably without needing to simulate the whole jump?
My specific platform physics:
Jumps are fixed height, unless you hit a ceiling.
Horizontal movement has no acceleration or inertia.
Horizontal air control is allowed.
Further info:
I found this video, which describes a similar problem but doesn't provide a good solution. It is literally the only resource I've found.
You could limit the number of comparisons by only comparing nearby platforms. I would probably check the horizontal distance between platforms first; if it is wider than the longest possible jump, don't bother checking for a link between those two. But you may have done this already, since you check against the maximum height of a jump.
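As a minimal sketch of that pruning test (in Python for illustration; the platform fields and the two limits are assumptions, with y growing downward as in Game Maker):

from dataclasses import dataclass

@dataclass
class Platform:
    left: float
    right: float
    top: float  # y of the walkable surface; y grows downward, as in Game Maker

def could_ever_link(a: Platform, b: Platform,
                    max_jump_distance: float, max_jump_height: float) -> bool:
    # Cheap rejection test, run before any expensive jump simulation.
    # Horizontal gap between the platforms' closest edges (0 if they overlap).
    gap = max(0.0, max(a.left, b.left) - min(a.right, b.right))
    # Vertical rise required to land on b (positive means b is higher).
    rise = a.top - b.top
    return gap <= max_jump_distance and rise <= max_jump_height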
I glanced at the video and it gave me an idea: instead of checking all platforms to find which jumps are impossible, what if you did the opposite? Place an AI character on each platform and see which other platforms it can reach. That's certainly easier to implement if your enemies can't change direction in midair, though. Oh well, brainstorming is the key to finding something.
Several ideas you could try out:
Limit the number of comparisons you need to make by using a spatial data structure, like a quadtree. This would allow you to severely limit how many platforms you even try to check. This is mostly the same as what you're currently doing, but a bit more generic.
Try to pre-compute some jump trajectories ahead of time. This will not catch all of your use cases - since you allow full horizontal air control - but might let you handle some common cases more quickly (see the sketch after this list).
Consider some kind of walkability grid instead of a link-generation scheme. When geometry is modified, compute which parts of the level are walkable and which are not, at some resolution (something similar to the dimensions of your agent might be a good starting point). You could also filter the grid by height, so that tiles that are higher than your jump height, and that you can't drop onto from a higher place, are marked as unwalkable. Then, as part of your pathfinding step, when you start a jump you can check whether the path is actually executable ('start a jump; I can go vertically no more than 5 tiles, and after the peak of the jump I always fall down vertically at some speed').
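Building on the pre-computation idea: the stated physics (fixed jump height, no horizontal inertia, full air control) admit a closed-form reachability envelope, because airtime follows from the apex height and gravity, and horizontal reach is just air speed times airtime. A Python sketch with assumed constants follows; note that it ignores intervening geometry and ceilings, so a positive answer still needs a cheap collision check, but a negative one lets you skip the simulation entirely:

import math

GRAVITY = 0.5        # px/frame^2 - assumed value
JUMP_HEIGHT = 64.0   # px, the fixed apex height - assumed value
AIR_SPEED = 4.0      # px/frame of horizontal air control - assumed value

def reachable(dx: float, dy: float) -> bool:
    # Can a jump from (0, 0) land at (dx, dy)? dy > 0 means the target is lower.
    if -dy > JUMP_HEIGHT:                                  # target above the fixed apex
        return False
    t_up = math.sqrt(2 * JUMP_HEIGHT / GRAVITY)            # frames from takeoff to apex
    t_down = math.sqrt(2 * (JUMP_HEIGHT + dy) / GRAVITY)   # frames from apex to landing
    # With full air control and no inertia, horizontal reach is speed * airtime.
    return abs(dx) <= AIR_SPEED * (t_up + t_down)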

How to Save an Image of a Large Flex Component (e.g. 25000 px by 3000 px @ 72 dpi)

My application displays a large custom tree-like structure to the user, one that can eventually grow to massive proportions like the dimensions listed in the question. I allow users to export the image with the following line of code, tied to a button click event:
var image:ImageSnapshot = ImageSnapshot.captureImage(this, 72, new PNGEncoder(), false);
I've managed to export images close to the dimensions listed, but around that size it starts to throw the error message below after spinning for close to 15 seconds:
Error: Error #1000: The system is out of memory.
at flash.utils::ByteArray/writeBytes()
at mx.graphics::ImageSnapshot$/mergePixelRows()[E:\dev\4.x\frameworks\projects\framework\src\mx\graphics\ImageSnapshot.as:511]
at mx.graphics::ImageSnapshot$/captureAll()[E:\dev\4.x\frameworks\projects\framework\src\mx\graphics\ImageSnapshot.as:482]
at mx.graphics::ImageSnapshot$/captureImage()[E:\dev\4.x\frameworks\projects\framework\src\mx\graphics\ImageSnapshot.as:318]
at vertical/saveChart()[C:\devel\workspace\vertical\src\CustomObject.mxml:501]
at vertical/__saveImageBtn_click()[C:\devel\workspace\vertical\src\CustomObject.mxml:574]
Is the Flash Player plugin for my browser running out of memory? I noticed in my task manager that it got up to about 1.2 GB of memory usage (I have 4 GB on my system). If that is the case, is it possible to limit the memory usage for a given function like the ImageSnapshot.captureImage() call above?
Is there maybe a way to render the component into 2 or 4 ImageSnapshot objects and piece them together afterward?
Any advice would be greatly appreciated.
I believe the latest Flash Player 11 has a new feature to solve this issue:
"Enhanced high resolution bitmap support — BitmapData objects are no longer limited to a maximum resolution of 16 megapixels (16,777,215 pixels), and maximum bitmap width/height is no longer limited to 8,191 pixels, enabling the development of apps that utilize very large bitmaps." from this PDF
If you are using BitmapData, it makes a difference which Flash Player you are targeting:
Version vs. maximum bitmap size:
Flash Player 9 and earlier: 2880 x 2880 px
Flash Player 10: 4096 x 4096 px
Flash Player 11: unlimited
I don't know exactly what you are trying to do with this huge capture, but I would recommend using tiles: break it down into chunks of relatively small bitmaps and create them separately, so you don't have to hold that huge amount of data in memory at once (the stitching step is sketched below).
Anyway, it would be nice to know whether it is possible to encode that big-ass image without Error #1000 out-of-memory errors.
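For what it's worth, here is the stitching half of that tiling idea, sketched in Python with Pillow rather than ActionScript (file names and chunk dimensions are assumptions); the capture half would save each chunk with a separate ImageSnapshot call:

from PIL import Image

TILE_W, TILE_H = 4096, 3000   # assumed chunk size, within the FP10 bitmap limit
COLS, ROWS = 7, 1             # e.g. 7 x 1 chunks to cover a 25000 x 3000 px capture

canvas = Image.new("RGB", (TILE_W * COLS, TILE_H * ROWS))
for row in range(ROWS):
    for col in range(COLS):
        # Assumes each chunk was exported as tile_{row}_{col}.png beforehand.
        tile = Image.open(f"tile_{row}_{col}.png")
        canvas.paste(tile, (col * TILE_W, row * TILE_H))
canvas.save("full_capture.png")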

BitmapData and JpegEncoder Limitations

I am trying to save out a large image from Flash using BitmapData and JPEGEncoder. I am looking into the limitations of this process and have noticed that you can only set the BitmapData pixel width and height to a certain amount, and that this limit might interact with the JPEGEncoder quality setting (1-100).
Does anyone know what the specific limitations of these two things are? I'm basically trying to see just how large an image I can save out (I need to use the exported image for printing purposes, so I need it at as high a quality as possible).
I have read articles saying that in FP 10 you can render up to something like 16,000 px. But I tried an image that is 3500 x 3500 and it timed out, so I'm not sure that information is correct.
The image size limit up to Flash Player 9 is 2880 x 2880 px; Flash Player 10 increased this limit to 4096 x 4096. This also applies to the Stage, Sprites and MovieClips.
The quality used with the JPGEncoder class does not circumvent this limitation, as the limit is tied to the Flash core.
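Expressed as a quick helper (a Python sketch purely for illustration; the numbers are simply the per-version limits stated above):

def max_bitmap_side(player_version: int):
    # Maximum BitmapData width/height per the limits above; None = unlimited.
    if player_version <= 9:
        return 2880
    if player_version == 10:
        return 4096
    return None  # Flash Player 11+: no fixed cap

def fits(width: int, height: int, player_version: int) -> bool:
    limit = max_bitmap_side(player_version)
    return limit is None or (width <= limit and height <= limit)

# The 3500 x 3500 image from the question is within the FP10 limit by this
# table, so the timeout the asker saw likely has another cause (e.g. encoding time).
print(fits(3500, 3500, 10))  # True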

Volume render DICOMDIR CT scan

I got a CD from the hospital containing a head CT scan.
I am completely new to medical imaging. What I would like to do is perform a volume rendering of the CT scan.
It is in DICOMDIR format. How and where should I start?
From messing about with various tools, I get the feeling that I need to extract each series into DICOM format. Is this correct, and if so, how would I do it?
Unless you were given the volume data, your rendering will be disappointing at best. Many institutions still acquire head CTs as separate "step-slices" rather than as volumes, in which case you will see significant stepping artifacts.
Even if the scan was acquired as volume data, unless they transferred all of it to your CD, you will still be stuck with only the processed 'slab' or 'slice' images.
The best way to do a volume rendering is to actually have the volume data. 'Slice image' data has most of the information dumbed down and removed; you are just getting 20 or 30 images as 256 x 256 x (8- or 16-bit greyscale) array data.
If you have a Mac, try OsiriX - it's free, open source and will do everything you need and more. If you don't, and this is a one-time thing, you could always sign up for a free demo of a commercial-grade DICOM viewer. Medical image viewing software is insanely expensive and would be impossible to sell without demos. Just claim to be working for a clinician and you'll have no problem getting working software.
I believe ImageJ will open any of the files in the DICOMDIR for you. I'm not entirely sure it can open the entire study from the DICOMDIR, but I'm fairly certain it will handle any individual files you need to open. It should also offer the option to export the images to various other formats. If you need more info, feel free to post a comment.
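If you would rather script the series-extraction step the question asks about, here is a minimal sketch assuming the Python pydicom package (an assumption, not something the answers here mention); compressed files may additionally need a decoder plugin such as pylibjpeg or gdcm:

from pathlib import Path

import numpy as np
from pydicom import dcmread
from pydicom.fileset import FileSet

# Read the DICOMDIR index and walk every instance it references.
fs = FileSet(dcmread("/media/cdrom/DICOMDIR"))  # path is an assumption

series = {}
for instance in fs:
    ds = instance.load()
    series.setdefault(ds.SeriesInstanceUID, []).append(ds)

for uid, slices in series.items():
    out = Path("extracted") / uid
    out.mkdir(parents=True, exist_ok=True)
    # Sort along the scan axis so the stack is in anatomical order.
    # (Scout/localizer images may lack ImagePositionPatient; filter those out.)
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    for i, ds in enumerate(slices):
        ds.save_as(out / f"{i:04d}.dcm")               # plain DICOM files per series
    volume = np.stack([s.pixel_array for s in slices])  # (z, y, x) voxel array
    print(uid, volume.shape)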
You can also try MeVisLab (http://www.mevislab.de/). It is free but a bit more complex to use, and it may take two steps to get a rendering of your DICOM images:
most probably you will have to use one of the widgets they provide to convert the images, and then load the converted image and render it.
I have done this with ImageJ, but at the time ImageJ did not support compressed DICOM files, so you had to create your own logic to read them.
Fiji and VolumeJ are also good options for volume rendering.
Try Real3d VolViCon, an advanced application for the reconstruction of computed tomography (CT), magnetic resonance (MR), ultrasound, and X-ray images. It offers features for exporting 3D surfaces or volumes as triangular mesh files for creating physical models with 3D printing technologies. It also provides high-quality visualization, linear and angular measurement tools, and various types of markup. It takes a single raw volume file or a sequence of 2D (e.g., DICOM) files and reconstructs a 3D volume (voxel) model and a mesh (surface) model.
