How to draw a filled polygon in the Google Maps SDK for iOS

I would like to draw a filled polygon on iPhone with the Google Maps SDK for iOS (version 1.1.1, the latest one).
Does anyone know how to do the equivalent on iOS?
(My code on Android:)
mMap.addPolygon(new PolygonOptions()
.addAll(latLngList)
.fillColor(Color.BLUE)
.strokeColor(Color.RED)
.strokeWidth(3));
Regards,
PS: If you have several possible solutions, keep in mind that I have many polygons to draw.

The SDK currently doesn't support filled polygons; however, there is a feature request to add them here:
https://code.google.com/p/gmaps-api-issues/issues/detail?id=5070
In the meantime, one option could be to draw your polygons into an image, and then add them as a ground overlay. This would be very limiting, but might work as a temporary workaround.
Another option is to add another view over the top of the map view and draw the polygons into it, and then update them whenever the map view moves. It isn't possible to perfectly synchronize another view with the map view, so your polygons will lag behind a bit as you pan/zoom around, but this might also be okay for you as a temporary workaround.
UPDATE
These are just some random ideas to try for the ground overlay approach, I'm not sure if they would work, but they might get you started:
I would suggest converting the lat/lon corners of the rectangle into MKMapPoint (using MKMapPointForCoordinate). These are equivalent to Google's coordinate system at zoom level 20.
You can then use the aspect ratio of the width/height of the rectangle in MKMapPoint coordinates to determine the aspect ratio of your ground overlay UIImage. Once you have the aspect ratio, you'll just need to experiment with actual sizes (i.e. guess a width, then calculate the height from the aspect ratio) to find one which looks okay. The bigger it is, the finer the detail of your rectangle will be, but the more memory it will use, and probably the slower the performance will be. You might also hit a hard limit at some size - I'm guessing the UIImage gets converted by the Google Maps SDK into a texture, and textures have a max size of 2048x2048 on the iPhone 3GS and later.
Then, use something similar to How to setRegion with google maps sdk for iOS? to calculate a zoom level and centre lat/lon. Instead of the map view width/height you would use your UIImage width/height, and you'd use the bounds of your rectangle instead of the bounds of the desired view. You also wouldn't need to calculate the scale from both the width and height (the scale should be the same), so just use one of them. Instead of creating a camera with the zoom level and centre lat/lon, set them on the GMSGroundOverlayOptions. Also set the ground overlay's anchor to the centre of the image (i.e. 0.5, 0.5).
The above describes how to add one GroundOverlay per rectangle. If you have lots of overlapping or nearby rectangles you could probably combine them into a single UIImage, but that would be a bit more complicated.
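Putting those pieces together, here is a rough, untested Objective-C sketch of the ground overlay approach. The GMSGroundOverlayOptions property names (icon, position, zoomLevel, anchor) should be checked against your SDK version, and renderPolygonImage plus the projected points array are illustrative helpers, not part of the SDK:
// Draw one filled polygon into a transparent UIImage using Core Graphics.
// points holds NSValue-wrapped CGPoints already projected into image space.
- (UIImage *)renderPolygonImage:(NSArray *)points size:(CGSize)size {
    UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextSetLineWidth(ctx, 3.0);
    CGPoint first = [points[0] CGPointValue];
    CGContextMoveToPoint(ctx, first.x, first.y);
    for (NSUInteger i = 1; i < points.count; i++) {
        CGPoint p = [points[i] CGPointValue];
        CGContextAddLineToPoint(ctx, p.x, p.y);
    }
    CGContextClosePath(ctx);
    CGContextDrawPath(ctx, kCGPathFillStroke);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

// Add the image as a ground overlay, using the zoom level and centre
// lat/lon computed as described above.
GMSGroundOverlayOptions *options = [GMSGroundOverlayOptions options];
options.icon = polygonImage;
options.position = centreCoordinate; // centre lat/lon of the rectangle
options.zoomLevel = zoomLevel;       // zoom level from the aspect-ratio maths
options.anchor = CGPointMake(0.5, 0.5);
[mapView addGroundOverlayWithOptions:options];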

Related

What is the best approach to displaying drawings on different-sized Paper JS views?

Context
I'm using Paper JS to build a multi-player drawing game. At any given point, a single user will be drawing to his/her canvas, and the data will get sent to the server to be broadcast to other users. Each user's canvas may be of variable size, and it resizes as the window resizes while maintaining the same aspect ratio.
The goal is for each user to have a scaled representation of the drawing (i.e. everything fits inside the different sized canvases and the content doesn't get distorted). This should be the case if a drawing transfers from a larger canvas to a smaller canvas, and vice-versa. The project supports a drawing tool as well as an eraser tool.
Problem
Approach 1 below scales the drawings the way I want, but there is substantial lag. Approach 2 deals with the lag, but doesn't scale the drawings the way I want.
My understanding is that SVGs will scale nicely whether they are scaled up or down, but rasters are pixel-based and become "blurry" when scaled up. When I test approach 2, a drawing from a smaller canvas gets blurred on a larger canvas. The result is the same whether I use export/importJSON or export/importSVG. Is there a way to get both good performance and scaled drawings? See below for example implementations of the tools.
Approach 1: Paths + Symbols:
- Every path/symbol placement is kept in the active layer.
- The eraser tool draws a white rectangle (defined as a symbol) to mimic an "erasing" effect.
- This works fine as a demo, but will start to lag very quickly as the number of items in the active layer increases. The eraser tool in particular will not function smoothly.
Relevant sketch
Approach 2: Rasterization:
- After a path is drawn or a symbol is placed, the active layer is rasterized and its children are removed.
- This seems to work quite well on a single canvas, and the eraser doesn't lag like in the first approach. There are only 2 items in the active layer after each rasterization.
- When a drawing from a client with a smaller canvas is exported (using exportJSON or exportSVG) to a client with a larger canvas, the result is "blurry".
- The above also happens when a drawing is made and then the canvas is re-sized to be larger.
Relevant sketch
You could send your objects as SVG and rasterize them once received. Since SVG is vector data it scales cleanly, so each client can fit the drawing to its own canvas size before rasterizing, keeping the performance benefit of approach 2 without the blurriness.
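A minimal sketch of that idea with Paper JS (the socket calls are hypothetical stand-ins for your own networking layer):
// Sender: serialize the current drawing as an SVG string.
var svg = paper.project.exportSVG({ asString: true });
socket.emit('drawing', svg);

// Receiver: import the SVG, scale it to this client's view, then rasterize
// once so the active layer keeps only a small, fixed number of items.
socket.on('drawing', function (svg) {
    var item = paper.project.importSVG(svg);
    item.fitBounds(paper.view.bounds); // scale the vectors before rasterizing
    item.rasterize();                  // flatten to a single Raster
    item.remove();                     // drop the vector originals
});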

QPainter drawImage becomes very pixelated

I use QPainter and its drawImage function to draw an airplane on a map. The image is redrawn each time the position of the airplane changes. The problem is that, after some time, the image becomes extremely pixelated. I have tried using a high-quality .svg, and that did not help either.
Below is my code. Can somebody spot where the error is or what has caused the image to be so pixelated?
// Load the .svg image
airplane->load("AirplaneTopDown.svg");
// Downsize the image
airplaneSmall = airplane->scaled(120, 120, Qt::KeepAspectRatio);
// Rotate the image by trans
airplaneSmall = airplaneSmall.transformed(trans);
// Draw the image, centred at a certain screen position
painter.drawImage(airplaneX - airplaneSmall.width() / 2,
                  airplaneY - airplaneSmall.height() / 2,
                  airplaneSmall);
Below are the images of the drawn airplanes. One taken as screenshot at the beginning of the program runtime another one taken after a couple of minutes.
Airplane
Airplane-pixelated
One of your problems is that you first rescale the image and then rotate it.
The rotation needs to interpolate new pixels from the old ones. The higher the resolution of the input, the better the quality of the interpolation; the detail of your SVG is already lost by the time the rescaled image is rotated.
The second problem is that you use the "fast" (default) transformation method. This method does not antialias: instead of interpolating from several input pixels, it only takes the single best fit. Calling transformed() with Qt::SmoothTransformation as its second argument, and scaled() with Qt::SmoothTransformation as its fourth (transformation mode) argument alongside Qt::KeepAspectRatio, will greatly improve your results.
However, this is also slower, as is performing the rotation on the image at its original, higher resolution.
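Applied to the code from the question, the fix might look like this (a sketch using the same variable names; rotate at full resolution first, then downscale smoothly):
// Rotate the full-resolution image with antialiasing.
QImage rotated = airplane->transformed(trans, Qt::SmoothTransformation);
// Downscale afterwards, also with the smooth transformation mode.
airplaneSmall = rotated.scaled(120, 120,
                               Qt::KeepAspectRatio,
                               Qt::SmoothTransformation);
painter.drawImage(airplaneX - airplaneSmall.width() / 2,
                  airplaneY - airplaneSmall.height() / 2,
                  airplaneSmall);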
The arguably best solution to your problem is to take a different approach: instead of loading the SVG into a QImage, which is raster-based, work with the vector graphics directly, so the SVG is rendered at the right orientation and scale in the first place. A good starting point is the SVG Viewer Example: http://doc.qt.io/qt-5/qtsvg-svgviewer-example.html
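For instance, a sketch of rendering the SVG directly with QSvgRenderer (from the Qt SVG module; angleDegrees is a hypothetical stand-in for your rotation value):
#include <QSvgRenderer>

QSvgRenderer renderer(QStringLiteral("AirplaneTopDown.svg"));
painter.save();
painter.translate(airplaneX, airplaneY); // move the origin to the airplane
painter.rotate(angleDegrees);            // rotate the coordinate system
renderer.render(&painter, QRectF(-60, -60, 120, 120)); // centred 120x120 box
painter.restore();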

How to increase click radius in bokeh?

I'm doing some simple plotting and would like to increase the usability of my figure.
I have quite a lot of points on my graph and have issues with selecting the ones I want because the click radius is so tiny.
I can increase the circle radius of my points, but the radius of the area which displays a tooltip is still only a single pixel. Can I increase that radius somehow without having to create additional points around each one that respond the same way?
Would it be even possible to increase the click detection radius without increasing the actual circle radius?
In the current version (0.8.2) and in the upcoming version (0.9), this is not yet a tunable parameter. Exposing a click radius would be a good feature, so I have made an issue on our issue tracker, which you can follow here:
https://github.com/bokeh/bokeh/issues/2230
In the short term, a possible workaround is to have a second, invisible set of glyphs that are used for hit testing. They would be at the same locations, but bigger, to provide a bigger hit area.
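A minimal sketch of that workaround, using HoverTool's renderers argument (available in newer Bokeh releases; the 0.8-era API differed slightly, and the data here is illustrative):
from bokeh.plotting import figure, show
from bokeh.models import HoverTool

x = [1, 2, 3]
y = [4, 5, 6]

p = figure()

# The visible points, drawn at their normal size.
p.circle(x, y, size=5, color="navy")

# Invisible, larger glyphs at the same locations, used only for hit testing.
hit_area = p.circle(x, y, size=25, alpha=0)

# Restrict the hover tool to the invisible glyphs so their larger radius
# determines where the tooltip appears.
p.add_tools(HoverTool(renderers=[hit_area], tooltips=[("x", "@x"), ("y", "@y")]))

show(p)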

SCNNode material scale

I have an SCNNode that has its geometry populated from a collada file (.dae) and displays correctly on screen. I can apply materials to the geometry easily enough, however I'd like to change the scale of the material.
I currently populate it with
nodeArray[0].geometry?.firstMaterial!.diffuse.contents = "wood.png"
but the scale of the material is too small. While I could edit the PNG in GIMP or something similar and import it as wood2.png, is there any way I can set the material scale programmatically?
What do you mean by "too small"?
Geometries are made of different sources such as the vertices' positions, but also their texture coordinates. These texture coordinates (they belong in [0,1]x[0,1]) are specified per vertex and indicate where to look in the texture.
In your 3D modeler, please check that your texture coordinates match what you want (i.e. they cover the whole image, going from 0 to 1 in every direction), and make sure that your image has no extra transparent margin or other wasted space.
You can have a look at SCNMaterialProperty's contentsTransform property. But please check your model and texture before using it.
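A minimal Swift sketch of contentsTransform, assuming you want the wood image tiled 4x4 across the existing texture coordinates (the scale factors are illustrative):
if let material = nodeArray[0].geometry?.firstMaterial {
    material.diffuse.contents = "wood.png"
    // Repeat the image instead of clamping at the edges.
    material.diffuse.wrapS = .repeat
    material.diffuse.wrapT = .repeat
    // Scale the texture coordinates; larger factors mean more, smaller tiles.
    material.diffuse.contentsTransform = SCNMatrix4MakeScale(4, 4, 1)
}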
You can open your UV snapshot in an image editor such as Photoshop, scale the wood texture over your UVs there, re-save the PNG/JPG, and move it back into Xcode.

Rendering an invisible occluder

I'm currently upgrading from a DirectDraw system (yeah I know, it's very old) to DirectX10. It's a 2D system, but it simulates the real world in that each object has a range/depth in meters. There is a background image that is rendered and kept at the farthest z-order. All other objects are drawn on top of it and scaled according to their range/depth. However, there is a certain type of object that is defined as a polygon and renders a bit differently: it acts as an invisible occluder. For instance, an occluder is at a range/depth of 40 (my units are meters) and is defined by 5 vertices (a pentagon) in the middle of the viewport. There is a sprite object at the same viewport position but at a range/depth of 50. The desired output is for the sprite object not to be rendered, but for the background to be seen through both of them. So in essence these are invisible occluders, except that they do not occlude the background.
As a note, the occluders and the sprites all derive from the same base object type and are mixed together in a depth-sorted container.
My idea was to override the occluders' Render method so they draw to a render target, writing their range/depth values. I would then render the sprites as normal, but in the vertex or pixel shader compare the range value of the sprite with the range values in the render target. However, it seems I'd potentially have to read and write the same render target in one pass before Present is called, and that's undefined. If I were instead to render the occluders, unbind the render target, and pass the texture in for a lookup by the other objects, I'd have to convert the sprite positions into that texture space, which may be non-trivial. Is either of these methods feasible?
After thinking some more about it, one other idea came to mind. I could take the occluders and set their texture coordinates in reference to the background texture. In this way they would draw the same color values as the background, and because of the sorting if a sprite was behind it the user would still see the "background" but really it's the occluder looking like it.
Sorry if this is less a question and more thinking out loud, but I wanted to get impressions and ideas on the best way to go about this. It seems I have options, but I'm not certain which is most efficient and which is easiest. Thanks in advance for any responses.
As stated in my comments, I went with setting the texture coordinates in reference to the background image, and then making sure the occluder, which was a simple polygon, was triangulated properly to make use of those texture coordinates.
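A sketch of that UV derivation, assuming screen-space vertex positions and a background texture that fills the viewport (the struct and names are illustrative, not the poster's actual code):
#include <cstddef>

// Map each occluder vertex's viewport position onto the background texture,
// so the occluder samples exactly the background pixels it covers.
struct OccluderVertex {
    float x, y, z; // screen-space position (z carries the depth ordering)
    float u, v;    // texture coordinates into the background texture
};

void SetBackgroundUVs(OccluderVertex *verts, size_t count,
                      float viewportWidth, float viewportHeight)
{
    for (size_t i = 0; i < count; ++i) {
        verts[i].u = verts[i].x / viewportWidth;
        verts[i].v = verts[i].y / viewportHeight;
    }
}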
