How to make a heavier perimeter outline in mapshaper

I'm trying to ink in the perimeter of a region composed of many polygons. I've tried dissolving the polygons to get one large perimeter, making its stroke-width heavier, and then adding back the attribute data. But the dissolved perimeter loses its attribute data, and seemingly any way of hooking back up with the original polygons. And if I force retention of a field to join on (copy-fields), the join brings the heavy stroke-width back to every polygon anyway. So that's a rotten strategy. Can I specify stroke-width for only certain attribute values? I also tried -mosaic but I couldn't get that to work either. TIA.
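For reference, here is a sketch of one approach that sidesteps the join entirely (file and layer names are hypothetical, and the flags are from memory, so treat this as a starting point): dissolve into a separate layer using the + flag so the original polygons are kept, then style each layer on its own and export both to SVG.

    mapshaper -i regions.shp name=regions \
      -dissolve2 + name=outline \
      -style target=regions stroke='#999' stroke-width=0.5 fill='#eee' \
      -style target=outline stroke='#000' stroke-width=3 fill=none \
      -o map.svg target='*'

As for styling by attribute: -style also accepts a where= expression (e.g. -style where='TYPE == "coast"' stroke-width=3), if I'm reading the docs right, so a heavier stroke for only certain attribute values should be possible without dissolving at all.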

Related

Polygon comparison between layers

I'm trying to compare specific polygons between layers (different years), to see if there has been any change to their area, e.g. 2019 vs 2021. [Example images: 2021 & 2019.] In this example (from left to right), A would decrease by the amount that now falls outside the newer layer, as well as by the amount that B is now taking instead. C would also increase in area.
The difference, symmetrical difference, union, and intersection tools all find areas outside the total bounding box of the layer, but cannot find changes to polygons within the layer.
My desired output is a change (either % or absolute) in area from each layer to a master layer (the latest year).
(Some polygons get split and/or renamed, however I suspect this is a small enough cohort that it can be addressed manually.)
Edit: Currently trying "Detect Dataset Changes", but the polygons don't overlap 100% exactly; 0.00000001% changes along the borders mess it up and mark every polygon as changed. Posted as a new question: Is there a tolerance setting I can use with the "Detect Dataset Changes" tool?
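A sketch of the per-polygon comparison in Python with geopandas (file and column names are hypothetical; both layers are assumed to share an ID column and a projected CRS, so areas come out in square metres):

    import geopandas as gpd

    # Hypothetical inputs: one layer per year, sharing an ID column "name";
    # both in a projected CRS so .area returns square metres.
    old = gpd.read_file("parcels_2019.gpkg").set_index("name")
    new = gpd.read_file("parcels_2021.gpkg").set_index("name")  # master layer

    cmp = old.geometry.area.rename("area_old").to_frame()
    cmp["area_new"] = new.geometry.area            # aligned by the ID index
    cmp["abs_change"] = cmp["area_new"] - cmp["area_old"]
    cmp["pct_change"] = 100 * cmp["abs_change"] / cmp["area_old"]

    # Overlap area separates polygons that merely shifted from polygons
    # that actually grew or shrank.
    cmp["overlap"] = old.geometry.intersection(new.geometry, align=True).area

    print(cmp.sort_values("pct_change"))

Split or renamed polygons show up as NaN rows here, which fits the plan of handling that small cohort manually.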

Rendering highly granular and "zoomed out" data

There was a GIF on the internet where someone used some sort of CAD program to draw multiple vector pictures. In the first frame they zoom in on a tiny dot, revealing a whole different vector picture at another scale, and then they zoom in further on another tiny dot, revealing another detailed picture, repeating several times. Here is the link to the GIF.
Or another similar example: imagine you have a time-series with a granularity of a millisecond per sample and you zoom out to reveal years-worth of data.
My question is: how does such finely detailed data actually get rendered, when a huge amount of it ends up aliased into a single pixel?
Do you have to go through the whole dataset to render that pixel (i.e. for a time series, scan a million records just to average them into one line; or for CAD, render the whole vector picture and shrink it to a tiny dot), or are there level-of-detail optimizations that let you avoid this?
If so, how do they work, and where can one learn about them?
This is a very well-known problem in games development. In the following I assume you are using a scene graph, a node-based tree of objects.
Typical solutions involve a mix of these techniques:
Level Of Detail (LOD): multiple resolutions of the same model, shown or hidden so that only one is "visible" at any time. When to hide and show is usually determined by the distance between camera and object, but you could also include the object's scale as a factor (see the LOD sketch after this list). Modern 3D/CAD software will sometimes offer automatic "simplification" of models, which can serve as the low-res LOD models. At the lowest level, you could even just use the object's bounding box. Checking whether a bounding box is in view takes only around 1-7 point checks, depending on how you check, and you can use object parenting to get transitive bounding boxes.
Clipping: if a polygon does not fall inside the viewport at all, there is no need to render it. In the GIF you posted, when the camera zooms in on a new scene, all that is left of the larger model is a single polygon in the background.
Re-scaling of world coordinates: as you zoom in, vertex coordinates become tiny floating-point numbers. Given that you want all coordinates as precise as possible, and that modern CPUs handle floats with at most 64 bits of precision (and often use only 32 for better performance), it's a good idea to reset the scaling of the visible objects. What I mean is that as your camera zooms in to, say, 1/1000 of the previous view, you can scale the larger objects up by a factor of 1000 and at the same time adjust the camera position and focal length. Any newly attached small model would use its original scale, preserving its precision.
This transition is invisible to the viewer, but lets you stay within well-defined 3D coordinates while being able to zoom in indefinitely.
On a higher level: as you zoom in and the camera gets closer to an object, it appears as if the world grows bigger relative to the view. While normally the camera moves and the world is multiplied by the camera's matrix, the same effect can be achieved by changing the world coordinates instead of the camera (see the rescaling sketch below).
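To make the LOD selection concrete, here is a minimal sketch (Python; all names are hypothetical) that picks a resolution from the object's projected size in pixels, which folds camera distance and object scale into a single number:

    def pick_lod(bounding_radius, distance, focal_px, lods):
        """lods: meshes ordered from coarsest to finest."""
        # Pinhole projection: on-screen radius ~ focal length * size / distance
        projected_px = focal_px * bounding_radius / max(distance, 1e-9)
        if projected_px < 1:
            return None          # smaller than a pixel: skip, or draw a dot
        if projected_px < 10:
            return lods[0]       # bounding-box or billboard level
        if projected_px < 100:
            return lods[1]
        return lods[-1]          # full-resolution model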
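And a sketch of the rescaling trick (again, hypothetical names): scaling the whole scene and the camera by the same factor leaves the rendered image unchanged but keeps coordinates in a comfortable floating-point range.

    def rebase_scale(objects, camera, k=1000.0):
        # Called when the view has zoomed in by roughly a factor of k.
        for obj in objects:
            obj.position *= k    # grow the world...
            obj.scale *= k
        camera.position *= k     # ...and back the camera off to match
        camera.near *= k         # keep clip planes consistent
        camera.far *= k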
First, you can use caching, with tiles, like it's done in cartography. You'll still need to go over all the points once, but after that you'll be able to zoom in and out quite rapidly.
But if you don't have extra memory for the cache (not that much, actually: much less than the data itself), or don't have time to go over all the points, you can use a probabilistic approach.
It can be as simple as sampling only every other point (or every 10th point, or whatever suits you). This yields decent results for some data. Again, in cartography it works quite well for shorelines, but not so well for houses or administrative borders: anything with a lot of straight lines.
Or you can take a more hardcore probabilistic approach: randomly sample some points, and if, for example, 100 sampled points hit pixel one and only 50 hit pixel two, you can more or less safely assume that if you kept sampling, pixel one would still be hit about twice as often as pixel two. So you can stop there and just draw pixel one with twice the colour weight.
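A minimal sketch of that sampling idea in Python/NumPy (names are hypothetical): estimate per-pixel-column density from a random subsample instead of scanning every record.

    import numpy as np

    def column_density(x, width, xmin, xmax, n_samples=200_000, rng=None):
        """x: huge 1-D array of x-values. Returns one intensity per pixel
        column in [0, 1], estimated from a random subsample of points."""
        rng = rng or np.random.default_rng()
        sample = x[rng.integers(0, len(x), size=min(n_samples, len(x)))]
        cols = ((sample - xmin) / (xmax - xmin) * (width - 1)).astype(int)
        counts = np.bincount(cols.clip(0, width - 1), minlength=width)
        return counts / counts.max()  # hit twice as often = twice as dark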
Also consider how much data you can and want to put in a pixel. If you draw in black and white, there are only 256 shades, and you don't need to be more precise than that. And if you draw in full colour, you still need to ask yourself: will anyone notice the difference between something like rgb(123,12,54) and rgb(123,11,54)?

Cel shading/alpha shape in current visualization

I am playing around with rgl and I have created a 3D rendering of the mouse brain, in which structures can be isolated and coloured separately.
The original data is a 3D array containing evenly spaced voxels.
Every voxel is coded with a structure ID.
Every structure is rendered separately as a mesh by marching cubes, and smoothed using Laplacian smoothing as implemented by Rvcg.
Some of these structures can be quite small, and it would make sense to look at them within the context of the whole brain structure.
One of the options is to create a low-threshold mesh of the whole set of voxels, so that only the outer surface of the brain is included in the mesh.
This surface can be smoothed and rendered with a low alpha in rgl::shade3d, colouring the faces. However, this seems to be quite taxing for the viewport: it slows down rotation etc., especially when alpha levels are quite low.
I was wondering if there is any way to implement some sort of cel shading in rgl, e.g. outlining in solid colours the alpha hull of the 2D projection to the viewport in real time.
In case my description was not clear, here's a photoshopped example of what I'd need.
Ideally I would not render the gray transparent shell, only the outline.
Cel shading example
Does anybody know how to do that without getting deep into OpenGL?
Rendering transparent surfaces is slow because OpenGL requires the triangles making them up to be sorted from back to front. The sort order changes as you rotate, so you'll be doing a lot of sorting.
I can't think of any fast way to render the outline you want. One thing that might work, given that you are starting from evenly spaced voxels, is to render the outside surface using front = "points", back = "points", size = 1. Doing this with the ?surface3d example gives a fake transparency effect.
If that's not transparent enough, you might be able to improve it by turning off lighting (lit = FALSE), plotting in a colour close to the background (color = "gray90"), or something else along those lines. Doing both of those softens the effect further.
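Concretely, assuming mesh holds the mesh3d object for the outer surface (a hypothetical name), those suggestions combine to something like:

    library(rgl)

    open3d()
    shade3d(mesh,
            front = "points", back = "points",  # draw only the vertices
            size = 1,                           # one-pixel dots
            lit = FALSE,                        # no lighting
            color = "gray90")                   # close to the background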
You may also be able to cull your data so the surface has fewer vertices.

Determine minimum number of "invisible squares" to cover all markers on a map?

I have a map full of markers corresponding to GPS coordinates, represented as a PostgreSQL + PostGIS database table using "geography" type for the GPS column.
Imagine, if you will, a semi-transparent 1x1 mile square on top of each of these points, centred on the point, with the squares often intersecting one another.
I'm trying to determine the minimum number of such "squares" and their GPS coordinates, so that they "cover" all of the markers, with every marker at least 25 meters from the nearest border of its square.
If it makes it any easier, the positions of these "squares" don't have to match the positions of any of the markers.
The purpose of this is to attempt to cut down the number of API requests to a "houses for sale" service significantly, since most of the positions are close to each other and the API takes a 1x1 mile square "bounding box" as the input for each call. It would be insanely wasteful to call the API many times for basically the same area when maybe 1 or 2 times would do it if I can first figure out where these imaginary "squares" go.
I get the feeling that this is considered a "known, common and solved" problem, but so far, I've not been able to figure out how to do it.
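For what it's worth, minimum cover of points by fixed-size squares is a known hard (set-cover-type) problem, but a simple grid-snapping heuristic gets close and is trivial to compute. A sketch in Python (the flat equirectangular projection and all names here are my own simplifications, not from the question):

    import math

    MILE = 1609.344            # metres
    MARGIN = 25.0              # required clearance from the square's border
    CELL = MILE - 2 * MARGIN   # grid pitch that guarantees the margin

    def cover(points):
        """points: list of (lat, lon). Returns 1x1-mile square centres."""
        lat0 = sum(p[0] for p in points) / len(points)
        m_lat = 111_320.0                          # metres per degree latitude
        m_lon = m_lat * math.cos(math.radians(lat0))

        cells = set()
        for lat, lon in points:
            x, y = lon * m_lon, lat * m_lat
            cells.add((math.floor(x / CELL), math.floor(y / CELL)))

        # Centre one 1x1-mile square on each occupied grid cell: any point
        # in the cell is then at least MARGIN metres from the square's border.
        return [(((j + 0.5) * CELL) / m_lat, ((i + 0.5) * CELL) / m_lon)
                for i, j in cells]

This won't always be minimal (two half-empty neighbouring cells could sometimes share one square), but it's fast, and the centres can be turned straight into the 1x1 mile bounding boxes the API expects.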
Sorry, but it seems like you have no idea what you are doing and are just being rude, both here and in the PostGIS IRC channel.
You give no information about your API.
What is creating your maps?
Is it a WMS service, or what?
What most people would do is set up a map service with a tile cache. The map service will then pick the tiles needed for each house you want to show (or for multiple houses).
The tiles will either be prepared in advance or created on the fly, but they will be cached for the next time.
So, I think you should read up on things like
MapServer
Mapnik
MapCache
MapProxy
GeoServer
That is not a complete list, but it might give you some ideas about what it is that you want.

C# GDI: Union/Outline of Overlapping Ellipses

Banging my head on the wall with this one...
Google searches so far null...
I have a ton of overlapping circles in a mapping program...they represent radar ranges for installations such as fixed operating bases, strategic facilities, anti-aircraft assets...
Most if not all overlap one or many of their brethren...some may stand alone...imagine an outlying installation with limited range...
I am trying to draw the UNION of the aggregated collection of circle objects...technically ellipses bounded by rectangles...
I am trying to draw the outside boundary of the air defense system...I want to eliminate all drawing of the portions of the child ellipses that fall within that outer boundary...
If an outlying station stands alone, so to speak, it should be drawn as a simple circle...
Should I link a picture?
What the heck, here it is...the image is a bit big so I linked it:
image 1024x1024
What I want to draw is the union outline of the British and then of the Germans...
So far I can't figure out how to do this in C# GDI...
I do not want to fill the path using the Winding fill mode...I want to draw the OUTLINE...
Any help greatly appreciated...
Oneway
Create a new image, render the circles in solid colour to that image, then overlay that image on your map at, say, 50% opacity.
Alternatively, run an edge detect on that solid-colour image to find the overall outline.
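A sketch of the first suggestion in GDI+ (hypothetical names; circles given as their bounding rectangles): fill everything opaquely into an offscreen bitmap, then composite that bitmap once at 50% alpha via a ColorMatrix, so overlaps don't double-darken and the union reads as one shape.

    using System.Drawing;
    using System.Drawing.Imaging;

    void DrawRangeUnion(Graphics g, RectangleF[] circles, Size mapSize)
    {
        using (var layer = new Bitmap(mapSize.Width, mapSize.Height))
        {
            using (var lg = Graphics.FromImage(layer))
            using (var brush = new SolidBrush(Color.Red))
            {
                // Opaque fills: overlapping circles merge into one solid blob.
                foreach (var rect in circles)
                    lg.FillEllipse(brush, rect);
            }

            // Composite the whole blob at 50% opacity in a single pass.
            var cm = new ColorMatrix { Matrix33 = 0.5f };   // alpha scale
            using (var attrs = new ImageAttributes())
            {
                attrs.SetColorMatrix(cm);
                g.DrawImage(layer,
                            new Rectangle(Point.Empty, mapSize),
                            0, 0, layer.Width, layer.Height,
                            GraphicsUnit.Pixel, attrs);
            }
        }
    }

For the outline alone you would still need the edge-detect step on the solid-colour bitmap; as far as I know, GDI+ has no direct "outline of a region" call.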
