WKT: how do you define Polygons with 3 rings (==2 holes)? - polygon

I found this document and read it, but I am still wondering: how do I define a polygon with 3 rings (one outer ring plus 2 holes) in WKT?

You can use either the POLYGON or the MULTIPOLYGON type; just make sure the outer containing ring is listed first, followed by the inner hole rings. The orientation of the inner rings does not matter, since holes are explicit in the syntax, but each ring should be closed, i.e. its first point repeated as its last.
X and Y are space-separated, coordinates are comma-separated, and each ring is delimited by parentheses, with the rings themselves separated by commas. A polygon (outer ring plus any inner rings) is wrapped in another pair of parentheses.
Finally, inner rings may not cross each other, nor may they cross the outer ring.
Examples:
POLYGON ((10 10, 110 10, 110 110, 10 110, 10 10), (20 20, 20 30, 30 30, 30 20, 20 20), (40 20, 40 30, 50 30, 50 20, 40 20))
MULTIPOLYGON (((10 10, 110 10, 110 110, 10 110, 10 10), (20 20, 20 30, 30 30, 30 20, 20 20), (40 20, 40 30, 50 30, 50 20, 40 20)))
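If you want to generate such strings programmatically, here is a small plain-Python sketch (the helper name `polygon_wkt` is my own, not a library function) that applies the nesting rules and closes each ring by repeating its first point last, as WKT readers generally require:

```python
def polygon_wkt(outer, holes=()):
    """Build a POLYGON WKT string: outer ring first, then hole rings.

    Each ring is a list of (x, y) tuples; the ring is closed automatically
    by repeating the first point at the end if needed.
    """
    def ring(pts):
        pts = list(pts)
        if pts[0] != pts[-1]:
            pts.append(pts[0])          # WKT rings must be closed
        return "(" + ", ".join(f"{x:g} {y:g}" for x, y in pts) + ")"

    rings = ", ".join(ring(r) for r in (outer, *holes))
    return f"POLYGON ({rings})"

outer = [(10, 10), (110, 10), (110, 110), (10, 110)]
hole1 = [(20, 20), (20, 30), (30, 30), (30, 20)]
hole2 = [(40, 20), (40, 30), (50, 30), (50, 20)]
print(polygon_wkt(outer, [hole1, hole2]))
```

A library such as Shapely can parse the resulting string if you need real geometry operations afterwards.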

Related

Find the minimum number of shipments between two countries to minimize the cost of the system

Suppose we are transporting containers between two countries. We have a list of containers with different weights, and our goal is to minimize the number of shipments, which in turn minimizes the cost of the system.
In this problem, our ships have a limited capacity to load containers on each shipment. For example:
Total weight per shipment = 80, and the list of container weights is [19, 29, 43, 45, 32, 22, 51, 65, 31, 13, 62]
Here is the code I've written so far:
from itertools import chain, combinations

def powerset(list_name):
    s = list(list_name)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

A = [19, 29, 43, 45, 32, 22, 51, 65, 31, 13, 62]  # container weights
print(A)

res = []
for x in powerset(sorted(A)):
    if sum(x) == 80 and x not in res:
        res.append(x)
print(res)
And I got the output:
[(29, 51), (13, 22, 45), (19, 29, 32)]
Here 29 occurs twice, which shouldn't happen, and I want to find the remaining possible combinations so that the overall result comes out as 5 shipments.
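This is the classic bin-packing problem (NP-hard in general), so enumerating the powerset will not scale. A common heuristic is first-fit decreasing, sketched below with the capacity and weights from the question (the function name is my own):

```python
def first_fit_decreasing(weights, capacity):
    """Greedy bin packing: place each weight (largest first) into the
    first shipment that still has room, opening a new one if none fits."""
    bins = []                          # each bin is one shipment's weights
    for w in sorted(weights, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:
                b.append(w)
                break
        else:
            bins.append([w])           # no existing shipment fits: open a new one
    return bins

weights = [19, 29, 43, 45, 32, 22, 51, 65, 31, 13, 62]
shipments = first_fit_decreasing(weights, capacity=80)
print(len(shipments), shipments)
```

Note that the weights sum to 412, so with capacity 80 at least ceil(412 / 80) = 6 shipments are needed; an overall result of 5 is not achievable for this list.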

generate human-friendly ticks in a power scale

I want to find an algorithm to generate ticks in a power scale in a human-friendly way.
For example, if the power is 1/2, over the range [0, 100], without considering human friendliness the ticks might be (0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100).
However, to make a plot in 1/2-power scale labeled with these ticks, it would be better to round them to multiples of 1, 2, 5, and 10 (for this specific example) wherever appropriate.
So the human-friendly version of the numbers might be (0, 1, 5, 10, 15, 25, 35, 50, 65, 80, 100) (if the requested number of ticks is around 10). This is easy to do manually for specific examples like this.
How can I come up with a general algorithm that works for any positive power and any non-negative interval (note that the interval boundaries need not be integers; they can be arbitrary positive real numbers), so that the algorithmic result matches what a human would choose?
This is how to round a power sequence to numbers divisible by 5:
power <- 2
a <- sapply(seq(10), function(x) x ** power)
a
#> [1] 1 4 9 16 25 36 49 64 81 100
b <- seq(0, 100, by = 5)
sapply(a, function(a, b) {b[which.min(abs(a-b))]}, b)
#> [1] 0 5 10 15 25 35 50 65 80 100
Created on 2022-04-29 by the reprex package (v2.0.0)
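The same snapping idea generalizes: place ticks evenly in the power-transformed space, map them back, and snap each to the nearest multiple of a chosen base. A sketch in Python (the names `power_ticks`/`snap` and the `base=5` default are my own choices; note it drops the small tick near the origin, since everything below base/2 snaps to 0, so a fully human-matching version would need a decade-aware base):

```python
def snap(values, base):
    """Snap each value to the nearest multiple of `base`, dropping duplicates."""
    out = []
    for v in values:
        s = base * round(v / base)
        if s not in out:
            out.append(s)
    return out

def power_ticks(lo, hi, power, n=10, base=5):
    """Evenly spaced ticks in the power-transformed space, mapped back to
    the original units and snapped to human-friendly positions."""
    t_lo, t_hi = lo ** power, hi ** power
    raw = [(t_lo + i * (t_hi - t_lo) / (n - 1)) ** (1 / power) for i in range(n)]
    return snap(raw, base)

print(power_ticks(0, 100, 0.5))
```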

logticks in plain R graphics

https://ggplot2.tidyverse.org/reference/annotation_logticks.html
There is a ggplot2 function for log ticks, but it does not work with base R graphics. Is there a ready-to-use function that generates the tick positions (with the range as the input argument), laid out nicely, so that they can be used with base R graphics (specifically the axis() command)?
For example, between 10 and 100, I may want to have major ticks at 10, 20, ..., 90, 100, and minor ticks at 11, ..., 19, 21, ...,29, ..., 91, ..., 99.
Between 100 and 1000, the ticks are just the ones above multiplied by 10; for other ranges the ticks can be defined similarly.
So for an input range of 15 to 150, the output should be all of the ticks described above that fall between 15 and 150.
The function should be relatively flexible. For example, I may reduce the frequency of major ticks, say keeping only the ones at 10, 20, 50, 100, while the minor ticks remain as they are, plus the additional ones at 30, 40, 60, 70, 80, 90.
Another useful option would be letting users specify how many major and minor ticks there should be (rounded so that the ticks do not land at awkward places).
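The tick-generation logic itself is language-agnostic; here is a sketch of it in Python (the name `log_ticks` and the major/minor split are my own; porting the loop to an R function for use with axis() is straightforward). It produces the 10, 20, ..., 100 majors and the 11, ..., 19, 21, ... minors described above:

```python
import math

def log_ticks(lo, hi):
    """Major/minor tick positions for a log10 axis covering [lo, hi] (lo > 0).

    Majors fall on 1, 2, ..., 9 times a power of ten; minors fill the
    0.1-mantissa steps in between, matching the 11..19, 21..29, ... pattern.
    """
    majors, minors = [], []
    for dec in range(math.floor(math.log10(lo)), math.floor(math.log10(hi)) + 1):
        for tenth in range(10, 100):        # mantissa 1.0 .. 9.9 in steps of 0.1
            v = tenth * 10 ** dec / 10
            if lo <= v <= hi:
                (majors if tenth % 10 == 0 else minors).append(v)
    return majors, minors

major, minor = log_ticks(15, 150)
print(major)
```

Thinning the majors to 10, 20, 50, 100 while keeping the rest as minors would then just be a filter over the returned lists.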

Creating subgraphs with overlapping vertices

I've been looking for packages with which I could create subgraphs with overlapping vertices.
From what I understand, NetworkX and METIS can partition a graph into two or more parts, but I couldn't find how to partition it into subgraphs with overlapping nodes.
Suggestions for libraries that support partitioning with overlapping vertices would be really helpful.
EDIT: I tried the Angel algorithm in CDLIB to partition the original graph into subgraphs with 4 overlapping nodes.
import networkx as nx
from cdlib import algorithms

if __name__ == '__main__':
    g = nx.karate_club_graph()
    coms = algorithms.angel(g, threshold=4, min_community_size=10)
    print(coms.method_name)
    print(coms.method_parameters)  # clustering parameters
    print(coms.communities)
    print(coms.overlap)
    print(coms.node_coverage)
Output:
ANGEL
{'threshold': 4, 'min_community_size': 10}
[[14, 15, 18, 20, 22, 23, 27, 29, 30, 31, 32, 8], [1, 12, 13, 17, 19, 2, 21, 3, 7, 8], [14, 15, 18, 2, 20, 22, 30, 31, 33, 8]]
True
0.6470588235294118
From the communities returned, I understand communities 1 and 3 have an overlap of 4 or more nodes, but the pairs 2 and 3, or 1 and 2, don't have an overlap size of 4 nodes. It is not clear to me how the overlap threshold (4 overlapping nodes) has to be specified in algorithms.angel(g, threshold=4, min_community_size=10). I tried setting threshold=4 there to define an overlap size of 4 nodes. However, from the documentation available for angel:
:param threshold: merging threshold in [0,1].
I am not sure how to translate the 4 overlaps to the value that has to be set between the bounds [0, 1]. Suggestions will be really helpful.
You can check out CDLIB:
It has a large collection of community-detection algorithms applicable to NetworkX graphs, including several overlapping-community algorithms.
On a side note:
The return type of these functions is called NodeClustering, which might be a little confusing at first, so look at the methods applicable to it; usually you simply want to convert the result to a Python dictionary.
Specifically about the angel algorithm in CDLIB:
According to ANGEL: efficient, and effective, node-centric community discovery in static and dynamic networks, the threshold is not the overlapping threshold, but used as follows:
If the ratio is greater than (or equal to) a given threshold, the merge is applied and the node label updated.
Basically, this value determines whether to further merge the nodes into bigger communities, and is not equivalent to the number of overlapping nodes.
Also, don't confuse these "labels" with node labels (as in nx.relabel_nodes(G, labels)). The "labels" referred to here come from the Label Propagation Algorithm that ANGEL uses internally.
As for the effects of varying this threshold:
[...] Increasing the threshold, we obtain a higher number of communities since lower quality merges cannot take place.
[Based on a comment by J. M. Arnold]
From ANGEL's github repository you can see that when threshold >= 1 only the min_comsize value is used:
self.threshold = threshold
if self.threshold < 1:
    self.min_community_size = max([3, min_comsize, int(1. / (1 - self.threshold))])
else:
    self.min_community_size = min_comsize
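Incidentally, the pairwise overlap sizes among the three communities printed in the question can be checked directly with plain Python (no CDLIB needed), which confirms that `threshold` does not control the overlap count:

```python
from itertools import combinations

# The three communities printed by angel() in the question, copied verbatim
communities = [
    [14, 15, 18, 20, 22, 23, 27, 29, 30, 31, 32, 8],
    [1, 12, 13, 17, 19, 2, 21, 3, 7, 8],
    [14, 15, 18, 2, 20, 22, 30, 31, 33, 8],
]

# Pairwise overlap sizes |C_i & C_j|, keyed by 1-based community pair
overlaps = {
    (i + 1, j + 1): len(set(communities[i]) & set(communities[j]))
    for i, j in combinations(range(len(communities)), 2)
}
print(overlaps)
```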

how to do fourier frequency matrix multiplication if size is different?

Sorry, this is not a programming issue. I am just confused about this theorem (the convolution theorem):
The FFT of a convolution equals the element-wise multiplication of the FFTs of the inputs.
i.e.:
FFT(conv(x, y)) = FFT(x) * FFT(y)
For the left side:
Let's say I have an image of size 100x100 and a 3x3 kernel. If I convolve them, I get a 98x98 matrix, so its FFT will also be 98x98.
For the right side:
If I take the FFT of each input, I get frequency matrices of 3x3 and 100x100 respectively.
Then how should I do the multiplication? Some of you may say we can pad the 3x3 kernel to 100x100 and take the FFT, but then we still get a 100x100 matrix instead of 98x98.
Can someone give me some hints?
A convolution of two signals of sizes L and P respectively has a result of size N = L + P - 1.
Therefore, the mathematically correct implementation of conv(x,y) has size 102x102, and you should zero-pad both x and y to size 102x102.
When you perform the convolution the way CNN convolution layers do (which is what I think you are doing), without any zero padding, you are actually cropping the result (you are leaving out the border values).
Therefore, you can just compute the 102x102 FFT result and crop accordingly to get the 98x98 result (crop 2 at the start and 2 at the end).
ATTENTION: Unlike how zero padding usually works for convolutional layers, in this case add the zeros at the END. Otherwise you introduce a shift that shows up as a shift in the output: e.g. if the expected result is [1, 2, 3, 4] and you add 1 zero at the beginning and 1 at the end (instead of 2 at the end), you will get [4, 1, 2, 3].
ATTENTION 2: Not padding the sizes to 102 when using the ifft(fft()) technique produces something called aliasing (circular wrap-around). It turns, for example, an expected result of 30, 31, 57, 47, 87, 47, 33, 27, 5 into 77, 64, 84, 52, 87. Note this result is actually the product of computing:
30, 31, 57, 47, 87
+ 47, 33, 27, 5
--------------------
77, 64, 84, 52, 87
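To make this concrete, here is a minimal 1-D sketch using NumPy (the same logic extends to 2-D with np.fft.fft2): padding both inputs to N = L + P - 1 recovers the linear convolution, while padding only to the longer input length yields the aliased circular result.

```python
import numpy as np

# 1-D illustration of the convolution theorem with proper zero padding.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # length L = 5
h = np.array([1.0, 0.0, 2.0, 1.0])        # length P = 4
N = len(x) + len(h) - 1                   # linear-convolution length: N = L + P - 1 = 8

# Pad BOTH signals to N (np.fft.fft appends trailing zeros), multiply, invert.
via_fft = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

direct = np.convolve(x, h)                # reference linear convolution, length 8
assert np.allclose(via_fft, direct)

# Padding only to max(L, P) = 5 instead of N gives the *circular* convolution:
# the tail of the linear result wraps around and adds onto the head (aliasing).
circular = np.fft.ifft(np.fft.fft(x, 5) * np.fft.fft(h, 5)).real
```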
