In the real world, aggregated data from induction loops is usually measured across all lanes of the road. As I try to model traffic demand from real-world data, which holds the number of vehicles that passed a specific induction loop, I wonder what the best practice is for placing these induction loops within my net.
Is there a way in SUMO to place an induction loop across all lanes of an edge?
Or is there a way to group single induction loops within a higher-level XML tag and retrieve the collected data from the group?
(The background of this question is the intended use of the DFRouter for multi-lane edges.)
There is currently no way of grouping the detectors in the input or of defining loops which cover multiple lanes. DFRouter (and also flowrouter), however, groups them automatically based on their position on the lane.
What you can do to group them in one file is give them the same "file" attribute to write their data to. For induction loops at different positions I use different output files to keep them separate.
Here is my code as an example. I created a loop for every lane and named each one with the number of the loop and the number of the lane:
<!-- AtoB; speed=60 km/h; 2 lanes -->
<inductionLoop id="AtoB-2.1(auto)" lane="27333102.218.2368_0" pos="250" freq="60" file="file1(auto).out.xml" freindlyPos="true" vTypes="auto"/>
<inductionLoop id="AtoB-2.2(auto)" lane="27333102.218.2368_1" pos="250" freq="60" file="file1(auto).out.xml" freindlyPos="true" vTypes="auto"/>
<!-- AtoB; speed=80 km/h; 100m before 2 to 1 lane -->
<inductionLoop id="AtoB-3.1(auto)" lane="27333102.218.2968_0" pos="825" freq="60" file="file2(auto).out.xml" freindlyPos="true" vTypes="auto"/>
<inductionLoop id="AtoB-3.2(auto)" lane="27333102.218.2968_1" pos="825" freq="60" file="file2(auto).out.xml" freindlyPos="true" vTypes="auto"/>
I have the following problem I am trying to solve.
I have hundreds of particles with their corresponding chemical composition (elements with their weight percentages).
As an example, here are some made-up simplified particles:
Particle 1 - S (32%), K (25%), C (43%)
Particle 2 - S (33%), K (12%), C (15%), O (40%)
Particle 3 - Ti (18%), S (72%)
Particle 4 - Ti (10%), S (79%), K (12%)
In reality there are hundreds of them; some are quite different from one another, some quite similar. As you can see, some particles do not contain certain elements (those could be treated as 0%).
What I am trying to achieve is a cluster analysis that groups similar particles together and gives me the average element composition of each cluster.
I was looking at how cluster analysis works, but the examples usually use only two parameters, whereas I have many elements for each particle and I want to take more than one element into account when clustering. I am not so much interested in an exact match across all the elements contained. In other words, if two particles were quite similar except that one contained one extra element in a very small quantity, that would be fine too; very low percentages are sometimes caused by background noise during measurement.
Once I know which strategy to use, I would ideally use R to do it. Just a hint as to how to go about it, or a link, would be enough.
I have a large sales database from a 'home and construction' retailer, and I need to know who the electricians, plumbers, painters, etc. among its customers are.
My first approach was to select the articles related to each specialty (wire [article] is related to electricians [specialty], for example) and then, based on their purchases, work out which specialty each customer belongs to.
But this is a lot of work.
My second approach is to do a cluster segmentation first and then discover which clusters correspond to which specialty. (This is a lot better, because I would also be able to discover new segments.)
But how can I do that? What type of clustering should I use: k-means, fuzzy? What variables should I feed into the model? Should I use PCA to decide how many clusters to look for?
The header of my data (simplified):
customer_id | transaction_id | transaction_date | item_article_id | item_group_id | item_category_id | item_qty | sales_amt
Any help would be appreciated. (Sorry for my English.)
You want to identify classes of customers based on what they buy (I presume this is for marketing reasons). This calls for a clustering approach. I will talk you through the entire setup.
The clustering space
Let us first consider what exactly you are clustering: either orders or customers. In either case, the way you characterize the items and the distances between them is the same. I will discuss the basic case for orders first, and then explain the considerations that apply to clustering by customers instead.
For your purpose, an order is characterized by what articles were purchased, and possibly also how many of them. In terms of a space, this means that you have a dimension for each type of article (item_article_id), for example the "wire" dimension. If all you care about is whether an article is bought or not, each item has a coordinate of either 0 or 1 in each dimension. If some order includes wire but not pipe, then it has a value of 1 on the "wire" dimension and 0 on the "pipe" dimension.
However, there is something to be said for caring about the quantities. Perhaps plumbers buy lots of glue while electricians buy only small amounts. In that case, you can set the coordinate in each dimension to the quantity of the corresponding article (presumably item_qty). So suppose you have three articles, wire, pipe and glue; then an order described by the vector (2, 3, 0) includes 2 wire, 3 pipe and 0 glue, while an order described by the vector (0, 1, 4) includes 0 wire, 1 pipe and 4 glue.
If there is a large spread in the quantities for a given article, i.e. if some orders include orders of magnitude more of some article than other orders, then it may be helpful to work with a log scale. Suppose you have these four orders:
2 wire, 2 pipe, 1 glue
3 wire, 2 pipe, 0 glue
0 wire, 100 pipe, 1 glue
0 wire, 300 pipe, 3 glue
The former two orders look like they may belong to electricians while the latter two look like they belong to plumbers. However, if you work with a linear scale, order 3 will turn out to be closer to orders 1 and 2 than to order 4. We fix that by using a log scale for the vectors that encode these orders (I use the base 10 logarithm here, but it does not matter which base you take because they differ only by a constant factor):
(0.30, 0.30, 0)
(0.48, 0.30, -2)
(-2, 2, 0)
(-2, 2.48, 0.48)
Now order 3 is closest to order 4, as we would expect. Note that I have used -2 as a special value to indicate the absence of an article, because the logarithm of 0 is not defined (log(x) tends to negative infinity as x tends to 0). -2 means that we pretend that the order included 1/100th of the article; you could make the special value more or less extreme, depending on how much weight you want to give to the fact that an article was not included.
The input to your clustering algorithm (regardless of which algorithm you take, see below) will be a position matrix with one row for each item (order or customer), one column for each dimension (article), and either the presence (0/1), amount, or logarithm of the amount in each cell, depending on which you choose based on the discussion above. If you cluster by customers, you can simply sum the amounts from all orders that belong to that customer before you calculate what goes into each cell of your position matrix (if you use the log scale, sum the amounts before taking the logarithm).
Clustering by orders rather than by customers gives you more detail, but also more noise. Customers may be consistent within an order but not between orders; perhaps a customer sometimes behaves like a plumber and sometimes like an electrician. This is a pattern that you will only find if you cluster by orders. You will then find how often each customer belongs to each cluster; perhaps 70% of somebody's orders belong to the electrician type and 30% to the plumber type. On the other hand, a plumber may buy only pipe in one order and only glue in the next. Only if you cluster by customers and sum the amounts over their orders do you get a balanced view of what each customer needs on average.
From here on I will refer to your position matrix by the name my.matrix.
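For concreteness, here is one way to build my.matrix in R from a transaction table like the one in the question (a sketch only; it assumes your data sits in a data frame called sales with the columns customer_id, item_article_id and item_qty, and uses 0.01 as the "absent" quantity so that log10 yields the special value -2):
# Sum the purchased quantity per customer and article; absent combinations become 0
my.matrix <- tapply(sales$item_qty, list(sales$customer_id, sales$item_article_id), sum)
my.matrix[is.na(my.matrix)] <- 0
# Optional log scale, as discussed above
my.matrix[my.matrix == 0] <- 0.01
my.matrix <- log10(my.matrix)
To cluster by orders instead of customers, group by transaction_id rather than customer_id in the tapply call.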
The clustering algorithm
If you want to be able to discover new customer types, you probably want to let the data speak for themselves as much as possible. A good old-fashioned hierarchical clustering with complete linkage (CLINK) may be an appropriate choice in this case. In R, you simply do hclust(dist(my.matrix)) (this will use the Euclidean distance measure, which is probably good enough in your case). It will join closely neighbouring items or clusters together until all items are categorized in a hierarchical tree. You can treat any branch of the tree as a cluster, observe typical article amounts for that branch and decide whether that branch represents a customer segment by itself, should be split into sub-branches, or joined with a sibling branch instead. The advantage is that you find the "full story" of which items and clusters of items are most similar to each other and how much. The disadvantage is that the outcome of the algorithm does not tell you where to draw the borders between your customer segments; you can cut up the clustering tree in many ways, so it's up to your interpretation how you want to identify your customer types.
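For instance (a sketch; how many segments to cut the tree into is exactly the interpretation step described above):
hc <- hclust(dist(my.matrix))     # complete linkage is hclust's default
plot(hc)                          # inspect the dendrogram
segments <- cutree(hc, k = 10)    # e.g. cut the tree into 10 candidate segments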
On the other hand, if you are comfortable fixing the number of clusters (k) beforehand, k-means is a very robust way to get just any segmentation of your customers in k distinct types. In R, you would do kmeans(my.matrix, k). For marketing purposes, it may be sufficient to have (say) 5 different profiles of customers that you make custom advertisement for, rather than treating all customers the same. With k-means you don't explore all of the diversity that is present in your data, but you might not need to do so anyway.
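For example, with five profiles:
km <- kmeans(my.matrix, centers = 5)
km$centers         # typical (log) article amounts per profile
table(km$cluster)  # how many customers fall into each profile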
If you don't want to fix the number of clusters beforehand, but you also don't want to manually decide where to draw the borders between the segments afterwards, there is a third possibility. You start with the k-means algorithm and let it generate a number of cluster centers that is much larger than the number of clusters you hope to end up with (for example, if you hope to end up with roughly 10 clusters, let the k-means algorithm look for 200 clusters). Then use the mean shift algorithm to further cluster the resulting centers. You will end up with a smaller number of compact clusters. The approach is explained in more detail by James Li over here. You can use the mean shift algorithm in R with the ms function from the LPCM package; see this documentation.
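Roughly like this (a sketch; the bandwidth h passed to ms controls how aggressively the 200 centers are merged and will need tuning for your data):
km <- kmeans(my.matrix, centers = 200)  # deliberately over-segment first
library(LPCM)
fit <- ms(km$centers, h = 0.1)          # then merge the centers with mean shift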
About using PCA
PCA will not tell you how many clusters you need. PCA answers a different question: which variables seem to represent a common underlying (hidden) factor. In a sense, it is a way to cluster variables, i.e. properties of entities, not to cluster the entities themselves. The number of principal components (common underlying factors) is not indicative of the number of clusters needed. PCA can still be interesting if you want to learn something about the predictive value of each article for a customer's interests.
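If you do want to look at that, a quick sketch (the log-scaled matrix already puts the articles on comparable scales, so no extra rescaling is done here):
pca <- prcomp(my.matrix)
summary(pca)   # variance explained by each principal component
pca$rotation   # loadings: which articles tend to move together on each component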
Sources
Michael J. Crawley, 2005. Statistics: An Introduction Using R.
Gerry P. Quinn and Michael J. Keough, 2002. Experimental Design and Data Analysis for Biologists.
Wikipedia: hierarchical clustering, k-means, mean shift, PCA
After obtaining "natural loops" from a control flow graph of basic blocks, how can these loops be ordered from innermost to outermost, i.e. so that the innermost loop contains no other loops?
I obtained the loops using the dominator method, see the slide titled "Identifying Natural Loops with Dominators" here: http://www.cs.colostate.edu/~mstrout/CS553Fall07/Slides/lecture15-control.pdf
Additionally, what algorithm should be used to traverse the control flow graph such that writing out each node would yield the correct output code?
In a well-structured program (i.e. no gotos), the beginning of a loop must dominate the contents of the loop.
Every node which has incoming backedges must be the head of a loop. However, you have some freedom to the actual loop contents thanks to the ability to specify explicit continues. The minimal set of nodes that must be in the loop are all blocks that have a backedge to the head and all blocks which are reverse reachable from them and dominated by the head. The maximal set of nodes that can be in the loop is of course just all nodes dominated by the head.
Nesting is determined by whether the head of one loop is in the contents of another loop. In some cases, you have the freedom to decide whether to place the loop inside the outer loop or not.
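As an illustrative sketch of those rules (the preds adjacency structure and integer node IDs are assumptions; this collects the classic natural loop of a single backedge, i.e. the minimal contents described above):
# preds[[v]]: integer vector of predecessors of basic block v in the CFG.
# For a backedge n -> h (h dominates n), collect the natural loop body:
natural.loop <- function(preds, n, h) {
  body <- h
  stack <- n
  while (length(stack) > 0) {
    v <- stack[length(stack)]
    stack <- stack[-length(stack)]
    if (!(v %in% body)) {
      body <- c(body, v)
      stack <- c(stack, preds[[v]])  # walk backwards; the head h blocks further expansion
    }
  }
  body
}
Since a loop A is nested in a loop B exactly when A's head lies in B's body (and A is not B), sorting the collected loop bodies by size puts inner loops before the loops that contain them, which gives the innermost-to-outermost ordering asked about.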
At a party with n people P1, . . . , Pn, certain pairs of individuals cannot stand each other.
Given a list of such pairs, determine if we can divide the n people into two groups such that all the people in each group are amicable, that is, they can stand each other.
Build a graph G with a vertex for each person and an edge between every pair of people who cannot be in the same group. Run DFS on G: assign Group 1 to the start vertex s, Group 2 to its neighbours, Group 1 to their neighbours, and so on, always giving each newly visited vertex the opposite group of the vertex we came from. If we can finish without a conflict, we have found a valid division; otherwise there is a collision, which means we cannot divide them into two groups as the question asks.
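A small sketch of that two-colouring check (the adjacency-list representation and function name are assumptions):
# adj[[v]]: integer vector of the people that person v cannot stand.
# Returns the group assignment (1/2) if a division exists, or NULL if not.
two.groups <- function(adj) {
  n <- length(adj)
  group <- rep(NA_integer_, n)
  for (s in seq_len(n)) {
    if (!is.na(group[s])) next
    group[s] <- 1
    stack <- s
    while (length(stack) > 0) {
      v <- stack[length(stack)]
      stack <- stack[-length(stack)]
      for (w in adj[[v]]) {
        if (is.na(group[w])) {
          group[w] <- 3 - group[v]   # opposite group of v
          stack <- c(stack, w)
        } else if (group[w] == group[v]) {
          return(NULL)               # conflict: no valid division exists
        }
      }
    }
  }
  group
}
# Example: 4 people, pairs (1,2), (2,3), (3,4) cannot stand each other
adj <- list(c(2L), c(1L, 3L), c(2L, 4L), c(3L))
two.groups(adj)   # 1 2 1 2 -> groups {P1, P3} and {P2, P4}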
One brute-force solution would be to try every possible way of splitting the n people into two groups and then verify that everyone in the first group is amicable; if so, check everyone in the other group as well. If both sides are happy, then you've found a solution. Otherwise, move on to the next split. Obviously this is not an ideal solution, but it does work deterministically. Typically in an interview, it is best to start with something that works and iterate towards better ideas.
A more sophisticated solution would compute the complement graph, remove any edges that are not bidirectional, pick an arbitrary node to start from, run a depth-first search, and mark every node found as group 1. Then pick any unmarked node and mark every node reached from it as group 2. If any nodes remain unmarked, then the individuals cannot be divided into two amicable groups.
For example, suppose I had a graph G that had all blue nodes and one red node, and a graph F that also had all blue nodes and one red node.
What is an algorithm that I can run to verify that these two graphs are isomorphic with respect to their colored nodes?
I have made a few attempts at trying to create a polynomial graph isomorphism algorithm, and while I have yet to create an algorithm that is proven to be polynomial for every case, one algorithm I came up with is particularly suited for this purpose. It's based on a DFA minimization algorithm (the specific algorithm is http://en.wikipedia.org/wiki/DFA_minimization#Hopcroft.27s_algorithm ; you may want to find a description from elsewhere, since Wikipedia's is difficult to follow).
The original algorithm was initialized by organizing the vertexes into distinct groups based on degree (one group for vertexes of degree 1, one for vertexes of degree 2, etc.). For your purposes, you will want to organize the vertexes into groups based upon both degree and label; this will ensure that no two nodes will be paired if they have different labels. Each graph should have its own structure containing such groups. Check the collection of groups for both graphs; there should be the same number of groups for the two graphs, and for each group in one graph, there should be a group in the other graph containing the same number of vertexes of the same degree and label. If this isn't the case, the graphs aren't isomorphic.
At each iteration of the main algorithm, you should generate a new data structure for each of the two graphs to hold the vertex groups that the next step will use. For each group, generate for each vertex a list of the group indices/IDs that correspond to the vertexes adjacent to the vertex in question (include duplicate groups in this list). Check each group to see whether the sorted group index/ID list is the same for every contained vertex. If it is, create an unmodified copy of this group in the next step's group structure. If it isn't, then for each unique list of group indices/IDs within that group, create a new group for the vertexes within the original group that generated that list and add this new group to the next step's group structure.
If you do not subdivide any of the groups of either graph in a given iteration, stop running the main portion of this algorithm. If you subdivide at least one group, you will need to once again check that the group structures of the two graphs correspond to each other. This check will be similar to the one performed at the end of the algorithm's initialization (you may even be able to use the same function for both). If this check fails, then the graphs aren't isomorphic. If the check passes, then discard/free the current group structures and start the next iteration with the freshly created ones.
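As an illustrative sketch of a single such refinement pass (not the full algorithm with the forking step described further down; the adjacency-list representation and the way initial group IDs are derived from the (degree, label) pairs are assumptions):
# adj[[v]]: neighbours of vertex v; group: current group ID of each vertex,
# initialised identically for both graphs from the (degree, label) pairs.
refine <- function(adj, group) {
  # Signature of a vertex: its own group plus the sorted multiset of the
  # groups of its neighbours (duplicates kept, as described above).
  sig <- sapply(seq_along(adj), function(v)
    paste(c(group[v], sort(group[adj[[v]]])), collapse = ","))
  list(group = as.integer(factor(sig, levels = sort(unique(sig)))),
       signatures = sort(sig))
}
# Run refine on both graphs with the same starting groups; if the sorted
# signature multisets ever differ, the graphs are not isomorphic. Stop when
# a pass no longer subdivides any group.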
To make the process of determining "corresponding groups" easier, I would highly recommend using a predictable scheme for adding groups to the structure. For example, if you add groups during initialization in (degree, label) order, subdivide groups in ascending index order, and add subdivided groups to the new structure based on the order of the group index list (i.e., sorted by first listed index, then second, etc.), then corresponding groups between the two group structures will always have the same index, which makes the process of keeping track of which groups correspond to each other much easier.
If all groups contain 3 or fewer vertexes when the algorithm completes, then the graphs are isomorphic (for corresponding groups containing 2 or 3 vertexes, any vertex pairing is valid). If this isn't the case (this always happens for graphs where all nodes have equal degree and label, and sometimes happens for subgraphs with that property), then the graphs are not yet determined to be isomorphic or non-isomorphic. To differentiate between the two cases, choose an arbitrary node of the first graph's largest group and separate it into its own group. Then, for each node of the other graph's largest group, try running the algorithm again with that node separated into its own group. In essence, you are choosing an unpaired node from the first graph and pairing it by guess-and-check to every node in the second graph that is still a plausible pairing. If any of the forked iterations returns an isomorphism, the graphs are isomorphic. If none of them do, the graphs are not isomorphic.
For general cases, this algorithm is polynomial. In corner cases, the algorithm might be exponential. Whether this is the case or not is related to how frequently the algorithm can be forced to fork in the worst case of both graph input and node selection, which I have had difficulties trying to put useful bounds on. For example, although the algorithm forks at every step when comparing two full graphs, every branch of that tree produces an isomorphism; therefore, the algorithm returns in polynomial time in this case even though traversing the entire execution tree would require exponential time since traversing only one branch of the execution tree takes polynomial time.
Regardless, this algorithm should work well for your purposes. I hope my explanation of it was comprehensible; if not, I can try providing examples of the algorithm handling simple cases or expressing it as pseudocode instead.
Years ago, I created a simple and flexible algorithm for exactly this problem (graph isomorphism with labels).
I named it "Powerhash", and creating the algorithm required two insights. The first is the power iteration graph algorithm, also used in PageRank. The second is the ability to replace the step function inside power iteration with anything that we want. I replaced it with a function that does the following on each iteration, and for each node:
Sort the hashes (from previous iteration) of the node's neighbors
Hash the concatenated sorted hashes
Replace node's hash with newly computed hash
On the first step, a node's hash is affected by its direct neighbors. On the second step, a node's hash is affected by the neighborhood 2-hops away from it. On the Nth step a node's hash will be affected by the neighborhood N-hops around it. So you only need to continue running the Powerhash for N = graph_radius steps. In the end, the graph center node's hash will have been affected by the whole graph.
To produce the final hash, sort the final step's node hashes and concatenate them together. After that, you can compare the final hashes to find if two graphs are isomorphic. If you have labels, then add them (on the first iteration) in the internal hashes that you calculate for each node.
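A compact sketch of those steps (illustrative only; it assumes an adjacency-list graph representation and uses the digest package for the hashing):
library(digest)
# adj[[v]]: neighbours of node v; labels[v]: the node's label (e.g. its colour)
powerhash <- function(adj, labels, steps) {
  h <- sapply(labels, digest)                   # labels enter on the first iteration
  for (i in seq_len(steps)) {                   # steps ~ graph radius
    h <- sapply(seq_along(adj), function(v)
      digest(paste(sort(h[adj[[v]]]), collapse = "|")))  # the three steps above
  }
  digest(paste(sort(h), collapse = "|"))        # sort and concatenate the final hashes
}
Isomorphic graphs (with matching labels) always produce the same final hash, so differing hashes show the graphs are not isomorphic, while equal hashes make isomorphism very likely.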
For more on this you can look at my post here:
https://plus.google.com/114866592715069940152/posts/fmBFhjhQcZF
The algorithm above was implemented inside the "madIS" functional relational database. You can find the source code of the algorithm here:
https://github.com/madgik/madis/blob/master/src/functions/aggregate/graph.py
Just checking: do you mean strict graph isomorphism or something else? Isomorphic graphs have the same adjacency relations (i.e. if node A is adjacent to node B in one graph, then node g(A) is adjacent to node g(B) in the other graph, where g is the transformation mapping the first graph onto the second). If you just want to check whether one graph has the same types and numbers of nodes as another, then you can simply compare counts.