I am trying to understand how Houdini generates terrain, but I cannot find the relevant documentation. For example, I am looking for a mathematical description of the algorithms used to calculate terrain erosion and animal path determination.
You can inspect the implementation of the erosion nodes within Houdini.
There are 4 terrain erosion nodes in Houdini 17.5:
HeightField Erode Hydro
HeightField Erode Precipitation
HeightField Erode Thermal
HeightField Erode - a convenience node that runs all the previous erode nodes
Each one of the nodes above is implemented by a Geometry (SOP) network. You can navigate/inspect these node networks in the Network editor.
The screenshots below show an eroded terrain with the top-level geometry network that generates it, and two example nodes you will find inside each of the top-level nodes.
Eroded Terrain with Geometry Subnetwork
OpenCL Node
The OpenCL Node allows you to write code to implement the node with OpenCL.
Attribute Wrangle Node
The Attribute Wrangle Node contains VEX code that can read and write Houdini geometry attributes.
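If you prefer to poke at these networks programmatically, you can do the same inspection with Houdini's Python API (hou). Below is a minimal sketch; the node path /obj/terrain/erode1 is a hypothetical example and the parameter names may differ between Houdini versions.

import hou

# Minimal sketch: walk a HeightField Erode SOP and print the OpenCL and
# Attribute Wrangle nodes whose snippets contain the actual erosion math.
# The path below is hypothetical; point it at an erode node in your scene.
erode = hou.node("/obj/terrain/erode1")

for node in erode.allSubChildren():
    type_name = node.type().name()
    if type_name in ("opencl", "attribwrangle"):
        print(node.path(), type_name)
        # Assumed parameter names for the OpenCL kernel / VEX snippet;
        # check the parameter pane if they differ in your version.
        parm = node.parm("kernelcode") or node.parm("snippet")
        if parm is not None:
            print(parm.eval()[:200])  # first 200 characters of the code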
Related
I need to arrange 100 nodes in a hexagonal shape in my Tcl script. My simulation is wireless (DSR protocol). How can I do this? What is the best size for the grid? I was thinking 3000x3000.
Thank you.
The simplest way is to build the network graphically: you can use the NSG tool to build the network and generate the Tcl code very easily.
http://www.nsnam.com/2013/04/ns2-scenario-generator.html
First you must determine the transmission range of the nodes and place the required number of nodes to get the network size, or you can compute it mathematically from the transmission range and the area of a hexagon.
https://en.wikipedia.org/wiki/Hexagon
If you plan to use NSG to draw a regular hexagon, use this simple trick (a coordinate sketch follows these steps):
If the transmission range is 300 m, temporarily set it to 250 m.
Put the first node in the center of the network.
Put two nodes on the border of the first node's coverage area.
Put the other nodes at the four intersections of the coverage areas of the previous three nodes.
Repeat the previous steps for the other hexagons.
Set the transmission range back to its correct value (300 m).
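For reference, here is a small Python sketch (easy to port to Tcl) that computes node positions on a hexagonal lattice; the 250 m spacing and the 3000x3000 m area are just the values discussed above.

import math

# Sketch: generate node positions on a hexagonal (triangular) lattice.
# spacing = distance between neighbouring nodes (the 250 m trick above),
# area    = side length of the square simulation area (e.g. 3000 m).
def hex_positions(spacing=250.0, area=3000.0):
    positions = []
    row_height = spacing * math.sqrt(3) / 2  # vertical distance between rows
    row, y = 0, 0.0
    while y <= area:
        # every second row is shifted by half the spacing
        x = spacing / 2 if row % 2 else 0.0
        while x <= area:
            positions.append((x, y))
            x += spacing
        y += row_height
        row += 1
    return positions

# The (x, y) pairs could then be written into the Tcl script as
# "$node_($i) set X_ ..." / "set Y_ ..." lines.
for i, (x, y) in enumerate(hex_positions()[:100]):  # first 100 nodes
    print(i, round(x, 1), round(y, 1))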
I have a dataset of DNA relationships (as a percent match) between myself and a few hundred relatives, almost all distant relatives. I also have data on DNA relationships between each of them and certain other members of the dataset.
I'm hoping to build a network graph that shows the interrelationships and have Gephi build something that loosely resembles a family tree. But even using a small sample database I can't get the resulting graph to look anything like that.
I want each relationship (i.e. edge) to have a "force" related to the closeness of the relationship, so distant relatives (nodes) are pushed further away. I want the graph to self-assemble based on these "forces" and assume there is a layout for this, but I haven't found one.
I'm currently putting the DNA relationship in the weight column, and not using the interval column at all. But even using just 8 relatives and artificially perfect data I have to manually move nodes around to make it look remotely useful.
What layout should I use for this type of graph, and what other advice can you offer to make this work? Should the weight field increase or decrease as relationship distance increases?
… and have Gephi build something that loosely resembles a family tree. But even using a small sample database I can't get the resulting graph to look anything like that.
A family tree connects descendants (mostly). DNA similarity (as a percentage) does not conform to this structure.
Setting a Library > Edges > Edge Weight filter to the DNA similarity attribute may help (but will not produce "something that loosely resembles a family tree").
I want each relationship (i.e. edge) to have a "force" related to the closeness of the relationship, so distant relatives (nodes) are pushed further away. I want the graph to self-assemble based on these "forces" …
All layouts work like that. However, Gephi does not feature hierarchical positioning. 3rd party candidates include EventGraphLayout, Layered Layout and Concentric Layout.
Should the weight field increase or decrease as relationship distance increases?
The greater an edge's weight, the stronger its connection (resulting in less distance between the nodes it connects). To a family tree however this is irrelevant.
I'm hoping to build a network graph that shows the interrelationships between each member …
What layout should I use for this type of graph, and what other advice can you offer to make this work?
The following steps emphasize clustering and modularity (a small sketch for preparing the edge weights follows these steps):
Calculate modularity.
Color nodes by modularity class: Appearance > Nodes > Partition > Modularity Class
Apply a layout; ForceAtlas 2 for example (with Dissuade Hubs, LinLog mode and Prevent Overlap enabled).
Apply the Contraction layout afterwards if necessary. Optionally set node size according to (for example) Eigenvector Centrality (prior to applying layout).
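If it helps with the import, here is a small Python sketch that turns percent-match values into an edge list CSV that Gephi can read (Source, Target, Weight columns); the input file name and its columns are hypothetical.

import csv

# Sketch: convert percent-match values into a Gephi edge list.
# "dna_matches.csv" (columns: person_a, person_b, percent_match) is a
# hypothetical input file; adjust names to your data.
with open("dna_matches.csv", newline="") as src, \
     open("edges.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.writer(dst)
    writer.writerow(["Source", "Target", "Weight"])
    for row in reader:
        percent = float(row["percent_match"])
        # Higher weight = stronger pull in force-directed layouts,
        # so closer relatives should get larger weights.
        writer.writerow([row["person_a"], row["person_b"], percent])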
I am curious how map software (Google/Bing maps) convert a map into a graph in the backend.
Now if we add houses between intersections 1 and 2, how would the graph change? How does map software keep track of where the houses are?
Do they index the intersection nodes and also have smaller "subnodes" (between 1 and 2 in this case)? Or do they do this by having multiple layers? So when a user enters a home number, it looks up where the home is (i.e. between which vertices the home is located). After that, they simply apply a shortest path algorithm between those two nodes, and at the beginning and the end they basically make the home node go to one of the main vertices.
Could someone please give me a detailed explanation of how this works? Ultimately I would like to understand how the shortest path is determined given the "addresses" of two "homes" (or "subnodes").
I can only speak for GraphHopper, not for the closed source services you mentioned ;)
GraphHopper has nodes (junctions) and edges (connections between those junctions), nearly exactly like your sketch. This is very fast for the routing algorithms as it avoids massive traversal overhead from subnodes. E.g. in an early version we used subnodes every time the connection was not straight (e.g. a curved street); this was 8 times slower, so we avoided those 'pillar' nodes and only used the 'tower' nodes for routing.
Still you have to deal with two problems:
How to deal with queries starting on the edge, e.g. at house number 1? This is solved by introducing virtual nodes for every query (which can contain multiple locations); you also need additional virtual edges and have to hide some real edges. In GraphHopper we create a lightweight wrapper graph around the original graph (called QueryGraph) which handles all this. It then behaves exactly like a normal 'Graph' for every 'RoutingAlgorithm' like Dijkstra or A*. It also becomes a bit hairy when you have multiple query locations on one edge, e.g. for a route with multiple via points, but I hope you get the main idea. Another idea would be to do the routing for two sources and two targets, but initialized with the actual distance rather than with 0 as is normally done for the first nodes. But this makes the routing algorithms more complex, I guess.
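To illustrate the virtual-node idea (this is not GraphHopper's actual QueryGraph code, just a rough Python sketch), splitting an edge at the snapped query point for the duration of one query could look like this:

# Rough sketch of the virtual-node idea (not GraphHopper's QueryGraph).
# The base graph is an adjacency dict: node -> {neighbour: distance in m}.
def with_virtual_node(graph, edge, offset):
    """Return a copy of the graph where edge (u, v) is split at `offset`
    metres from u by a temporary virtual node 'Q'."""
    u, v = edge
    total = graph[u][v]
    g = {n: dict(nbrs) for n, nbrs in graph.items()}
    # hide the real edge for this query ...
    del g[u][v]
    del g[v][u]
    # ... and replace it with two virtual edges via the virtual node.
    g["Q"] = {u: offset, v: total - offset}
    g[u]["Q"] = offset
    g[v]["Q"] = total - offset
    return g

# Example: query location 40 m along the 100 m edge (A, B).
base = {"A": {"B": 100}, "B": {"A": 100, "C": 50}, "C": {"B": 50}}
query_graph = with_virtual_node(base, ("A", "B"), 40)
# Dijkstra or A* can now start (or end) at "Q" like at any normal node.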
And as already stated, most of the connections between junctions are not straight, and you'll have to store this geometry somewhere and use it to draw the route, but also to 'snap a location to the closest road' in order to do the actual routing. See LocationIndexTree for code.
Regarding directed graphs. GraphHopper stores the graph via undirected edges; to handle oneways it stores the access properties for every edge and for every vehicle separately. So we avoid storing two directed edges and all of their properties (name/geometry/..), and make use cases like "oneway for car and twoway for bike" possible. It additionally allows an edge to be traversed in the reverse direction, which is important for some algorithms, e.g. the bidirectional Dijkstra. This would not be possible if the graph itself were used to model the access property.
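A toy illustration of that storage scheme, one stored undirected edge with per-vehicle, per-direction access flags (not GraphHopper's real flag encoding), might look like this:

# Toy illustration: one undirected stored edge with per-vehicle,
# per-direction access flags (not GraphHopper's real flag encoding).
edge = {
    "nodes": ("b", "d"),
    "name": "Some Street",
    "access": {
        "car":  {"forward": True, "backward": False},  # oneway for cars
        "bike": {"forward": True, "backward": True},   # twoway for bikes
    },
}

def can_traverse(edge, vehicle, from_node):
    """True if `vehicle` may travel along the edge starting at `from_node`."""
    direction = "forward" if from_node == edge["nodes"][0] else "backward"
    return edge["access"][vehicle][direction]

print(can_traverse(edge, "car", "d"))   # False: against the oneway
print(can_traverse(edge, "bike", "d"))  # True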
Regarding 'nearly exactly like your sketch': nodes 1, 3, 7 and 8 would not exist as they are 'pillar' nodes. Instead they would only 'exist' in the geometry of the edge.
To represent the connectivity of a road network, you want your directed road segments to be the graph nodes and your intersections to be collections of directed edges. There is a directed edge from X to Y if you can drive along X and then turn onto or continue on Y.
Consider the following example.
a====b====c
|
| <--one way street, down
|
d
An example connectivity graph for this picture follows.
Nodes
ab
ba
bc
cb
bd
Edges
ab -> bc
ab -> bd
cb -> ba
cb -> bd
Note that this encodes the following information:
No U-turns are allowed at the intersection,
because the edges ab -> ba and cb -> bc are omitted.
When coming from the right, a left turn onto the vertical road is allowed,
because the edge cb -> bd is included.
With this representation, each node (directed road segment) has as an attribute all of the addresses along its span, each marked at some distance along the directed road segment.
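A small Python sketch of this edge-based representation, using the example intersection above, could look like the following; only the allowed turns from the listing are encoded.

from collections import deque

# The directed road segments from the example become graph nodes, and the
# allowed movements at intersection b become directed edges between them.
turn_graph = {
    "ab": ["bc", "bd"],
    "ba": [],
    "bc": [],
    "cb": ["ba", "bd"],
    "bd": [],
}

def route(start_segment, goal_segment):
    """Breadth-first search over road segments; returns the segment sequence."""
    queue = deque([[start_segment]])
    seen = {start_segment}
    while queue:
        path = queue.popleft()
        if path[-1] == goal_segment:
            return path
        for nxt in turn_graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("cb", "bd"))  # ['cb', 'bd']: the allowed left turn
print(route("ab", "ba"))  # None: the U-turn is not encoded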
I have started to work with gephi to help me display a dataset.
The dataset contains:
tags (terms for a certain picture) as nodes
Normalized Google Similarity Distance between those tags as edges with a weight (between 0 and 1)
Every tag is connected to every other tag, as long as they both belong to the same picture. So I have one cluster of nodes and edges for every picture.
I have now imported this dataset to gephi in the following format:
nodes: id, label
edges: target, source, weight (between 0 and 1)
Like 500 nodes and 6000 edges.
My problem now is that after importing all those nodes and edges the graph looks kind of bunched with no real order. Every cluster of every picture is mixed into other clusters of other pictures.
Using Modularity as the partition algorithm (which should use the Louvain method), the graph gets colored; each color represents a picture. Now I can split this mess using the ForceAtlas 2 layout.
I now have a colored graph with something like 15 clusters (every cluster represents 1 picture).
Now I want to cluster within those clusters again, grouping tags (nodes) according to their Normalized Google Distance (the weight of the edges), so that tags which are somewhat equal in meaning end up together.
I hope you guys understand what I want to accomplish.
I can also upload a picture to clarify it.
Thanks a lot
I don't think you can do that with the standard version of Gephi. You would need to develop a plugin to implement the very last step of your process.
Gephi is good for visualizing and browsing graphs, but (for now) there are more complete tools when it comes to processing topological properties. For instance, the igraph library (available in C, R and Python) might be more appropriate for you. And note that you can use a file format compatible with both Gephi and igraph, which allows you to use both tools on the same data.
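As a rough example, with python-igraph you could run the Louvain method per picture cluster yourself. The GraphML file name and the "weight"/"label" attribute names below are assumptions about your export; also, if a smaller Normalized Google Distance means "more similar", invert the values first so that larger weights mean stronger ties.

import igraph as ig

# Hypothetical GraphML export that both Gephi and igraph can read.
g = ig.Graph.Read_GraphML("tags.graphml")

# First pass: Louvain communities over the whole graph (roughly one per picture).
pictures = g.community_multilevel(weights=g.es["weight"])

# Second pass: re-run Louvain inside each picture cluster to group tags
# that are close according to the edge weights.
for cluster in pictures.subgraphs():
    tag_groups = cluster.community_multilevel(weights=cluster.es["weight"])
    for group in tag_groups:
        print([cluster.vs[i]["label"] for i in group])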
I was able to solve my problem. I had to import each of these 15 clusters on its own. That way I could use the Modularity method on just those few.
I'm looking to build an algorithm (or reuse one) that organizes nodes and edges on a 2 dimensional canvas where edges can have corresponding weights.
Any starting material and info would be helpful.
What would the weights do to affect their placement on your canvas?
That being said, you might want to look into graphviz and, more specifically, the DOT language, which organizes nodes on a canvas.
Many graph visualization frameworks use a force-based simulation, in which all nodes exert a repulsive force against each other (with their mass being their size), and edges exert tension on the nodes they connect. This creates aesthetically-arranged graph visualizations.
Although again, I'm not sure where you want node "weights" to come into play. Do you want weighted nodes to be more in the center? To be larger? Farther apart?
Many graph/network layout algorithms are implicitly capable of handling weighted networks, but you may need to do some pre-processing and tweaks to the implementation to get it to work. Usually the first step is to determine if your weights represent "similarities" (usually interpreted to mean that stronger weights should place nodes closer together) or "dissimilarities" (stronger weights = farther apart). The most common case is the former, so you will need to translate them to dissimilarities, often done by subtracting each edge value from the maximum observed edge value in the network. The matrix of dissimilarity values for each edge can then be fed to the algorithm and interpreted as desired distances in the layout space for each edge (i.e. "spring lengths"), usually after multiplying by some constant to transform to display units (pixels).
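For example, with networkx in Python the pre-processing described above could look like this; the attribute names and the scaling constant are just illustrative.

import networkx as nx

# Example graph with similarity weights (higher = more similar).
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 0.9), ("b", "c", 0.2), ("a", "c", 0.5)])

# Convert similarities to dissimilarities: subtract from the maximum weight.
max_w = max(w for _, _, w in G.edges(data="weight"))
for u, v, w in G.edges(data="weight"):
    G[u][v]["distance"] = (max_w - w) + 0.01  # keep lengths strictly positive

# Feed the dissimilarities to a layout as desired edge lengths ("spring
# lengths"), scaled to display units via a constant factor.
pos = nx.kamada_kawai_layout(G, weight="distance", scale=300)
print(pos)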
If you tell me what language you are using, I may be able to point you to some code examples.