I would like to create nice curved edges in my Cytoscape.js graph using the unbundled-bezier curve style. The data comes from a database, so I have to set the control-point-distance(s) automatically; I came up with the following code:
{
  selector: 'edge',
  css: {
    'curve-style': 'unbundled-bezier',
    'target-arrow-shape': 'triangle',
    'control-point-weights': '0.25 0.75',
    'control-point-distance': function( ele ){
      console.log(ele.source().position());
      var pos1 = ele.source().position().y;
      var pos2 = ele.target().position().y;
      var str = '' + Math.abs(pos2 - pos1) + 'px -' + Math.abs(pos2 - pos1) + 'px';
      console.log(pos1, pos2, str);
      return str;
    }
  }
}
My problem is that the graph is rendered with straight lines, and the curved line appears only when I click on an edge. Also, when I move the nodes the curve moves nicely with the node, but the node positions (ele.source().position().y) do not change.
A style function ought to be a pure function. Yours is technically not: It depends on state outside of the edge's data.
The only way an arbitrary function could be used to specify style is if the function is continuously polled. That would be hacky and prohibitively expensive.
You must use a pure function if you want to use a custom function: either rewrite your function to rely only on the edge's data, or use a passthrough data() mapping and change the edge's data whenever you want to modify the edge.
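For example, here is a minimal sketch of the data() approach, assuming Cytoscape 3.x, a cy instance, and a hypothetical edge data field named cpd (the event wiring is illustrative, not part of your original code):

// Style: a passthrough data() mapping instead of an arbitrary function
cy.style().selector('edge').style({
  'curve-style': 'unbundled-bezier',
  'target-arrow-shape': 'triangle',
  'control-point-weights': '0.25 0.75',
  'control-point-distances': 'data(cpd)' // read from the edge's data
}).update();

// Recompute the edge data whenever an endpoint moves
function updateCpd(edge) {
  var dy = Math.abs(edge.target().position('y') - edge.source().position('y'));
  edge.data('cpd', dy + ' -' + dy); // two distances, one per control point
}

cy.edges().forEach(updateCpd);
cy.on('position', 'node', function (evt) {
  evt.target.connectedEdges().forEach(updateCpd);
});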
I have a large set of block objects using a custom geometry, which I am hoping to merge into a smaller number of larger geometries, as I believe this will reduce rendering costs.
I have been following guidance here: https://aframe.io/docs/1.2.0/introduction/best-practices.html#performance which has led me to the geometry-merger component here:
https://github.com/supermedium/superframe/tree/master/components/geometry-merger/
The A-Frame docs say:
"You can use geometry-merger and then make use a three.js material with vertex colors enabled. three.js geometries keep data such as color, uvs per vertex."
The geometry-merger component also says:
"Useful if using vertex or face coloring as individual geometries' colors can still be manipulated individually since this component keeps a faceIndex and vertexIndex."
However, I have a couple of problems:
1. If I set vertexColors on my material (as suggested by the A-Frame docs), this ruins the appearance of my blocks.
2. Whether or not I set vertexColors on my material, all material information seems to be lost when the geometries are merged, and everything just ends up white.
See this glitch for a demonstration of both problems.
https://tundra-mercurial-garden.glitch.me/
My suspicion is that the A-Frame geometry-merger component just won't do what I need here, and I need to implement something myself using the underlying three.js functions.
Is that right, or is there a way that I could make this work using geometry-merger?
For the vertexColors to work, you need to have your vertices coloured :)
More specifically, the BufferGeometry expects an array of RGB values for each vertex, which will be used as the color for the material.
In this bit of code:
var geometry = new THREE.BoxGeometry();
var mat = new THREE.MeshStandardMaterial({color: 0xffffff, vertexColors: THREE.VertexColors}); // per-vertex colors, since we set a 'color' buffer attribute
var mesh = new THREE.Mesh(geometry, mat);
The mesh will be black unless the geometry contains information about the vertex colors:
// create a color attribute in the geometry
// (vertices_count is assumed to be vertex count * 3, i.e. one float per r/g/b component)
geometry.setAttribute('color', new THREE.BufferAttribute(new Float32Array(vertices_count), 3));
// grab the array
const colors = geometry.attributes.color.array;
// fill the array with rgb values
const faceColor = new THREE.Color(color_hex);
for (var i = 0; i < colors.length; i += 3) {
  colors[i]     = faceColor.r;
  colors[i + 1] = faceColor.g;
  colors[i + 2] = faceColor.b;
}
// tell the renderer to re-upload the color attribute
geometry.attributes.color.needsUpdate = true;
I can't make the buffer-geometry-merger component work for some reason, but its core seems to be valid:
AFRAME.registerComponent("merger", {
init: function() {
// replace with an event where all child entities are ready
setTimeout(this.mergeChildren.bind(this), 500);
},
mergeChildren: function() {
const geometries = [];
// traverse the child and store all geometries.
this.el.object3D.traverse(node => {
if (node.type === "Mesh") {
const geometry = node.geometry.clone();
geometry.applyMatrix4(node.parent.matrix);
geometries.push(geometry)
// dispose the merged meshes
node.parent.remove(node);
node.geometry.dispose();
node.material.dispose();
}
});
// create a mesh from the "merged" geometry
const mergedGeo = THREE.BufferGeometryUtils.mergeBufferGeometries(geometries);
const mergedMaterial = new THREE.MeshStandardMaterial({color: 0xffffff, roughness: 0.3, vertexColors: THREE.FaceColors});
const mergedMesh = new THREE.Mesh(mergedGeo, mergedMaterial);
this.el.object3D.add(mergedMesh)
}
})
You can check it out in this glitch. There is also an example of using vertex colors here (source).
I agree it sounds like you need to consider other solutions. Here are two different instances of instancing with A-Frame:
https://github.com/takahirox/aframe-instancing
https://github.com/EX3D/aframe-InstancedMesh
Neither are perfect or even fully finished, but can hopefully get you started as a guide.
Although my original question was about geometry merging, I now believe that instanced meshes are a better solution in this case.
Based on this suggestion I implemented this new A-Frame Component:
https://github.com/diarmidmackenzie/instanced-mesh
This glitch shows the scene from the original glitch being rendered with just 19 draw calls using this component. That compares pretty well with the more than 200 calls that would have been required if every object were rendered individually.
https://dull-stump-psychology.glitch.me/
A key limitation is that I was not able to use a single mesh for all the different block colors, but had to use one mesh per color (7 meshes in total).
InstancedMesh can support differently colored elements, but each element must have a single color, whereas the elements in this scene each had two colors (black frame + face color).
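For anyone unfamiliar with the underlying API, here is a minimal three.js sketch of the instancing idea (not the instanced-mesh component itself); the block count and grid layout are made up for illustration:

// One InstancedMesh per block color: a single geometry + material drawn `count` times
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial({ color: 0xff8800 });
const count = 100; // number of blocks of this color (assumed)
const mesh = new THREE.InstancedMesh(geometry, material, count);

// Position each instance by writing its transform into the instance matrix
const dummy = new THREE.Object3D();
for (let i = 0; i < count; i++) {
  dummy.position.set(i % 10, 0, Math.floor(i / 10)); // hypothetical grid layout
  dummy.updateMatrix();
  mesh.setMatrixAt(i, dummy.matrix);
}
mesh.instanceMatrix.needsUpdate = true;
scene.add(mesh); // assumes an existing THREE.Scene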
Are there any vector graphics standards that support variable-thickness paths / strokes, e.g. from a stylus input?
Some amount of smoothing may be acceptable. I'd assume that the best way to store it would be as a regular path (e.g. this) and then point-wise sparse thickness information at various points in the path, with gradients between them.
I have looked at SVG but there doesn't seem to be an element that can support it. Are there any vector graphics standards that can?
A single path as currently implemented does not allow variable thickness. There is a W3C proposal for the SVG standard, but so far no implementation in pure SVG.
There are several implementations of a "path with variable thickness", but they rely on SVG objects (e.g., multiple paths) plus C++ or JavaScript functions.
PowerStroke is an implementation of this idea of a variable-thickness stroke in Inkscape. A good entry point to the C++ source is here.
There are other implementations in SVG and JavaScript, relying on multiple paths:
Tubefy, a small set of JS functions whose principle is based on linear interpolation. There are several implementations of Tubefy; the simplest is:
$ = function (id) { return typeof id == 'string' ? document.getElementById(id) : id };
var root = document.rootElement;
function lerp(p, a, b) { return Number(a) + (b - a) * p; }
function lerpA(p, a, b) {
  var c = [];
  for (var i = 0; i < a.length; i++) c[i] = lerp(p, a[i], b[i]);
  return c;
}
function toCss(a) {
  for (var i = 0; i < a.length; i++) a[i] = Math.round(a[i]);
  return "rgb(" + a.join() + ")";
}
Variable Stroke-Width, based on multiple paths, which could be the best answer to your needs.
In one of the examples, the JS function uses Tubefy and is implemented directly in the SVG file:
<script>//<![CDATA[
var op = 1, op1 = 1;
function vsw0(p0, n, g) {
  p0 = $(p0);
  var SW = p0.getAttribute('stroke-widths').replace(/ /g, '').split(',');
  var T = p0.getTotalLength();
  var n_1 = n - 1, dt = T / n, dash = (dt + 1) + ',' + T;
  p0.setAttribute('stroke-dasharray', dash);
  for (var i = 0; i < n; i++) {
    var p = i / n_1;
    var sw = lerp(p, SW[0], SW[1]);                      // current stroke width
    var off = -i * dt;                                   // current dash offset
    var c = toCss(lerpA(p, [255, 0, 0], [255, 255, 0])); // current color
    var newP = p0.cloneNode(true);
    newP.setAttribute('style', 'stroke-width:' + sw + ';stroke-dashoffset:' + off + ';stroke:' + c);
    $(g).appendChild(newP);
  }
}
function f() { $('abg').setAttribute('stroke', $('bg').getAttribute('fill')) }
//]]></script>
</svg>
Unfortunately this has been proposed but not further developed as an SVG standard:
https://www.w3.org/Graphics/SVG/WG/wiki/Proposals/Variable_width_stroke
Your best bet would be to generate your own outline curve based on the desired inner curve and stroke widths (see the sketch below).
Adobe Illustrator does this with its Width tool, and Inkscape has a similar feature.
So technically, to answer your question: the .ai file format does save stroke-width information, but when exported to SVG it becomes a closed path with a fill.
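As a rough JavaScript sketch of that outline-generation idea, assuming the stroke is sampled as a polyline with a desired width at each sample point (the data format and function name here are made up for illustration):

// points: [{x, y, w}] - centreline samples with the desired stroke width at each one.
// Offsets each sample along the local normal and returns a closed SVG path string.
function variableWidthPath(points) {
  const left = [], right = [];
  for (let i = 0; i < points.length; i++) {
    const prev = points[Math.max(i - 1, 0)];
    const next = points[Math.min(i + 1, points.length - 1)];
    // approximate tangent at this sample, then its unit normal
    const dx = next.x - prev.x, dy = next.y - prev.y;
    const len = Math.hypot(dx, dy) || 1;
    const nx = -dy / len, ny = dx / len;
    const h = points[i].w / 2;
    left.push((points[i].x + nx * h) + ',' + (points[i].y + ny * h));
    right.push((points[i].x - nx * h) + ',' + (points[i].y - ny * h));
  }
  // walk down one side and back up the other, then close the shape
  return 'M' + left.concat(right.reverse()).join(' L ') + ' Z';
}

// Usage: render the result as a filled shape rather than a stroked one, e.g.
// pathElement.setAttribute('d', variableWidthPath(samples));
// pathElement.setAttribute('fill', 'black');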
I'm sorry, I don't know what to call this kind of chart. I've already made a square grid map, with a rectangle located at every centroid (pic 1).
Pic 1
Now I want to make something like a percentage map (isotype?), but I don't know how to do it (pic 2 -- photoshopped).
Pic 2
This is the code; I modified it from a simple map example:
d3.json(url).then(function(dataset) {
  group = canvas.selectAll("g")
    .data(dataset.features)
    .enter()
    .append("g");

  areas = group.append("path")
    .attr("d", path)
    .attr("class", "area")
    .style("fill", "white")
    .style("opacity", 0);

  group.append("rect")
    .attr("x", function(d) { return path.centroid(d)[0]; })
    .attr("y", function(d) { return path.centroid(d)[1]; })
    .attr("width", 10)
    .attr("height", 10)
    .style("fill", "orange");
});
I already have an idea: manually adding the data to the JSON. But I want to know how to do it from dynamic input. I wanted to google it, but I don't have any clue or keyword to search for.
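To make that concrete, something along the lines of this sketch is what I imagine, assuming each GeoJSON feature carried a hypothetical properties.percent value (0-100) and each region got a 10x10 grid of cells instead of the single rectangle above, but I don't know if this is the right approach:

group.selectAll("rect")
  .data(function(d) {
    var c = path.centroid(d);
    // one datum per cell, flagging how many cells are "filled" by the percentage
    return d3.range(100).map(function(i) {
      return {
        x: c[0] + (i % 10) * 3,
        y: c[1] + Math.floor(i / 10) * 3,
        filled: i < d.properties.percent
      };
    });
  })
  .enter()
  .append("rect")
  .attr("x", function(d) { return d.x; })
  .attr("y", function(d) { return d.y; })
  .attr("width", 2.5)
  .attr("height", 2.5)
  .style("fill", function(d) { return d.filled ? "orange" : "#ddd"; });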
Thanks
Or putting it more accurately, I want to be able to get the distance between the top of a control and the top of one of its children (adding the height member of all the children above yields specious results!), but the process of getting the absolute coordinates and comparing them looks really messed up.
I use this function to calculate the height between the tops of 2 tags:
private static function GetRemainingHeight(oParent:Container, oChild:Container,
yParent:Number, yChild:Number):Number {
const ptParent:Point = oParent.localToGlobal(new Point(0, yParent));
const ptChild:Point = oChild.localToGlobal(new Point(0, yChild));
const nHeightOfEverythingAbove:Number = ptChild.y - ptParent.y;
trace(ptChild.y.toString() + '[' + yChild.toString() + '] - ' +
ptParent.y.toString() + '[' + yParent.toString() + '] = ' + nHeightOfEverythingAbove.toString() + ' > ' + oParent.height.toString());
return nHeightOfEverythingAbove;
}
Note that oParent.y == yParent and oChild.y == yChild but I did it this way for binding reasons.
The result I get is very surprising:
822[329] - 124[0] = 698 > 439
which is impossible, because the top of oChild does not disappear below oParent. The only figure I find unexpected is ptChild.y. All the other numbers look quite sane. So I'm assuming that my mistake was in subtracting two figures that are not supposed to be comparable.
Of course, if anyone has a method of calculating the difference between two points that doesn't involve localToGlobal(), that'd be fine, too.
I'm using the 3.5 SDK.
I found a partial answer by looking at http://rjria.blogspot.ca/2008/05/localtoglobal-vs-contenttoglobal-in.html (including the comments). It dithers on whether I should be using localToGlobal() or contentToGlobal(), but it filled in a blank that Adobe's documentation left: you get the global coordinates by feeding the function new Point(0, 0). In the end, I used this:
public static function GetRemainingHeight(oParent:DisplayObject, oChild:DisplayObject,
yParent:Number, yChild:Number):Number {
const ptParent:Point = oParent.localToGlobal(new Point(0, 0));
const ptChild:Point = oChild.localToGlobal(new Point(0, 0));
const nHeightOfEverythingAbove:Number = ptChild.y - ptParent.y;
return nHeightOfEverythingAbove;
}
See the question for an explanation of the seemingly unnecessary parameters, which now seem like they might really be irrelevant.
However, I didn't need this function as often as I thought, and I'm not terribly happy with the way it works anyway. I've learned that, the way I've done it, it isn't possible to just make all those parameters Bindable and expect the function to be called when changes to oChild are made. In one case I had to call this function in the handler for the updateComplete event.
I have an application in Flex 4 with a map, a database of points, and a search tool.
When the user types something and runs the search, it returns the name, details, and coordinates of the matching objects in my database.
I have a function that, when I click one of the results of my search, zooms to the selected point on the map.
The question is: I want a function that zooms to all the result points at once. For example, if I search "tall trees" and it returns 10 points, I want the map to zoom to a position where I can see all 10 points at once.
Below is the code I'm using to zoom to one point at a time. I thought Flex would have some kind of "zoom to group of points" function, but I can't find anything like this.
private function ResultDG_Click(event:ListEvent):void
{
    if (event.rowIndex < 0) return;
    var obj:Object = ResultDG.selectedItem;
    if (lastIdentifyResultGraphic != null)
    {
        graphicsLayer.remove(lastIdentifyResultGraphic);
    }
    if (obj != null)
    {
        lastIdentifyResultGraphic = obj.graphic as Graphic;
        switch (lastIdentifyResultGraphic.geometry.type)
        {
            case Geometry.MAPPOINT:
                lastIdentifyResultGraphic.symbol = objPointSymbol;
                _map.extent = new Extent(
                    (lastIdentifyResultGraphic.geometry as MapPoint).x - 0.05,
                    (lastIdentifyResultGraphic.geometry as MapPoint).y - 0.05,
                    (lastIdentifyResultGraphic.geometry as MapPoint).x + 0.05,
                    (lastIdentifyResultGraphic.geometry as MapPoint).y + 0.05,
                    new SpatialReference(29101)).expand(0.001);
                break;
            case Geometry.POLYLINE:
                lastIdentifyResultGraphic.symbol = objPolyLineSymbol;
                _map.extent = lastIdentifyResultGraphic.geometry.extent.expand(0.001);
                break;
            case Geometry.POLYGON:
                lastIdentifyResultGraphic.symbol = objPolygonSymbol;
                _map.extent = lastIdentifyResultGraphic.geometry.extent.expand(0.001);
                break;
        }
        graphicsLayer.add(lastIdentifyResultGraphic);
    }
}
See the GraphicUtil class from the com.esri.ags.utils package. You can use its "getGraphicsExtent" method to generate an extent from an array of Graphics, and then use that extent to set the zoom of your map:
var graphics:ArrayCollection = graphicsLayer.graphicProvider as ArrayCollection;
var graphicsArr:Array = graphics.toArray();
// Create an extent from the currently selected graphics
var uExtent:Extent;
uExtent = GraphicUtil.getGraphicsExtent(graphicsArr);
// Zoom to extent created
if (uExtent)
{
map.extent = uExtent;
}
In this case, it would zoom to the full content of your graphics layer. You can always create an array containing only the features you want to zoom to. If you find that the zoom is too close to your data, you can also use map.zoomOut() after setting the extent.
Note: be careful if you've got TextSymbols in your graphics; they will break GraphicUtil. In that case you need to filter out the Graphics with TextSymbols.
Derp: I did not see that the thread was 5 months old... Hope my answer helps other people.