I'm plotting three 3D vector fields in one row in Maple 14:
> with(plots);
> A := Array(1 .. 3):
> A[1] := fieldplot3d(...):
> A[2] := fieldplot3d(...):
> A[3] := fieldplot3d(...):
> display(A);
Here are the three plots arranged like this: [plot1] [plot2] [plot3]
Now I can rotate each of them individually to explore the vector fields.
Is it possible to link the other two plots so that they rotate to the same orientation automatically? It would be fine if this only worked when rotating just one of them (the leftmost, for example).
For example, MATLAB has the linkprop function, which can link properties of two axes so that changes to one of them (rotation, scale, range, etc.) are applied to the other as well.
I don't believe that this can be done in current Maple, using either the usual left-click-drag on the 3D plots or the three orientation boxes in the plot menubar (which appears at the top of the GUI when you left-click to place the cursor focus on any of the individual 3D plots).
But you can place the plots in one or more Plot Components and create three Sliders whose underlying Action code causes a redisplay. The three sliders would then control the three orientation angles. This is not as pleasing as rotating freehand with the mouse cursor, but at least it allows plots in several Plot Components (or, in your case, an Array plot in a single Plot Component) to be rotated in unison.
One convenient way to set up the above in Maple 17, if you are unfamiliar with programming embedded components, is to use its enhanced Explore command.
In Maple 17 a simple example, which you might replace with calls to plots:-fieldplot, could be,
A:=Array(1..3):
A[1]:=plot3d(x^3*y,x=-10..10,y=-10..10):
A[2]:=plot3d(sin(x)*y,x=-10..10,y=-10..10):
A[3]:=plot3d(x*y^2,x=-10..10,y=-10..10):
Explore(plots:-display(A,orientation=[theta,phi,psi]),
parameters=[theta=-180..180,phi=-180..180,psi=-180..180]);
In Maple 16 the Explore command does not support the above call, but the three Sliders and the Plot Component are not difficult to hook together to get the same effect of unified reorientation and redisplay.
The above approach is not very memory efficient, as it entails the recreation and communication of very many whole 3D plot structures from engine to GUI. That's in contrast to the kind of rotation obtained by freehand click-dragging of the mouse cursor over a 3D plot, which involves the GUI alone and presumably just efficient OpenGL redisplay. With any kind of memory leak, even a small one for each passed 3D plot (which Maple 16's Standard GUI appears to have), this approach could cause the Standard Java GUI to slowly consume memory and eventually grind to a halt.
In plot3d2 and similar graphic functions of Scilab, is there a way to set the colour of the back (reverse, flip, inner) side of facets?
I'm trying to draw a part of a (rather crude) torus, and the result is OK except for one row of facets. I suppose that, because of the way I generate the mesh, those facets are oriented differently - whatever algorithm renders them on the screen follows their perimeter in the opposite direction compared to others.
Instead of poring over my code to try to mend the topology of my mesh, I'd rather make sure the facet orientation doesn't matter and just set both sides to my colour. It would also improve the look of the ends of my torus, where the inside shows and, again, is in a colour I didn't ask for.
But, hard as I search the documentation, I cannot find any mention of the flip side of mesh facets.
Any clues?
The backface color is named "hiddencolor" in the properties of a surface entity (see https://help.scilab.org/docs/6.1.1/en_US/surface_properties.html). It can be changed a posteriori, for example:
[X,Y]=meshgrid(-1:0.5:1);
plot3d2(X,Y,X.^2-2*Y.^2)
gce().hiddencolor=color("red")
You can assign -1 instead to use the same color as the front-facing patches.
However, if all your patches are facing the wrong way, you can also simply transpose all your matrices in the plot3d2 call:
[X,Y]=meshgrid(-1:0.5:1);
plot3d2(X',Y',(X.^2-2*Y.^2)')
gce().hiddencolor=color("red")
I am playing around with rgl and I have created a 3D rendering of the mouse brain, in which structures can be isolated and coloured separately.
The original data is a 3D array containing evenly spaced voxels.
Every voxel is coded with a structure ID.
Every structure is rendered separately as a mesh by marching cubes, and smoothed using Laplacian smoothing as implemented by Rvcg.
Some of these structures can be quite small, and it would make sense to look at them within the context of the whole brain structure.
One of the options is to create a low-threshold mesh of the whole set of voxels, so that only the outer surface of the brain is included in the mesh.
This surface can be smoothed and rendered with a low alpha in rgl::shade3d, colouring the faces. This, however, seems to be quite taxing for the viewport, as it slows down rotation etc., especially when alpha levels are quite low.
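For reference, building that shell boils down to something like the following sketch (simplified; vox stands in for my voxel array, and the threshold value is just illustrative):
library(rgl)
library(Rvcg)
# vox: 3D array of voxels, coded with structure IDs (0 outside the brain)
shell <- vcgIsosurface(vox, threshold = 0.5)                # marching cubes over the whole volume
shell <- vcgSmooth(shell, type = "laplace", iteration = 5)  # Laplacian smoothing
open3d()
shade3d(shell, color = "gray", alpha = 0.1)                 # low alpha looks right but rotates slowly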
I was wondering if there is any way to implement some sort of cel shading in rgl, e.g. outlining in solid colours the alpha hull of the 2D projection to the viewport in real time.
In case my description was not clear, here's a photoshopped example of what I'd need.
Ideally I would not render the gray transparent shell, only the outline.
[Image: cel shading example]
Does anybody know how to do that without getting deep into OpenGL?
Rendering transparent surfaces is slow because OpenGL requires the triangles making them up to be sorted from back to front. The sort order changes as you rotate, so you'll be doing a lot of sorting.
I can't think of any fast way to render the outline you want. One thing that might work, given that you are starting from evenly spaced voxels, is to render the outside surface using front = "points", back = "points", size = 1. Doing this with the ?surface3d example gives this fake transparency:
If that's not transparent enough, you might be able to improve it by getting rid of lighting (lit = FALSE), plotting in a colour close to the background (color = "gray90"), or some other thing like that. Doing both of those gives this:
You may also be able to cull your data so the surface has fewer vertices.
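Putting the points trick and the lighting/colour tweaks together, a minimal sketch (using the volcano data, as in the ?surface3d example) would be:
library(rgl)
z <- 2 * volcano                 # exaggerate the relief
x <- 10 * (1:nrow(z))
y <- 10 * (1:ncol(z))
open3d()
surface3d(x, y, z,
          front = "points", back = "points", size = 1,  # draw only the vertices
          lit = FALSE,                                   # no lighting
          color = "gray90")                              # close to the default background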
The Wikipedia page for L-systems describes many of them, including a couple of rule sets that converge toward the Sierpinski triangle. That particular fractal also has a 3D version, which basically uses pyramids instead of triangles. Is there a way to reach this one with an L-system? That same Wikipedia page mentions the existence of 3D L-systems, but doesn't explain how they work or give any example of what their rules would look like.
So first, how do 3D L-systems differ from their 2D counterpart (if there are major differences), and second, can they be used to create this Sierpinski pyramid?
I'm trying to create it in Processing, as I managed to draw the 2D version in this software using an L-system before. An example of making a 3D L-system work would be appreciated, but is not necessary.
A 2D L-system is a set of instructions for creating recursive 2D trees with branches that have a number of sub-branches, an angle, and a length. A 3D version extends the branches to have roll, pitch and yaw. It's easiest to create one with turtle graphics. (If you just use an orthographic projection, you can see the tree, which is of course flattened back to 2D, but it looks more complex and less symmetrical than a 2D tree.)
Otherwise the system is the same.
I don't know the specific instruction sequence for creating a Sierpinski pyramid. Presumably you start at the apex pointing down, then do a pitch of 45°, and four rolls with four A's between them.
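The string rewriting itself is the same machinery you already have for the 2D triangle; only the turtle interpretation of the symbols changes. As a rough sketch (written in R here; the loop ports directly to Processing), one rewriting step just replaces every symbol that has a production rule and copies the rest, shown with the 2D Sierpinski triangle rules from the Wikipedia page:
# apply the production rules 'steps' times; symbols without a rule (+, -, [, ]) are copied unchanged
rewrite <- function(axiom, rules, steps = 1) {
  s <- axiom
  for (i in seq_len(steps)) {
    chars <- strsplit(s, "")[[1]]
    s <- paste(vapply(chars, function(ch) {
      if (is.null(rules[[ch]])) ch else rules[[ch]]
    }, character(1)), collapse = "")
  }
  s
}

# 2D Sierpinski triangle system (axiom F-G-G, angle 120 degrees); a 3D system
# would only add symbols that the turtle interprets as pitch and roll
rules <- list("F" = "F-G+F+G-F", "G" = "GG")
rewrite("F-G-G", rules, steps = 3)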
I am getting streaming measurement data from an ultrasonic device moving inside a pipeline, and I want to make a sliding/realtime plot of these measurements. The Y axis would represent a gradient of the 360 degrees around the pipe, and the X axis would represent the length-wise position in millimeters. In other words, the X axis will update and move at the same rate as the scanner while new data is arriving (approx 40Hz). The value at each (x,y) coordinate represents one measurement, which should be mapped to a color in a colormap.
I am new to graphics (I'm a systems and backend guy) and I have been looking at QImage, Qwt and QCustomPlot, but none of them seems to solve the problem in a straightforward way without having to manually build a 2D matrix, draw it in a QImage, and update and shift the coordinates of each data point and redraw to move/scroll it. QCustomPlot does this very nicely for graphs, but I don't see how it can be applied to its colormaps.
Any hints to frameworks or packages that provide primitives (or widgets) for this kind of plot would be much welcomed.
This can be done with Qwt. The trick is creating a wrapper around the series data that triggers a replot every time you add a data point. If you want to get fancy you can add a timer that removes old data from the series and triggers another replot.
See the CPU, oscilloscope, and realtime examples that come with the Qwt source code. They implement this trick.
Using igraph, how can I represent self-reflexive nodes with circle-shaped curves? By default, these curves are drawn as a pinched or teardrop-shaped loop.
As Spacedman said, you would need to do quite a bit of programming to achieve this. You could plot a graph without self-loops and then add them yourself (graphs are basically a scatterplot, and you can use points and similar functions to add lines to them), but this is not trivial (especially since you need to know the edge of each node, not its center) and will cause the self-loops to be plotted on top of everything else, which might not look good.
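Just to illustrate, a very rough version of that manual approach could look like the following sketch (the loop radius and placement are eyeballed and ignore the vertex size, which is exactly the fiddly part):
library(igraph)
A <- matrix(1, 3, 3)
g <- graph.adjacency(A)
lay <- layout.circle(g)
# plot the graph with the self-loops stripped out
plot(simplify(g, remove.multiple = FALSE, remove.loops = TRUE), layout = lay)
# plot.igraph rescales coordinates to [-1, 1], so normalize the layout the same way
lay2 <- layout.norm(lay, -1, 1, -1, 1)
r <- 0.15                                    # loop radius, chosen by eye
theta <- seq(0, 2 * pi, length.out = 60)
for (i in seq_len(nrow(lay2))) {
  out <- lay2[i, ] / sqrt(sum(lay2[i, ]^2))  # direction pointing away from the plot centre
  lines(lay2[i, 1] + r * out[1] + r * cos(theta),
        lay2[i, 2] + r * out[2] + r * sin(theta))
}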
This weekend I updated how self-loops work in qgraph. qgraph can be used to plot networks and should play nicely with igraph, e.g.:
# An adjacency matrix:
A <- matrix(1,3,3)
library("igraph")
# igraph graph and layout:
Graph <- graph.adjacency(A)
Layout <- layout.circle(Graph)
# Plot in qgraph:
library("qgraph")
qgraph(get.adjacency(Graph,sparse=FALSE),layout=Layout,diag=TRUE,directed=TRUE)
I am quite content with how these self-loops turned out, and they seem closer to what you describe. So this could be an option. However, my loops are just as hardcoded. For reference, I compute the edge of a node (the starting and ending point of the loop) with the inner function qgraph:::Cent2Edge and compute the shape of the loop (a spline) with the inner function qgraph:::SelfLoop.
Inside plot.igraph you can see that loops are drawn using a plot.bezier function, and all the control for that is pretty much hard coded there. You'd have to rewrite large chunks of plot.igraph to call a plot.circle function you'd have to write to do this.
Also, I'm guessing you don't want complete circles, but circle segments that start on the edge of the vertex symbol (the default blue circle with the vertex number in it) and end (possibly with an arrowhead) on another part of the edge of the vertex symbol? Or do you want circles that touch the symbol like the bezier teardrop loops do?
Either way, the answer seems to be 'no, not without doing some programming or submitting a feature request to the igraph guys'.
I posted an earlier answer saying the layout functions were involved, but that's not true - the layout functions only position the vertices, and it is plot.igraph's job to draw the edges.