I'm trying to build a large structure from a simple geometric shape in gmsh and I'd like to use a structured (quadrilateral) grid. I start by creating that shape and then duplicating and translating it as often as needed to build my final structure.
The problem is that even if I define the lines and surfaces of the original shape to be transfinite, this property is lost once I duplicate and translate it. Check this sample code for a square:
Point(1) = {0, 0, 0, 1};
Point(2) = {0, 1, 0, 1};
Point(3) = {1, 1, 0, 1};
Point(4) = {1, 0, 0, 1};
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 4};
Line(4) = {4, 1};
Line Loop(5) = {1, 2, 3, 4};
Plane Surface(6) = {5};
Transfinite Line {1, 2, 3, 4} = 10 Using Progression 1;
Transfinite Surface {6};
Recombine Surface {6};
Translate {0, 1, 0} {
  Duplicata { Surface{6}; }
}
I obtain the original square with a structured grid, but the duplicated one does not have this property.
Is there a way to retain the structured grid when I copy the surface?
EDIT: It seems that there is indeed no way to duplicate a structured volume or surface. The problem is that these properties are attached to the mesh itself rather than to the geometry, and the mesh cannot be duplicated.
It is possible.
You can use the Gmsh option Geometry.CopyMeshingMethod, which controls whether the meshing method (including transfinite and recombine settings) is copied to duplicated or translated geometric entities. It is off by default. To turn it on, simply add the following line at the beginning of your GEO file:
Geometry.CopyMeshingMethod = 1;
Now both the original and the duplicated square get the structured, recombined grid.
Tested on Gmsh 3.0.5, but it should work with any modern version.
This fix (using "Geometry.CopyMeshingMethod = 1;") works unless you use OpenCASCADE to define your geometry.
Simply include "SetFactory("OpenCASCADE");" at the beginning of your script and you will see it fail.
In the image above, I show the result of placing the camera at the same position as the vertex covered by the mouse. A similar result comes from using an orthographic matrix. My problem is that when rotating the camera, it rotates around the visible origin of the camera. What I want is for the view to rotate like a normal FPS camera.
What I believe to be useful information:
I am doing the math manually and rendering to the screen using OpenGL.
The cube's vertices are from {0, 0, 0} to {1, 1, 1}
The camera is positioned at {0, 0, 0}
My (4x4) matrices are in column-major order, and I get the same result whether I upload the individual matrices to the shader via uniforms or multiply them on the CPU in the same order.
The movement and rotation are otherwise sensible, even when translating the camera; it's just that the origin of the camera is visible.
This last point makes sense to me mathematically with an orthographic projection, however, since the near clipping plane is supposed to be slightly in front of the camera, I'd expect the point near the mouse to be clipped. I know for a fact that it is not clipped, because if I rotate the camera to look down on the cube (without translating it), the clipping plane cuts off roughly halfway up that vertical edge of the cube.
I think my confusion may be due to a fundamental misunderstanding of how the mathematics works for the perspective projection matrix, but it may be due to my code as well, so let me include that:
static inline constexpr auto ortho(T l, T r, T b, T t, T n, T f) {
    return Matrix4x4{{(T)2 / (r - l), 0, 0, 0},
                     {0, (T)2 / (t - b), 0, 0},
                     {0, 0, (T)2 / (n - f), 0},
                     {(l + r) / (l - r), (b + t) / (b - t), (f + n) / (n - f), 1}};
}
static inline constexpr auto perspective(T fov, T aspect, T near, T far) {
    const T q = (T)1.0 / std::tan(0.5 * fov),
            a = q * aspect,
            b = (near + far) / (near - far),
            c = near * far * 2 / (near - far);
    return Matrix4x4{
        {q, 0, 0, 0},
        {0, a, 0, 0},
        {0, 0, b, -1},
        {0, 0, c, 1},
    };
}
If anyone needs extra information on what is going on, let me know in the comments and I will happily either answer or make an addendum to the question.
After reading the link provided in the comments, and comparing my method with the code that I had written, I realized that I made a mistake in transcribing the mathematics into my code. I accidentally put a 1 in the last row of the last column of the perspective matrix, while it should have been a 0.
The corrected code is shown here:
static inline constexpr auto perspective(T fov, T aspect, T near, T far) {
    const T q = (T)1.0 / std::tan(0.5 * fov),
            a = q * aspect,
            b = (near + far) / (near - far),
            c = near * far * 2 / (near - far);
    return Matrix4x4{
        {q, 0, 0, 0},
        {0, a, 0, 0},
        {0, 0, b, -1},
        {0, 0, c, 0},
    };
}
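To spell out why that single entry matters (my own summary of the fix, not taken from the linked page): each inner brace of Matrix4x4 above holds a column, so in conventional row-major notation the corrected matrix acts on an eye-space point as

| q  0  0  0 |   | x |   | q*x     |
| 0  a  0  0 | * | y | = | a*y     |
| 0  0  b  c |   | z |   | b*z + c |
| 0  0 -1  0 |   | 1 |   | -z      |

The -1 places -z into the clip-space w component, so the perspective divide yields z_ndc = -b - c/z, a proper 1/z perspective. With the stray 1, the last row was (0, 0, -1, 1), making w = 1 - z instead of -z; the divide then no longer performs a pure perspective projection, which produces exactly the odd near-plane behaviour described in the question.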
After using CSG my mesh gets messed up, with many more vertices and faces than needed. The data provides all vertices in a single array without differentiating whether they are real corners or points somewhere in the middle of a surface/plane. I made a simple fiddle to show an example.
https://jsfiddle.net/apbln/k5ze30hr/82/
geometry.vertices.push(
  new THREE.Vector3(-1, -1, 0),     // 0
  new THREE.Vector3( 1, -1, 0),     // 1
  new THREE.Vector3(-1,  1, 0),     // 2
  new THREE.Vector3( 1,  1, 0),     // 3
  new THREE.Vector3(-0.5, -0.5, 0), // 4
  new THREE.Vector3( 0,  1, 0),     // 5
  new THREE.Vector3( 1, -1, 0),     // 6 (duplicate of vertex 1)
);
geometry.faces.push(
  new THREE.Face3(0, 5, 2),
  new THREE.Face3(0, 1, 5),
  new THREE.Face3(3, 5, 1),
);
This is a simplified version of what my surfaces look like after using CSG. How could I possibly know which vertices are real corners, so that I could repair the surface with only 2 faces?
You could try detecting adjacent faces. If two faces share an edge and they are coplanar, then they can be merged. This would be a slightly more general algorithm.
A sketch of the algorithm: first build an adjacency graph in which each edge is linked to the faces adjacent to it. Then loop through the list of edges and find those whose two adjacent faces are coplanar; if so, merge the faces and adjust the adjacency lists.
It's a bit more complex if you require a triangular mesh. The best approach might be to first construct the mesh with non-triangular faces and then split them into triangles.
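A minimal sketch of the coplanarity test at the heart of that merge loop (written in generic C++ with made-up helper names, since the idea is language-independent; in three.js you could equivalently compare face normals with faceA.normal.angleTo(faceB.normal) < eps):

#include <array>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Unit normal of the triangle (a, b, c).
static Vec3 unitNormal(Vec3 a, Vec3 b, Vec3 c) {
    Vec3 n = cross(sub(b, a), sub(c, a));
    double len = std::sqrt(dot(n, n));
    return {n.x / len, n.y / len, n.z / len};
}

// Two triangles that share an edge lie in one plane exactly when their
// unit normals are parallel, i.e. their dot product is (almost) +/-1.
static bool coplanar(const std::array<Vec3, 3>& t1,
                     const std::array<Vec3, 3>& t2, double eps = 1e-9) {
    Vec3 n1 = unitNormal(t1[0], t1[1], t1[2]);
    Vec3 n2 = unitNormal(t2[0], t2[1], t2[2]);
    return std::abs(std::abs(dot(n1, n2)) - 1.0) < eps;
}

Every edge whose two faces pass this test can be removed and its faces merged; when no such edge remains, re-triangulate each merged polygon (two triangles for the quad in your example).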
It's a pretty common problem, and there are no doubt libraries that can do it for you. This answer may help: ThreeJS: is it possible to simplify an object / reduce the number of vertexes?
I have three figures: a cube, an octahedron, and a dodecahedron.
Inside, each figure has an accelerometer.
The sides of each figure are numbered from 1 to n.
Task: determine which side the cube, the octahedron, and the dodecahedron are currently resting on.
For the cube, I derived the formula:
side = round((Ax*1/988)+(Ay*2/988)+(Az*3/988));
Variable "side" will give values in interval -3 and 3 (without 0), which means the current side of cube between 1 and 6.
Now I need to do the same for the octahedron and the dodecahedron. How can I do this? Do I need additional sensors, or is the accelerometer enough?
Using a formula like that is quite clever, but it has some undesirable properties. Firstly, when moving from one side to another, the formula will pass through intermediate values that are geometrically meaningless: for example, if you are on side -3 and rotate to side -1, it will necessarily pass through -2. Secondly, it may not be robust to noisy accelerometer data; for example, a vector that is part way between sides -3 and -1, but closer to -1, may give -2 when it should give -1.
An alternative approach is to store an array of face normals for the figure, and then take the dot product of the accelerometer reading with each of them. The closest match (the one with the highest dot product) is the closest side.
e.g:
float cube_sides[6][3] = {
    {-1, 0, 0},
    {0, -1, 0},
    {0, 0, -1},
    {1, 0, 0},
    {0, 1, 0},
    {0, 0, 1},
};

int closest_cube_side(float Ax, float Ay, float Az)
{
    float largest_dot = 0;
    int closest_side = -1; // will return -1 in case of a zero A vector
    for (int side = 0; side < 6; side++)
    {
        float dot = (cube_sides[side][0] * Ax) +
                    (cube_sides[side][1] * Ay) +
                    (cube_sides[side][2] * Az);
        if (dot > largest_dot)
        {
            largest_dot = dot;
            closest_side = side;
        }
    }
    return closest_side;
}
You can extend this for an octahedron and dodecahedron just by using the surface normals for each. No additional sensors should be necessary.
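For example, for a regular octahedron with its six vertices on the coordinate axes, the eight face normals are simply the octant directions. A sketch of the table (assuming the same axis conventions as the cube code above):

#define S 0.5773503f /* 1/sqrt(3) */

/* Unit face normals of a regular octahedron with vertices on the axes:
   one face per octant. */
float octahedron_sides[8][3] = {
    {-S, -S, -S}, {-S, -S, S}, {-S, S, -S}, {-S, S, S},
    { S, -S, -S}, { S, -S, S}, { S, S, -S}, { S, S, S},
};

For a regular dodecahedron, the twelve face normals are the normalized cyclic permutations of (0, ±1, ±φ) with φ ≈ 1.618 (the golden ratio), i.e. (0, ±0.526, ±0.851) and its two cyclic shifts. The search loop stays the same; only the table and the side count change.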
I've got a list of three dimensional points, ordered by time. Is there a way to plot the points so that I can get a visual representation that also includes information on where in the list the point occurred? My initial thought is to find a way to color the points by the order in which they were plotted.
ListPlot3D drapes a sheet over the points, with no regard to the order in which they were plotted.
ListPointPlot3D just shows the points, but gives no indication of the order in which they were plotted. It's here that I am thinking of coloring the points according to the order in which they appear in the list.
ListLinePlot doesn't seem to have a 3D cousin, unlike a lot of the other plotting functions.
You could also do something like
lst = RandomReal[{0, 3}, {20, 3}];
Graphics3D[{Thickness[0.005],
  Line[lst,
   VertexColors ->
    Table[ColorData["BlueGreenYellow"][i], {i,
      Rescale[Range[Length[lst]]]}]]}]
As you did not provide examples, I made up some by creating a 3D self-avoiding random walk:
Clear[saRW3d]
saRW3d[steps_] :=
 Module[{visited},
  visited[_] = False;
  NestList[
   (Function[{randMove},
      If[
       visited[# + randMove] == False,
       visited[# + randMove] = True;
       # + randMove,
       #
      ]
     ][RandomChoice[{{1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}}]]) &,
   {0, 0, 0},
   steps
  ] // DeleteDuplicates
 ]
(this is sort of buggy but does the job; it produces a random walk in 3D which avoids itself, i.e., it avoids revisiting the same place in subsequent steps).
Then we produce 100000 steps like this
dat = saRW3d[100000];
This is how I understood your data points to be. We then make them change color depending on which step it is:
datpairs = Partition[dat, 2, 1];
len = Length@datpairs;
dressPoints[pts_, lspec_] := {RGBColor[(N@First@lspec)/len, 0, 0],
   Line@pts};
datplt = MapIndexed[dressPoints, datpairs];
This can also be done all at once, as in the other answers:
datplt = MapIndexed[
  {RGBColor[(N@First@#2)/Length@dat, 0, 0], Line@#1} &,
  Partition[dat, 2, 1]
 ]
but I tend to avoid this sort of construction because I find it harder to read and modify.
Finally plot the result:
Graphics3D[datplt]
The path gets redder as time advances.
If this is the sort of thing you're after, I can elaborate.
EDIT: There might well be easier ways to do this...
EDIT2: Show a large set of points to demonstrate that this is very useful to see the qualitative trend in time in cases where arrows won't scale easily.
EDIT3: Added the one-liner version.
I think Heike's method is best, but she made it overly complex, IMHO. I would use:
Graphics3D[{
  Thickness[0.005],
  Line[lst,
   VertexColors ->
    ColorData["SolarColors"] /@ Rescale@Range@Length@lst]
}]
(acl's data)
Graphics3D@(Arrow /@ Partition[RandomInteger[{0, 10}, {10, 3}], 2, 1])
As to your last question: if you want a kind of ListLinePlot3D instead of a ListPointPlot3D, you could simply do the following:
pointList =
Table[{t, Sin[t] + 5 Sin[t/10], Cos[t] + 5 Cos[t/10],
t + Cos[t/10]}, {t, 0, 100, .5}];
ListPointPlot3D[pointList[[All, {2, 3, 4}]]] /. Point -> Line
Of course, in this way you can't set line properties, so you have to change the rule a bit if you want that:
ListPointPlot3D[pointList[[All, {2, 3, 4}]]] /.
Point[a___] :> {Red, Thickness[0.02], Line[a]}
or with
ListPointPlot3D[pointList[[All, {2, 3, 4}]]] /.
Point[a___] :> {Red, Thickness[0.002], Line[a], Black, Point[a]}
But then, why not just use Graphics3D and a few graphics primitives?
Is it possible to draw a transparency mask of an image (that is, paint all visible pixels with a constant color) using Graphics::DrawImage? I am not looking to manually scan the image pixel by pixel and create a separate mask image; I wonder if it's possible to draw one directly from the original image.
My guess is that it could be done with certain manipulations of ImageAttributes, if it is possible at all.
The color of the mask is arbitrary and should be accurate, and it would be a plus if there can be a threshold value for the transparency.
I had to draw an image with a per-pixel alpha mask, and I found the best way was to draw the base RGB image and then draw the opaque parts of the alpha mask over the top. You need to map each alpha level to an alpha level + colour to get the image to honour every detail in the alpha mask. The code I've got keeps the alpha mask as a separate 8-bit image, but the draw code looks like this:
g.DrawImage(&picture, r.left, r.top, r.Width(), r.Height()); // base image
if (AlphaBuf != NULL) // do we have an alpha mask?
{
    // stride == Width: the mask is one byte per pixel
    Gdiplus::Bitmap mask(Width, Height, Width, PixelFormat8bppIndexed, AlphaBuf);
    Gdiplus::ColorPalette* palette =
        (Gdiplus::ColorPalette*)malloc(picture.GetPaletteSize());
    picture.GetPalette(palette, picture.GetPaletteSize());
    // invert - only show the transparent parts
    palette->Entries[0] = DLIMAKEARGB(255, 0, 0, 200); // 0 = fully transparent
    palette->Entries[255] = DLIMAKEARGB(0, 0, 0, 0);   // 255 = fully opaque
    mask.SetPalette(palette);
    g.DrawImage(&mask, r.left, r.top, r.Width(), r.Height()); // draw the mask
}
My alpha masks are only fully transparent or fully opaque, but I would think that setting the other alpha values in the palette would let you follow a more graduated mask.
palette->Entries[1] = DLIMAKEARGB(254, 0, 0, 200);
palette->Entries[2] = DLIMAKEARGB(253, 0, 0, 200);
etc.
Hope this helps (or at least makes sense :-p )
Do you mean that you want to transform every existing color in the bitmap into one uniform color, while fully honoring the alpha/transparency information present in the image?
If so, just use the following colormatrix:
imageAttributes.SetColorMatrix(new ColorMatrix(new float[][] {
    new float[] {0, 0, 0, 0, 0},
    new float[] {0, 0, 0, 0, 0},
    new float[] {0, 0, 0, 0, 0},
    new float[] {0, 0, 0, 1, 0},
    new float[] {r, g, b, 0, 1}}));
where r, g, b are between 0 and 1, like so:
float r = DesiredColor.R / 255f;
float g = DesiredColor.G / 255f;
float b = DesiredColor.B / 255f;
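Since the question asks about the C++ GDI+ API while the snippet above is C#-style, here is a hedged sketch of the same matrix applied through Graphics::DrawImage (graphics, picture, and the destination rectangle are placeholders):

using namespace Gdiplus;

// Rows 1-3 zero out the source RGB, row 4 keeps the source alpha,
// and the bottom (translation) row adds the constant mask colour.
ColorMatrix cm = {{
    {0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0},
    {0, 0, 0, 1, 0},
    {r, g, b, 0, 1},
}};

ImageAttributes attrs;
attrs.SetColorMatrix(&cm);

Rect dest(0, 0, picture.GetWidth(), picture.GetHeight());
graphics.DrawImage(&picture, dest,
                   0, 0, picture.GetWidth(), picture.GetHeight(),
                   UnitPixel, &attrs);

Fully transparent source pixels stay invisible (the colour shift leaves their alpha at 0), so only the visible pixels come out in the constant colour.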
I don't understand what you mean by "it would be a plus if there can be a threshold value for the transparency", though... so maybe this isn't the answer you were looking for?