Extend 2D polygon by vector

I have convex polygons, and I want to extend them by projecting along a vector like so:
(Original polygon and vector on left, desired result on right.)
My polygons are stored as a series of points with counter-clockwise winding. What I want to find are the "starting" and "stopping" points that I need to project from, as in the circled vertices below.
(The green arrows are to indicate the polygon's winding, giving the "direction" of each edge.)
My original plan was to determine which points to use by projecting a ray with the vector's direction from each point, and finding the first and last points whose ray doesn't intersect an edge. However, that seems expensive.
Is there a way I can use the edge directions vs the vector direction, or a similar trick, to determine which points to extend from?

Look at points where the direction of the vector falls between the directions of the edges.
In other words, take three vectors:
of the edge leading out of the vertex
of the translation vector
opposite to the edge leading to the vertex
If they are in this order when going CCW, i.e. if the second vector is between the first and the third, this is an "inside" point.
In order to determine whether a vector lies between two other vectors, use cross product as described e.g. here.
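For example, a minimal sketch of that in-between test (assuming a simple 2D vector type; only the signs of the cross products matter):
struct Vec2 { double x, y; };
double cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }
// True if b lies between a and c when sweeping counter-clockwise from a to c.
bool isBetweenCCW(Vec2 a, Vec2 b, Vec2 c) {
    if (cross(a, c) >= 0)                        // the CCW sweep from a to c spans at most 180 degrees
        return cross(a, b) >= 0 && cross(b, c) >= 0;
    return cross(a, b) >= 0 || cross(b, c) >= 0; // the sweep spans more than 180 degrees
}
// Usage for this answer: a = edge leading out of the vertex, b = translation vector,
// c = edge leading into the vertex, negated. If true, the vertex is an "inside" point.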

Yes, you can. You want to project along (x, y), so the normal is (y, -x). Now rotate by that angle (use atan2, or you can use the normal directly if you understand rotation matrices). The points to project from are now those at the minimum and maximum x. You can also speed up the projection by always doing it along an axis and then rotating back.
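As a rough sketch of the same idea (assuming a plain Vec2 struct): for a convex polygon you can skip the explicit rotation, because the minimum/maximum x after rotating the delta onto an axis is the same as the minimum/maximum dot product with the delta's normal.
#include <cstddef>
#include <utility>
#include <vector>
struct Vec2 { double x, y; };
double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
// Returns the indices of the two vertices to project from: the vertices of the
// convex polygon that are extreme along the normal of the projection vector d.
std::pair<std::size_t, std::size_t> findProjectionVertices(const std::vector<Vec2>& verts, Vec2 d) {
    const Vec2 normal{ d.y, -d.x };
    std::size_t minIdx = 0, maxIdx = 0;
    for (std::size_t i = 1; i < verts.size(); ++i) {
        if (dot(verts[i], normal) < dot(verts[minIdx], normal)) minIdx = i;
        if (dot(verts[i], normal) > dot(verts[maxIdx], normal)) maxIdx = i;
    }
    return { minIdx, maxIdx };
}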

n.m. answered the question as I asked and pictured it, but upon programming I soon noticed that there was a common case where all vertices would be "outside" vertices (this can be easily seen on triangles, and can occur for other polygons too).
The text explanation.
The solution I used was to look at the normal vectors of the edges leading into and exiting each vertex. The vertices we want to extend are vertices that have at least one edge normal with a minimum angle of less than 90 degrees to the delta vector we are extending by.
The outward-facing edge normals on a counterclockwise-wound polygon can be found by:
normal = (currentVertex.y - nextVertex.y, nextVertex.x - currentVertex.x)
Note that since we don't care about the exact angle, we don't need to normalize (make a unit vector of) the normal, which saves a square root.
To compare it to the delta vector, we use the dot product:
dot = edgeNormal.dot(deltaVector)
If the result is greater than zero, the minimum angle is acute (less than 90). If the result is exactly zero, the vectors are perpendicular. If the result is less than zero, the minimum angle is obtuse. It is worth noting when the vectors are perpendicular, since it lets us avoid adding extra vertices to the extended polygon.
If you want to visualize how the angle works with the dot product, like I did, just look at a graph of arc cosine (normally you get the angle via acos(dot)).
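For reference, the identity behind this test is
$$\mathbf{a}\cdot\mathbf{b} = \|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta,$$
and since the lengths are non-negative, the sign of the dot product is the sign of $\cos\theta$: positive below 90 degrees, zero at exactly 90, negative above. This is also why skipping the normalization does not change the classification.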
Now we can find the vertices that have one acute and one not-acute minimum angle between their edge normals and the delta vector. Everything on the "acute side" of these vertices has the delta vector added to it, and everything on the "obtuse side" stays the same. The two border vertices themselves are duplicated, having one extended and one staying the same, unless the "obtuse side" is exactly perpendicular to the delta vector (in this case we only need to extend the vertex, since otherwise we would have two vertices on the same line).
Here is the C++ code for this solution.
It may look a little long, but it is actually quite straightforward and has many comments so it hopefully shouldn't be hard to follow.
It is part of my Polygon class, which has a std::vector of counterclockwise-wound vertices. units::Coordinate are floats, and units::Coordinate2D is a vector class that I feel should be self-explanatory.
// Compute the normal of an edge of a polygon with counterclockwise winding, without normalizing it to a unit vector.
inline units::Coordinate2D _get_non_normalized_normal(units::Coordinate2D first, units::Coordinate2D second) {
    return units::Coordinate2D(first.y - second.y, second.x - first.x);
}
enum AngleResult {
    ACUTE,
    PERPENDICULAR,
    OBTUSE
};
// Avoid accumulative floating point errors.
// Choosing a good epsilon is extra important, since we don't normalize our vectors (so it is scale dependent).
const units::Coordinate eps = 0.001;
// Check what kind of angle the minimum angle between two vectors is.
inline AngleResult _check_min_angle(units::Coordinate2D vec1, units::Coordinate2D vec2) {
    const units::Coordinate dot = vec1.dot(vec2);
    if (std::abs(dot) <= eps)
        return PERPENDICULAR;
    if ((dot + eps) > 0)
        return ACUTE;
    return OBTUSE;
}
Polygon Polygon::extend(units::Coordinate2D delta) const {
    if (delta.isZero()) { // Isn't being moved. Just return the current polygon.
        return Polygon(*this);
    }
    const std::size_t numVerts = vertices_.size();
    if (numVerts < 3) {
        std::cerr << "Error: Cannot extend polygon (polygon invalid; must have at least three vertices).\n";
        return Polygon();
    }
    // We are interested in extending from vertices that have at least one edge normal with a minimum angle acute to the delta.
    // With a convex polygon, there will form a single contiguous range of such vertices.
    // The first and last vertex in that range may need to be duplicated, and then the vertices within the range
    // are projected along the delta to form the new polygon.
    // The first and last vertices are defined by the vertices that have only one acute edge normal.
    // Whether the minimum angle of the normal of the edge made from the last and first vertices is acute with delta.
    const AngleResult firstEdge = _check_min_angle(_get_non_normalized_normal(vertices_[numVerts-1], vertices_[0]), delta);
    const bool isFirstEdgeAcute = firstEdge == ACUTE;
    AngleResult prevEdge = firstEdge;
    AngleResult currEdge;
    bool found = false;
    std::size_t vertexInRegion;
    for (std::size_t i = 0; i < numVerts - 1; ++i) {
        currEdge = _check_min_angle(_get_non_normalized_normal(vertices_[i], vertices_[i+1]), delta);
        if (isFirstEdgeAcute != (currEdge == ACUTE)) {
            // Either crossed from inside to outside the region, or vice versa.
            // (One side of the vertex has an edge normal that is acute, the other side obtuse.)
            found = true;
            vertexInRegion = i;
            break;
        }
        prevEdge = currEdge;
    }
    if (!found) {
        // A valid polygon has two points that define where the region starts and ends.
        // If we didn't find one in the loop, the polygon is invalid.
        std::cerr << "Error: Polygon can not be extended (invalid polygon).\n";
        return Polygon();
    }
    found = false;
    std::size_t first, last;
    // If an edge being extended is perpendicular to the delta, there is no need to duplicate that vertex.
    bool shouldDuplicateFirst, shouldDuplicateLast;
    // We found either the first or last vertex for the region.
    if (isFirstEdgeAcute) {
        // It is the last vertex in the region.
        last = vertexInRegion;
        shouldDuplicateLast = currEdge != PERPENDICULAR; // currEdge is either perpendicular or obtuse.
        // Loop backwards from the end to find the first vertex.
        for (std::size_t i = numVerts - 1; i > 0; --i) {
            currEdge = _check_min_angle(_get_non_normalized_normal(vertices_[i-1], vertices_[i]), delta);
            if (currEdge != ACUTE) {
                first = i;
                shouldDuplicateFirst = currEdge != PERPENDICULAR;
                found = true;
                break;
            }
        }
        if (!found) {
            std::cerr << "Error: Polygon can not be extended (invalid polygon).\n";
            return Polygon();
        }
    } else {
        // It is the first vertex in the region.
        first = vertexInRegion;
        shouldDuplicateFirst = prevEdge != PERPENDICULAR; // prevEdge is either perpendicular or obtuse.
        // Loop forwards from the first vertex to find where it ends.
        for (std::size_t i = vertexInRegion + 1; i < numVerts - 1; ++i) {
            currEdge = _check_min_angle(_get_non_normalized_normal(vertices_[i], vertices_[i+1]), delta);
            if (currEdge != ACUTE) {
                last = i;
                shouldDuplicateLast = currEdge != PERPENDICULAR;
                found = true;
                break;
            }
        }
        if (!found) {
            // The edge normal between the last and first vertex is the only non-acute edge normal.
            last = numVerts - 1;
            shouldDuplicateLast = firstEdge != PERPENDICULAR;
        }
    }
    // Create the new polygon.
    std::vector<units::Coordinate2D> newVertices;
    newVertices.reserve(numVerts + (shouldDuplicateFirst ? 1 : 0) + (shouldDuplicateLast ? 1 : 0) );
    for (std::size_t i = 0; i < numVerts; ++i) {
        // Extend vertices in the region first-to-last inclusive. Duplicate first/last vertices if required.
        if (i == first && shouldDuplicateFirst) {
            newVertices.push_back(vertices_[i]);
            newVertices.push_back(vertices_[i] + delta);
        } else if (i == last && shouldDuplicateLast) {
            newVertices.push_back(vertices_[i] + delta);
            newVertices.push_back(vertices_[i]);
        } else {
            newVertices.push_back( isFirstEdgeAcute ? // Determine which range to use.
                ( (i <= last || i >= first) ? vertices_[i] + delta : vertices_[i] ) : // Range overlaps start/end of the array.
                ( (i <= last && i >= first) ? vertices_[i] + delta : vertices_[i] )); // Range is somewhere in the middle of the array.
        }
    }
    return Polygon(newVertices);
}
So far I tested this code with triangles, rectangles, approximated circles, and arbitrary convex polygons made by extending the approximated circles sequentially by many different delta vectors.
Please note that this solution is still only valid for convex polygons.

Related

Processing: Draw vector instead of pixels

I have a simple Processing Sketch, drawing a continuous line of ellipses with a 20px diameter. Is there a way to modify the sketch so that it draws vector shapes instead of pixels?
void setup() {
    size(900, 900);
    background(110, 255, 94);
}
void draw() {
    ellipse(mouseX, mouseY, 20, 20);
    fill(255);
}
Thanks to everyone who can provide some helpful advice.
Expanding my comment above, there are a couple of things to tackle:
drawing a continuous line of ellipses with a 20px diameter
draws vector shapes
Currently you're drawing ellipses based on mouse movement.
A side effect is that if you move the mouse fast enough you will have gaps in between ellipses.
To fill the gaps you can work out the distance between every two ellipses.
If the distance is greater than the sizes of these two ellipses you can draw some in between.
The PVector class provides a lerp() function that allows you easily interpolate between two points.
You can read more on this and run some examples here
Using the ratio between the distance of the two points and the ellipse size, you can work out the number of points needed in between.
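(For reference, the interpolation used below is just $P(t) = P_0 + t\,(P_1 - P_0)$ with $t \in [0,1]$, so $t$ plays the role of a percentage along the segment between the two recorded points.)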
Here is an example that stores mouse locations to a list of PVectors as you drag the mouse:
//create an array list to store points to draw
ArrayList<PVector> path = new ArrayList<PVector>();
//size of each ellipse
float size = 20;
//how tight will the extra ellipses be drawn together
float tightness = 1.25;
void setup() {
    size(900, 900);
}
void draw() {
    background(110, 255, 94);
    fill(255);
    //for each point in the path, starting at 1 (not 0)
    for(int i = 1; i < path.size(); i++){
        //get a reference to the current and previous point
        PVector current = path.get(i);
        PVector previous = path.get(i-1);
        //calculate the distance between them
        float distance = previous.dist(current);
        //work out how many points will need to be added in between the current and previous points to keep the path continuous (taking the ellipse size into account)
        int extraPoints = (int)(round(distance/size * tightness));
        //draw the previous point
        ellipse(previous.x,previous.y,size,size);
        //if there are any extra points to be added, compute and draw them:
        for(int j = 0; j < extraPoints; j++){
            //work out a normalized (between 0.0 and 1.0) value of where each extra point should be
            //think of this as a percentage along a line: 0.0 = start of line, 0.5 = 50% along the line, 1.0 = end of the line
            float interpolation = map(j,0,extraPoints,0.0,1.0);
            //compute the point in between using PVector's linear interpolation (lerp()) functionality
            PVector inbetween = PVector.lerp(previous,current,interpolation);
            //draw the point in between
            ellipse(inbetween.x,inbetween.y,size,size);
        }
    }
    //draw instructions
    fill(0);
    text("SPACE = clear\nLEFT = decrease tightness\nRIGHT = increase tightness\ntightness:"+tightness,10,15);
}
void mouseDragged(){
    path.add(new PVector(mouseX,mouseY));
}
void keyPressed(){
    if(keyCode == LEFT) tightness = constrain(tightness-0.1,0.0,3.0);
    if(keyCode == RIGHT) tightness = constrain(tightness+0.1,0.0,3.0);
    if(key == ' ') path.clear();
}
Note that the interpolation between points is linear.
It's the simplest, but as the name implies, it's all about lines:
it always connects two points in a straight line, not curves.
I've added the option to control how tight interpolated ellipses will be packed together. Here are a couple of screenshots with different tightness levels. You'll notice as tightness increases, the lines will become more evident:
You can run the code below:
//create an array list to store points to draw
var path = [];
//size of each ellipse
var ellipseSize = 20;
//how tight will the extra ellipses be drawn together
var tightness = 1.25;
function setup() {
    createCanvas(900, 900);
}
function draw() {
    background(110, 255, 94);
    fill(255);
    //for each point in the path, starting at 1 (not 0)
    for(var i = 1; i < path.length; i++){
        //get a reference to the current and previous point
        var current = path[i];
        var previous = path[i-1];
        //calculate the distance between them
        var distance = previous.dist(current);
        //work out how many points will need to be added in between the current and previous points to keep the path continuous (taking the ellipse size into account)
        var extraPoints = round(distance/ellipseSize * tightness);
        //draw the previous point
        ellipse(previous.x,previous.y,ellipseSize,ellipseSize);
        //if there are any extra points to be added, compute and draw them:
        for(var j = 0; j < extraPoints; j++){
            //work out a normalized (between 0.0 and 1.0) value of where each extra point should be
            //think of this as a percentage along a line: 0.0 = start of line, 0.5 = 50% along the line, 1.0 = end of the line
            var interpolation = map(j,0,extraPoints,0.0,1.0);
            //compute the point in between using PVector's linear interpolation (lerp()) functionality
            var inbetween = p5.Vector.lerp(previous,current,interpolation);
            //draw the point in between
            ellipse(inbetween.x,inbetween.y,ellipseSize,ellipseSize);
        }
    }
    //draw instructions
    fill(0);
    text("BACKSPACE = clear\n- = decrease tightness\n+ = increase tightness\ntightness:"+tightness,10,15);
}
function mouseDragged(){
    path.push(createVector(mouseX,mouseY));
}
function keyPressed(){
    if(keyCode == 189) tightness = constrain(tightness-0.1,0.0,3.0);
    if(keyCode == 187) tightness = constrain(tightness+0.1,0.0,3.0);
    if(keyCode == BACKSPACE) path = [];
}
//https://stackoverflow.com/questions/40673192/processing-draw-vector-instead-of-pixels
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>
If you want smoother lines you will need to use a different interpolation such as quadratic or cubic interpolation. You can start with existing Processing functions for drawing curves such as curve() or bezier(), and you'll find some helpful resources unrelated to Processing here, here and here.
On vector shapes
You're not directly working with pixels[], you're drawing shapes.
These shapes can easily be saved to PDF using Processing's PDF library
Check out the Single Frame from an Animation (With Screen Display) example.
Here is a version that saves to PDF when pressing the 's' key:
import processing.pdf.*;
//create an array list to store points to draw
ArrayList<PVector> path = new ArrayList<PVector>();
//size of each ellipse
float size = 20;
//how tight will the extra ellipses be drawn together
float tightness = 1.25;
//PDF saving
boolean record;
void setup() {
    size(900, 900);
}
void draw() {
    background(110, 255, 94);
    fill(255);
    //if we need to save the current frame to pdf, begin recording drawing instructions
    if (record) {
        // Note that #### will be replaced with the frame number. Fancy!
        beginRecord(PDF, "frame-####.pdf");
    }
    //for each point in the path, starting at 1 (not 0)
    for(int i = 1; i < path.size(); i++){
        //get a reference to the current and previous point
        PVector current = path.get(i);
        PVector previous = path.get(i-1);
        //calculate the distance between them
        float distance = previous.dist(current);
        //work out how many points will need to be added in between the current and previous points to keep the path continuous (taking the ellipse size into account)
        int extraPoints = (int)(round(distance/size * tightness));
        //draw the previous point
        ellipse(previous.x,previous.y,size,size);
        //if there are any extra points to be added, compute and draw them:
        for(int j = 0; j < extraPoints; j++){
            //work out a normalized (between 0.0 and 1.0) value of where each extra point should be
            //think of this as a percentage along a line: 0.0 = start of line, 0.5 = 50% along the line, 1.0 = end of the line
            float interpolation = map(j,0,extraPoints,0.0,1.0);
            //compute the point in between using PVector's linear interpolation (lerp()) functionality
            PVector inbetween = PVector.lerp(previous,current,interpolation);
            //draw the point in between
            ellipse(inbetween.x,inbetween.y,size,size);
        }
    }
    //once what we want to save has been recorded to PDF, stop recording (this will skip saving the instructions text);
    if (record) {
        endRecord();
        record = false;
        println("pdf saved");
    }
    //draw instructions
    fill(0);
    text("SPACE = clear\nLEFT = decrease tightness\nRIGHT = increase tightness\ntightness:"+tightness+"\n's' = save PDF",10,15);
}
void mouseDragged(){
    path.add(new PVector(mouseX,mouseY));
}
void keyPressed(){
    if(keyCode == LEFT) tightness = constrain(tightness-0.1,0.0,3.0);
    if(keyCode == RIGHT) tightness = constrain(tightness+0.1,0.0,3.0);
    if(key == ' ') path.clear();
    if(key == 's') record = true;
}
In addition to George's great answer (which I've +1'd), I wanted to offer a more basic option:
The problem, like George said, is that when you move the mouse, you actually skip over a bunch of pixels. So if you only draw ellipses or points at mouseX, mouseY then you'll end up with gaps.
The dumb fix: the pmouseX and pmouseY variables hold the previous position of the cursor.
That might not sound very useful, but they allow you to solve exactly your problem. Instead of drawing ellipses or points at the current mouse position, draw a line from the previous position to the current position. This will eliminate any gaps in your lines.
void draw(){
    line(pmouseX, pmouseY, mouseX, mouseY);
}
Shameless self-promotion: I've written a tutorial on getting user input in Processing available here.
Note: This dumb solution will only work if you aren't redrawing the background every frame. If you need to redraw everything every frame, then George's answer is the way to go.

Find the path and max-weighted edge in a Minimax path-finding solution?

I am currently working on a programming assignment: given a large weighted unconnected graph (1 < V < 2000,
0 < E < 100 000), find the maximum-weighted edge along the minimum-weighted path from "source" to "destination".
What I've got so far is storing the graph in an AdjacencyList (Vector of Vector of IntegerPair where first integer is the neighbor and second is the weight of the edge).
I've also obtained the Minimum Spanning Tree by using Prim's algorithm:
private static void process(int vtx) {
    taken.set(vtx, true);
    for (int j = 0; j < AdjList.get(vtx).size(); j++) {
        IntegerPair v = AdjList.get(vtx).get(j);
        if (!taken.get(v.first())) {
            pq.offer(new IntegerPair(v.second(), v.first())); //sort by weight then by adjacent vertex
        }
    }
}
void PreProcess() {
    Visited = new Vector<Boolean>();
    taken = new Vector<Boolean>();
    pq = new PriorityQueue<IntegerPair>();
    taken.addAll(Collections.nCopies(V, false));
    process(0);
    int numTaken = 1;
    int mst_cost = 0;
    while (!pq.isEmpty() && numTaken != V) { //do this until all V vertices are taken (or E = V - 1 edges are taken)
        IntegerPair front = pq.poll();
        if (!taken.get(front.second())) { // we have not connected this vertex yet
            mst_cost += front.first(); // add the weight of this edge
            process(front.second());
            numTaken++;
        }
    }
}
What I am stuck on now is how to find the path from source to destination and return the maximum-weight edge in the query below:
int Query(int source, int destination) {
    int ans = 0;
    return ans;
}
I was told to use Depth-First Search to traverse the resulting MST, but I think the DFS will also traverse vertices that are not on the correct path (am I right?). And how do I find the maximum edge?
(This problem is not related to any SSSP algorithm, because I haven't been taught Dijkstra's, etc.)
One possible way to do this would be to use Kruskal's MST algorithm. It is a greedy algorithm that starts with an empty graph, then repeatedly adds the lightest edge that does not produce a cycle. This satisfies the properties of a tree, while assuring the minimum-weighted path.
To find the maximum weighted edge, you can also use the properties of the algorithm. Since you know that EdgeWeight(n) <= EdgeWeight(n+1), the last edge you add to the graph will be the maximum edge.
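For the Query part itself, here is a minimal sketch (written in C++-style code rather than the question's Java, and passing an assumed MST adjacency list mstAdj of (neighbour, weight) pairs explicitly instead of using the global state above). Because a tree contains exactly one path between any two vertices, a DFS that never walks back to its parent follows that path, and you only need to keep track of the heaviest edge seen along it:
#include <algorithm>
#include <utility>
#include <vector>
// Returns the maximum edge weight on the unique tree path from current to destination,
// or -1 if destination does not lie in the subtree explored from current.
int maxEdgeOnPath(const std::vector<std::vector<std::pair<int, int>>>& mstAdj,
                  int current, int destination, int parent) {
    if (current == destination)
        return 0; // reached the target; no further edge needed
    for (const std::pair<int, int>& edge : mstAdj[current]) {
        if (edge.first == parent)
            continue; // never walk back up the tree
        int rest = maxEdgeOnPath(mstAdj, edge.first, destination, current);
        if (rest >= 0)
            return std::max(edge.second, rest); // destination found down this branch
    }
    return -1;
}
int Query(const std::vector<std::vector<std::pair<int, int>>>& mstAdj,
          int source, int destination) {
    return maxEdgeOnPath(mstAdj, source, destination, -1);
}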

THREE.js: Why is my object flipping whilst travelling along a spline?

Following up from my original post Three.JS Object following a spline path - rotation / tangent issues & constant speed issue, I am still having the issue that the object flips at certain points along the path.
View this happening on this fiddle: http://jsfiddle.net/jayfield1979/T2t59/7/
function moveBox() {
    if (counter <= 1) {
        box.position.x = spline.getPointAt(counter).x;
        box.position.y = spline.getPointAt(counter).y;
        tangent = spline.getTangentAt(counter).normalize();
        axis.cross(up, tangent).normalize();
        var radians = Math.acos(up.dot(tangent));
        box.quaternion.setFromAxisAngle(axis, radians);
        counter += 0.005;
    } else {
        counter = 0;
    }
}
The above code is what moves my objects along the defined spline path (an oval in this instance). It was mentioned by @WestLangley that: "Warning: cross product is not well-defined if the two vectors are parallel."
As you can see, from the shape of the path, I am going to encounter a number of parallel vectors. Is there anything I can do to prevent this flipping from happening?
To answer the "why" question in the title: the reason it's happening is that at some points on the curve the up vector (1,0,0) and the tangent are parallel. This means their cross product is zero and the construction of the quaternion fails.
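Concretely, $\|\mathbf{up}\times\mathbf{t}\| = \|\mathbf{up}\|\,\|\mathbf{t}\|\sin\theta$, so wherever the tangent is parallel or anti-parallel to up the cross product is the zero vector; normalizing it then gives an undefined axis for setFromAxisAngle, and near those points the axis is numerically unstable as well, which is what shows up as the sudden flip.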
You could follow WestLangley's suggestion. You really want the up direction to be the normal to the plane the track is in.
Quaternion rotation is tricky to understand; the setFromAxisAngle function rotates around the axis by a given angle.
If the track lies in the X-Y plane then we will want to rotate around the Z-axis. To find the angle, use Math.atan2 on the tangent:
var angle = Math.atan2(tangent.y,tangent.x);
Putting this together, set
var ZZ = new THREE.Vector3( 0, 0, 1 );
and
tangent = spline.getTangentAt(counter).normalize();
var angle = Math.atan2(tangent.y,tangent.x);
box.quaternion.setFromAxisAngle(ZZ, angle);
If the track leaves the X-Y plane things will get trickier.

Calculating whether or not a 3D eyepoint is behind a 2D plane or upwards

The setup
Draw XY-coordinate axes on a piece of paper. Write a word on it along the X-axis, so that the word's centerpoint is at the origin (half on the positive side of X/Y, the other half on the negative side of X/Y).
Now, if you flip the paper upside down you'll notice that the word is mirrored in relation to both X- and Y-axis. If you look from behind the paper, it's mirrored in relation to Y-axis. If you look at it from behind and upside down, it's mirrored in relation to X-axis.
Ok, I have points in a 2D plane (vertices) that are created in a similar way at the origin, and I need to apply exactly the same rule to them. To make things interesting:
The 2D plane is actually 3D, each point (vertex) being (x, y, 0). Initially the vertices are positioned at the origin and their normal is Pn(0,0,1). => Correctly seen when looked at from point Pn towards the origin.
The vertex-plane has its own rotation matrix [Rp] and position P(x,y,z) in the 3D world. The rotation is applied before positioning.
The 3D world is "right handed". The viewer would be looking towards the origin from some distance along the positive Z-axis, but the world is also oriented by the rotation matrix [Rw]. [Rw] * (0,0,1) would point directly at the viewer's eye.
From those I need to calculate when the vertex-plane should be mirrored and by which axis. The mirroring itself can be done before applying [Rp] and P by:
Vertices vertices = Get2DPlanePoints();
int MirrorX = 1; // -1 to mirror, 1 NOT to mirror
int MirrorY = 1; // -1 to mirror, 1 NOT to mirror
Matrix WorldRotation = GetWorldRotationMatrix();
MirrorX = GetMirrorXFactor(WorldRotation);
MirrorY = GetMirrorYFactor(WorldRotation);
foreach(Vertex v in vertices)
{
    v.X = v.X * MirrorX * MirrorY;
    v.Y = v.Y * MirrorY;
}
// Apply rotation...
// Add position...
The question
So I need GetMirrorXFactor() and GetMirrorYFactor() functions that return -1 if the viewer's eyepoint is at a greater "X/Y" angle than ±90 degrees in relation to the vertex-plane's normal after the rotation and world orientation. I have already solved this, but I'm looking for more "elegant" mathematics. I know that rotation matrices somehow contain info about how much is rotated around which axis, and I believe that can be utilized here.
My Solution for MirrorX:
// Matrix multiplications. Vectors are vertical matrices here.
Pnr = [Rp] * Pn      // Rotated vertices' normal
Pur = [Rp] * (0,1,0) // Rotated vertices' "up-vector"
Wnr = [Rw] * (0,0,1) // Rotated eye-vector with world's orientation
                     // = vector pointing directly at the viewer's eye
// Use the rotated up-vector as the normal of a new plane and project the viewer's
// eye onto it. dot = dot product between vectors.
Wnrx = Wnr - (Wnr dot Pur) * Pur // "X-projected" eye.
// Calculate the angle between the eye's X-component and the plane's rotated normal.
// ||V|| = V's norm.
angle = arccos( (Wnrx dot Pnr) / ( ||Wnrx|| * ||Pnr|| ) )
if (angle > PI / 2)
    MirrorX = -1; // DO mirror
else
    MirrorX = 1;  // DON'T mirror
The solution for MirrorY can be done in a similar way using the viewer's up vector and the vertex-plane's right vector.
Better solution?
if (([Rp]*(1,0,0)) dot ([Rw]*(1,0,0))) < 0
    MirrorX = -1; // DO mirror
else
    MirrorX = 1;  // DON'T mirror
if (([Rp]*(0,1,0)) dot ([Rw]*(0,1,0))) < 0
    MirrorY = -1; // DO mirror
else
    MirrorY = 1;  // DON'T mirror
Explaining in more detail is difficult without diagrams, but if you have trouble with this solution we can work through some cases.
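A short way to see why this works, in matrix terms: $([R_p]\hat e_x)\cdot([R_w]\hat e_x) = \hat e_x^{T} R_p^{T} R_w \hat e_x = (R_p^{T} R_w)_{11}$, and likewise $([R_p]\hat e_y)\cdot([R_w]\hat e_y) = (R_p^{T} R_w)_{22}$, so the mirror factors are just the signs of the first two diagonal entries of the relative rotation $R_p^{T} R_w$. A negative entry means the plane's axis has been turned more than 90 degrees away from the corresponding world axis, which is exactly when mirroring is needed; this matches the intuition in the question that the rotation matrices already contain the needed information.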

CSG operations on implicit surfaces with marching cubes

I render isosurfaces with marching cubes (or perhaps marching squares, as this is 2D) and I want to do set operations like set difference, intersection and union. I thought this was easy to implement, by simply choosing between two vertex scalars from two different implicit surfaces, but it is not.
For my initial testing, I tried with two circles, and the set operation difference, i.e. A - B. One circle is moving and the other one is stationary. Here's the approach I tried when picking vertex scalars and when classifying corner vertices as inside or outside. The code is written in C++. OpenGL is used for rendering, but that's not important. Normal rendering without any CSG operations does give the expected result.
void march(const vec2& cmin, //min x and y for the grid cell
           const vec2& cmax, //max x and y for the grid cell
           std::vector<vec2>& tri,
           float iso,
           float (*cmp1)(const vec2&), //distance from stationary circle
           float (*cmp2)(const vec2&)  //distance from moving circle
           )
{
    unsigned int squareindex = 0;
    float scalar[4];
    vec2 verts[8];
    /* initial setup of the grid cell */
    verts[0] = vec2(cmax.x, cmax.y);
    verts[2] = vec2(cmin.x, cmax.y);
    verts[4] = vec2(cmin.x, cmin.y);
    verts[6] = vec2(cmax.x, cmin.y);
    float s1,s2;
    /**********************************
    ********For-loop of interest******
    *******Set difference between ****
    *******two implicit surfaces******
    **********************************/
    for(int i=0,j=0; i<4; ++i, j+=2){
        s1 = cmp1(verts[j]);
        s2 = cmp2(verts[j]);
        if((s1 < iso)){ //if inside circle1
            if((s2 < iso)){ //if inside circle2
                scalar[i] = s2; //then set the scalar to the moving circle
            } else {
                scalar[i] = s1; //only inside circle1
                squareindex |= (1<<i); //mark as inside
            }
        }
        else {
            scalar[i] = s1; //inside neither circle
        }
    }
    if(squareindex == 0)
        return;
    /* Usual interpolation between edge points to compute
       the new intersection points */
    verts[1] = mix(iso, verts[0], verts[2], scalar[0], scalar[1]);
    verts[3] = mix(iso, verts[2], verts[4], scalar[1], scalar[2]);
    verts[5] = mix(iso, verts[4], verts[6], scalar[2], scalar[3]);
    verts[7] = mix(iso, verts[6], verts[0], scalar[3], scalar[0]);
    for(int i=0; i<10; ++i){ //10 = maximum 3 triangles, + one end token
        int index = triTable[squareindex][i]; //look up our indices for triangulation
        if(index == -1)
            break;
        tri.push_back(verts[index]);
    }
}
This gives me weird jaggies:
(source: mechcore.net)
It looks like the CSG operation is done without interpolation. It just "discards" the whole triangle. Do I need to interpolate in some other way, or combine the vertex scalar values? I'd love some help with this.
A full testcase can be downloaded HERE
EDIT: Basically, my implementation of marching squares works fine. It is my scalar field which is broken, and I wonder what the correct way would look like. Preferably I'm looking for a general approach to implement the three set operations I discussed above, for the usual primitives (circle, rectangle/square, plane)
EDIT 2: Here are some new images after implementing the answerer's whitepaper:
1. Difference
2. Intersection
3. Union
EDIT 3: I implemented this in 3D too, with proper shading/lighting:
1. Difference between a greater sphere and a smaller sphere
2. Difference between a greater sphere and a smaller sphere in the center, clipped by two planes on both sides, and then union with a sphere in the center.
3. Union between two cylinders.
This is not how you mix the scalar fields. Your scalars say one thing, but your flags for whether you are inside or not say another. First merge the fields, then render as if you were doing a single compound object:
for(int i=0,j=0; i<4; ++i, j+=2){
    s1 = cmp1(verts[j]);
    s2 = cmp2(verts[j]);
    s = max(s1, iso-s2); // This is the secret sauce
    if(s < iso) { // inside circle1, but not inside circle2
        squareindex |= (1<<i);
    }
    scalar[i] = s;
}
This article might be helpful: Combining CSG modeling with soft blending using Lipschitz-based implicit surfaces.
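For reference, assuming signed-distance-like fields where a point is inside when its scalar is below the iso level (and taking iso = 0 for simplicity), all three set operations can be built the same way:
$$s_{A\cup B}=\min(s_A,s_B),\qquad s_{A\cap B}=\max(s_A,s_B),\qquad s_{A\setminus B}=\max(s_A,\,-s_B);$$
for a non-zero iso level the complement term is mirrored about that level instead (i.e. $2\,\mathrm{iso}-s_B$). The merged field is then fed to the unmodified marching squares/cubes code, exactly as in the loop above.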
