CGAL - 2D Delaunay triangulation - remove is not removing

I need to remove some points from a 2D Delaunay triangulation. I'm calling remove() but it is not working. The code is:
//---TYPEDEFS
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Triangulation_vertex_base_2<K> Vb;
typedef CGAL::Constrained_triangulation_face_base_2<K> Fb;
typedef CGAL::Triangulation_data_structure_2<Vb,Fb> TDS;
typedef CGAL::Exact_predicates_tag Itag;
typedef CGAL::Constrained_Delaunay_triangulation_2<K,TDS,Itag> CDT;
typedef CDT::Point Point;
typedef CDT::Vertex_handle Vertex_handle;
typedef CDT::Vertex_circulator Vertex_circulator;
typedef CDT::Face_handle Face_handle;
...
int RemovePontosDesabilitados(CDT& cdt, tysetPonto3D& SetPonDesabilitados)
{
    for (auto& RPonto : SetPonDesabilitados)
    {
        auto Face(cdt.locate(Point(RPonto.x, RPonto.y)));
        if (Face != NULL)
        {
            auto Tam = cdt.number_of_vertices();
            auto Pon = cdt.insert(Point(RPonto.x, RPonto.y)); //--- return the existing point
            cdt.remove(Pon);
            Tam = cdt.number_of_vertices();
        }
    }
    return 1;
}
After the remove, Tam still has the same value.
How do I correctly remove points from a 2D Delaunay triangulation?
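For comparison, a minimal sketch of removal through a vertex handle (assuming the disabled points were inserted with exactly these coordinates, so an exact comparison finds the stored vertex):

Point query(RPonto.x, RPonto.y);
for (auto vit = cdt.finite_vertices_begin(); vit != cdt.finite_vertices_end(); ++vit)
{
    if (vit->point() == query)
    {
        cdt.remove(vit); // the Vertex_handle overload actually deletes the vertex
        break;           // handles and iterators are invalidated by remove()
    }
}

If the coordinates do not match bit for bit, insert() creates a brand-new vertex and remove() merely deletes that one again, which would explain why the vertex count never changes.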


CGAL - 2D Delaunay triangulation - remove is not removing - Part 2

This question is a continuation of
CGAL - 2D Delaunay triangulation - remove is not removing
Trying to solve the problem, I decided to pass the point returned by cdt.locate() to the deletion, so there should be no reason for the point not to be removed:
//---TYPEDEFS
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Triangulation_vertex_base_2<K> Vb;
typedef CGAL::Constrained_triangulation_face_base_2<K> Fb;
typedef CGAL::Triangulation_data_structure_2<Vb,Fb> TDS;
typedef CGAL::Exact_predicates_tag Itag;
typedef CGAL::Constrained_Delaunay_triangulation_2<K,TDS,Itag> CDT;
typedef CDT::Point Point;
typedef CDT::Vertex_handle Vertex_handle;
typedef CDT::Vertex_circulator Vertex_circulator;
typedef CDT::Face_handle Face_handle;
...
int RemovePontosDesabilitados(CDT& cdt, tysetPonto3D& SetPonDesabilitados)
{
    for (auto& RPonto : SetPonDesabilitados)
    {
        Point PonRemover(RPonto.x, RPonto.y);
        auto Face(cdt.locate(PonRemover));
        if (Face != NULL)
        {
            int C(0);
            bool Achou(false);
            while (C < 3 && !Achou)
            {
                if (fabs(Face->vertex(C)->point().x() - RPonto.x) < 1e-5 &&
                    fabs(Face->vertex(C)->point().y() - RPonto.y) < 1e-5)
                {
                    Achou = true;
                }
                else
                {
                    C++;
                }
            }
            if (Achou)
            {
                Point PonRemover(Face->vertex(C)->point().x(), Face->vertex(C)->point().y());
                auto Tam = cdt.number_of_vertices();
                auto PonIns = cdt.insert(PonRemover);
                Tam = cdt.number_of_vertices();
                cdt.remove(PonIns);
                Tam = cdt.number_of_vertices();
            }
        }
    }
    return 1;
}
But at cdt.insert(PonRemover) the point is inserted as a new vertex (note that this point is exactly the same point found via cdt.locate(PonRemover)), and at cdt.remove(PonIns) an exception is raised and the program terminates.
Why is this exception being raised?
Thank you
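A minimal sketch of the same removal done through the vertex handle already found on the located face, skipping the intermediate insert (an assumption about the intended fix, not verified against the asker's data):

if (Achou)
{
    Vertex_handle Vh = Face->vertex(C); // the matching vertex itself
    cdt.remove(Vh);                     // no re-insert needed before removing
}

Note that Constrained_Delaunay_triangulation_2::remove(Vertex_handle) has the documented precondition that the vertex is not incident to a constrained edge; violating that precondition is one way such an exception can be raised.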

Structure pointer dereference

I am trying to pass a structure of point arrays as shown. How can I correctly dereference the address to change the value the address points to?
// header file "header.h"
typedef struct {
    double x;
    double y;
} Pointbase;
typedef Pointbase *XYpt;
typedef struct {
    XYpt xy[1];
} ChartPointsbase;
typedef ChartPointsbase **PointArray;
#include "header.h"
...
void npCluster(double drop, XYpt *newpt, PointArray outpts)
{
double xx[2]={-15, 100};
int i;
outpts = (PointArray)malloc(sizeof(PointArray) * 2);
for (i=0;i<2; i++)
{
(*(*outpts)->xy[i])->x=xx[i];
(*(*outpts)->xy[i])->y=drop;
}
}
The compiler accepts the following line, but it does not compute the right values:
(*outpts)->xy[i]->y = drop;
Any suggestions will be most appreciated.
I figured it out for the C compiler as follows:
Define the struct with two 1D arrays, each of size dimSize, allocate memory to the handles, set the size = k, and dereference as follows:
for (i=0; i<k; i=i++)
{
(*(outpts->xx))->dat[i]=135*i+j;
(*(outpts->yy))->dat[i]=drop;
}
For further nesting, say a struct array of the above with two unequal point arrays, where cht is an array of PointArray:
typedef struct {
    int32 dimSize;
    C1Hdl cht[1];
} XYchartCluster;
// initialize 1st array
for (i = 0; i < k; i++)
{
    (*(*(xycht)->cht[0])->xx)->dat[i] = 135 * i + j;
    (*(*(xycht)->cht[0])->yy)->dat[i] = drop;
}
// initialize 2nd array with values from point npt
for (i = 0; i < sz; i++)
{
    (*(*(xycht)->cht[1])->xx)->dat[i] = npt->x;
    (*(*(xycht)->cht[1])->yy)->dat[i] = npt->y;
}
/*
Note: the size of each array in the chart should be initialized and
memory assigned (for dynamically changing sizes).
*/
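Using the typedefs from header.h above, here is a minimal sketch of the allocation each level needs before it can be dereferenced (my illustration using plain malloc; the handle-based answer above manages its memory differently):

#include <stdlib.h>
#include "header.h" // Pointbase, XYpt, ChartPointsbase, PointArray as above

void fillPoints(double drop)
{
    enum { N = 2 };
    // xy[1] is the "struct hack": over-allocate so the block holds N pointers.
    ChartPointsbase *block = (ChartPointsbase *)malloc(
        sizeof(ChartPointsbase) + (N - 1) * sizeof(XYpt));
    ChartPointsbase **handle = &block; // a PointArray is a pointer to a pointer
    for (int i = 0; i < N; i++)
    {
        (*handle)->xy[i] = (XYpt)malloc(sizeof(Pointbase));
        (*handle)->xy[i]->x = 0.0;  // xy[i] is already a Pointbase*,
        (*handle)->xy[i]->y = drop; // so a single -> reaches the fields
    }
}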

Where is the implementation of pixman_region32_init()?

lib: http://www.pixman.org/
repository: https://cgit.freedesktop.org/pixman/tree/
void pixman_region32_init (pixman_region32_t *region);
I cannot find the implementation of that function.
Thanks.
The types it operates on are declared in pixman/pixman.h (pixman-0.34.0):
typedef struct pixman_region32_data pixman_region32_data_t;
typedef struct pixman_box32 pixman_box32_t;
typedef struct pixman_rectangle32 pixman_rectangle32_t;
typedef struct pixman_region32 pixman_region32_t;
struct pixman_region32_data {
    long size;
    long numRects;
    /* pixman_box32_t rects[size]; in memory but not explicitly declared */
};
struct pixman_rectangle32
{
    int32_t x, y;
    uint32_t width, height;
};
struct pixman_box32
{
    int32_t x1, y1, x2, y2;
};
struct pixman_region32
{
    pixman_box32_t extents;
    pixman_region32_data_t *data;
};
It's in pixman-region32.c. You don't see it because those function names are generated by the PREFIX macro, and the shared code in pixman-region.c is then used. See here:
typedef pixman_box32_t box_type_t;
typedef pixman_region32_data_t region_data_type_t;
typedef pixman_region32_t region_type_t;
typedef int64_t overflow_int_t;
typedef struct {
    int x, y;
} point_type_t;
#define PREFIX(x) pixman_region32##x
#define PIXMAN_REGION_MAX INT32_MAX
#define PIXMAN_REGION_MIN INT32_MIN
#include "pixman-region.c"
It first sets the PREFIX macro to pixman_region32 and then imports the code from pixman-region.c.
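A minimal, self-contained sketch of the same trick (hypothetical names, not pixman's real sources), showing how token pasting generates the final function name:

#include <cstdio>

// Shared "generic" code; in pixman this lives in pixman-region.c and is
// #included by each wrapper file after PREFIX has been defined.
#define PREFIX(x) demo_region32##x

void PREFIX(_init)(int *region) // expands to demo_region32_init
{
    *region = 0;
}

int main()
{
    int r;
    demo_region32_init(&r); // the generated name exists only after macro expansion
    std::printf("%d\n", r);
    return 0;
}

That is why grepping for pixman_region32_init finds no definition: the name is assembled by the preprocessor.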

Purpose of * and & symbols behind a datatype?

I am learning to implement a graph using C++. I came across the following code. Could anyone explain the function of the symbols * and & behind the data types "vertex" and "string"?
#include <iostream>
#include <vector>
#include <map>
#include <string>
using namespace std;
struct vertex {
typedef pair<int, vertex*> ve;
vector <ve> adj; //cost of edge, distination to vertex
string name;
vertex (string s) : name(s) {}
};
class gragh
{
public:
typedef map<string, vertex *> vmap;
vmap work;
void addvertex (const string&);
void addedge (const string& from, const string&, double cost);
};
void gragh::addvertex (const string &name)
{
vmap::iterator itr = work.find(name);
if (itr == work.end())
{
vertex *v;
v = new vertex(name);
work[name] = v;
return;
}
cout << "Vertex alreay exist";
}
int main()
{
return 0;
}
In an expression, '*' dereferences a pointer, i.e., it goes to the address stored in the pointer:
int x = *p;
This means x gets the value stored at the memory address p points to.
x = &p;
This means x gets the address of the memory location where p itself resides.
In a declaration the same symbols are part of the type instead: vertex* declares a pointer to vertex, and const string& declares a reference to string (used here so the parameter is not copied).
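A short self-contained illustration of the two roles (my addition, using hypothetical names):

#include <iostream>
#include <string>

// & in a declaration: 's' is a reference parameter, so no copy is made
void print(const std::string& s)
{
    std::cout << s << '\n';
}

int main()
{
    int v = 42;
    int* p = &v; // * in a declaration makes p a pointer; & takes v's address
    int x = *p;  // * in an expression dereferences p, so x == 42
    std::cout << x << ' ' << *p << '\n';
    print("hello"); // the const reference binds directly to the argument
    return 0;
}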

Shortest path between two nodes in a graph based on a different condition

I was trying to solve this problem on HackerRank. Initially, I thought this would be a straightforward Dijkstra implementation, but this was not to be.
The code I have written is
#include <iostream>
#include <algorithm>
#include <climits>
#include <vector>
#include <set>
using namespace std;
typedef struct edge { unsigned int to, length; } edge;
int dijkstra(const vector< vector<edge> > &graph, int source, int target) {
vector< int > min_distance(graph.size(), INT_MAX);
min_distance[ source ] = 0;
std::vector< bool > visited(graph.size(), false);
set< pair<int,int> > active_vertices;
active_vertices.insert( {0,source} );
while (!active_vertices.empty()) {
int where = active_vertices.begin()->second;
int where_distance = active_vertices.begin()->first;
visited[where] = true;
active_vertices.erase( active_vertices.begin());
for (auto edge : graph[where])
{
if(!visited[edge.to])
{
int cost = where_distance | edge.length;
min_distance[edge.to] = min(cost, min_distance[edge.to]);
active_vertices.insert({cost, edge.to});
}
}
}
return min_distance[target];
}
int main( int argc, char const *argv[])
{
unsigned int n, m, source, target;
cin>>n>>m;
std::vector< std::vector<edge> > graph(n, std::vector<edge>());
while(m--)
{
unsigned int from, to, dist;
cin>>from>>to>>dist;
graph[from-1].push_back({ to-1, dist});
graph[to-1].push_back({from-1, dist});
}
cin>>source>>target;
cout<<dijkstra(graph, source-1, target-1)<<endl;
return 0;
}
The approach I have is pretty simple. At each vertex I consume its outgoing edges and update active_vertices with the updated cost, provided the target vertex has not yet been visited. A min_distance vector keeps track of the minimum distance so far.
But this fails for half the test cases. I am not able to find out why from the input, as the input file has a large number of edges and recreating it is quite difficult.
It would be nice if you could help me with what's wrong with my current approach. I'm also a bit confused about whether its running time is exponential.
What would be the running time of this code?
You missed this: multiple edges are allowed. As such, you have to choose which edge you want to use (not necessarily the one with the smallest weight C).
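A small self-contained demonstration of the answer's point (my illustration, not part of the original answer): with OR-combined costs, which of two parallel edges is better depends on the cost already accumulated, so no single parallel edge dominates.

#include <iostream>

int main()
{
    unsigned w1 = 4, w2 = 3; // two parallel edges u -> v, weights 100b and 011b
    std::cout << (3u | w1) << ' ' << (3u | w2) << '\n'; // arriving with cost 3: 7 vs 3, w2 wins
    std::cout << (4u | w1) << ' ' << (4u | w2) << '\n'; // arriving with cost 4: 4 vs 7, w1 wins
    return 0;
}

So a search that greedily commits to the locally cheapest parallel edge can miss the optimal OR-path, which is one reason a plain Dijkstra run can fail here.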
