I'm working on surface reconstruction, but I've run into an issue:
I want to use the DIRICHLET boundary condition in Poisson reconstruction, but PCL's Poisson implementation doesn't seem to support specifying a boundary condition; it always uses NEUMANN.
So I wonder how to use the DIRICHLET boundary condition with PCL's Poisson reconstruction.
BTW: my goal is to calculate the volume of a container, but my point cloud isn't watertight, so I need the algorithm to 'imagine' the surface across the holes. CloudCompare supports specifying the boundary condition, and it works well; in PCL, however, the result with the NEUMANN boundary condition is terrible.
The mesh generated by PCL Poisson (NEUMANN condition) looks like this:
The mesh generated by CloudCompare (with the DIRICHLET condition specified) looks like this:
The original PoissonRecon code is in this GitHub repository. You can also find prebuilt executables for the Windows command line there (--bType sets the boundary condition). This option is available in the command-line executable starting from version 9.0.
[--bType <boundary type>]
    This integer specifies the boundary type for the finite elements.
    Valid values are:
        1: Free boundary constraints
        2: Dirichlet boundary constraints
        3: Neumann boundary constraints
    The default value for this parameter is 3 (Neumann).
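For example, a typical invocation requesting Dirichlet constraints might look like the line below (the file names and depth value are placeholders, not from the original post):

    PoissonRecon --in cloud.ply --out mesh.ply --depth 10 --bType 2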
CloudCompare uses version 7.
PCL (1.12.0 at the moment of this post) uses version 4 of PoissonRecon.
Open3D (0.14.1 at the moment of this post) includes a wrapper over version 12, which supports both boundary conditions. It is, however, hard-coded to use NEUMANN. You should be able to change the enum and compile a version of Open3D that uses the DIRICHLET condition fairly easily (I have never tried this myself).
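For reference, the Python-side call would not change after such a recompile; a minimal Open3D Poisson reconstruction looks like this (the file name and depth value are placeholders):

    import open3d as o3d

    # Poisson reconstruction needs oriented normals.
    pcd = o3d.io.read_point_cloud("container.pcd")
    pcd.estimate_normals()

    # The boundary condition is baked in at compile time inside Open3D's
    # wrapper; it is not exposed as a parameter here.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)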
Alternatively (if you can't use the original console app or recompile Open3D), you can try to work with what you've got.
You can try to identify the "imaginary" faces based on their area (lower sampling density means larger triangles) and remove them. The original repository offers a SurfaceTrimmer tool (another console project) that does just that, based on the density values emitted by the reconstruction.
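Continuing the Open3D sketch above, a rough version of that density-based trimming could look like this (the 5% quantile threshold is an arbitrary assumption you would have to tune):

    import numpy as np

    # Drop the vertices with the lowest reconstruction density; these tend
    # to belong to the interpolated, "imaginary" parts of the surface.
    densities = np.asarray(densities)
    mask = densities < np.quantile(densities, 0.05)
    mesh.remove_vertices_by_mask(mask)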
Close the remaining open mesh using either some hole-closing method or a convex hull.
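If the container is roughly convex, the crudest closing step is a convex hull (again building on the Open3D sketch above; for concave containers a proper hole-filling method is needed):

    # Only reasonable if the container is approximately convex.
    hull, _ = mesh.compute_convex_hull()
    print(hull.get_volume())  # volume of the now-watertight hull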
I am having trouble navigating the source code to see how the design variables in the initial population for the SimpleGA and DifferentialEvolution Drivers are set. Is there some sort of Latin Hypercube sampling of the design variable ranges? Do the initial values I set in my problem instance get used like they would for the other drivers (Scipy and pyOptSparse)?
Many thanks,
Garrett
For these two drivers, the initial value in the model is not used. It's not even clear to me what it would mean to use that value directly, since you need a stochastically generated population; but I'm admittedly not an expert on the latest GA population-initialization methods. However, I can answer the question of how they do get initialized as of OpenMDAO V3.17:
Simple GA Driver:
This driver does seem to use an LHS sampling like this:
new_gen = np.round(lhs(self.lchrom, self.npop, criterion='center',
                       random_state=random_state))
new_gen[0] = self.encode(x0, vlb, vub, bits)
Differential Evolution Driver:
This driver uses a uniform random distribution like this:
population = rng.random([self.npop, self.lchrom]) * (vub - vlb) + vlb # scale to bounds
Admittedly, it doesn't make a whole lot of sense that the initialization methods are different, and perhaps there should be an option to pick from a set of methods or to provide your own initial population somehow. A POEM and/or pull request to improve this would be most welcome.
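For illustration, here is a self-contained sketch of the two schemes side by side (using pyDOE2's lhs, which OpenMDAO wraps; the population size and bounds are made up):

    import numpy as np
    from pyDOE2 import lhs

    npop, lchrom = 8, 3
    vlb = np.zeros(lchrom)  # lower bounds
    vub = np.ones(lchrom)   # upper bounds
    rng = np.random.default_rng(0)

    # SimpleGADriver-style: centered Latin Hypercube sample (the real driver
    # then rounds and encodes this into bit-string chromosomes).
    lhs_pop = lhs(lchrom, samples=npop, criterion='center')

    # DifferentialEvolutionDriver-style: uniform random, scaled to the bounds.
    uniform_pop = rng.random([npop, lchrom]) * (vub - vlb) + vlb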
I'm wondering if this is correct. Most of the Implicit and Explicit components I have created use the line:
self.declare_coloring(wrt='*', method='cs', tol=1.0E-12, show_sparsity=True)
Then when I get to the file that runs the driver I use:
p.driver.declare_coloring()
And in my /coloring_files directory I have a 'col' and a 'disc' for each component.
coloring_traj_phases_phase0_rhs_col_brakeThrottle.pkl
coloring_traj_phases_phase0_rhs_col_implicitOutputs.pkl
coloring_traj_phases_phase0_rhs_col_powerTrain.pkl
coloring_traj_phases_phase0_rhs_col_spin.pkl
coloring_traj_phases_phase0_rhs_col_timeAdder.pkl
coloring_traj_phases_phase0_rhs_col_timeSpace.pkl
coloring_traj_phases_phase0_rhs_col_tracking.pkl
coloring_traj_phases_phase0_rhs_col_tyreConstraint.pkl
coloring_traj_phases_phase0_rhs_col_tyre.pkl
coloring_traj_phases_phase0_rhs_disc_brakeThrottle.pkl
coloring_traj_phases_phase0_rhs_disc_implicitOutputs.pkl
coloring_traj_phases_phase0_rhs_disc_powerTrain.pkl
coloring_traj_phases_phase0_rhs_disc_spin.pkl
coloring_traj_phases_phase0_rhs_disc_timeAdder.pkl
coloring_traj_phases_phase0_rhs_disc_timeSpace.pkl
coloring_traj_phases_phase0_rhs_disc_tracking.pkl
coloring_traj_phases_phase0_rhs_disc_tyreConstraint.pkl
coloring_traj_phases_phase0_rhs_disc_tyre.pkl
total_coloring.pkl
Are both sets of files needed, or am I repeating an operation twice? I'm also wondering whether declaring coloring on the driver uses a method other than CS. I do intend to use total_coloring.pkl for static coloring.
Dymos can use one of two methods for transcription: The Radau Pseudospectral Method or the high-order GaussLobatto method.
The GaussLobatto method is a two-step process, in the sense that the ODE is evaluated twice:

1. The ODE is evaluated at the "discretization" nodes.
2. The values and rates at the discretization nodes are used to interpolate the states and state rates to the "collocation" nodes.
3. The ODE is evaluated a second time at the collocation nodes, using the interpolated state values from step 2.
4. The interpolated rates are compared to the rates output by the ODE at the collocation nodes (the differences are called the defects); if they're tiny, the physics are assumed to be accurately satisfied.
The Radau transcription follows a similar process, except the collocation nodes are a subset of the discretization nodes, so interpolation isn't necessary, and the ODE only needs to be evaluated once.
If you change your transcription from dymos.GaussLobatto to dymos.Radau, then you'll only have one partial-coloring file for each of your ODE components. Otherwise, both need to have their coloring worked out separately.
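A minimal sketch of that switch (the segment count, order, and ODE class name are placeholders):

    import dymos as dm

    # Radau: collocation nodes are a subset of the discretization nodes,
    # so each ODE component needs only one partial-coloring file.
    tx = dm.Radau(num_segments=20, order=3)
    # tx = dm.GaussLobatto(num_segments=20, order=3)  # two colorings per component

    phase = dm.Phase(ode_class=MyODE, transcription=tx)  # MyODE: your ODE class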
I am getting this error, and this post tells me that I should decrease sigma. But here is the thing: this code was working fine a couple of months ago, and nothing has changed in the data or the code. I am wondering why this error appeared out of the blue.
And a second point: when I lower sigma, e.g. to 13.1, it appears to run (but I have been waiting for an hour).
sigma = 203.9057
dimyx1 = 1024
A22den = density(Lnetwork, sigma, distance="path", continuous=TRUE, dimyx=dimyx1)
About Lnetwork
Point pattern on linear network
69436 points
Linear network with 8417 vertices and 8563 lines
Enclosing window: rectangle = [143516.42, 213981.05] x [3353367, 3399153] units
Error: Required number of iterations = 1087633109 exceeds iterMax = 1e+06 ; either increase iterMax, dx, dt or reduce sigma
This is a question about the spatstat package.
The code for handling data on a linear network is still under active development. It has changed in recent public releases of spatstat, and has changed again in the development version. You need to specify exactly which version you are using.
The error report says that the required number of iterations of the algorithm is too large. This occurs because either the smoothing bandwidth sigma is too large, or the spacing dx between sample points along the network is too small. The number of iterations is proportional to (sigma/dx)^2 in most cases.
First, check that the value of sigma is physically reasonable.
Normally you shouldn't have to worry about the algorithm parameter dx, because it is determined automatically by default. However, it's possible that your data are causing the code to choose a very small value of dx: given the proportionality above, your error message implies a spacing on the order of sigma/sqrt(1.1e9), about 0.006 units, which is minuscule compared to a window tens of thousands of units across.
The internal code which automatically determines the spacing dx of sample points along the network has been changed recently, in order to fix several bugs.
I suggest that you specify the algorithm parameters manually. See the help file for densityHeat for information on how to control the spacings. Setting the parameters manually will also ensure greater consistency of the results between different versions of the software.
The quickest workaround is to set finespacing=FALSE. This is not the best solution, because it still uses some of the automatic rules, which may be what is causing the problem. Please read the help file to understand what it does.
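A sketch of both options (argument names as documented for densityHeat; exact availability depends on your spatstat version):

    # Quickest workaround: disable the automatic fine-spacing rule.
    A22den = density(Lnetwork, sigma, distance="path", continuous=TRUE,
                     dimyx=dimyx1, finespacing=FALSE)

    # Or call densityHeat directly and set the algorithm parameters manually;
    # see ?densityHeat for the meaning of dx, dt and iterMax.
    A22den = densityHeat(Lnetwork, sigma, dimyx=dimyx1, finespacing=FALSE)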
Did you update spatstat since this last worked? The internal code for determining the spacing on the network etc. has probably changed a bit. The actual computations are done by the function densityHeat(), and its help file shows how to set the spacing manually.
I am currently learning Modelica by trying some very simple examples. I have defined a connector Incompressible for an incompressible fluid like this:
connector Incompressible
  flow Modelica.SIunits.VolumeFlowRate V_dot;
  Modelica.SIunits.SpecificEnthalpy h;
  Modelica.SIunits.Pressure p;
end Incompressible;
I now wish to define a mass or volume flow source:
model Source_incompressible
  parameter Modelica.SIunits.VolumeFlowRate V_dot;
  parameter Modelica.SIunits.Temperature T;
  parameter Modelica.SIunits.Pressure p;
  Incompressible outlet;
equation
  outlet.V_dot = V_dot;
  outlet.h = enthalpyWaterIncompressible(T); // quick'n'dirty enthalpy function
  outlet.p = p;
end Source_incompressible;
However, when checking Source_incompressible, I get this:
The problem is structurally singular for the element type Real.
The number of scalar Real unknown elements are 3.
The number of scalar Real equation elements are 4.
I am at a loss here. Clearly, there are three equations in the model - where does the fourth equation come from?
Thanks a lot for any insight.
Dominic,
There are a couple of issues going on here. As Martin points out, the connector is unbalanced (you don't have matching "through" and "across" pairs in that connector). For fluid systems, this is acceptable. However, intensive fluid properties (e.g., enthalpy) have to be marked as so-called "stream" variables.
This topic is, admittedly, pretty complicated. I'm planning on adding an advanced chapter to my online Modelica book on this topic but I haven't had the time yet. In the meantime, I would suggest you have a look at the Modelica.Fluid library and/or this presentation by one of its authors, Francesco Casella.
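As a sketch of where that leads (mirroring the pattern of Modelica.Fluid.Interfaces.FluidPort; the names here are mine, and this is not a drop-in fix for your model), a balanced connector with a stream variable might look like this:

  connector IncompressiblePort
    Modelica.SIunits.Pressure p "Across (potential) variable";
    flow Modelica.SIunits.VolumeFlowRate V_dot "Through (flow) variable";
    stream Modelica.SIunits.SpecificEnthalpy h_outflow "Specific enthalpy carried by outflow";
  end IncompressiblePort;

Here p and V_dot form the required potential/flow pair, and the stream variable h_outflow does not count toward the connector balance.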
That connector is not a physical connector. You need one flow variable for each potential variable. This is the OpenModelica error message if it helps a little:
Warning: Connector .Incompressible is not balanced: The number of potential variables (2) is not equal to the number of flow variables (1).
Error: Too many equations, over-determined system. The model has 4 equation(s) and 3 variable(s).
Error: Internal error Found Equation without time dependent variables outlet.V_dot = V_dot
This is because the unconnected connector will generate one equation for the flow:
outlet.V_dot = 0.0;
This means outlet.V_dot is replaced in:
outlet.V_dot = V_dot;
And you get:
0.0 = V_dot;
But V_dot is a parameter and cannot be assigned in an equation section (it would need an initial equation if the parameter has fixed=false, or a binding equation in the default case).
Is there any library or tool that allows for knowing the maximum accumulated error in arithmetic operations?
For example if I make some iterative calculation ...
myVars = initialValues;
while (notEnded) {
    myVars = updateMyVars(myVars);
}
... I want to know at the end not only the calculated values, but also the potential error (the range of possible values if the results of each individual operation took the range limits for each operand).
I have already written a Java class called EADouble.java (EA for Error Accounting) which holds and updates the maximum positive and negative errors along with the calculated value for some basic operations, but I'm afraid I might be reinventing a square wheel.
Any libraries/whatever in Java/whatever? Any suggestions?
Updated on July 11th: Examined existing libraries and added link to sample code.
As fellows commented, there is the concept of Interval Arithmetic, and there was a previous question (A good uncertainty (interval) arithmetic library?) on the topic. There are just a couple of small issues with respect to my intent:
I care more about the "main" value than about the upper and lower bounds. However, adding that extra value to an open library should be straightforward.
Accounting for the error as an independent floating-point value might allow for finer accuracy (e.g., for addition, the upper bound would be incremented by just half an ULP instead of a whole ULP), as the sketch below illustrates.
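A minimal sketch of that idea (the class and method names are mine, not from any existing library):

    // Track a main value plus an accumulated worst-case absolute error.
    final class EAValue {
        final double value; // the "main" computed value
        final double err;   // accumulated maximum absolute error

        EAValue(double value, double err) {
            this.value = value;
            this.err = err;
        }

        EAValue add(EAValue other) {
            double sum = this.value + other.value;
            // Rounding the sum contributes at most half an ULP, on top of
            // the operands' accumulated errors.
            double newErr = this.err + other.err + 0.5 * Math.ulp(sum);
            return new EAValue(sum, newErr);
        }
    }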
Libraries I had a look at:
ia_math (Java. Just would have to add the main value. My favourite so far)
Boost/numeric/Interval (C++, Very complex/complete)
ErrorProp (Java, accounts value, and error as standard deviation)
The sample code (TestEADouble.java) runs a ballistic simulation and a calculation of the number e without problems. However, those are not very demanding scenarios.
Probably way too late, but look at BIAS/Profil: http://www.ti3.tuhh.de/keil/profil/index_e.html
It is pretty complete and simple, accounts for rounding error, and if your errors are centered it gives easy access to your nominal value (through Mid(...)).