Who knows the volume of the Stanford bunny model?

I have developed an algorithm to compute the volume of a point set, and now I plan to use the Stanford bunny model to test it. I haven't found the true value of the volume yet, though, so I don't know whether the value I computed is accurate. Does anybody know the true volume of the Stanford bunny model?

I'm not sure that there is an actual concrete answer to this, as the model can be scaled up and down across versions.
Also, for example, my version of the Stanford Bunny doesn't have the bottom properly closed, probably because of holes left over from scanning the hollow ceramic figurine it was modelled on.
There might be too much variation across the models and their conversions to have one specific value.
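That said, if you can get hold of a watertight, consistently oriented version of the mesh (which, as noted, the raw bunny isn't), a reference value is easy to compute with the divergence theorem: sum the signed volumes of the tetrahedra formed by each triangle and the origin. A minimal sketch in Python, under those assumptions:

def mesh_volume(verts, faces):
    total = 0.0
    for i, j, k in faces:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = verts[i], verts[j], verts[k]
        # Scalar triple product a . (b x c): six times the signed volume of
        # the tetrahedron formed by this triangle and the origin.
        total += (ax * (by * cz - bz * cy)
                  + ay * (bz * cx - bx * cz)
                  + az * (bx * cy - by * cx))
    return abs(total) / 6.0

# Sanity check: a unit right tetrahedron has volume 1/6.
print(mesh_volume([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
                  [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]))  # 0.1666...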


Is there an R package with which I can model the effects of competition on ideal free distribution?

I am a university student working on a research project. Because of our local lockdown I cannot go into the field to collect observation data, so I am looking for an R package that will allow me to model the effects of competition when testing for ideal free distribution (IFD).
To give you a better idea of what I am looking for I have described the project in more detail below.
In my original dataset (which I received, i.e., I did not collect the data myself) I have two patches (A, B) which received random treatments of food input (1:1, 2:1, 5:1). Under the ideal free distribution hypothesis, species should distribute between the patches in accordance with the treatment ratios. This is not the case.
Under normal circumstances I would go into the field and observe behaviour of individuals in the patches to see if dominance affects distribution. Since we are in a lockdown I am unable to do so. I am hoping that there is a package out there that would allow me to model this scenario and help me investigate how competition affects IFD.
I have already found two packages called coexist and EcoVirtual but they model coexistence and extinction dynamics, whereas I want to investigate how competition might alter distribution between profitable patches when there is variation in the level of competition.
I am fairly new to R and creating my own package is beyond my skillset at this point, so I would appreciate the help.
I hope this makes sense and thanks in advance.
Wow, that's an odd place to find another researcher of IFD. I do not believe there are R packages specifically about IFD. It's too specific, and most models are relatively simple to estimate using common tests. For example, the input-matching rule you mentioned can be tested using a simple run-of-the-mill t-test, already included in base R.
What you have is not a coding problem per se, or even a statistical one. It is a biological problem. What ratio would you expect when animals are ideal (full knowledge of the environment) and free (no movement costs), but in the presence of competition? Is this ratio equal to the ratio in your dataset? Sutherland (1983) suggests animals would undermatch.
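To sketch the input-matching test concretely (in Python with scipy standing in for base R's t.test; the counts below are invented for illustration): under strict input matching, the log ratio of foragers in the two patches should equal the log of the food-input ratio.

import numpy as np
from scipy import stats

# Hypothetical forager counts in patch A and patch B over repeated
# observations under a 2:1 food-input treatment.
patch_a = np.array([14, 11, 13, 15, 12, 16])
patch_b = np.array([6, 8, 7, 5, 9, 6])

# Under strict input matching, log(N_A / N_B) should equal log(2/1).
log_ratios = np.log(patch_a / patch_b)
t_stat, p_value = stats.ttest_1samp(log_ratios, popmean=np.log(2.0))
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A mean log ratio significantly below log(2) would indicate undermatching,
# in line with Sutherland (1983).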
I would love to discuss this at depth, given my PhD was in IFD, but I fear you hit the wrong forum.

FEM software for incremental calculations

Can anyone suggest software that will calculate incremental stress effects on a body?
A particular application would be calculating incremental stress on gear teeth through a simulation run.
Since we would have a cyclic run, if we had 2 gears, their teeth would be in contact once every revolution, and I am interested in knowing whether there is software that will keep track of the "damage" done on first contact, which would slightly change the geometry of the gear and, most importantly, change the way the gear responds to the same stress at future contacts.
You'll need a non-linear transient FEA capability.
I'm assuming that the gear rotational velocity is small enough that you aren't interested in inertial effects. You want to do a non-linear analysis that tracks loading over one or more rotational cycles.
You need to model contact and friction at the contact points. That's a challenging non-linear problem.
You'll need a mesh that's refined enough in the contact zone to resolve the surface stress you're interested in.
Small strain is sufficient as a first step. Large strains would imply that your geometry is in some trouble.
Damage implies a non-linear material model of some kind. What were you assuming? Small strain plasticity with isotropic or kinematic hardening? Or a more advanced model like Walker or Chaboche?
Do temperature effects matter to you? Must you do a heat transfer analysis as well?
Do you have a model for metallurgical effects (e.g. austenite/martensite phase changes for carbon steel)? Do you have any heat treatment or grain size data that impact your material model?
I'd recommend starting simple and modeling contact between two teeth, one stationary and another in motion.
I haven't done finite element analysis for a living in many years, but when I was a practitioner this kind of problem would be solved with something like MARC or ABAQUS. I believe ANSYS is very popular now. There are also open source finite element solvers, but I'm less familiar with those.
I'm sure you've done a Google search for something like "finite element analysis gear tooth". You're far from the first to be interested in a problem like this.
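To make the "damage changes the response at future contacts" idea concrete, here is a toy 1D illustration in Python (not an FEA solution, and hardening is only a crude proxy for damage; all material numbers are invented): small-strain plasticity with linear isotropic hardening, driven through repeated strain cycles, where the accumulated plastic strain changes the stress response on each subsequent cycle.

import numpy as np

E = 200e3         # Young's modulus [MPa]
sigma_y0 = 250.0  # initial yield stress [MPa]
H = 10e3          # isotropic hardening modulus [MPa]

def cyclic_response(strain_history):
    eps_p, alpha = 0.0, 0.0  # plastic strain, accumulated plastic strain
    stresses = []
    for eps in strain_history:
        sigma_trial = E * (eps - eps_p)
        f = abs(sigma_trial) - (sigma_y0 + H * alpha)
        if f > 0.0:  # plastic step: standard 1D return mapping
            dgamma = f / (E + H)
            eps_p += dgamma * np.sign(sigma_trial)
            alpha += dgamma
        stresses.append(E * (eps - eps_p))
    return np.array(stresses), alpha

# Three identical strain cycles: the yield surface grows with alpha, so the
# response differs at each "contact" even though the loading repeats.
one_cycle = np.concatenate([np.linspace(0, 3e-3, 50), np.linspace(3e-3, 0, 50)])
stress, acc = cyclic_response(np.tile(one_cycle, 3))
print(f"peak stress {stress.max():.1f} MPa, accumulated plastic strain {acc:.2e}")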

How to make a good mesh in a biologically accurate model with very small domains

I have been trying to make a biologically accurate 2D spatial model of tissue layers, where different physiological processes happen. This includes mainly chemical reactions, diffusion and fluxes over boundaries.
I am making this model in COMSOL Multiphysics, a finite element software package that solves different physics like reaction-diffusion systems, although for my question this might not be really relevant.
In my geometry, I have really small regions between the cells of the tissue layers. These regions serve as openings (junctions) where diffusion can take place between the cells. The quality of the mesh is not great there, and if I want to improve it (mainly by introducing more elements and such), my simulation time increases drastically. The lower-quality mesh also causes convergence to take longer. I added a picture of the geometry to give an idea. I tried different meshes, all with different element qualities and with the number of elements ranging from 16,000 to 50,000.
My background in FEM is really limited, and I wanted to know if I can tackle this problem in a way that:
doesn't negatively affect the biology (keeps the tissue domain sizes, the problem, etc. as biologically accurate as possible),
doesn't increase the simulation time drastically, and
gives better mesh quality.
So I really want to know the best way to go, since I have already thought of some options.
Can I go with the lower-quality mesh (which is not really bad, but not good either), so that I can keep the small regions for optimum biological accuracy and have a relatively small computation time (and hope I won't run into convergence errors)?
But maybe there are possibilities that I am missing. For instance, is it possible to make the small domain bigger and then correct the diffusion rates by some factor? In other words, if I make the domain twice as large, should I halve the diffusion rate? Is that even consistent with the chemical/physical laws?
Hopefully I made the problem a bit clear and thank you greatly in advance for the help.
Cheers,
[Image: mesh of the tissue model]
I know this thread was posted some months back, but I am unsure whether you found a solution.
To find the relationship between accuracy and computational time, run a mesh convergence analysis on your model and see how the mesh size directly affects the results you expect to obtain (pore pressure, fluid velocity, strain, etc.). This will allow you to determine the most appropriate meshing strategy for your specific problem.
Also, keep in mind that the diffusion rate of a material will depend on the pore size and the permeability (by means of Darcy's law), so depending on the assumptions you are making for your constitutive law and your problem's boundary conditions, you might simplify/enlarge some of the smaller domains in your model, as long as they stay within those assumptions.
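On the specific question of enlarging the small domains while rescaling the diffusion rate: in the thin-channel limit, steady diffusive transport through a junction of width w and length L behaves like a conductance g = D*w/L (per unit depth), so doubling w while halving D leaves the flux unchanged. A back-of-the-envelope sketch in Python (this ignores 2D end effects at the channel mouths, which is exactly the approximation you would be making):

def junction_flux(D, w, L, dc):
    # Steady-state diffusive flux (per unit depth) across a thin channel for
    # a concentration difference dc between its ends: J = D * w / L * dc.
    return D * w / L * dc

base = junction_flux(D=2.0, w=1.0, L=4.0, dc=1.0)    # 0.5
scaled = junction_flux(D=1.0, w=2.0, L=4.0, dc=1.0)  # 0.5
print(base, scaled)  # equal: halving D compensates for doubling w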

Rapidly exploring random trees

http://msl.cs.uiuc.edu/rrt/
Can anyone explain how RRT works in simple wording that is easy to understand?
I read the description on the site and in Wikipedia.
What I would like to see is a short implementation of an RRT, or a thorough explanation of the following:
Why does the RRT grow outwards instead of just growing very dense around the center?
How is it different from a naive random tree?
How is the next new vertex that we attempt to reach picked?
I know there is a Motion Strategy Library I could download, but I would much rather understand the idea before I delve into the code, rather than the other way around.
The simplest possible RRT algorithm has been so successful because it is pretty easy to implement. Things tend to get complicated when you:
need to visualise planning concepts in more than two dimensions,
are unfamiliar with the terminology associated with planning, or
get lost in the huge number of RRT variants that have been described in the literature.
Pseudo code
The basic algorithm looks something like this:
Start with an empty search tree
Add your initial location (configuration) to the search tree
while your search tree has not reached the goal (and you haven't run out of time)
3.1. Pick a location (configuration), q_r, (with some sampling strategy)
3.2. Find the vertex in the search tree closest to that random point, q_n
3.3. Try to add an edge (path) in the tree between q_n and q_r, if you can link them without a collision occurring.
Although that description is adequate, after a while working in this space, I really do prefer the pseudocode of figure 5.16 on RRT/RDT in Steven LaValle's book "Planning Algorithms".
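To make the steps above concrete, here is a minimal 2D sketch in Python (uniform random sampling, a fixed step size, and a placeholder collision check; obstacle handling, goal bias and path extraction are left out for clarity):

import math
import random

def collision_free(a, b):
    # Placeholder: a real planner would check the segment a-b against the
    # obstacle map. Here the space is assumed to be empty.
    return True

def rrt(start, goal, step=0.5, goal_tol=0.5, max_iters=5000, bounds=(0.0, 10.0)):
    tree = {start: None}  # maps each vertex to its parent
    for _ in range(max_iters):
        q_r = (random.uniform(*bounds), random.uniform(*bounds))  # step 3.1
        q_n = min(tree, key=lambda q: math.dist(q, q_r))          # step 3.2
        d = math.dist(q_n, q_r)
        if d == 0.0:
            continue
        # Step from q_n toward q_r by at most `step`, then try to link (3.3).
        q_new = (q_n[0] + (q_r[0] - q_n[0]) * min(step, d) / d,
                 q_n[1] + (q_r[1] - q_n[1]) * min(step, d) / d)
        if collision_free(q_n, q_new):
            tree[q_new] = q_n
            if math.dist(q_new, goal) < goal_tol:
                return tree, q_new  # goal reached; walk parents for the path
    return tree, None

tree, last = rrt(start=(1.0, 1.0), goal=(9.0, 9.0))
print(f"tree size: {len(tree)}, reached goal: {last is not None}")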
Tree Structure
The reason that the tree ends up covering the entire search space (in most cases) is the combination of the sampling strategy and always looking to connect from the nearest point in the tree. This effect is known as the Voronoi bias: vertices on the frontier of the explored region have the largest Voronoi cells, so they are the most likely to be nearest to a random sample and get extended, which pulls the tree outwards instead of letting it grow dense around the start.
Sampling Strategy
The choice of where to place the next vertex that you will attempt to connect to is the sampling problem. In simple cases, where the search is low-dimensional, uniform random placement (or uniform random placement biased toward the goal) works adequately. In high-dimensional problems, when motions are very complex (when joints have positions, velocities and accelerations), or when the configuration is difficult to control, sampling strategies for RRTs are still an open research area.
Libraries
The MSL library is a good starting point if you're really stuck on implementation, but it hasn't been actively maintained since 2003. A more up-to-date library is the Open Motion Planning Library (OMPL). You'll also need a good collision detection library.
Planning Terminology & Advice
From a terminology point of view, the hard bit is realising that although lots of the diagrams you see in (the early years of) publications on RRT are in two dimensions (trees that link 2D points), this is the absolute simplest case.
Typically, a mathematically rigorous way to describe complex physical situations is required. A good example of this is planning for a robot arm with n linkages. Describing the end of such an arm requires a minimum of n joint angles. This minimal set of parameters describing a position is a configuration (some publications say state). A single configuration is often denoted q.
The combination of all possible configurations (or a subset thereof) that can be achieved makes up a configuration space (or state space). This can be as simple as an unbounded 2D plane for a point in the plane, or an incredibly complex combination of ranges of other parameters.

Which particular software development tasks have you used math for? And which branch of math did you use?

I'm not looking for a general discussion on if math is important or not for programming.
Instead I'm looking for real world scenarios where you have actually used some branch of math to solve some particular problem during your career as a software developer.
In particular, I'm looking for concrete examples.
I frequently find myself using De Morgan's theorem, as well as general Boolean algebra, when trying to simplify conditionals.
I've also occasionally written out truth tables to verify changes, as in the example below (found during a recent code review)
(showAll and s.ShowToUser are both of type bool.)
// Before
(showAll ? (s.ShowToUser || s.ShowToUser == false) : s.ShowToUser)
// After!
showAll || s.ShowToUser
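For small expressions like this one, the truth table can even be checked by brute force; a quick sketch in Python, mirroring the snippet above:

from itertools import product

# Enumerate all four combinations of the two booleans and confirm that the
# before/after expressions agree.
for show_all, show_to_user in product([False, True], repeat=2):
    before = (show_to_user or show_to_user == False) if show_all else show_to_user
    after = show_all or show_to_user
    assert before == after
print("equivalent for all inputs")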
I also used some basic right-angle trigonometry a few years ago when working on some simple graphics - I had to rotate and centre a text string along a line that could be at any angle.
Not revolutionary...but certainly maths.
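To illustrate the kind of trig involved (a hypothetical helper, not the original code): given the line segment's endpoints, atan2 gives the rotation angle for the text and the midpoint gives its centre.

import math

def label_placement(x1, y1, x2, y2):
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))  # angle of the line
    # Keep the text upright (readable left to right).
    if angle > 90:
        angle -= 180
    elif angle < -90:
        angle += 180
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # centre of the line
    return cx, cy, angle

print(label_placement(0, 0, 10, 10))  # (5.0, 5.0, 45.0)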
Linear algebra for 3D rendering and also for financial tools.
Regression analysis for the same financial tools, like correlations between financial instruments and indices, and such.
Statistics: I had to write several methods to get statistical values, like the F probability distribution and the Pearson product-moment coefficient, and some linear algebra, correlations, interpolations and extrapolations for implementing the Arbitrage Pricing Theory for asset pricing and stocks.
Discrete math for everything, linear algebra for 3D, analysis for physics, especially for calculating mass properties.
Linear algebra for everything
Projective geometry for camera calibration
Identification of time series / statistical filtering for sound & image processing
(I guess) basic mechanics and hence calculus for game programming
Computing sizes of caches to optimize performance. Not as simple as it sounds when this is your critical path, and you have to go back and work out the times saved by using the cache relative to its size.
I'm in medical imaging, and I use mostly linear algebra and basic geometry for anything related to 3D display, anatomical measurements, etc...
I also use numerical analysis for handling real-world noisy data, and a good deal of statistics to prove algorithms, design support tools for clinical trials, etc...
Games with trigonometry and AI with graph theory in my case.
Graph theory to create a weighted graph to represent all possible paths between two points and then find the shortest or most efficient path.
Also statistics for plotting graphs and risk calculations. I used both normal distribution and cumulative normal distribution calculations. These are pretty commonly used functions in Excel, I would guess, but I actually had to write them myself since there is no built-in support in the .NET libraries. Sadly, the built-in Math support in .NET seems pretty basic.
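For reference, the cumulative normal distribution can be written in a few lines via the error function; this is the sort of routine the answer describes hand-writing when a standard library lacks it (a Python sketch, with math.erf playing the role of the missing building block):

import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    # Phi(x) expressed through erf.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

print(norm_cdf(1.96))  # ~0.975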
I've used trigonometry the most, and also a small amount of calculus, working on overlays for GIS (mapping) software, comparing objects in 3D space, and converting between coordinate systems.
A general mathematical understanding is very useful if you're using 3rd-party libraries to do calculations for you, as you often need to appreciate their limitations.
I often use math and programming together, but the goal of my work IS the math, so I use software to achieve that.
As for the math I use: mostly calculus (FFTs for analysing continuous and discrete signals), with a dash of linear algebra (CORDIC) to do trig on an MCU with no floating-point unit.
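A sketch of the CORDIC idea mentioned above: rotate a vector through a fixed table of angles atan(2^-i) so that only shifts and adds are needed, which is why it suits an MCU without a floating-point unit. Floats are used here purely for readability; a real MCU version would use fixed-point integers.

import math

ANGLES = [math.atan(2.0 ** -i) for i in range(16)]  # micro-rotation angles
K = 1.0
for i in range(16):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # pre-compensate the CORDIC gain

def cordic_sin_cos(theta):
    # Rotation mode: drive the residual angle z to zero with micro-rotations.
    x, y, z = K, 0.0, theta
    for i in range(16):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x  # (sin(theta), cos(theta)), valid for |theta| < pi/2

s, c = cordic_sin_cos(0.5)
print(s, math.sin(0.5))  # agree to roughly 4-5 decimal places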
I used analytic geometry for a simple 3D engine in OpenGL, in a hobby project in high school.
I also used some geometry computations for dynamically generated printed reports, where the layout was rotated by 90°.
A year ago I used some derivatives and integrals for store analysis (product item movement in a store).
But all of these computations can be found on the internet or in a high-school book.
Statistics (mean, standard deviation) for our analysts.
Linear algebra - particularly Gauss-Jordan elimination - and
Calculus - derivatives in the form of difference tables for generating polynomials from a table of (x, f(x)).
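A sketch of that difference-table technique (assuming equally spaced x values): build the forward differences of f, then evaluate the interpolating polynomial with Newton's forward-difference formula.

def forward_differences(ys):
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([b - a for a, b in zip(prev, prev[1:])])
    return [row[0] for row in table]  # leading entry of each difference row

def newton_eval(x0, h, diffs, x):
    # f(x0 + s*h) = sum over k of C(s, k) * (k-th forward difference at x0)
    s = (x - x0) / h
    result, term = 0.0, 1.0
    for k, d in enumerate(diffs):
        if k > 0:
            term *= (s - (k - 1)) / k
        result += d * term
    return result

# f(x) = x^2 sampled at x = 0, 1, 2, 3
diffs = forward_differences([0, 1, 4, 9])  # [0, 1, 2, 0]
print(newton_eval(0, 1, diffs, 2.5))       # 6.25 = 2.5**2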
Linear algebra and complex analysis in electronic engineering.
Statistics in analysing data and translating it into other units (different project).
I used probability and log odds (log of the ratio of two probabilities) to classify incoming emails into multiple categories. Most of the heavy lifting was done by my colleague Fidelis Assis.
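A toy sketch of that log-odds scoring (the actual system's features are not public, so the categories and word probabilities here are invented):

import math

# Hypothetical per-word probabilities under two categories.
p_word_given_work = {"meeting": 0.10, "invoice": 0.05, "sale": 0.001}
p_word_given_spam = {"meeting": 0.001, "invoice": 0.01, "sale": 0.08}

def log_odds(words, prior_work=0.7):
    # Start from the prior log odds, then add one log-ratio per word.
    score = math.log(prior_work / (1 - prior_work))
    for w in words:
        pw = p_word_given_work.get(w, 1e-4)
        ps = p_word_given_spam.get(w, 1e-4)
        score += math.log(pw / ps)
    return score  # > 0 favours "work", < 0 favours "spam"

print(log_odds(["meeting", "invoice"]))  # positive: looks like work mail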
Real-world scenarios: better rostering of staff, more efficient scheduling of flights, shortest paths in road networks, optimal facility/resource locations.
Branch of maths: Operations Research. Vague definition: construct a mathematical model of a (normally complex) real-world business problem, and then use mathematical tools (e.g. optimisation, statistics/probability, queuing theory, graph theory) to interrogate this model to aid in making effective decisions (e.g. minimise cost, maximise efficiency, predict outcomes, etc.).
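To make one of those examples concrete, here is a sketch of shortest paths in a (made-up) road network using Dijkstra's algorithm:

import heapq

def dijkstra(graph, source):
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

roads = {"A": [("B", 4), ("C", 2)], "C": [("B", 1), ("D", 7)], "B": [("D", 3)]}
print(dijkstra(roads, "A"))  # shortest distances from A: B=3, C=2, D=6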
Statistics for scientific data analyses such as:
calculation of distributions, z-standardisation
Fisher's Z (sketched below)
Reliability (alpha, kappa, Cohen)
Discriminant analyses
scale aggregation, poling, etc.
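Two of the items from that list in miniature (a sketch using the standard formulas): z-standardisation and the Fisher z-transformation of a correlation coefficient.

import math
import statistics

def z_standardise(xs):
    # Centre on the mean and scale by the (sample) standard deviation.
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

def fisher_z(r):
    return math.atanh(r)  # 0.5 * ln((1 + r) / (1 - r))

print(z_standardise([2, 4, 6]))  # [-1.0, 0.0, 1.0]
print(fisher_z(0.8))             # ~1.0986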
In actual software development I've only really used quite trivial linear algebra, geometry and trigonometry. Certainly nothing more advanced than the first college course in each subject.
I have however written lots of programs to solve really quite hard math problems, using some very advanced math. But I wouldn't call any of that software development since I wasn't actually developing software. By that I mean that the end result wasn't the program itself, it was an answer. Basically someone would ask me what is essentially a math question and I'd write a program that answered that question. Sure I’d keep the code around for when I get asked the question again, and sometimes I’d send the code to someone so that they could answer the question themselves, but that still doesn’t count as software development in my mind. Occasionally someone would take that code and re-implement it in an application, but then they're the ones doing the software development and I'm the one doing the math.
(Hopefully this new job I've started will actually let me do both, so we'll see how that works out.)
