Delaunay triangulation vertex insertion

[Image of the pseudocode, not reproduced here.]
This is pseudocode for inserting a vertex into a Delaunay triangulation. Could someone familiar with the algorithm please explain it?
Thanks in advance.

The text you cite looks like it may have come from "Delaunay Mesh Generation" by Cheng, Dey, and Shewchuk (who has contributed greatly to the software community through his work on Delaunay triangulations).
The discussion is a bit terse, and I can see why you might find it hard to follow. I used it as a starting point for my own Delaunay implementation, but ended up referring to other sources as well. I wrote up some discussion of what I did, and you may find some helpful information in http://gwlucastrig.github.io/Tinfour/doc/TinfourAlgorithmsAndDataElements.pdf
If you want to see a Java-based implementation of the algorithm, you can download the source code from the Tinfour project, find the IncrementalTin.java class, and look at the insert() method. Be aware, though, that Tinfour is intended to be production-grade code rather than instructive, and the code is complicated a bit by the inclusion of optimizations. You can visit Tinfour at https://github.com/gwlucastrig/Tinfour
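
To give a feel for what an insertion step does, here is a minimal, illustrative sketch of the Bowyer-Watson formulation of incremental insertion. This is not the book's pseudocode and not Tinfour's code; all class names are my own, the point-location step is brute force where production code would walk the mesh, and it assumes the triangulation already covers the new point (e.g. via an enclosing super-triangle).

import java.util.*;

final class Point {
    final double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
}

final class Triangle {
    final Point a, b, c;
    Triangle(Point a, Point b, Point c) { this.a = a; this.b = b; this.c = c; }

    // True if p lies strictly inside this triangle's circumcircle.
    // Standard in-circle determinant; assumes a, b, c in counterclockwise order.
    boolean circumcircleContains(Point p) {
        double ax = a.x - p.x, ay = a.y - p.y;
        double bx = b.x - p.x, by = b.y - p.y;
        double cx = c.x - p.x, cy = c.y - p.y;
        double det = (ax * ax + ay * ay) * (bx * cy - cx * by)
                   - (bx * bx + by * by) * (ax * cy - cx * ay)
                   + (cx * cx + cy * cy) * (ax * by - bx * ay);
        return det > 0;
    }
}

final class Edge {
    final Point p, q;
    Edge(Point p, Point q) { this.p = p; this.q = q; }
    // Undirected equality (by vertex identity) so shared cavity edges cancel out;
    // assumes each vertex is represented by a single Point instance.
    @Override public boolean equals(Object o) {
        if (!(o instanceof Edge)) return false;
        Edge e = (Edge) o;
        return (p == e.p && q == e.q) || (p == e.q && q == e.p);
    }
    @Override public int hashCode() {
        return System.identityHashCode(p) ^ System.identityHashCode(q);
    }
}

final class BowyerWatson {
    // Inserts v into an existing Delaunay triangulation (a mutable list of triangles).
    static void insert(List<Triangle> triangulation, Point v) {
        // 1. Collect every triangle whose circumcircle contains v (the "cavity").
        List<Triangle> bad = new ArrayList<>();
        for (Triangle t : triangulation)
            if (t.circumcircleContains(v)) bad.add(t);

        // 2. The cavity boundary consists of edges belonging to exactly one bad triangle.
        Map<Edge, Integer> count = new HashMap<>();
        for (Triangle t : bad)
            for (Edge e : new Edge[]{new Edge(t.a, t.b), new Edge(t.b, t.c), new Edge(t.c, t.a)})
                count.merge(e, 1, Integer::sum);

        // 3. Remove the cavity and re-triangulate it by fanning from v.
        triangulation.removeAll(bad);
        for (Map.Entry<Edge, Integer> e : count.entrySet())
            if (e.getValue() == 1)
                triangulation.add(new Triangle(e.getKey().p, e.getKey().q, v));
    }
}

An alternative formulation, Lawson's flip-based insertion, splits the containing triangle into three and then restores the Delaunay criterion with recursive edge flips; both approaches rest on the same empty-circumcircle test shown in circumcircleContains().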

Related

Starting out with F*

I have been reading about F* from some of its papers and the F* tutorial, but I find myself quite lost trying to understand its concepts, for example dependent types and Dijkstra monads.
What are the prerequisites for properly understanding and learning F*?
Any explanations or links to resources would be helpful too.
You might find the following general resources helpful.
https://softwarefoundations.cis.upenn.edu/
https://www.springer.com/gp/book/9783540208549
http://adam.chlipala.net/cpdt/
None of these are particularly specific to F*, but some of the concepts you learn there will provide useful background.

What does RELIEF stand for?

I recently applied a feature selection algorithm called 'RELIEF' to my pattern recognition problem for comparison. The wiki page for 'RELIEF' can be found here: RELIEF. But searching the Internet, I couldn't find what RELIEF stands for. Even in the original paper I couldn't find it. Does anyone know this abbreviation? Thanks a lot.
It's just a name for a feature selection algorithm, not an abbreviation of any other words as far as I know. Moreover, in the original paper 'RELIEF' is also written as 'Relief', which supports this view.
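
Since the question arose from applying the algorithm: for anyone who lands here wanting the mechanics rather than the name, the weight update from the original paper is easy to sketch. The following is my own illustrative Java, assuming binary classes, numeric features scaled to [0, 1], and both classes present in the data; it is not a reference implementation.

import java.util.Random;

class Relief {
    // Returns one relevance weight per feature, sampling m random instances.
    static double[] weights(double[][] data, int[] labels, int m, Random rng) {
        int nFeatures = data[0].length;
        double[] w = new double[nFeatures];
        for (int iter = 0; iter < m; iter++) {
            int i = rng.nextInt(data.length);
            int hit = nearest(data, labels, i, true);   // nearest same-class instance
            int miss = nearest(data, labels, i, false); // nearest other-class instance
            for (int f = 0; f < nFeatures; f++) {
                // Features that separate the classes gain weight; noisy ones lose it.
                w[f] += (Math.abs(data[i][f] - data[miss][f])
                       - Math.abs(data[i][f] - data[hit][f])) / m;
            }
        }
        return w;
    }

    private static int nearest(double[][] data, int[] labels, int i, boolean sameClass) {
        int best = -1;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int j = 0; j < data.length; j++) {
            if (j == i || (labels[j] == labels[i]) != sameClass) continue;
            double d = 0;
            for (int f = 0; f < data[i].length; f++) {
                double diff = data[i][f] - data[j][f];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = j; }
        }
        return best;
    }
}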

Scientific Algorithms that can produce imagery, pseudocode perhaps?

I have a client who works in the field of mathematics. We are developing, amongst other things, a website. I'd like to create a mock-up of a drawing tool that can produce imagery in the background based on some scientific algorithms. The intention is that the client may later create their own. (They use Emacs for everything. Great client.)
I'm looking for pointers on where or what to search. It needn't be code-specific; pseudocode is fine, as we can adapt it and have not yet settled on a platform.
I'm afraid my mathematics stops at powers of two and some trigonometry. I'd appreciate it if any mathematics students or academics could enlighten me. Suggestions for what to search for will be accepted.
Edit: To summarise/clarify: I want to draw pretty pictures (the design perspective), and I want them to have some context (i.e. not just pretty images for their own sake, but with some explanation available). In essence, I would like to create a rendering engine with which they can draw/code the images while we set the style parameters: line, colour, etc. But before pursuing this option I want to experiment myself.
Edit: Great responses, thanks. The aim is to make something along the lines of http://hascanvas.com/ if anyone is interested.
Thanks
Ross
Mandelbrot set, Julia sets, random graphs, Lorenz attractor.
Maybe minimising energy functions on a sphere.
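If you want something concrete to experiment with along those lines, here is a minimal, self-contained escape-time renderer for the Mandelbrot set. It is illustrative only; the output file name, resolution, and grayscale palette are arbitrary choices standing in for your "style parameters".

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class Mandelbrot {
    public static void main(String[] args) throws IOException {
        int width = 800, height = 600, maxIter = 256;
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for (int py = 0; py < height; py++) {
            for (int px = 0; px < width; px++) {
                // Map the pixel to a point c in the complex plane.
                double cr = -2.5 + 3.5 * px / width;
                double ci = -1.25 + 2.5 * py / height;
                double zr = 0, zi = 0;
                int iter = 0;
                // Iterate z = z^2 + c until it escapes the radius-2 disk or we give up.
                while (zr * zr + zi * zi <= 4 && iter < maxIter) {
                    double tmp = zr * zr - zi * zi + cr;
                    zi = 2 * zr * zi + ci;
                    zr = tmp;
                    iter++;
                }
                // Simple grayscale shading by escape speed; interior points stay black.
                int shade = 255 - (255 * iter) / maxIter;
                img.setRGB(px, py, (shade << 16) | (shade << 8) | shade);
            }
        }
        ImageIO.write(img, "png", new File("mandelbrot.png"));
    }
}

Julia sets use the same loop with c held fixed and z seeded from the pixel, so the same little renderer covers both suggestions.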
I'm quite sure that I don't fully understand what you are after, so to provoke you and others into clarifying, I suggest you grab a copy of Mathematica and of Web Mathematica and knock your clients out with that.
Mandelbulb.
Fractals, with pseudocode.
You can have a look at these links:
https://mathshistory.st-andrews.ac.uk/Curves/
https://www.nctm.org/classroomresources/
https://planetmath.org/famouscurves

Technical choices in unmarshalling hash-consed data

There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshalling-unmarshalling of data. I am looking for citable references to these tidbits.
For instance, someone once pointed me to library aterm and mentioned that the authors had clearly thought about this and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possible identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshalling itself might as well be, too, so that it's possible to do everything in a single pass.
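To make the bottom-up scheme concrete, here is a small illustrative sketch (this is not aterm's actual on-disk format; the record layout and names are mine). Each record in the stream carries a label and the indices of earlier records that are its children, so a single forward pass can rebuild the structure and re-establish maximal sharing through an intern table.

import java.io.*;
import java.util.*;

final class Node {
    final String label;
    final List<Node> children;
    Node(String label, List<Node> children) { this.label = label; this.children = children; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof Node)) return false;
        Node n = (Node) o;
        if (!label.equals(n.label) || children.size() != n.children.size()) return false;
        // Children may be compared by identity because they are already interned.
        for (int i = 0; i < children.size(); i++)
            if (children.get(i) != n.children.get(i)) return false;
        return true;
    }
    @Override public int hashCode() {
        int h = label.hashCode();
        for (Node c : children) h = 31 * h + System.identityHashCode(c);
        return h;
    }
}

final class Unmarshaller {
    private final Map<Node, Node> internTable = new HashMap<>();

    // One shared node per equivalence class of structurally equal nodes.
    private Node intern(Node n) {
        Node existing = internTable.get(n);
        if (existing != null) return existing;
        internTable.put(n, n);
        return n;
    }

    // Reads records of the form "label childIndex*", one per line, children first.
    // Because every child index refers to an earlier record, a single forward pass
    // re-establishes maximal sharing on the fly.
    Node read(BufferedReader in) throws IOException {
        List<Node> byIndex = new ArrayList<>();
        String line;
        while ((line = in.readLine()) != null) {
            String[] parts = line.trim().split("\\s+");
            List<Node> children = new ArrayList<>();
            for (int i = 1; i < parts.length; i++)
                children.add(byIndex.get(Integer.parseInt(parts[i])));
            byIndex.add(intern(new Node(parts[0], children)));
        }
        return byIndex.get(byIndex.size() - 1); // last record is the root
    }
}

If the intern table is the same one used by the runtime's hash-consing machinery, freshly read nodes are also re-shared with identical nodes already in memory, which is exactly the re-sharing pass described above.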
I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the kind of aforementioned folklore knowledge. Some people have obviously encountered the problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations.
I have enough references on hash-consing alone. I am only interested in references where it interferes with other aspects of programming, such as marshalling or distribution.
OK, this may not be much use, but Andrew Kennedy wrote a functional pearl called simply Pickling Combinators, which appeared in the Journal of Functional Programming 14(6):727-739, 2004. There is extensive discussion of structure sharing and how it is handled in pickles, but no direct discussion of how this problem might relate to hash-consing in the implementation of the language. But the article does discuss structure sharing in memory as well as in a pickle, so I hope it is better than nothing.
Martin Elsman had a follow-on paper in 2005 in Trends in Functional Programming; the title is Type-Specialized Serialization with Sharing. The article deals primarily with hash-consing by the unpickler (deserializer), not with hash-consing in the implementation, but again it may be worth something.
The JFP paper is proprietary, but there appears to be a preprint on Andrew's web page.
Elsman's paper appears to be available through Google Scholar at http://tinyurl.com/yd5tw2b.
(In a previous life, I worked on a project to create ASCII pickles that people could read and edit. I stupidly failed to publish it, but I have retained an interest.)
I found one reference on marshalling in functional languages; not sure if it will be useful, but the authors are smart: http://tinyurl.com/yc3hob9
I believe that Matthias Blume and/or Andrew Appel did something on this, but I can't find the paper. I also believe I reviewed something once for the Journal of Functional Programming, but I can't remember if the paper was accepted or who wrote it.
I suggest you ask Matthias Blume, Andrew Appel, and Phil Wadler if they can help.
Coq V5.10 had hash-consing and marshaling/unmarshaling. I didn't find anything in published form, but the unmarshaling step is referred to as "reinterning" in the source code. Coq unmarshaled values and then traversed them in order to re-create sharing, the obvious and only solution when all the language provides is an unmarshal function of type in_channel -> 'a.

A question about a design

My teammates and I have a very challenging new project to do, and we are supposed to submit it next week. We don't have a single clue about how to do it, and really need help. We are undergraduate students, new to Information Retrieval and AI, and really need your ideas.
The project is roughly:
When an expert is cited in a document,
find an expert with an opposing
opinion & find out what he/she says
about that topic.
We are free to use any programming language, but we are not concerned with the programming itself; we would like help getting started. Please give us a rough idea of how to design such a system and how to retrieve the information from the internet. How should we get the cited expert's opinion, and then find an opposing one?
Simple: use Amazon's Mechanical Turk.
Without that (or an equivalent) you're in trouble. If there are no further constraints on the problem, then you will need a full-blown AI, the kind that doesn't yet exist. If there are severe constraints, then you might have a chance of doing this in a week. If the expert can be in any field (medicine, politics, history, fashion, science, comic books, etc.), then there will be no single, well-organized repository of essays. You'll have to use Google to find Dr. X's opinion. Once you find Dr. X's writing (and let's pray it's text, not audio), you'll have to do some kind of natural language processing to get the thrust of it, even if you're lucky enough to find a descriptive title ("Digital Photography Is Absolutely Great"). Then you have to figure out its opposite. What's the opposite of "Neil Gaiman draws on folklore for his story ideas"? Figuring out what opinion you're looking for will be a serious problem. After that, things actually get easier: you can Google for the subject and use the same magic tools to find the one you're looking for.
So what do you have a chance of solving? A search for opinions that someone else has already organised into "pro" and "con". Some online political forums are organised that way. Wikipedia cites opposing views in a special section of some of its articles. Science journals print letters of rebuttal. Look around; you might find a site that is even more cut-and-dried. Choose a small enough arena and you'll have a tractable problem.
EDIT: Damn, Ben Dunlap beat me to all my major points in a comment. Sigh.
Sounds like an NLP problem to me. As for information about documents and citations, http://citeseerx.ist.psu.edu should be a good starting point.
For each paper, there are several citations which refer to it. At the very minimum, you would have to scan the abstract of the paper and those of the citing papers, and run your own algorithm to figure out whether any citation expresses an opposing opinion. Maybe your professor can give you hints on an approximate heuristic, but as far as I know it is a really hard problem.
I would be watching this thread for more interesting approaches.
Automatically submit a Google search request similar to "expert_name sucks", "expert_name wrong", or something like that. Find the first result that has "PhD" with a document link in the same sentence and return the link.
I think you might be blowing this up a little too big... as an undergraduate project, I would approach it a little more small scale.
Unless your specification says you must use actual internet resources, you would be better off creating your own database of custom short documents. Add metadata to each document stating the points it makes about certain topics.
Next, I would create a list of citations which link to each document and add some metadata representing that expert's stance on the topic. When someone reads a document, I would augment its list of citations with links to documents that take alternative views on each topic.
Basically it would consist of these tables:
Document (id, data)
DocumentPoints (documentId, topic, stance)
Citation (documentId, topic, stance)
And when someone loads up a document, the citations are pulled up as well. For each citation, you search DocumentPoints for the same topic with a different stance. The most difficult part of this project would be creating the 5 or 6 documents you need to populate your database. After that, the solution is trivial.
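
If it helps to see that lookup in miniature, here is an illustrative in-memory Java stand-in for those tables (the record layout, names, and sample data are mine, invented for the example):

import java.util.*;

record DocumentPoint(int documentId, String topic, String stance) {}
record Citation(int documentId, String topic, String stance) {}

class OpposingViews {
    // For one citation, find documents taking a different stance on the same topic.
    static List<Integer> opposingDocuments(Citation c, List<DocumentPoint> points) {
        List<Integer> result = new ArrayList<>();
        for (DocumentPoint p : points)
            if (p.topic().equals(c.topic()) && !p.stance().equals(c.stance()))
                result.add(p.documentId());
        return result;
    }

    public static void main(String[] args) {
        List<DocumentPoint> points = List.of(
            new DocumentPoint(1, "digital-photography", "pro"),
            new DocumentPoint(2, "digital-photography", "con"),
            new DocumentPoint(3, "digital-photography", "pro"));
        Citation cited = new Citation(1, "digital-photography", "pro");
        System.out.println(opposingDocuments(cited, points)); // prints [2]
    }
}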
On a side note, most of these other answers are telling you to use some existing solution... don't do that unless the assignment tells you to. You'll be much better off understanding the problem and the various ways to solve it (this is definitely not the only or best one) if you work through the entire problem yourself. If the teacher asks you to do something not supported by whatever product you chose to build your solution on, you won't be able to fix it. If you had written it yourself, you could just as easily implement the new spec.
