Is an implementation of Iterative Closest Point (ICP) available in R?

Does someone have an implementation of Iterative Closest Point (ICP) algorithm for two dimensions (2D) in R?
Here is an attempt in C#:
Iterative Closest Point Implementation
And here is a more general question:
iterative closest point library
The goal is to match two sets of points through translation and scaling.

Spacedman's comment is probably best. You might also take a look at http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=12627&objectType=file for a matlab implementation. Assuming it works ok, translating Matlab to R code is relatively easy.

This is somewhat of an answer in the form of a non-answer.
There are many variants of ICP. The design choices are at least partially organized by the late 90's Ph.D. work of Pulli and by Rusinkiewicz & Levoy. If you're going to be using ICP for anything remotely important (translation: "more than just a class assignment"), you should understand the tradeoffs.
Thus, it's probably best to take one of the existing implementations and port it to R.

Three years too late, but there is the function icpmat in the package Morpho, by the same author who wrote Rvcg. I don't know which variant is implemented, though.
Link:
https://github.com/zarquon42b/Morpho

There is a self-contained (as far as I can tell) C++ implementation of ICP here. Maybe you can create your own R wrapper around this C++ code.
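If a rough pure-R version is enough to get you started, here is a minimal point-to-point ICP sketch for 2D: brute-force nearest-neighbour matching plus a Procrustes/Umeyama similarity fit (rotation, scale, translation) each iteration. It is a naive sketch of the textbook variant, not a tuned or robust implementation, and the function name is made up.
icp2d <- function(moving, fixed, iters = 30) {
  # moving, fixed: n x 2 and m x 2 matrices of 2D points
  for (k in seq_len(iters)) {
    # 1. brute-force nearest-neighbour correspondences
    nn <- apply(moving, 1, function(p) which.min(colSums((t(fixed) - p)^2)))
    target <- fixed[nn, , drop = FALSE]
    # 2. best similarity transform moving -> target (Umeyama/Procrustes)
    mu_m <- colMeans(moving); mu_t <- colMeans(target)
    A <- sweep(moving, 2, mu_m); B <- sweep(target, 2, mu_t)
    sv <- svd(t(A) %*% B)
    D <- diag(c(1, sign(det(sv$v %*% t(sv$u)))))   # guard against reflections
    R <- sv$v %*% D %*% t(sv$u)
    s <- sum(diag(D %*% diag(sv$d))) / sum(A^2)
    t_vec <- mu_t - s * as.vector(R %*% mu_m)
    # 3. apply the increment and iterate
    moving <- sweep(s * moving %*% t(R), 2, t_vec, "+")
  }
  moving   # the aligned copy of the moving point set
}
With real data you would add a convergence test and possibly outlier rejection; the Rusinkiewicz & Levoy paper mentioned above is the place to look for those design choices.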


CVX-esque convex optimization in R?

I need to solve (many times, for lots of data, alongside a bunch of other things) what I think boils down to a second order cone program. It can be succinctly expressed in CVX something like this:
cvx_begin
variable X(2000);
expression MX(2000);
MX = M * X;
minimize( norm(A * X - b) + gamma * norm(MX, 1) )
subject to
X >= 0
MX((1:500) * 4 - 3) == MX((1:500) * 4 - 2)
MX((1:500) * 4 - 1) == MX((1:500) * 4)
cvx_end
The data lengths and equality constraint patterns shown are just arbitrary values from some test data, but the general form will be much the same, with two objective terms -- one minimizing error, the other encouraging sparsity -- and a large number of equality constraints on the elements of a transformed version of the optimization variable (itself constrained to be non-negative).
This seems to work pretty nicely, much better than my previous approach, which fudges the constraints something rotten. The trouble is that everything else around this is happening in R, and it would be quite a nuisance to have to port it over to Matlab. So is doing this in R viable, and if so how?
This really boils down to two separate questions:
1) Are there any good R resources for this? As far as I can tell from the CRAN task page, the SOCP package options are CLSOCP and DWD, which includes an SOCP solver as an adjunct to its classifier. Both have similar but fairly opaque interfaces and are a bit thin on documentation and examples, which brings us to:
2) What's the best way of representing the above problem in the constraint block format used by these packages? The CVX syntax above hides a lot of tedious mucking about with extra variables and such, and I can just see myself spending weeks trying to get this right, so any tips or pointers to nudge me in the right direction would be very welcome...
You might find the R package CVXfromR useful. This lets you pass an optimization problem to CVX from R and returns the solution to R.
OK, so the short answer to this question is: there's really no very satisfactory way to handle this in R. I have ended up doing the relevant parts in Matlab with some awkward fudging between the two systems, and will probably migrate everything to Matlab eventually. (My current approach predates the answer posted by user2439686. In practice my problem would be equally awkward using CVXfromR, but it does look like a useful package in general, so I'm going to accept that answer.)
R resources for this are pretty thin on the ground, but the blog post by Vincent Zoonekynd that he mentioned in the comments is definitely worth reading.
The SOCP solver contained within the R package DWD is ported from the Matlab solver SDPT3 (minus the SDP parts), so the programmatic interface is basically the same. However, at least in my tests, it runs a lot slower and pretty much falls over on problems with a few thousand vars+constraints, whereas SDPT3 solves them in a few seconds. (I haven't done a completely fair comparison on this, because CVX does some nifty transformations on the problem to make it more efficient, while in R I'm using a pretty naive definition, but still.)
Another possible alternative, especially if you're eligible for an academic license, is to use the commercial Mosek solver, which has an R interface package Rmosek. I have yet to try this, but may give it a go at some point.
(As an aside, the other solver bundled with CVX, SeDuMi, fails completely on the same problem; the CVX authors aren't kidding when they suggest trying multiple solvers. Also, in a significant subset of cases, SDPT3 has to switch from Cholesky to LU decomposition, which makes the processing orders of magnitude slower, with only a very marginal improvement in the objective compared to the pre-LU steps. I've found it worth reducing the requested precision to avoid this, but YMMV.)
There is a new alternative: CVXR, which comes from the same people.
There is a website, a paper and a github project.
Disciplined convex programming seems to be growing in popularity, judging by cvxpy (Python) and Convex.jl (Julia), again backed by the same people.
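For what it's worth, the CVX problem in the question translates almost line for line into CVXR. This is an untested sketch that assumes A, b, M and gamma are already defined in the workspace; check the CVXR documentation for the exact indexing and solver options.
library(CVXR)
X  <- Variable(2000)
MX <- M %*% X                       # affine expression, no separate declaration needed
idx <- (1:500) * 4
objective   <- Minimize(p_norm(A %*% X - b, 2) + gamma * p_norm(MX, 1))
constraints <- list(X >= 0,
                    MX[idx - 3] == MX[idx - 2],
                    MX[idx - 1] == MX[idx])
result <- solve(Problem(objective, constraints))
x_hat  <- result$getValue(X)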

Which packages make good use of S4 objects?

Which R packages make good use of S4 classes? I'm looking for packages that use S4 appropriately (i.e. when the complexity of the underlying problem demands), are well written and well documented (so you can read the code and understand what's going on).
I'm interested because I'll be teaching S4 soon and I'd like to point students to good examples in practice so they can read the code to help them learn.
Thinking about this some more, maybe Matrix and/or lme4? Matrix does a lot of trickery with efficient representation of sparse matrices so this may be a worthwhile (though possibly heavy) example.
Otherwise, given that essentially all of Bioconductor is written in S4, some of it is bound to be better than average :) I am sure Martin Morgan will chime in with good examples.
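To see why Matrix is a good live example, here is a quick poke at its class machinery (a small sketch; the exact class you get back depends on the inputs):
library(Matrix)
m <- sparseMatrix(i = c(1, 3, 5), j = c(2, 4, 1), x = c(1.5, -2, 7))
class(m)                        # "dgCMatrix", an S4 class
isVirtualClass("sparseMatrix")  # TRUE: virtual classes organise the hierarchy
getSlots("dgCMatrix")           # the slots behind the compressed-column storage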
This doesn't exactly answer your question, but....
R in a Nutshell develops an S4 class for a timeseries object and then compares it to the S3 representation. It's a very nice illustration (without being overly complex or too simple) of the differences between S3 and S4.
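Not the book's actual code, but a minimal flavour of what such an S4 time-series class looks like (slot types kept deliberately simple):
setClass("TimeSeries",
         slots = c(data = "numeric", start = "character", period = "numeric"))
setGeneric("period", function(x) standardGeneric("period"))
setMethod("period", "TimeSeries", function(x) x@period)
setMethod("show", "TimeSeries", function(object)
  cat(length(object@data), "observations starting", object@start,
      "every", object@period, "seconds\n"))
ts1 <- new("TimeSeries", data = rnorm(6),
           start = "2024-01-01 00:00:00", period = 60)
period(ts1)
ts1   # dispatches to the show() method above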
R Programming for Bioinformatics briefly discusses the ExpressionSet object.
As regards the Bioconductor packages, you might find that to fully appreciate the code - or even just to get started - you need a reasonable knowledge of biology. I suppose the same applies to complex statistics packages; you need to have a vague idea of what's going on to understand the reasons behind the code structure.
At the last LondonR meeting Brandon Whitcher gave a fascinating talk about the use of S4 classes in his package dcemriS4, for analysing magnetic resonance imaging (MRI) data in medical research.
His talk is available here:
http://www.londonr.org/Medical%20Image%20Analysis%20using%20S4%20classes%20&%20methods.pdf
And the package is on CRAN:
http://star-www.st-andrews.ac.uk/cran/web/packages/dcemriS4/index.html
sp and its dependent packages use S4 and are well documented. They are the alpha and omega for spatial stuff in R.
I would go for kernlab, which additionally includes a lot of C code.
It comes with a handy vignette detailing some of the S4 concepts. (It doesn't seem to use roxygen for the documentation, but that is not the question here.)
While trying to get a hold of the S4 system I ran across an educational package sequence. The implementation of the class system is illustrated in an accompanying set of slides in a repo by the same author. Though the example used is from biostatistics, it is easy to follow.
It is a great learning resource, because the author carefully contrasted the different object systems while keeping the complexity of the package appropriate for learning.

Quickly cross-check complex math results?

I am doing matrix operations on large matrices in my C++ program. I need to verify the results that I get, I used to use WolframAlpha for the task up until now. But my inputs are very large now, and the web interface does NOT accept such large values (textfield is limited).
I am looking for a better solution to quickly cross-check/do math problems.
I know there is MATLAB, but I have never used it, and I don't know whether it will meet my needs or how steep the learning curve would be.
Is this the time to make the jump, or are there other solutions?
If you don't mind using python, numpy might be an option.
License costs aside, MATLAB is the state-of-the-art numerical math tool. Octave is a free, open-source alternative with a similar syntax. For both tools the learning curve is quite gentle.
WolframAlpha is a web interface to Wolfram Mathematica, and the command syntax is the same. If you have access to Mathematica at your university, it would be the smoothest choice for you, since you already have experience with WolframAlpha.
You may also try some packages to convert Mathematica commands to MATLAB:
ToMatlab
Mathematica Symbolic Toolbox for MATLAB 2.0
Let us know in more detail what your validation process is: what does your data look like, and which commands have you used in WolframAlpha? Then we can help you with a MATLAB alternative.

How to describe a MATLAB function using mathematical notation?

Basically I have created two MATLAB functions which involve some basic signal processing and I need to describe how these functions work in a written report. It specifically requires me to describe the algorithms using mathematical notation.
Maths really isn't my strong point at all, in fact I'm quite surprised I've even been able to develop the functions in the first place. I'm quite worried about the situation at the moment, it's the last section of writing I need to complete but it is crucially important.
What I want to know is whether I'm going to have to grab a book and teach myself mathematical notation in a very short space of time or is there possibly an easier/quicker way to learn? (Yes I know reading a book should be simple enough, but maths + short time frame = major headache + stress)
I've searched through some threads on here already but I really don't know where to start!
Although your question is rather vague, and I have no idea what sorts of algorithms you have coded that you are trying to describe in equation form, here are a few pointers that may help:
Check the MATLAB documentation: If you are using built-in MATLAB functions, they will sometimes give an equation in the documentation that describes what they are doing internally. Some examples are the functions CONV, CORRCOEF, and FFT. If the function is rather complicated, it may not have an equation but instead have links to some papers describing the algorithm, which may themselves have equations for the algorithm. An example is the function HILBERT (which you can also find equations for on Wikipedia).
Find some lists of common mathematical symbols: Some standard symbols used to represent common mathematical operations can be found here.
Look at some sample pseudocode to see how it's done: For algorithms you yourself have coded up, you'll have to write them out in equation or pseudocode form. A paper that I've used often in my work is Templates for the Solution of Linear Systems, and it has some examples of pseudocode that may be helpful to you. I would suggest first looking at the list of symbols used in that paper (on page iv) to see some typical notations used to represent various mathematical operations. You can then look at some of the examples of pseudocode throughout the rest of the document, such as in the box on page 8.
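As a concrete instance of the first pointer above: the discrete convolution computed by CONV is normally written (in LaTeX) as
\[
  y[n] \;=\; (x * h)[n] \;=\; \sum_{k=-\infty}^{\infty} x[k]\, h[n-k]
\]
which is the kind of one-line equation, plus a sentence of prose, that a report usually needs for each processing step.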
I suggest that you learn a little bit of LaTeX and investigate Matlab's publish feature. You only need to learn enough LaTeX to write mathematical expressions. Then you have to write Matlab comments in your source file in LaTeX, but only for the bits you want to look like high-quality maths. Finally, open the Matlab editor on your .m file, and select File | Publish.
See Very Quick Intro to LaTeX and check your Matlab documentation for publish.
In addition to the answers already here, I would strongly advise using words in addition to formulae in your report to describe the maths that you are presenting.
If I were marking a student's report and they explained the concepts of what they were doing correctly, but had poor or incorrect mathematical notation to back it up: this would lose them some marks, but would hopefully not impede my understanding of the hard work they've put in.
If they had poor/wrong maths, with no explanation of what they meant to say, this could jeopardise my understanding of their entire project and cost them a passing grade.
The reason you haven't found any useful threads is because most of the time, people are trying to turn maths into algorithms, not vice versa!
Starting from an arbitrary algorithm, sometimes pseudo-code, along with suitable comments, is the clearest (and possibly only) representation.

In what areas of programming is a knowledge of mathematics helpful? [closed]

For example, mathematical logic, graph theory.
Everyone around me tells me that math is necessary for a programmer. I saw a lot of threads where people say that they used linear algebra and some other math, but no one described concrete cases where they used it.
I know that there are similar threads, but I couldn't find any description of such a case.
Computer graphics.
It's all matrix multiplication, vector spaces, affine spaces, projection, etc. Lots and lots of algebra.
For more information, here's the Wikipedia article on projection, along with the more specific case of 3D projection, with all of its various matrices. OpenGL, a common computer graphics library, is an example of applying affine matrix operations to transform and project objects onto a computer screen.
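A toy illustration of the "it's all matrix multiplication" point (shown in R simply because it's handy here; the numbers are arbitrary): rotating and translating a set of 2D points with a single 3x3 homogeneous matrix.
theta <- pi / 6
transform <- rbind(c(cos(theta), -sin(theta), 2),   # rotate by 30 degrees...
                   c(sin(theta),  cos(theta), 1),   # ...then translate by (2, 1)
                   c(0,           0,          1))
pts   <- rbind(c(0, 1, 1, 0), c(0, 0, 1, 1), 1)     # unit square, homogeneous coords
moved <- transform %*% pts
moved[1:2, ]                                        # transformed x/y coordinates
Graphics pipelines do exactly this, just in 4x4 form and millions of times per frame.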
I think that a lot of programmers use more math than they think they do. It's just that it comes so intuitively to them that they don't even think about it. For instance, every time you write an if statement are you not using your Discrete Math knowledge?
In the graphics world you need a lot of transformations.
In cryptography you need geometry and number theory.
In AI, you need algebra.
And statistics in financial environments.
Computer science theory needs mathematical theory: in fact, almost all of its founders came from mathematics.
Given a list of locations with latitudes and longitudes, sort the list in order from closest to farthest from a specific position.
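One way to do that in practice, sketched in R (the column names and the 6371 km Earth radius are assumptions; any language with trig functions works the same way): compute the great-circle (haversine) distance to the reference point and order by it.
haversine_km <- function(lat1, lon1, lat2, lon2, r = 6371) {
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 +
       cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  2 * r * asin(pmin(1, sqrt(a)))
}
locations <- data.frame(name = c("A", "B", "C"),
                        lat  = c(51.5, 48.9, 40.7),
                        lon  = c(-0.1, 2.4, -74.0))
here <- c(lat = 52.5, lon = 13.4)
locations[order(haversine_km(here["lat"], here["lon"],
                             locations$lat, locations$lon)), ]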
All applications that deal with money need math.
I can't think of a single app that I have written that didn't require math at some point.
I wrote a parser compiler a few months back, and that's full of graph-theory. This was only designed to be slightly more powerful than regular expressions (in that multiple matches were allowed, and some other features were added), but even such a simple compiler requires loop detection, finite state automata, and tons more math.
Implementing the Advanced Encryption Standard (AES) algorithm required some basic understanding of finite field math. See act 4 of my blog post on it for details (code sample included).
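As a flavour of what that finite-field math looks like, here is the standard GF(2^8) multiply AES relies on (the "xtime"/Russian-peasant method, sketched in R; not taken from that blog post, and inputs are assumed to be integers 0-255):
gf_mul <- function(a, b) {
  p <- 0L
  for (i in 1:8) {
    if (bitwAnd(b, 1L)) p <- bitwXor(p, a)    # add a if the low bit of b is set
    hi <- bitwAnd(a, 0x80L)
    a <- bitwAnd(bitwShiftL(a, 1L), 0xFFL)    # multiply a by x
    if (hi) a <- bitwXor(a, 0x1BL)            # reduce by x^8 + x^4 + x^3 + x + 1
    b <- bitwShiftR(b, 1L)
  }
  p
}
gf_mul(0x57L, 0x83L)   # 193 (0xC1), the worked example from FIPS-197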
I've used a lot of algebra when writing business apps.
Simple Examples
BMI = weight / (height * height);
compensation = 10 * hours * ((pratio * 2.3) + tratio);
A few years ago, I had a DSP project that had to compute a real radix-2 FFT of size N, in a given time. The vendor-supplied real radix-2 FFT wouldn't run in the allocated time, but their complex FFT of size N/2 would. It is easy to feed the real data into the complex FFT. Getting the answers out afterwards is not so easy: it is called post-weaving, or post-unweaving, or unweaving. Deriving the unweave equations from the FFT and complex number theory was not fun. Going from there to tightly-optimized DSP code was equally not fun.
Naturally, the signal I was measuring did not match the FFT sample size, which causes artifacts. The standard fix is to apply a Hanning window. This causes other artifacts. As part of understanding (and testing) that code, I had to understand the artifacts caused by the Hanning window, so I could interpret the results and decide whether the code was working or not.
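The same issue is easy to reproduce in a few lines of R, if you want to see the artifacts rather than take my word for it (a toy signal, not the DSP project's code):
N <- 256
n <- 0:(N - 1)
x <- sin(2 * pi * 10.3 * n / N)                 # frequency deliberately off an FFT bin
w <- 0.5 * (1 - cos(2 * pi * n / (N - 1)))      # Hann(ing) window
raw      <- abs(fft(x))[1:(N / 2)]              # leaks across many bins
windowed <- abs(fft(x * w))[1:(N / 2)]          # less leakage, but a wider main lobe
# compare with: plot(raw, type = "l"); lines(windowed, col = "red")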
I've used tons of math in various projects, including:
Graph theory for dealing with dependencies in large systems (e.g. a Makefile is a kind of directed graph)
Statistics and linear regression in profiling performance bottlenecks
Coordinate transformations in geospatial applications
In scientific computing, project requirements are often stated in algebraic form, especially for computationally intensive code
And that's just off the top of my head.
And of course, anything involving "pure" computer science (algorithms, computational complexity, lambda calculus) tends to look more and more like math the deeper you go.
In answering this image-comparison-algorithm question, I drew on lots of knowledge of math, some of it from other answers and web searches (where I had to apply my own knowledge to filter the information), and some from my own engineering training and lengthy programming background.
General Mindforming
Solving Problems - One fundamental method of math, independent of the area, is transforming an unknown problem into a known one. Even if you don't have the same problems, you need the same skill. In math, as in programming, virtually everything has different representations. Understanding the equivalence between algorithms, problems or solutions that are completely different on the surface helps you avoid the hard parts.
(A similar thing happens in physics: to solve a kinematic problem, choice of the coordinate system is often the difference between one and ten pages full of formulas, even though problem and solution are identical.)
Precision of Language / Logical reasoning - Math has a very terse yet precise language. Learning to deal with that will prepare you for computers doing what you say, not what you meant. Also, the same precision is required to analyse if a specification is sufficient, to check a piece of code if it covers all possible cases, etc.
Beauty and elegance - This may be the argument that's hardest to grasp. I found the notion of "beauty" in code is very close to the one found in math. A beautiful proof is one whose idea is immediately convincing, and the proof itself merely executes a sequence of obvious next steps.
The same goes for an elegant implementation.
(Most mathematicians I've encountered have a weakness for putting the "Aha!" effect at the end rather than at the beginning. As do most elite geeks.)
You can learn these skills without a single math lesson, of course. But math has perfected this for centuries.
Applied Skills
Examples:
- Not having to run calc.exe for a quick estimation of memory requirements
- Some basic statistics to tell a valid performance measurement from a shot in the dark
- Deducing a formula for a sequence of values, rather than hardcoding them
- Getting a feeling for what c*O(N log N) means
- Recursion is the same as proof by induction
(That list would probably go on if I actively watched myself for items for a day. This part is admittedly harder than I thought. Further suggestions welcome ;))
Where I use it
The company I work for does a lot of data acquisition, and our claim to fame (compared to our competition) is the brain muscle that goes into extracting something useful out of the data. While I'm mostly unconcerned with that, I get enough math thrown my way. Before that, I implemented and validated random number generators for statistical applications, implemented a differential equation solver, and wrote simulations for selected laws of physics. And probably more.
I wrote some hash functions for mapping airline codes and flight numbers with good efficiency into a fairly limited number of data slots.
I went through a fair number of primes before finding numbers that worked well with my data. Testing required some statistics and estimates of probabilities.
In machine learning: we use Bayesian (and other probabilistic) models all the time, and we use quadratic programming in the form of Support Vector Machines, not to mention all kinds of mathematical transformations for the various kernel functions. Calculus (derivatives) factors into perceptron learning. Not to mention a whole theory of determining the accuracy of a machine learning classifier.
In artificial intelligence: constraint satisfaction and logic weigh very heavily.
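To make the perceptron point concrete, here is the textbook update rule on a linearly separable toy problem (a sketch in R, not anyone's production ML code):
set.seed(1)
X <- cbind(1, matrix(rnorm(200), ncol = 2))   # bias column plus two features
y <- ifelse(X[, 2] + X[, 3] > 0, 1, -1)       # the "true" separating rule
w <- rep(0, 3)
for (epoch in 1:25) {
  for (i in sample(nrow(X))) {
    if (y[i] * sum(w * X[i, ]) <= 0) {        # misclassified?
      w <- w + y[i] * X[i, ]                  # nudge the decision boundary
    }
  }
}
mean(sign(X %*% w) == y)                      # training accuracy, should be ~1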
I was using coordinate geometry to solve the problem of finding the visible parts of a stack of windows that only partially overlap one another.
There are many other situations, but this is the one off the top of my head. Inherently, all the operations we perform are mathematics, or at least depend on or relate to mathematics.
That's why it's important to know mathematics, to have a clearer understanding of things :)
In fact, in some cases so much math has gone into our common sense that we don't notice we are using math to solve a particular problem, since we have been using it for so long!
-Graphics (matrices, translations, shaders, integral approximations, curves, etc., etc. ... infinite dots)
-Algorithmic complexity calculations (especially in line-of-business applications)
-Pointer arithmetic
-Cryptography over finite-field arithmetic, etc.
-GIS (triangulation algorithms such as Delaunay, bounding boxes, and many more)
-Performance-monitor counters and the functions they describe
-Functional programming (simply that, not saying more :))
-......
I used combinatorics to stuff 20 bits of data into 14 bits of space.
Machine Vision or Computer Vision requires a thorough knowledge of probability and statistics. Object detection/recognition and many supervised segmentation techniques are based on Bayesian inference. Heavy on linear algebra too.
As an engineer, I'm trying really hard to think of an instance when I did not need math. Same story when I was a grad student. Granted, I'm not a programmer, but I use computers a lot.
Games and simulations need lots of maths - fluid dynamics, in particular, for things like flames, fog and smoke.
As an e-commerce developer, I have to use math every day for programming. At the very least, basic algebra.
There are other apps I've had to write for vector based image generation that require a strong knowledge of Geometry, Calculus and Trigonometry.
Then there is bit-masking...
Converting hexadecimal to base ten in your head...
Estimating load potential of an application...
Yep, if someone is no good with math, they're probably not a very good programmer.
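For instance, the bit-masking and hex-conversion items above are everyday one-liners once the underlying math is familiar (R shown here purely as an example):
flags <- strtoi("2F", base = 16L)   # hex 2F -> decimal 47
bitwAnd(flags, 0x08L) != 0          # is bit 3 set? TRUE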
Modern communications would completely collapse without math. If you want to make your head explode sometime, look up Galois fields, error correcting codes, and data compression. Then symbol constellations, band-limited interpolation functions (I'm talking about sinc and raised-cosine functions, not the simple linear and bicubic stuff), Fourier transforms, clock recovery, minimally-ambiguous symbol training sequences, Rayleigh and/or Ricean fading, and Kalman filtering. All of those involve math that makes my head hurt bad, and I got a Masters in Electrical Engineering. And that's just off the top of my head, from my wireless communications class.
The amount of math required to make your cell phone work is huge. To make a 3G cell phone with Internet access is staggering. To prove with sufficient confidence that an algorithm will work in most all cases sometimes takes people's careers.
But... if you're only ever going to work with this stuff as black boxes imported from a library (at their mercy, really), well, you might get away with just knowing enough algebra to debug mismatched parentheses. And there are a lot more of those jobs than the hard ones... but at the same time, the hard jobs are harder to find a replacement for.
Examples that I've personally coded:
wrote a simple video game where one spaceship shoots a laser at another ship. To know if the ship was in the laser's path, I used basic algebra y=mx+b to calculate if the paths intersect. (I was a child when I did this and was quite amazed that something that was taught on a chalkboard (algebra) could be applied to computer programming.)
calculating mortgage balances and repayment schedules with logarithms
analyzing consumer buying choices by calculating combinatorics
trigonometry to simulate camera lens behavior
Fourier Transform to analyze digital music files (WAV files)
stock market analysis with statistics (linear regressions)
using logarithms to understand binary search traversals and also disk space savings when using packing information into bit fields. (I don't calculate logarithms in actual code, but I figure them out during "design" to see if it's feasible to even bother coding it.)
None of my projects (so far) have required topics such as calculus, differential equations, or matrices. I didn't study mathematics in school but if a project requires math, I just reference my math books and if I'm stuck, I search google.
Edited to add: I think it's more realistic for some people to have a programming challenge motivate the learning of particular math subjects. Others enjoy math for its own sake and can learn it ahead of time to apply to future programming problems. I'm of the first type. For example, I studied logarithms in high school but didn't understand their power until I started programming, and all of a sudden they seemed to pop up all over the place.
The recurring theme I see from these responses is that this is clearly context-dependent.
If you're writing a 3D graphics engine then you'd be well advised to brush up on your vectors and matrices. If you're writing a simple e-commerce website then you'll get away with basic algebra.
So depending on what you want to do, you may not need any more math than you did to post your question(!), or you might conceivably need a PhD (e.g. to write a custom geometry kernel for turbine fan blade design).
One time I was writing something for my Commodore 64 (I forget what, I must have been 6 years old) and I wanted to center some text horizontally on the screen.
I worked out the formula using a combination of math and trial-and-error; years later I would tackle such problems using actual algebra.
Drawing, moving, and guidance of missiles and guns and lasers and gravity bombs and whatnot in this little 2d video game I made: wordwarvi
Lots of use of sine/cosine and their inverses (via lookup tables... I'm old, OK?)
Any geo based site/app will need math. A simple example is "Show me all Bob's Pizzas within 10 miles of me" functionality on a website. You will need math to return lat/lons that occur within a 10 mile radius.
This is primarily a question whose answer will depend on the problem domain. Some problems require oodles of math and some require only addition and subtraction. Right now, I have a pet project which might require graph theory, not for the math so much as to get the basic vocabulary and concepts in my head.
If you're doing flight simulations and anything 3D, say hello to quaternions! If you're doing electrical engineering, you will be using trig and complex numbers. If you're doing a mortgage calculator, you will be doing discrete math. If you're doing an optimization problem, where you attempt to get the most profits from your widget factory, you will be doing what is called linear programming. If you are doing some operations involving, say, network addresses, welcome to the kind of bit-focused math that comes along with it. And that's just for the high-level languages.
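A tiny instance of that bit-focused math for network addresses, sketched in R: deciding whether an IPv4 address falls inside a CIDR block by dropping the host bits (integer division stands in for a right shift here, to stay clear of R's 32-bit signed bit operations).
ip_to_int <- function(ip) {
  octets <- as.integer(strsplit(ip, ".", fixed = TRUE)[[1]])
  sum(octets * 256^(3:0))
}
in_cidr <- function(ip, network, bits) {
  shift <- 2^(32 - bits)                       # e.g. /24 keeps the top 24 bits
  ip_to_int(ip) %/% shift == ip_to_int(network) %/% shift
}
in_cidr("192.168.1.37", "192.168.1.0", 24)     # TRUE
in_cidr("192.168.2.37", "192.168.1.0", 24)     # FALSE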
If you are delving into highly-optimized data structures and implementing them yourself, you will probably do more math than if you were just grabbing a library.
Part of being a good programmer is being familiar with the domain in which you are programming. If you are working on software for Fidelity Mutual, you probably would need to know engineering economics. If you are developing software for Gallup, you probably need to know statistics. LucasArts... probably Linear Algebra. NASA... Differential Equations.
The thing about software engineering is you are almost always expected to wear many hats.
More or less anything having to do with finding the best layout, optimization, or object relationships is graph theory. You may not immediately think of it as such, but regardless - you're using math!
An explicit example: I wrote a node-based shader editor and optimizer, which took a set of linked nodes and converted them into shader code. Finding the correct order to output the code in such that all inputs for a certain node were available before that node needed them involved graph theory.
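That node-ordering step is a topological sort. Here is a minimal version of Kahn's algorithm in R over a made-up edge list (a sketch, not the shader editor's actual code):
topo_sort <- function(edges, nodes) {
  indeg <- setNames(integer(length(nodes)), nodes)
  tab <- table(edges$to)
  indeg[names(tab)] <- as.integer(tab)
  order <- character(0)
  ready <- names(indeg)[indeg == 0]            # nodes with no unmet inputs
  while (length(ready) > 0) {
    n <- ready[1]; ready <- ready[-1]
    order <- c(order, n)
    for (m in edges$to[edges$from == n]) {     # release this node's consumers
      indeg[m] <- indeg[m] - 1
      if (indeg[m] == 0) ready <- c(ready, m)
    }
  }
  if (length(order) < length(nodes)) stop("cycle detected")
  order
}
edges <- data.frame(from = c("uv", "noise", "mix"),
                    to   = c("noise", "mix", "output"))
topo_sort(edges, c("uv", "noise", "mix", "output"))   # "uv" "noise" "mix" "output"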
And like others have said, anything having to do with graphics implicitly requires knowledge of linear algebra, coordinate spaces transformations, and plenty of other subtopics of mathematics. Take a look at any recent graphics whitepaper, especially those involving lighting. Integrals? Infinite series?! Graph theory? Node traversal optimization? Yep, all of these are commonly used in graphics.
Also note that just because you don't realize that you're using some sort of mathematics when you're writing or designing software, doesn't mean that you aren't, and actually understanding the mathematics behind how and why algorithms and data structures work the way they do can often help you find elegant solutions to non-trivial problems.
In years of webapp development I didn't have much need for the Math API. As far as I can recall, I have only ever used Math#min() and Math#max() from the Math API.
For example
if (i < 0) {
i = 0;
}
if (i > 10) {
i = 10;
}
can be done as
i = Math.max(0, Math.min(i, 10));
