Efficient free/open-source SOCP (second order cone programming) solvers [closed] - convex-optimization

I am looking for a recommendation (or comparison) of solvers for second order cone programming with regard to evaluation speed. The solver must be free for non-profit use or open source.
I am fairly open regarding the environment: stand-alone solutions, libraries, Matlab, Python, R, etc. are all acceptable.
My problem has significant sparsity in the constraints which I believe can be exploited by good solvers to speed up the calculation.

As you probably know, cvxpy uses either cvxopt or ecos as its solver.
I've used ecos only a tiny bit, for LP rather than cones (about 3x faster than cvxopt on one test case).
It's roughly 5k lines of C plus a Python wrapper, and it works entirely in scipy.sparse.csc format; might be worth a look.
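For reference, here is a minimal CVXPY sketch of a small SOCP (the minimum-norm solution of an underdetermined linear system) handed to ECOS; the data is purely illustrative, and the solver keyword assumes a reasonably recent CVXPY with ecos installed:
import cvxpy as cp
import numpy as np

# Toy SOCP: minimize ||x||_2 subject to A x = b (minimum-norm solution).
np.random.seed(0)
A = np.random.randn(5, 10)
b = np.random.randn(5)

x = cp.Variable(10)
prob = cp.Problem(cp.Minimize(cp.norm(x, 2)), [A @ x == b])
prob.solve(solver=cp.ECOS)   # or cp.CVXOPT / cp.SCS, depending on what is installed
print(prob.status, prob.value)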

You might want to take a look at the benchmarks maintained at
http://plato.la.asu.edu/bench.html
where you can find both SOCP and QP tests of various sizes. Most solvers provide several interfaces, so no issues there. For a list of solvers, look here:
http://en.wikipedia.org/wiki/Second-order_cone_programming
I am not sure it is complete, but you can start from there.
In my experience, for large problems commercial solvers such as MOSEK and CPLEX give much better performance and stability; of course I am biased, as you might imagine given my username.
Remember that most commercial vendors nowadays can provide either an academic or a trial license. This can be handy for tests and comparisons.
In my opinion, you may consider leaving the choice of solver to the user. It is a little more work, but it gives much more flexibility to you and to the user (see the sketch at the end of this answer). You can draw some inspiration from these COIN-OR projects:
Ipopt: https://projects.coin-or.org/Ipopt
Cbc: https://projects.coin-or.org/Cbc
I suggest you use a commercial solver to come up with a formulation that such a solver can handle as fast as you need; that gives you a baseline against which to compare the others. If you have some nice large-scale problems you need help with, you can contact us at mosek.com.
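As for leaving the solver choice to the user, here is one way it might look with CVXPY; the helper function and its behaviour are my own illustration, not part of any library:
import cvxpy as cp

def solve_with(prob, solver_name="ECOS"):
    # Hypothetical helper: let the caller pick any solver CVXPY can see.
    if solver_name not in cp.installed_solvers():
        raise ValueError("solver %s not installed; available: %s"
                         % (solver_name, cp.installed_solvers()))
    prob.solve(solver=solver_name)
    return prob.status, prob.value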

In addition to CVXPY (http://www.cvxpy.org/), you might also consider QCML (https://github.com/cvxgrp/qcml), which generates C code specific to your problem.
CVXPY has been improving very rapidly. The issues described in "Check constraints are ok in cvxpy with actual values" are from a totally obsolete version. Assuming your problem isn't too large (fewer than a million variables), CVXPY will probably meet your needs. Even with large problems you can use the SCS solver through CVXPY to get a solution quickly (though somewhat less accurately).

There is also SCS (Splitting Conic Solver), released under the MIT licence; it is written in C and has interfaces for several languages (Python, R, ...).

Related

Julia's speed advantage over R [closed]

I have been using R for my research in corporate finance and asset pricing, and I really like it given my background in math and statistics. So far I have run into two main constraints in R. The first is handling big data files, but I have largely circumvented that by combining R with PostgreSQL and Spark, and I can get more RAM from a high-performance computer or the AWS cloud in the future. The second constraint is execution speed (important for handling tick-by-tick security quote data), and it was suggested to me that Julia has a huge speed advantage over R. My question: since Rcpp offers really fast execution, does Julia's speed advantage still hold? I am considering whether I should learn Julia.
In addition, R provides excellent database connections to WRDS, Quandl, TrueFX, and TAQ, and I am really used to Hadley Wickham-style data cleaning. As an academic, I also like the fact that R has support from peer-reviewed journals like the Journal of Statistical Software. I will try Julia and see how it works. Thanks for all the answers and comments!
Rcpp and Julia will get you to the same place in the end performance-wise. In fact, type-stable Julia will compile to essentially the same LLVM IR as clang-compiled C++. Design-wise there's nothing that stops it from being the same (in the type-stable case), other than a few missing optimizations because the language is young (for example, @fastmath doesn't add FMA by default, so you'd have to add FMA calls yourself, whereas I believe C++ compiled with fast-math will use FMA). But you can play around and check that @code_llvm and @code_native output the same code, given type stability.
However, Rcpp would require that you write a bunch of C++ code and test/maintain that code along with your R code. C/C++ is much lower level and can be more difficult to maintain (the "two language problem"). If you choose to go with Julia, you can write it all in Julia. That's the main difference.
(The whole "Julia is 2x slower than C" issue should probably be mentioned here. Normally it's due to having small parts of type-unstable code, not turning off array bounds checking with @inbounds (the language comparison from the comments notably doesn't do this, which can cause a pretty large difference in a tight loop), and relying on vectorization styles (a la R/MATLAB/Python). The last part is much better in Julia v0.6, but it will always have a small cost over looping. In the end, it's opt-in/opt-out choices for concise code and additional safety checks that cause the difference.)

Should Maths Generally Be Used Over Other Functions/ Statements [closed]

In quite a few of my more recent programs, I've been using basic maths to replace if statements. For example, in a for loop I've used
pos = cbltSz*(x-1)
to get the position of a small cube relative to a large one, rather than writing something like if(x == 0){pos = -cbltSz}. This is more or less to neaten up the code a little. But it got me thinking: to what extent would using maths outperform predefined statements/functions, and how much would that vary from language to language? This assumes the maths I use is preferable to the alternative in some way other than aesthetics.
Modern CPUs have deep pipelines, so a branch misprediction can carry a considerable performance cost. On the other hand, they tend to have considerable floating-point compute power, often being able to perform multiple floating-point operations in parallel on different ALUs. So there might be a performance benefit to this approach. But it all depends. If the application is already doing a lot of number crunching, the ALUs might be saturated. If the branch predictor does a good job, branch mispredictions may be rare in many applications.
So I'd go with the usual rules for optimization: don't try to hand-optimize everything. Don't optimize at the cost of readability and maintainability. If you have a performance bottleneck, identify the hot portions of your codebase and consider alternatives for those. Try out the alternatives in benchmarks, because theoretical considerations only get you so far.
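If you want to benchmark this particular question, here is a minimal sketch with Python's timeit (assuming, unlike the original snippet, that both forms are meant to produce the same value for every x):
import timeit

cbltSz = 3

def pos_branch(x):
    # branchy version: special-case x == 0
    if x == 0:
        return -cbltSz
    return cbltSz * (x - 1)

def pos_math(x):
    # arithmetic version: one multiply, no branch
    return cbltSz * (x - 1)

for fn in ("pos_branch", "pos_math"):
    t = timeit.timeit("[%s(x) for x in range(100)]" % fn,
                      globals=globals(), number=10000)
    print(fn, t)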
Note that your statements pos = cbltSz*(x-1) and if(x == 0){pos = -cbltSz} are not equivalent when x is non-zero: the first sets pos to a definite value while the second leaves it at its previous value. The significance of this difference depends on the rest of your program. The two statements also differ in clarity: the second expresses your purpose better, and the first does not really "neaten up the code a little bit". In most programs, improved clarity is more important than a slight increase in speed.
So the answer to your question depends on too many factors to get a full answer here.
What most early programming language designers didn't understand, and many still don't, is that mathematical functions are a bit different from user-defined functions. If we've done any trig at all, we all know what sin(PI/8) is going to mean, and we're happy embedding the call in expressions like
rx = cos(theta) * x - sin(theta) * y;
But functions you write yourself are only seldom like basic mathematical functions. They take several parameters, they return several results, and it's usually not entirely clear what they do. The last thing you want is to embed them in complicated expressions.
Secondly, maths has its own system of notation for a reason. The cut-down, ASCII-only notation of a programming language breaks down as soon as expressions go above a certain, low complexity. (I use the rule of three: three levels of nested parentheses are all your user can take in.) And, without special programming support, programming functions cannot escape their domain.
pow(sqrt(-1), sqrt(-1));
won't do what you want unless you have a complex math library installed.
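For instance, in Python the same expression works once you switch to the built-in complex math support:
import cmath

i = cmath.sqrt(-1)   # 1j with complex support; an error in plain real arithmetic
print(i ** i)        # roughly 0.2079 + 0j, i.e. exp(-pi/2)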
As for performance, some Fortran and C compilers optimise mathematical routines very aggressively, with constant propagation and the like. Others won't. It just depends.

Beginners guide to own CFD code? 2D Euler Equation [closed]

Do you know a good and especially easy guide to code one's own Computational Fluid Dynamics solver, for the 2D Euler equations?
I just want to understand what commercial software like Fluent is doing, and if it's easy enough I would like to show some friends how to do it and code it themselves.
Unfortunately I couldn't figure out how to translate this http://en.wikipedia.org/wiki/Euler_equations_%28fluid_dynamics%29 into a numerical application.
Has anyone done this before? Any help is appreciated,
Andreas
Yes, lots of people have done it before.
The trick is to write conservation laws for mass, momentum, and energy as integral equations and turn them into matrix equations so you can solve them numerically. The transformation process usually involves discretizing a control volume using simple shapes like triangles and quadrilaterals for 2D and tetrahedra and bricks for 3D and assuming distributions of pertinent variables within the shape.
You'll need to know a fair amount about linear algebra, and numerical integration if the problem is transient.
There are several techniques for doing it: finite differences, finite elements, and boundary elements (if a suitable Green's function exists).
It's not trivial. You'll want to read something like this:
http://www.amazon.com/Numerical-Transfer-Hemisphere-Computational-Mechanics/dp/0891165223
This book:
http://www.amazon.com/Computational-Fluid-Dynamics-John-Anderson/dp/0070016852
is a pretty straightforward, simple description of what it takes to write a CFD code. It's suitable for an undergraduate level intro with more practical examples than theory.
Your six-year-old question is still a common one among Computational Fluid Dynamics (CFD) newcomers ("How hard can this be?"). However, one must be careful at this stage not to trivialize the math behind solving a given system of equations.
To those new to (or interested) in CFD -
Before you start thinking about coding, it is important to understand the nature of the equations you are trying to solve. An elliptic problem (like a Poisson solver for potential flow) is very different from a hyperbolic system (like the Euler equations), in which information "propagates" through the numerical domain in the form of different wave modes. Which brings me to my first point:
1. Know the properties of the system and study the equations - For this step, you will need to go through math textbooks on partial differential equations, and know how to classify different equations. (See Partial Differential Equations for Scientists and Engineers by Farlow, or revisit your undergraduate math courses.)
2. Study linear algebra - The best CFD experts I know, have strong fundamentals in linear algebra.
Moving to a specific case for hyperbolic problems, e.g. the Euler equations
3. Read up on spatial and temporal discretization - This is the point least well understood by people new to CFD. Since information propagates in a definite direction and at a definite speed in hyperbolic problems, you cannot discretize your equations arbitrarily. For this, you need to understand the concept of Riemann problems, i.e. given a discontinuous interface between two states at a given time, how does the system evolve? Modern finite-volume methods use spatial discretizations that replicate how information is propagated through your simulation in space and time. This is called upwinding. Read Toro's book on Riemann solvers for a good introduction to upwinding.
4. Understand the concept of stability - Not all discretizations and time-integration methods lead to a stable solution. Understand the concept of a limiting time step (the CFL condition). If you don't follow the rules of upwinding, it will be difficult to get a stable solution. (A minimal sketch of points 3 and 4 follows below.)
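To make points 3 and 4 concrete, here is the minimal Python sketch promised above: first-order upwinding with a CFL-limited time step, for the 1D linear advection equation rather than the Euler equations (all names and parameter values are purely illustrative):
import numpy as np

# First-order upwind scheme for u_t + a u_x = 0 with a > 0 and periodic boundaries.
a = 1.0                    # constant advection speed; a > 0 means the wind comes from the left
nx = 200
dx = 1.0 / nx
cfl = 0.8                  # CFL number; must be <= 1 for this scheme to be stable
dt = cfl * dx / a

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # square pulse initial condition

for _ in range(200):
    # difference against the left neighbour, i.e. the upwind direction for a > 0
    u = u - (a * dt / dx) * (u - np.roll(u, 1))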
At this point you will have a clearer idea of what goes into a CFD code, and you can start worrying about which language to use. Most widely used CFD codes are written in C or Fortran for computational speed and parallelization. However, if you intend to code only to learn, you can use Matlab or Python, which will be less frustrating to work with. I should also mention that coding a 2D Euler solver is a typical homework problem for new graduate students in aerospace engineering, so try to stay humble and open to learning even if you succeed.
For anyone who is looking into CFD, know that it is a challenging and amazing field, with many advancements. If you wish to succeed, read up on papers (especially the fundamentals) and don't give up if you can't understand a topic. Keep working hard, and you will find yourself pushing the boundaries of what CFD can do.
The answer to your question depends on the approach you want to use to solve the 2D Euler equations. Personally, I recommend the finite volume approach, and to understand it I think you should take a look at this book:
Computational Fluid Dynamics: Principles and Applications by Jiri Blazek.
It's a good book that takes you from the fundamentals of the finite volume method all the way to writing your own code, and it comes with companion code to guide you along the way. It did wonders for me when I was writing my Master's thesis.

Math programming optimization [closed]

I would like to compile a list of tips and tricks on mathematical programming optimization. I often read things like the following in forums:
Compare distances using the squared distance, because the square root calculation is more expensive.
(variable * 0.5) is faster than (variable / 2.0).
As a programming enthusiast I would like to do my best where optimization is concerned. Any contribution would be much appreciated.
Two key points.
MEASURE - don't assume you know what is slow and/or why. Measure it in real production code. Then only worry about the bit that is chewing up most of your time.
Optimise your algorithm, not your code. Look for something you're doing that is O(N^2) when it could be O(N), or O(N) when it could be O(log N), and switch to an algorithm with better asymptotic behaviour.
I would say the first thing to pin down before thinking about optimisation is the scope and intended purpose of your library. For example, is the library 2D or 3D? Does it include geometric algorithms, like convex hull?
Like most people developing such a library, you will run into a few unavoidable issues. Things like precision errors can definitely drive you mad at times. Beware of degenerate triangles as well.
Consider carefully whether algorithms should include an epsilon or tolerance. This is a neat feature to have, but it will make your algorithms more complex.
If you venture into the world of 3D, treat points and vectors differently (confusing the two is one of the most common issues in 3D math). Consider metaprogramming for template multiplications (I can already feel the flames for this one), as it can considerably speed up rendering.
In general, try to avoid virtual calls for anything but substantial algorithms; small classes like vectors or points should not be inherited from (another flaming opportunity).
I would say start by sticking to good development practice, read Effective C++ and More Effective C++ by Scott Meyers, and if you take shortcuts like comparing the squared value to avoid a square root calculation, comment your code so future developers can understand the maths.
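For instance, here is roughly what that squared-distance shortcut might look like in Python, with the comment this advice asks for (the function name is just an illustration):
def within_radius(dx, dy, r):
    # Compare the squared distance with r*r to avoid the sqrt() call;
    # equivalent to hypot(dx, dy) <= r for non-negative r, just cheaper.
    return dx * dx + dy * dy <= r * r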
Finally, do not try to over-optimize up front; use a profiler for this. Personally, I often start by coding the most elegant solution (or should I say, what I consider the most elegant solution) and then optimize; you will be surprised at how good a job the C++ optimizer often does.
Hope this helps
Martin

A good uncertainty (interval) arithmetic library? [closed]

Given that the words "uncertain" and "uncertainty" are fairly ubiquitous, it's hard to Google "uncertainty arithmetic" and get anything immediately helpful. Thus, can anyone suggest a good library of routines, in almost any programming/scripting language, that implements handling of uncertain values, as per this description:
Use uncertainty arithmetic to record values that are approximations, for which there is a measured tolerance. This is when we are unsure about a value, but know the upper and lower bounds it can have, expressed as a ±value.
I believe "Interval Arithmetic" is the more common name for what you're looking for.
boost::interval would be my first choice for a supporting library.
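To make the idea concrete, here is a minimal hand-rolled sketch in Python; it is an illustration only, since a real library such as boost::interval also rounds the bounds outward so the enclosure stays rigorous:
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # [a, b] * [c, d]: take the min and max over all endpoint products
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

x = Interval(4.9, 5.1)    # 5 ± 0.1 written as an interval
print(x + x, x * x)       # bounds of the sum and the product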
If you are looking for an error propagation module (this is different from interval arithmetic, but error propagation is what is commonly used by scientists), I would suggest that you have a look at my uncertainties Python module. It handles error/uncertainty propagation in a transparent way, and, contrary to many implementations, properly handles correlations between variables.
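For example, a short sketch assuming the package is installed from PyPI as uncertainties:
from uncertainties import ufloat

x = ufloat(5.0, 0.1)      # 5.0 ± 0.1
y = ufloat(2.0, 0.05)     # 2.0 ± 0.05
print(x * y)              # the uncertainty is propagated automatically
print(x - x)              # exactly 0 ± 0, because correlations are tracked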
Have a look at Thomas Flanagan's Error Propagation Java class. The approach it uses is most excellent for handling uncertainty without excess trouble.
For reference, as it's probably way too late for you, I'd suggest BIAS/Profil: http://www.ti3.tuhh.de/keil/profil/index_e.html
It's not a library, but your question reminded me of an example in "Expert F#" that describes probabilistic workflows:
instead of writing expressions to compute, say, integers, we instead write expressions that compute distributions of integers. This case study is based on a paper by Ramsey and Pfeffer from 2002.
You can read the excerpt on google books.
I'd probably go about this by declaring a class called UncertainValue, with methods and properties such as (pseudocode):
class UncertainValue
{
    private double upperbound;    // largest value the quantity could take
    private double lowerbound;    // smallest value the quantity could take
    private double nominalvalue;  // the best estimate itself
    private double certainty;     // confidence attached to that estimate
    ...
    UncertainValue add(UncertainValue value);
    UncertainValue multiply(UncertainValue factor);
}
I realise this doesn't answer your question in terms of finding a pre-made library, sorry.
INTLAB (INTerval LABoratory) is a well-known library for interval arithmetic and verified numerical linear algebra. It is based on MATLAB/Octave. You can download this library from here:
http://www.ti3.tu-harburg.de/rump/intlab/
The kv library is an interval arithmetic library written in C++ using the Boost C++ libraries. Multiple-precision interval arithmetic is available, and it also has a verified ODE solver.
http://verifiedby.me/kv/index-e.html
For other interval arithmetic libraries/software, check the following website:
http://www.cs.utep.edu/interval-comp/intsoft.html
