I am looking for an accurate and understandable definition. The ones I have found differ from each other:
From a book on functional reactive programming
Denotational semantics is a mathematical expression of the
formal meaning of a programming language.
However, Wikipedia refers to it as an approach rather than a mathematical expression:
Denotational semantics is an approach of formalizing the meanings of
programming languages by constructing mathematical objects (called
denotations) that describe the meanings of expressions from the
languages
The term "denotational semantics" refers to both the mathematical meanings of programs and the approach of giving such meanings to programs. It is like, say, the word "history", which means the history of something as well as the entire research field on histories of things.
I've never found the definitions of the term "denotational semantics" useful for understanding the concept and its significance. Rather, I think it's best approached instead by considering the forms of reasoning that denotational semantics enables.
Specifically, denotational semantics enables equational reasoning with referentially transparent programs. Wikipedia gives this introductory definition of referential transparency:
An expression is said to be referentially transparent if it can be replaced with its value without changing the behavior of a program (in other words, yielding a program that has the same effects and output on the same input).
But a more precise definition wouldn't talk about replacing an expression with a "value", but rather with another expression. Then, referential transparency is the property that if you replace parts with replacements that have the same denotation, the resulting wholes also have the same denotation.
So IMHO, as a programmer, that's the key thing to understand: denotational semantics is about giving mathematical "teeth" to the concept of referential transparency, so we can give principled answers to claims about the correctness of substitution. In the context of functional programming, for example, one of the key applications is: when can we say that two function-valued expressions actually denote "the same" function, so that either can safely substitute for the other? The classic denotational answer is extensional equality: two functions are equal if and only if they map the same inputs to the same outputs, so we just have to prove that the expressions in question denote extensionally equal functions. For example, Quicksort and Bubblesort are notably different algorithms, but denotationally they are the same function.
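To make the extensional-equality point concrete, here is a minimal sketch (in Rust, purely as an illustration; the function names are mine): two syntactically different definitions that denote the same mathematical function, so either can safely be substituted for the other wherever only the denotation matters.

// Two different expressions, one denotation: both functions denote
// the mapping n |-> 1 + 2 + ... + n.
fn sum_loop(n: u64) -> u64 {
    let mut total = 0;
    for i in 1..=n {
        total += i;
    }
    total
}

fn sum_formula(n: u64) -> u64 {
    n * (n + 1) / 2 // closed form; extensionally equal to the loop version
}

fn main() {
    // Extensional equality: the same inputs map to the same outputs,
    // so one definition can replace the other without changing any
    // program that only depends on the denotation.
    for n in 0..1000 {
        assert_eq!(sum_loop(n), sum_formula(n));
    }
    println!("both definitions agree on all tested inputs");
}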
In the context of reactive programming, the big question would be: when can we say that two different expressions nevertheless denote the same event stream or time-dependent value?
In Rust by Example #36, the sum of all the squared odd numbers below a limit is calculated in both imperative style and functional style.
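The two versions look roughly like this (reconstructed from memory of the exercise, so the exact code there may differ; upper stands in for the limit and is kept small here):

fn main() {
    let upper: u64 = 1_000_000;

    // Imperative style: explicit loop with a mutable accumulator.
    let mut acc: u64 = 0;
    let mut n: u64 = 0;
    loop {
        let n_squared = n * n;
        if n_squared >= upper {
            break;
        }
        if n_squared % 2 == 1 {
            acc += n_squared;
        }
        n += 1;
    }
    println!("imperative style: {}", acc);

    // Functional style: iterator chain with map/take_while/filter/fold.
    let sum: u64 = (0u64..)
        .map(|n| n * n)
        .take_while(|&sq| sq < upper)
        .filter(|&sq| sq % 2 == 1)
        .fold(0, |acc, sq| acc + sq);
    println!("functional style: {}", sum);
}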
I separated these two out and increased the upper limit to 10000000000000000 and timed the results:
Imperative style:
me.home:rust_by_example>time ./36_higher_order_functions_a
Find the sum of all the squared odd numbers under 10000000000000000
imperative style: 333960700851149440
real 0m2.396s
user 0m2.387s
sys 0m0.009s
Functional style:
me.home:rust_by_example>time ./36_higher_order_functions_b
Find the sum of all the squared odd numbers under 10000000000000000
functional style: 333960700851149440
real 0m5.192s
user 0m5.188s
sys 0m0.003s
The functional version runs slower and also takes very slightly longer to compile.
My question is, what causes the functional version to be slower? Is this inherent to the functional style or is it due to the compiler not optimising as well as it could?
what causes the functional version to be slower? Is this inherent to the functional style or is it due to the compiler not optimising as well as it could?
Generally, the compiler will translate the higher level/shorter functional version to an imperative encoding as part of code generation. It may also apply optimizations that improve performance.
If the compiler has poor optimizations or a poor code generator, the functional code may be worse than manually-written versions.
It's really up to the compiler. Start by enabling optimizations.
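If the binaries above were built without optimizations, that alone can account for much of the gap: iterator chains in particular rely heavily on the optimizer to be flattened into plain loops. In Rust an optimized build is produced with the standard release flags, e.g.:

cargo build --release
rustc -O 36_higher_order_functions_b.rs

With optimizations enabled, the two versions often compile to very similar machine code.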
Do you know if there are any plans to introduce parallel programming in R for all packages?
I'm aware of some developments such as Revolution R and the parallel programming packages, but they seem to have specialised functions which replace the most popular functions (linear programming etc.). However, one of the great things about R is the huge number of specialised packages which pop up every day and make complex and time-consuming analyses very easy to run. Many of these use very popular functions such as the generalised linear model, but also use the results for additional calculation and comparison, and finally sort out the output. As far as I understand, you need to define which parts of a function can be run in parallel, which is probably why most specialised R packages don't have this functionality and cannot have it unless the code is edited.
Are there any plans (or any packages) to enable all the most popular R functions to run in parallel, so that all the less popular functions built on top of them can benefit as well? For example, the package difR uses the glm function for most of its functions; if glm were enabled to run in parallel (or re-written and then released in a new R version) on all multi-processor machines, then there would be no need to re-write the difR package, and it could then run some of its most burdensome procedures with the aid of parallel processing on a Windows PC.
I completely agree with Paul's answer.
In addition, a general system for parallelization needs some very non-trivial calibration, even for those functions that can be easily parallelized: what if you have a call stack of several functions that each offer parallel computation (e.g. you are bootstrapping some model fitting, the model fitting may already offer parallelization, and low-level linear algebra can be implicitly parallel)? You need to estimate (or choose manually) at which level explicit parallelization should be done, and on top of that you may have implicit parallelization, so you need to trade off between them.
However, there is one particularly easy and general way to parallelize computations implicitly in R: linear algebra can be parallelized and sped up considerably by using an optimized BLAS. Using this can (depending on your system) be as easy as telling your package manager to install an optimized BLAS, and R will use it. Once it is linked to R, all packages that use the base linear algebra functions like %*%, crossprod, solve etc. will benefit.
See e.g. Dirk Eddelbuettel's gcbd package and its vignette, and also the discussions of how to use GotoBLAS2 / OpenBLAS.
How to parallelize a certain problem is often non-trivial. Therefore, a specific implementation has to be made in each and every case, in this case for each R package. So, I do not think a general implementation of parallel processing in R will be made, or is even possible.
I'm implementing the Euclidean algorithm for finding the GCD (Greatest Common Divisor) of two integers.
Two sample implementations are given: Recursive and Iterative.
http://en.wikipedia.org/wiki/Euclidean_algorithm#Implementations
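For reference, the two variants look roughly like this (a sketch in Rust rather than the pseudocode on the Wikipedia page; the function names are mine). Note that the recursive call is in tail position, which matters for the answers below.

// Recursive version: the recursive call is the last thing the function does.
fn gcd_recursive(a: u64, b: u64) -> u64 {
    if b == 0 {
        a
    } else {
        gcd_recursive(b, a % b)
    }
}

// Iterative version: the same computation written as an explicit loop.
fn gcd_iterative(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let r = a % b;
        a = b;
        b = r;
    }
    a
}

fn main() {
    assert_eq!(gcd_recursive(1071, 462), 21);
    assert_eq!(gcd_iterative(1071, 462), 21);
    println!("gcd(1071, 462) = {}", gcd_recursive(1071, 462));
}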
My Question:
In school I remember my professors talking about recursive functions like they were all the rage, but I have one doubt: compared to an iterative version, don't recursive algorithms take up more stack space and therefore much more memory? Also, because calling a function incurs some overhead for initialization, aren't recursive algorithms slower than their iterative counterparts?
It depends entirely on the language. If your language has support for tail-call optimization (a lot do nowadays), the two will run at the same speed. If it does not, then the recursive version will be slower and take more (precious) stack space.
It all depends on the language and compiler. Current computers aren't really geared towards efficient recursion, but some compilers can optimize some cases of recursion to run just as efficiently as a loop (essentially, it becomes a loop in the machine code). Then again, some compilers can't.
Recursion is perhaps more beautiful in a mathematical sense, but if you feel more comfortable with iteration, just use it.
Given my previous questions about the usage of AMPL:
Are there any other programming/scripting languages that are strictly meant for mathematical processing?
For example: Matlab (it does deviate a bit from a mathematical structure, but it's close enough), Mathematica, and AMPL.
R / S+ for statistical computing
Other stat languages: SAS, SPSS, STATA, GAUSS, etc.
Octave, an open source clone of Matlab
Fortress, "a language for high-performance computation that provides abstraction and type safety on par with modern programming language principles."
Maple
Maxima
There's always APL, with its builtin matrix operators. Modern APL even supports .NET.
R, Numpy/scipy for Python, Maple, Yacas, even Fortran.
This may be only of historical significance, but Fortran (The IBM Mathematical Formula Translating System) is especially suited to numeric computation and scientific computing.
OPL (Optimization Programming Language) is one of the most comprehensive modelling languages for Mathematical Programming. You can do Linear Programming (LP), Mixed Integer Programming (MIP), Quadratic Programming (QP), Constraint Programming (CP), MIQP, etc.
IBM-ILOG CPLEX Optimization Studio uses this language.
Maple for symbolic math (similar to Mathematica).
SAS, SPSS, R for statistics.
The Operations Research / Management Science magazine has a yearly survey of simulation software, and while I can't find the link, I believe they also have a yearly survey of optimization packages, such as the AMPL you mention.
Sage is basically Python with a load of packages and a few language extensions put into a "notebook" interface like that of Mathematica. It has interfaces to all sorts of computer algebra systems. And with Numpy and Scipy (which are included) it's a fine replacement for Matlab. And it's open source and actively developed.
Given your previous question, I assume you are looking for an alternative to commercial mathematics packages. If so, you should try Sage; it is open source and is a unified front end for almost all of the open source mathematics / scientific computing packages out there (list).
The way it works is that it uses your web browser as a graphical front end for displaying, editing and evaluating Mathematica-style notebooks (it is also possible to just use the command line). All the dirty work, such as selecting the appropriate package for the situation, is done transparently in the background.
Sage uses Python as its main language / syntax, so it's fairly easy to learn, and if you have old Python scripts, they should work straight out of the box. If I didn't have access to a Mathematica license, I would definitely use this.
Interactive Data Language (IDL) is a proprietary language used in astronomy, medicine and other sciences at least in part because of its built-in array operations and mathematical library.
As this question is still open and well indexed in Google, I would definitely add the Julia language to the list.
Aside from the technical aspects that make this high-level, high-performance new language shine, an important consideration is that its community of developers and users is clearly biased toward mathematicians.