Mathematical notation or Pseudocode? [closed] - math

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
At the moment I am wondering about the best way to explain an algorithm intuitively.
I have tried reading some pseudocode, and it can be quite complex in some cases (especially for mathematical applications, sometimes even more so than the formulas themselves or plain code in PHP, C++, or Python). So I have been thinking: what about describing algorithms in mathematical notation, in a way that both a mathematician and a web developer could understand?
Do you think this is a good idea (provided all the grammar, structure, symbols, and modelling conventions are well explained, and the result is compact)?
Example:
Binary Search
It could even help simplify the complexity analysis of an algorithm, if a mathematical analysis is done (I think)
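For concreteness, here is what such a side-by-side description of binary search might look like: the recurrence in the comment is the "mathematical" view, and the function below it is a direct transcription (a sketch only; the names are illustrative):

```python
# Binary search over a sorted list a, mirroring the recurrence
#   search(a, x, lo, hi) = -1                     if lo > hi
#                        = m                      if a[m] = x,  m = floor((lo+hi)/2)
#                        = search(a, x, lo, m-1)  if a[m] > x
#                        = search(a, x, m+1, hi)  if a[m] < x
def binary_search(a, x, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo > hi:
        return -1                      # x does not occur in a
    m = (lo + hi) // 2
    if a[m] == x:
        return m
    if a[m] > x:
        return binary_search(a, x, lo, m - 1)
    return binary_search(a, x, m + 1, hi)

binary_search([1, 3, 5, 7, 9], 7)      # → 3
```

The recurrence also makes the complexity analysis immediate: each call halves the interval, so the recursion depth is O(log n).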

It depends on the algorithm. For me, I know I would never have grasped the concept of trees without a visual drawing. For the concept of nodes, on the other hand, while a drawing is good, actually seeing the data structure written down works better.
It also varies from student to student. I personally find that binary-search example the worst kind of example, but I am sure some mathematically inclined people would understand it better that way.

Related

Effective simulation of large scale Modelica models by automatic translation to Modia [closed]

Closed 1 year ago; the question needs to be more focused and is not accepting answers.
This is more of a hypothetical question, but it might have great consequences. Many of us in the Modelica community are dealing with large-scale systems with expensive simulation times. This is usually not an obstacle for bug fixing and development, but speeding up the simulation might allow for better and faster optimizations.
Recently I came across Modia, which claims to have superb numerical solvers, achieving better simulation times than Dymola, a state-of-the-art Modelica compiler. The syntax seems to cover all the important bits. Recreating large-scale component models in Modia is unfeasible, but what about automatically translating the flattened Modelica to Modia? Is that realistic? Would it provide a speed-up? Has anyone tried it before? I have searched for some
This might also, hopefully, improve the integration of Modelica models with postprocessing / identification tooling within one language, instead of using FMI or invoking a separate executable.
Thanks for any suggestions.
For those interested, we might as well start developing this.
We in the Modia team agree that the modeling know-how in Modelica libraries must be reused, so we are working on a translator from Modelica to Modia (brief details are given in https://ep.liu.se/ecp/157/060/ecp19157060.pdf). The plan is to initially provide translated versions of Modelica.Blocks, Modelica.Electrical.Analog and Modelica.Mechanics together with Modia.

Recursion: when to use it? [closed]

Closed 6 years ago as opinion-based.
So we just finished the subject recursion in school and I am still wondering "why?".
I feel like I have just learned a hell of a lot about math in a programming way with the sole purpose of passing an exam later on and then never again.
So what I want to know is when to use it? I can only find people saying "when you want to call a function within itself" but why would you do that?
Recursion is the foundation of computation: every possible program can be expressed as a recursive function (in the lambda calculus). Hence, understanding recursion gives you a deeper understanding of the principles of computation.
Second, recursion is also a tool for understanding at the meta level: many proofs over the natural numbers follow a pattern called "natural induction", which is a special case of structural induction, which in turn lets you understand properties of very complex systems in a relatively simple way.
Finally, it also helps you write good (i.e. readable) algorithms: whenever a repetitive calculation has data to store or handle (i.e. more than incrementing a counter), a recursive function can implicitly manage a stack for you. This is often quite efficient, too, since most systems have a machine stack at hand.
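As a minimal illustration of the implicit-stack point (the tuple encoding of tree nodes here is just an assumption for the example):

```python
# Depth-first sum of a binary tree. The call stack remembers where we are
# in the tree, so no explicit stack data structure has to be managed.
def tree_sum(node):
    if node is None:                 # base case: empty subtree
        return 0
    value, left, right = node        # a node is a (value, left, right) tuple
    return value + tree_sum(left) + tree_sum(right)

t = (1, (2, None, None), (3, (4, None, None), None))
tree_sum(t)                          # → 10
```

An iterative version would need its own explicit stack of pending subtrees; the recursive one gets that bookkeeping for free.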

Explain how the functional programming model differs from the procedural or object-oriented models [closed]

Closed 7 years ago; the question needs to be more focused.
Can anyone explain how the functional programming model differs from the procedural or object-oriented models?
I cannot come up with a good answer myself.
In my opinion, FP is about pure functions (that is, functions in the mathematical sense), which implies referential transparency and, if you follow the thought through, immutable data.
This is the biggest difference I see: you don't mutate data. Most other aspects either follow directly from this or from sophisticated type systems (which are not necessary for a language to be called functional) and the field's academic roots.
But of course there is far more to it, and you can read papers, whole books, or just Wikipedia about it.
Please note that you can dispute the "pure" property, and then things get a lot fuzzier... which should not surprise you, as most functional languages in wide use allow mutation (Clojure, Scala, F#, OCaml, ...) and there are not many pure ones.
In that case the biggest difference might be the way you abstract things with higher-order functions (at the least, functions should be first-class citizens, meaning you can pass them around and treat them as values).
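A small sketch of those two points, pure functions and higher-order functions, in Python (illustrative names, not tied to any particular FP language):

```python
# Procedural style: build the result by mutating an accumulator in place.
def with_tax_procedural(prices, rate):
    result = []
    for p in prices:
        result.append(p * (1 + rate))
    return result

# Functional style: a pure function passed to the higher-order function map;
# nothing is mutated, and the same inputs always yield the same output
# (referential transparency).
def with_tax_functional(prices, rate):
    return list(map(lambda p: p * (1 + rate), prices))
```

Both return the same list, but the functional version can be reasoned about, tested, and composed without tracking any intermediate state.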
But overall this question is really opinion-based and will very likely be closed as too broad or similar; maybe you should ask about specific details instead of the big picture.

What are the algorithmic/programming optimizations that make data.table fast? [closed]

Closed 8 years ago; the question needs to be more focused.
I have done some searching around the Internet and SO looking for an introduction or analysis of what makes data.table so fast, but I've only found a lot of (very helpful) manuals, no breakdown of what goes into the programming. (I am more or less completely floored that I can't locate a published paper for data.table, not even something from JStatSoft.)
I've had an algorithms class so I know about sorts and linked lists and binary trees and such, but I don't want to make any amateur guesses (especially when I go to explain to academic people why it's a good idea to use it). Can anyone offer a short, topical summary with references? This question references a slide presentation which is cool, but the info comes in pieces (and even the documentation for, say, setkey() doesn't cite a data.table reference, but goes to Wikipedia).
What I am looking for is something that is both not the source code and not a list of Wikipedia topics, but an ideally "official", sourced answer (thus making it canonical, which could help a lot with all the questions orbiting around this topic).
(It would be great if there were a technical paper out there I could cite for this; the citation() for data.table is just the manual. But of course that's not directly relevant to the question as far as SO is concerned.)

Is Haskell suitable for statistical analysis? [closed]

Closed 9 years ago as opinion-based.
The question is in the title. Basically, I'm looking for an alternative to R.
I've been using R a bit, and there is some really good stuff about it (especially data.frame, plyr and ggplot). However, I really love Haskell and type inference, so I was wondering whether using Haskell for "simple" statistical analysis would be a good choice.
My basic needs are :
read/write CSV
import SQL table
do some basic 'mapReduce' on the data. This is where R is great, but I assume Haskell should be equally good.
However, my experience with Haskell is that everything is fine until you process real-world data. You always run into performance issues (and soon), because even though in theory you should be able to write functional code and not worry about what the computer is doing, if you don't use the appropriate libraries and are not a Haskell expert, things are damned slow.
