Is MPI widely used today in HPC?
A substantial majority of the multi-node simulation jobs that run on clusters everywhere use MPI. The most popular alternatives include things like GASNet, which supports PGAS languages; the infrastructure for Charm++; and Linda tuple spaces probably get an honourable mention, simply because of the number of core-hours spent running Gaussian. In HPC, UPC, Co-array Fortran/HPF, PVM, etc. end up dividing the tiny fraction that is left.
Any time you read in the science news about a simulation of a supernova, or about Formula One racing teams using simulation to "virtual wind-tunnel" their cars before making design changes, there's an excellent chance that MPI is under the hood.
It's arguably a shame that it is so widely used by technical computing people - that there aren't more popular general-purpose higher-level tools which get the same uptake - but that's where we are at the moment.
I worked for 2 years in the HPC area and can say that 99% of cluster applications were written using MPI.
MPI is widely used in high-performance computing, but some machines try to boost performance by deploying shared-memory compute nodes, which are usually programmed with OpenMP. In those cases the application uses MPI and OpenMP together to get optimal performance. Some systems also use GPUs to improve performance; I am not sure how well MPI supports that particular execution model.
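As a rough sketch of that hybrid model, here is how it might look in Julia, using the MPI.jl package with Julia's native threads standing in for OpenMP (the file name and the sum-of-squares workload are made up for illustration):

    # Hypothetical hybrid sketch: MPI between nodes, threads within a node.
    # Run with, e.g.: mpiexec -n 4 julia --threads=8 hybrid.jl
    using MPI

    MPI.Init()
    comm = MPI.COMM_WORLD
    rank = MPI.Comm_rank(comm)

    # Each MPI rank owns its own slice of the data (illustrative workload).
    local_chunk = rand(1_000_000)

    # Shared-memory parallelism inside the node: one task per thread,
    # each summing its own partition (the role OpenMP plays in C/Fortran).
    parts = Iterators.partition(local_chunk, cld(length(local_chunk), Threads.nthreads()))
    tasks = [Threads.@spawn sum(abs2, p) for p in parts]
    local_sum = sum(fetch.(tasks))

    # The distributed-memory reduction across all ranks goes through MPI.
    total = MPI.Allreduce(local_sum, +, comm)
    rank == 0 && println("global sum of squares = ", total)

    MPI.Finalize()

The design point is the same as with MPI+OpenMP in C or Fortran: message passing only crosses node boundaries, while cheap shared-memory threading handles the cores within a node.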
But the short answer would be yes. MPI is widely used in HPC.
It's widely used on clusters. Often it's the only way that a certain machine supports multi-node jobs. There are other abstractions like UPC or StarP, but those are usually implemented with MPI.
Yes - for example, the Top500 supercomputers are benchmarked using LINPACK (which is MPI-based).
Speaking about HPC, MPI is the main tool even nowadays. Although GPUs are making strong inroads into HPC, MPI is still number one.
I am looking for some advice on how to choose a workstation. My budget is around $5000.
I simulate structural economic models using Julia. My code typically uses big arrays (through which I iterate with for loops), large Monte Carlo simulations, and minimisation algorithms. I parallelise as much as I can.
As I understand it, it would be beneficial for me to have a machine with as many cores as possible and quite a lot of RAM. However, I am not sure how to balance these two. What is the trade-off? Also, does the quality of the cores matter?
Is there anything else that I should take into account apart from RAM and cores/CPU?
Any help is appreciated.
Thanks!
It depends on whether your algorithm is parallelizable and, if so, whether it is constrained by memory bandwidth or by compute power. Most algorithms are bandwidth-restricted. Large arrays also sound GPU-parallelizable, provided the array components are independent of each other.
The best performance here comes from CPUs with as many memory channels as possible. Normal desktop CPUs usually have 2-channel memory, AMD Threadripper has 4 channels, and Threadripper Pro has 8. So a ~24-core Threadripper Pro with 8x8 GB or 8x16 GB of memory may be suitable within your budget.
If you parallelize a lot, maybe consider using a GPU. Julia also supports GPU parallelization. When running very parallelizable code, a single GPU can be about as powerful as 2000 CPU cores. The speedup really is substantial. Memory bandwidth for GPUs is also orders of magnitude larger than for CPUs.
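For example, a minimal sketch with Julia's CUDA.jl package (assuming an NVIDIA GPU; the array size and formula are just for illustration):

    using CUDA

    x = CUDA.rand(Float32, 10_000_000)   # array allocated in GPU memory
    y = similar(x)

    # Broadcasting compiles to a GPU kernel; elements are processed in parallel.
    @. y = 2f0 * x + sin(x)

    # Copy back to host memory only when the CPU actually needs the result.
    y_host = Array(y)

The same broadcast on a plain CPU array runs on a handful of cores; on a CuArray it fans out across thousands of GPU cores, which is where the bandwidth advantage shows up.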
The main catch is that GPUs have very limited, non-expandable memory, and GPUs with a lot of memory tend to become disproportionately expensive. If 24 GB is enough for your workloads, go for an RTX 3090. If you run most of your parallel code on the GPU, the CPU matters far less, and you can choose a normal desktop CPU, for example the 16-core AMD Ryzen 5950X with 4x16 GB (2-channel), and stick entirely with consumer/gamer hardware, which gives much more power for much less money.
I want to compare the following software design processes.
Waterfall model
V-Model
Unified Process
The V-Model has a test phase for each specification phase; the waterfall model doesn't.
The Unified Process is iterative and incremental; the others aren't.
Are those the main differences? Is there something to add?
I only need the main differences, nothing too detailed.
The waterfall model is not iterative.
The V-Model is iterative in the sense that:
a. It uses unit testing to verify the procedural design.
b. It uses integration testing to verify the architectural (system) design.
c. It uses acceptance testing to validate the requirements.
d. If problems are found during verification and validation, the left side of the V can be re-executed before the testing on the right side is repeated.
The Unified Process is iterative:
a. The system is delivered in pieces.
b. It allows the production system and the development system to run in parallel.
c. It reduces risk and uncertainty in development.
Neural networks are usually characterized by huge amounts of data and the need for parallel computing. Does that make functional languages more suitable for building neural networks?
Not really. Functional languages usually make parallelization trivial if you stick to immutability (or, more precisely, avoid any kind of uncontrolled side effects). If you do not, then it's not really any easier to make things parallel than in non-functional languages.
In this case you have two options:
use side effects, but in a localized fashion, so parallel threads have no business with each other: e.g. you evaluate a lot of NNs, each of which can run on its own thread (using a thread pool with not many more threads than the number of CPU cores is a good idea); see the sketch after this list.
for non-localized side effects, you need to rely on synchronization or some other way of controlling them. One example is the actor model of computation (quite popular among functional-language users, but also available for Java; see http://akka.io/), which usually lets you keep side effects inside an actor while the interaction between actors follows strict rules. This frees you from low-level thread handling.
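Here is a minimal sketch of the first option, written in Julia rather than a functional language purely for illustration (the Candidate type and the fitness function are hypothetical). The point is that evaluate is a pure function of its argument, so the threads share no mutable state:

    struct Candidate
        weights::Vector{Float64}
    end

    # Pure: reads only its own argument, so parallel tasks cannot interfere.
    evaluate(c::Candidate) = sum(abs2, c.weights)   # stand-in for a real fitness score

    candidates = [Candidate(randn(100)) for _ in 1:1000]
    scores = Vector{Float64}(undef, length(candidates))

    # One pool of threads; each result slot is written by exactly one task.
    Threads.@threads for i in eachindex(candidates)
        scores[i] = evaluate(candidates[i])
    end

    best = candidates[argmin(scores)]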
Another thing to consider is that while it's not too difficult to write a moderately performant NN implementation, and also not very complicated to write a purely functional one, doing both at the same time can be a challenging task. So, unless you are experienced with functional languages, I think it's easier to write a parallelized NN in a non-functional language, or at least in a non-pure style.
Is anyone knowledgeable about programming-language implementations of algorithmic trading?
I am going to propose a research project on functional programming and algorithmic trading.
My proposal is here: http://pastebin.com/wcigd5tk
Any comments would be very appreciated.
What do you think the future of functional languages in the financial field is? I see many job postings that ask for experience in Java and C++, and I don't understand why.
Jane Street is very well known for using OCaml for their trading software. Here you can find some reasons why they decided to use functional languages rather than imperative ones. They also have a blog describing several specific solutions to problems they encountered during development.
C++ is the most popular in that field.
Java, Python, Haskell, and C# are all runners-up.
Haskell and C# support functional programming, with Haskell being purely functional.
Eventually the field may move to a more "modern" language like C# or Haskell, but right now C++ has so much support - the libraries are already written and implementation with it is the easiest.
Trading applications usually also have real-time behaviour, multithreading, low latency, and high availability to consider. I worked at a company that developed a trading application in a mix of C++ and Java, as that fit the behaviour of the application.
I was wondering whether it is worthwhile to learn a language like Erlang. I am looking at learning a new language, and the following questions are on my mind:
Will learning functional programming help me improve as a programmer?
What is the industry usage of Erlang / which projects use Erlang?
Is there any real future in learning Erlang from a career point of view?
What are the possible advantages/disadvantages of Erlang over Python/PHP/etc.?
Thanks,
Vicky
Coming from Scheme here so I can't speak much for Erlang. But as Scheme is functional as well, let me try to convince you.
Will learning functional programming help you as a programmer?
Absolutely. I've heard a lot of reasons for this, some I can attest to, others not (yet). Actually, in my opinion, it's moot trying to convince people that functional programming is worth learning at all, especially when there's so much press about Python/PHP/Java(Script)/etc. They have to be convinced for themselves, when they learn it.
As for me, Scheme allowed me to appreciate recursion more. Sure, you rarely use recursion in other languages due to limited stack space, but since a lot of algorithms are described recursively (albeit implemented iteratively, that is, using loop constructs), implementing them recursively should give you a better appreciation of them.
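To make that split concrete, here is the same algorithm in both styles (a made-up Julia example, used for consistency with the rest of this page): the recursive version mirrors the mathematical description, the iterative one is what you would usually ship.

    # Recursive: matches the usual mathematical definition of a sum.
    mysum(xs) = isempty(xs) ? 0 : first(xs) + mysum(xs[2:end])

    # Iterative: same result, in constant stack space.
    function mysum_iter(xs)
        acc = 0
        for x in xs
            acc += x
        end
        return acc
    end

    @assert mysum(1:100) == mysum_iter(1:100) == 5050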
Industry use of Erlang?
Pass. Though I recommend looking around at Commercial Uses of Functional Programming.
Learning Erlang for a career?
I'm not the person to ask so I can't elaborate but I've heard that FP proved to be very useful in parallel computing. Talk about the future!
Advantages over Python/PHP/etc?
It's functional, so it is good practice for mathematical thinking about algorithms, which is not very apparent (at least to me) when you code in a procedural language. Also, look at the results of the Computer Language Benchmarks Game. It seems that the functional languages there (Lisp, Haskell, Erlang) are faster than the ones you mentioned. Python comes after Erlang, and Erlang runs in roughly half of Python's time! And look at Erlang HiPE: 10.22 seconds! (I'm looking at the x64 Ubuntu Intel Q6600 quad-core results - did parallel processing give them a leg-up?)
TL;DR Go ahead and learn it. You'll thank me, I'm telling you ;D