CISC and RISC - synchronous and asynchronous

I have two opposing ideas. I think that all the instructions in a RISC take the same time, which makes me believe RISC is associated with synchronous.
At the same time, I think that CISC should be synchronous and RISC should be asynchronous.
Can you tell me whether RISC is associated with asynchronous and CISC with synchronous, and why?
Thanks!

What do you mean by synchronous and asynchronous? If you are talking about logic clocking, then CISC vs. RISC vs. a hard-disk controller doesn't matter; it is just some logic that you are clocking. And when it comes to processors, if you are asking whether they can perform operations in series or in parallel, it also doesn't matter between the two, because both can be implemented either way. Some CISC processors are a shell around a RISC, VLIW, or some other flavor of processor. If you are thinking of parallel vs. serial execution, or in-order vs. out-of-order execution, again the two don't matter: with either one, if you have the right logic and the right pipelines, you can look ahead and execute out of order and in parallel.
It is like saying there are red sweaters and green sweaters made from red and green yarn: they are still just a combination of knit and purl stitches. Logic is just a lot of primitive gates, be it a disk-controller chip, a sound-card chip, or a processor; you can implement the logic in as many ways as you can implement a program to solve a particular problem. Machine code is nothing more than a bunch of bits fed into a bunch of gates; there is nothing special about how RISC works vs. CISC other than the number of bits you feed in and the number of gates needed to implement it. One might be easier to implement than the other, but it is still just some number of logic gates.

Related

MPI overlapping communication and computation

My current understanding of MPI non-blocking routines is that they allow for the overlapping of communication and computation. However, I also understand that this overlapping is not guaranteed by the MPI implementation. What, then, are the factors that can inhibit the overlapping? Thanks.
Non-blocking routines were not primarily motivated by latency hiding (I'll use this as a shorter synonym for "overlap of computation and communication"): the prime use was to be able to write deadlock-free and serialization-free code. For the longest time, achieving an actual performance improvement required periodically activating the MPI library with MPI_Iprobe or similar tricks. The basic problem was that during your computation there was no guarantee that the MPI layer would do anything at all.
The problem of forcing "MPI progress" still persists, but these days MPI implementations such as Intel MPI or MVAPICH (sorry, I don't know about Open MPI) have environment variables with which you can force "progress threads". Also, network cards may be clever enough to work while your processor is otherwise engaged. And even with all this, an improvement is not guaranteed, because of the overhead you are introducing.
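To make the progress issue concrete, below is a minimal sketch (assuming C and a generic MPI implementation; do_some_work() and the loop count are made-up placeholders). It posts a non-blocking send and then re-enters the MPI library with MPI_Test during the computation, which is one way to give the library a chance to move the data along:

    #include <mpi.h>

    /* Placeholder for the application's computation. */
    void do_some_work(int step);

    void send_with_progress(double *buf, int count, int dest, MPI_Comm comm)
    {
        MPI_Request req;
        int done = 0;

        /* Post the non-blocking send; the data may or may not start moving yet. */
        MPI_Isend(buf, count, MPI_DOUBLE, dest, /* tag = */ 0, comm, &req);

        for (int step = 0; step < 1000; ++step) {
            do_some_work(step);

            /* Periodically poke the MPI library so it can make progress on the
             * outstanding send; without calls like this (or a progress thread),
             * some implementations only move data inside MPI_Wait. */
            if (!done)
                MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        }

        /* The send buffer must not be reused before the send completes. */
        if (!done)
            MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

Whether this actually speeds anything up depends on the implementation and the interconnect, as noted above.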

How to use non-blocking point-to-point MPI routines instead of collectives

In my program, I would like to heavily parallelize many mathematical calculations, the results of which are then written to an output file.
I successfully implemented this using collective communication (gather, scatter, etc.), but I noticed that with these synchronizing routines the slowest processor dominates the execution time and hurts overall performance, as the fast processors spend a lot of time waiting.
So I decided to switch to a scheme where one (master) processor is dedicated to receiving chunks of results and handling the file output, and all the other processors calculate these results and send them to the master using non-blocking send routines.
Unfortunately, I don't really know how to implement the master code. Do I need to run an infinite loop with MPI_Recv(), listening for incoming messages? How do I know when to stop the loop? Can I combine MPI_Isend() and MPI_Recv(), or do both sides need to be non-blocking? How is this typically done?
MPI 3.1 provides non-blocking collectives. I would strongly recommend using those instead of implementing this on your own.
However, they may not help you after all. Eventually you need the data from all processes, even the slow ones, so you are likely to end up waiting at some point anyway. Non-blocking communication overlaps communication and computation, but it doesn't fix your load imbalance.
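For reference, here is a minimal sketch of the non-blocking-collective route, assuming MPI 3.x and C; compute_next_chunk() and the buffer layout are invented for illustration. Each rank starts an MPI_Igather, keeps computing into a different buffer, and only waits when the gathered data is actually needed:

    #include <mpi.h>

    /* Placeholder for the application's math; fills `out` with n results. */
    void compute_next_chunk(double *out, int n);

    void gather_and_overlap(const double *my_chunk, double *next_chunk,
                            int n, double *all, MPI_Comm comm)
    {
        MPI_Request req;

        /* Start the collective without blocking; my_chunk must not be
         * modified until MPI_Wait returns. */
        MPI_Igather(my_chunk, n, MPI_DOUBLE,
                    all, n, MPI_DOUBLE,
                    /* root = */ 0, comm, &req);

        /* Do useful work into a *different* buffer while the gather is
         * (potentially) in flight. */
        compute_next_chunk(next_chunk, n);

        /* Block only when the gathered results are needed. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }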
Update (more or less a long clarification comment)
There are several layers to your question; I might have been confused by the title as to what kind of answer you were expecting. Maybe the question is rather:
How do I implement a centralized work queue in MPI?
This pops up regularly, most recently here. But it is actually often undesirable, because a central component quickly becomes a bottleneck in large-scale programs. So the actual problem you have is that your work decomposition & mapping is imbalanced. The more fundamental "X question" is:
How do I load balance an MPI application?
At that point you must provide more information about your mathematical problem and its current implementation, preferably in the form of an [mcve]. Again, there is no standard solution. Load balancing is a huge research area; it may even be a topic for CS.SE rather than SO.
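That said, if you do want the centralized master described in the question, here is a minimal sketch of one common way to do it (plain C; the tag names, chunk size, and number of chunks per worker are made up). The master loops on MPI_Recv with MPI_ANY_SOURCE and stops once every worker has sent a message with a "done" tag, which answers the "how do I know when to stop the loop" part:

    #include <mpi.h>
    #include <stdio.h>

    #define TAG_RESULT 1
    #define TAG_DONE   2
    #define CHUNK      64              /* made-up result-chunk size */

    /* Rank 0: receive result chunks from everybody else and handle output.
     * The loop ends once every worker has sent one TAG_DONE message. */
    static void master(int nprocs)
    {
        double buf[CHUNK];
        int workers_left = nprocs - 1;
        MPI_Status st;

        while (workers_left > 0) {
            MPI_Recv(buf, CHUNK, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_DONE) {
                workers_left--;         /* this worker has finished */
            } else {
                int n;
                MPI_Get_count(&st, MPI_DOUBLE, &n);
                /* write the chunk to the output file here */
                printf("got %d results from rank %d\n", n, st.MPI_SOURCE);
            }
        }
    }

    /* Any other rank: compute chunks and send them; finish with TAG_DONE. */
    static void worker(int nchunks)
    {
        double buf[CHUNK] = {0};
        for (int i = 0; i < nchunks; ++i) {
            /* ... fill buf with CHUNK computed results ... */
            MPI_Send(buf, CHUNK, MPI_DOUBLE, 0, TAG_RESULT, MPI_COMM_WORLD);
        }
        /* Zero-length message whose tag signals "no more work from me". */
        MPI_Send(buf, 0, MPI_DOUBLE, 0, TAG_DONE, MPI_COMM_WORLD);
    }

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        if (rank == 0) master(nprocs);
        else           worker(/* nchunks = */ 10);

        MPI_Finalize();
        return 0;
    }

Whether the workers use MPI_Send or MPI_Isend is largely independent of this; mixing non-blocking sends with a plain blocking MPI_Recv loop on the master is perfectly legal.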

MPI_Send/Recv vs. MPI_Reduce

I was given a little exercise where I had to implement a Monte Carlo algorithm using MPI to estimate the total volume of n spheres in 3 dimensions, given the coordinates of their centers and their radii. Even though we must use MPI, we can launch all the processes on our local machine, so there's no network overhead. I implemented two versions of this exercise:
One using MPI_Send and MPI_Recv (where the process of rank 0 only waits for partial results from the others to perform the final sum):
http://pastebin.com/AV41hJqn
The other using MPI_Reduce, where again the process of rank 0 waits for the partial results:
http://pastebin.com/8b0czv6a
I expected both programs to take the same time to finish, but I see that the one using MPI_Reduce is faster. Why is this? Where's the difference?
There could be a lot of reasons depending on which MPI implementation you're using, what kind of hardware you're running on and how optimized the implementation is to take advantage of that. This Google Scholar search gives some idea of the variety of work done on this. To give you a few ideas of what it could be:
Since reductions can be completed in intermediate steps, it may be possible to use a different topology than the basic rank 0 collect-from-all approach, with tradeoffs in latency and bandwidth.
Within a compute node (or on your desktop or laptop if you're trying this with a toy problem), it may be possible to exploit locality within cores, between cores on a CPU socket or between sockets to order the computations and communication in a way that's more efficient for the hardware. It sounds from the abstract like this paper from IBM may give some concrete details about some of these design decisions. Alternatively, the implementation might choose a cache-oblivious scheme for better performance within a general compute node.
Persistent communication (MPI_Send_init and MPI_Recv_init) can be used under the hood in the MPI_Reduce implementation. These routines can perform better than their blocking and non-blocking counterparts because they give the MPI implementation and the hardware extra details about how the program groups its communications.
This is not a comprehensive list, but hopefully it gets you started and provides some ideas for how to search out more details if you're interested.
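To make the structural difference concrete, here is a minimal sketch of both variants, assuming C and that each rank has already computed a single double as its partial result (the function names are invented). The hand-rolled version forces rank 0 to receive from every rank one at a time, while MPI_Reduce leaves the communication pattern (a tree, shared memory within a node, etc.) up to the library:

    #include <mpi.h>

    /* Hand-rolled reduction: rank 0 receives every partial sum in turn. */
    double sum_with_send_recv(double partial, int rank, int nprocs)
    {
        double total = partial;
        if (rank == 0) {
            for (int src = 1; src < nprocs; ++src) {
                double tmp;
                MPI_Recv(&tmp, 1, MPI_DOUBLE, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                total += tmp;
            }
        } else {
            MPI_Send(&partial, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
        return total;   /* only meaningful on rank 0 */
    }

    /* Library reduction: the implementation chooses the topology. */
    double sum_with_reduce(double partial)
    {
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        return total;   /* only meaningful on rank 0 */
    }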

Any pipeline programming example showing the efficiency of pipelining

I'm looking for a programming example where the same instructions/code can be executed both with and without pipelining, so that I can see that pipelining is better than no pipelining.
Any help, please?
Your CPU pretends it does things in order; as far as anyone looking at it as a black box (that reads and writes memory) is concerned, that is what it does. A pipeline means it does a bit of every instruction currently in flight and moves them along one stage every cycle.
If your CPU only has one adder, then a second add instruction will stall the pipeline (the first has to finish). Compilers such as GCC are aware of this, so they use instruction scheduling to keep the CPU busy.
You cannot "turn the pipeline off" to see that it is better. Instruction scheduling is one of the cheapest and most beneficial optimisations a compiler does; you get it essentially for free alongside other analyses.
You really want a book for a "before and after" comparison. It is also very hard to say "pipelining gave this much of an increase", because it depends on which instructions were used, how many ALUs the CPU has, and so forth.
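One thing you can do is observe the pipeline indirectly, by comparing code with one long dependency chain against code with several independent chains. Below is a minimal sketch in plain C (the array size and the choice of four accumulators are arbitrary); compile with something like gcc -O2, without -ffast-math so the compiler does not reassociate the floating-point sums itself. In the first loop every add waits for the previous add's result, while in the second loop several independent adds can be in the pipeline at once, so it typically runs noticeably faster; the exact ratio depends on the CPU, the cache, and memory bandwidth:

    #include <stdio.h>
    #include <time.h>

    #define N (1 << 23)          /* 8M doubles, arbitrary size */

    static double a[N];

    /* One long dependency chain: add i needs the result of add i-1,
     * so the adder's full latency is paid on every iteration. */
    static double sum_chained(void)
    {
        double s = 0.0;
        for (int i = 0; i < N; ++i)
            s += a[i];
        return s;
    }

    /* Four independent chains: while one add is still in the pipeline,
     * adds to the other accumulators can already be issued. */
    static double sum_unrolled(void)
    {
        double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
        for (int i = 0; i < N; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        return (s0 + s1) + (s2 + s3);
    }

    int main(void)
    {
        for (int i = 0; i < N; ++i)
            a[i] = 1.0;

        clock_t t0 = clock();
        double r1 = sum_chained();
        clock_t t1 = clock();
        double r2 = sum_unrolled();
        clock_t t2 = clock();

        printf("chained : sum=%.0f  %.3f s\n", r1, (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("unrolled: sum=%.0f  %.3f s\n", r2, (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }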

Can someone suggest a good way to understand how MPI works?

If you are familiar with threads, then you can treat each node as a thread (to an extent).
You send a message (work) to a node, it does some work, and then it returns some results to you.
Similar behaviors between threads and MPI:
Both involve partitioning the work and processing the pieces separately.
Both incur overhead as more nodes/threads get involved. The MPI overhead is more significant compared to threads: passing messages between nodes is costly, and if the work is not carefully partitioned you can end up spending more time passing messages than doing the computation required to process the job.
Different behaviors:
They have different memory models: an MPI node does not share memory with the others and knows nothing about the rest of the world unless you send something to it.
Here you can find some learning materials http://www.mcs.anl.gov/research/projects/mpi/
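If it helps, here is about the smallest complete example of that memory model, assuming C and at least two ranks (compile with mpicc and run with something like mpirun -np 2): rank 0's `value` and rank 1's `value` are separate variables in separate address spaces, and the only way to get the number across is to send it:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;                 /* exists only in rank 0's memory */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            printf("rank 0 sent %d\n", value);
        } else if (rank == 1) {
            value = 0;                  /* rank 1's own, independent copy */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }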
Parallel programming is one of those subjects that is "essentially" complex (as opposed to "accidentally" complex, in Fred Brooks's terms).
I used Parallel Programming with MPI by Peter Pacheco. This book gives a good overview of the basic MPI topics, the available APIs, and common patterns for parallel program construction.
