Seeking outdated Intel optimization manuals (2010-2011)

I am reading the Intel® 64 and IA-32 Architectures Optimization Reference Manual from 2014 (not the latest edition),
but the latencies reported at the end of the manual (APPENDIX C: INSTRUCTION LATENCY AND THROUGHPUT) stop at the Sandy Bridge architecture.
I need Core 2 Duo too, but it is absent.
Actually, one can only get the latest edition; at every revision they remove "legacy" processors from the tables.
So I guess I need an older edition, maybe from 2010-2011, to get those latencies.
Could someone give me a link or another way to get it, please? They didn't answer on the Intel forum.
I know about Agner Fog's measurements, but I want those from Intel.
Thanks

You could use archive.org to retrieve cached versions of the page:
Intel® 64 and IA-32 Architectures Optimization Reference Manual (June 2011)
https://web.archive.org/web/20110926200348/http://www.intel.com/content/dam/doc/manual/64-ia-32-architectures-optimization-manual.pdf
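If you'd rather script the retrieval than click through, a minimal libcurl sketch along these lines should pull the archived PDF down (the output filename is just an example, and it assumes libcurl is installed):

```c
/* Sketch: download the archived PDF with libcurl.
 * Build on a typical Linux box: gcc fetch.c -lcurl -o fetch
 */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    const char *url =
        "https://web.archive.org/web/20110926200348/"
        "http://www.intel.com/content/dam/doc/manual/"
        "64-ia-32-architectures-optimization-manual.pdf";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Example output name; pick whatever you like. */
    FILE *out = fopen("intel-opt-manual-2011.pdf", "wb");
    if (!out) {
        curl_easy_cleanup(curl);
        return 1;
    }

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); /* archive.org redirects */
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);     /* default callback writes to this FILE* */

    CURLcode rc = curl_easy_perform(curl);

    fclose(out);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```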

Related

MPICH2 Installation

Given the availability of a new workstation (Intel Xeon X5690, Windows 7 Professional, 64-bit) for numerical analysis of fluid dynamics models, I find it a shame not to engage in parallel computing. So far, I have had little or no experience in this field.
What's the difference between MS-MPI and the latest release of MPICH suitable for Windows? I installed MPICH 1.4.1, but I cannot get a test program to work with ifort. How am I supposed to compile the program? Do I have to change the ifort configuration somehow to add the MPICH libraries? Isn't there any good manual available online that could meet my needs?
There are lots of questions in this one question, but it all boils down to one basic question: how do I install MPI on Windows?
MPICH hasn't supported Windows for a long time. The last version that did is 1.4.1p1, as you've found, but it doesn't get any support anymore from the MPICH developers, so if you have trouble, you probably won't find much help. I haven't seen anyone on here step up to help with those questions so far.
MS-MPI is a good option if you want to use Windows. It's free to use and still has support directly from Microsoft. You'll have to read their documentation about how to set everything up correctly, but it's probably the right place to start if you want to use MPI on Windows.
Intel MPI also works on Windows, but it isn't free so you might not want to look at that right now.
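Whichever distribution you pick, a good first sanity check is a minimal MPI "hello world". A sketch in C (the build/run lines are only examples; with MS-MPI you compile against the SDK's mpi.h and msmpi.lib as described in Microsoft's documentation):

```c
/* Minimal MPI sanity check: each rank prints its rank and the world size.
 * MPICH example:  mpicc hello_mpi.c -o hello_mpi && mpiexec -n 4 ./hello_mpi
 * MS-MPI: build against the SDK's mpi.h/msmpi.lib and launch with mpiexec
 * (see Microsoft's documentation for the exact project settings).
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

If every rank prints its line, the installation and launcher are working; only then is it worth wiring the libraries into an ifort project.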

Optimize mathematical library (libm)

Has anyone tried to compile glibc with -march=corei7 to see if there's any performance improvement over the version that comes by default with a Linux x86_64 distribution? GCC is compiled with -march=i686. I think (not sure) that the mathematical library is also compiled the same way. Can anybody confirm this?
Most Linux distributions for x86 compile using only i686 instructions, but with instruction scheduling tuned for later processors. I haven't really followed later developments.
A long while back, shipping different versions of the system libraries for different processor lines was common, but the performance differences were soon deemed too small to be worth the cost. And machines have become more uniform in performance in the meantime.
One thing that always has to be remembered is that today's machines are memory bound: a memory access takes a few hundred times longer than an instruction, and the gap is growing. Not to mention that this machine (an oldish laptop, top-of-the-line some two years back) has 4 cores (8 threads), all battling to get data and instructions from memory. Making the code run a tiny bit faster, so the CPU can wait even longer for RAM, isn't very productive.
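To get a feel for what a plain i686 baseline leaves unused on a given machine, GCC's CPU-detection builtins can report which newer instruction sets the running processor actually supports. A small, GCC-specific, x86-only sketch:

```c
/* Report which x86 ISA extensions the running CPU supports.
 * Uses GCC builtins (available since roughly GCC 4.8); x86 targets only.
 * Build: gcc -O2 cpu_features.c -o cpu_features
 */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();  /* populate GCC's runtime CPU feature data */

    printf("sse2   : %s\n", __builtin_cpu_supports("sse2")   ? "yes" : "no");
    printf("sse4.2 : %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    printf("avx    : %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
    printf("avx2   : %s\n", __builtin_cpu_supports("avx2")   ? "yes" : "no");

    return 0;
}
```

Anything reported here beyond the i686 baseline is what a rebuild with -march=corei7 (or newer) could exploit, with the caveat from above that memory-bound code may not notice.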

Is the SPARC architecture still relevant as a JIT compiler target on high-end servers?

x86 and AMD64 are the most important architectures for many computing environments (desktop, servers, and supercomputers). Obviously a JIT compiler should support both of them to gain acceptance.
Until recently, the SPARC architecture was the logical next step for a compiler, especially in the high-end server market. But now that Sun is gone, things are not so clear.
Oracle doesn't seem to be really interested in it, and some big projects are dropping support for the architecture (Ubuntu, for example). On the other hand, the OpenSPARC initiative, intended to open-source recent processors, is quite promising, meaning that a lot of manufacturers could implement and use SPARC for free in the near future.
So, is SPARC still a good choice as the next target architecture for a JIT compiler? Or is it better to choose another one (POWER, ARM, MIPS, ...)?
I don't know any more than you about SPARC's future. I hope it has one; it's been tragic how many good architectures have died out while x86 has kept going.
But I would suggest you look at ARM as a target. It isn't present in big server hardware, but it's huge in the mobile market, and it powers all sorts of interesting little boxes, like my NAS, my ADSL router, and so on.
Your next target architecture should definitely be ARM: power consumption in large datacenters is a huge issue, and the next big thing will be trying to reduce it by using low-power CPUs; see Facebook's first attempt at this.

Is Intel Software Development Suite worth the cost?

(this is a partial duplicate of Are the Intel compilers worth it?)
The Intel Software Development Suite includes the C++ compiler, IPP, VTune, and Thread Checker.
The Intel Parallel Studio includes Composer, Inspector, and Amplifier.
Those two packages cost almost $4000, but I'll still be a student for one more month and as such would pay only $200. That's a factor of 20 off, but it is still $200.
I'd love to get them, even if it's only for hobby use. Do you think it's worth it? Does anyone have experience with it?
Thanks
If you were a company I would say definitely yes, since the cost of software is much, much less than the cost of people.
But since you are a student, I don't know; only you can answer this question: do you think it's worth $200? Is it something that is just "cool" to have, or will it actually teach you new things that will help you in your professional career?

Are the Intel compilers worth it?

Pretty straightforward: are the Intel compilers worth getting? I do mostly systems-level and desktop work, so I figure I might benefit. Can anyone with more experience shed some light?
If you are on Windows, they do provide a nice speed boost over other compilers on Intel processors. There is a known behavior where they pick a very slow code path on non-Intel processors (AMD, VIA), and there have been antitrust probes surrounding the issue.
If you use Threading Building Blocks or other features, you also risk tying your code to the Intel compiler long term, as the functionality doesn't exist elsewhere.
GCC 4.5 on Linux is nearly on-par with the Intel compiler. There is no clear winner on that platform.
In the small experience I've had with the Intel compilers (C only), I would say they are vastly superior. Specifically, the OpenMP library was much, much faster than the open-source version. "Worth it" depends on your situation, though; they are expensive, but they are better IMO.
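For context, the kind of code where the difference between OpenMP runtimes tends to show up is a plain parallel reduction. A small sketch in standard C/OpenMP (nothing Intel-specific; build with your compiler's OpenMP flag, e.g. gcc -fopenmp, and timings will obviously vary):

```c
/* A plain OpenMP parallel reduction: numerically integrate 4/(1+x^2)
 * over [0,1] to approximate pi. The loop itself is trivial; what gets
 * compared between compilers is the quality of the OpenMP runtime.
 * Build example: gcc -O2 -fopenmp pi_reduce.c -o pi_reduce
 */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    const long n = 100000000L;
    const double step = 1.0 / (double)n;
    double sum = 0.0;

    double t0 = omp_get_wtime();

    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        double x = (i + 0.5) * step;
        sum += 4.0 / (1.0 + x * x);
    }

    double pi = sum * step;
    printf("pi ~= %.12f  (max threads: %d, %.3f s)\n",
           pi, omp_get_max_threads(), omp_get_wtime() - t0);
    return 0;
}
```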
From the benchmarks I've seen, it does look like the Intel-specific compilers provide some performance/multithreading benefit over their open-source alternatives.
If floating-point precision is important to you, then use the Visual Studio compiler and not the Intel compiler.
A 32-bit vs. a 64-bit build can give you different calculation results with the Intel compiler (checked).
With the Visual Studio compiler, the 32-bit and 64-bit results will be the same.
If you're comparing the numerical behavior of ICL vs. MSVC++, you must take into account the different behavior of the /fp: settings.
ICL /fp:source (less aggressive than default) is equivalent to MSVC /fp:fast (more aggressive than default).
Microsoft doesn't perform any of the optimizations that are enabled by the ICL defaults. These include SIMD reductions (which usually improve accuracy, but by an unpredictable margin). ICL also violates the standard's rules about parentheses by default. There still seems to be controversy about whether to fix that by better-performing means than /fp:source.
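To make the parentheses and reduction point concrete, here is a small standard-C example (not tied to any particular compiler) of why re-association changes results; a compiler that honors the written grouping and one that reorders additions freely can legitimately disagree:

```c
/* Floating-point addition is not associative, so reordering sums
 * (as aggressive /fp: settings or SIMD reductions may do) can change
 * the result compared with evaluating the expression as written.
 */
#include <stdio.h>

int main(void)
{
    double a = 1e20, b = -1e20, c = 1.0;

    /* As written: a and b cancel exactly, then c is added. */
    double as_written    = (a + b) + c;   /* 1.0 */

    /* Re-associated: c is lost against the huge magnitude of b. */
    double reassociated  = a + (b + c);   /* 0.0 */

    printf("(a + b) + c = %g\n", as_written);
    printf("a + (b + c) = %g\n", reassociated);
    return 0;
}
```

The same effect explains the 32-bit vs. 64-bit discrepancy mentioned above: different instruction selection (x87 vs. SSE, scalar vs. vectorized reductions) implies different evaluation orders and intermediate precision, so results drift unless the compiler is told to preserve source semantics.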
