Benchmarks comparing Intel Xeon Phi and Nvidia Tesla K20 (OpenCL)

To my surprise, I cannot find a comparison of these products using open source OpenCL benchmark suites such as Rodinia and SHOC. Such a comparison could be more interesting than the comparisons of theoretical peak performance, or of performance in simple matrix multiplication kernels, which I have been able to find.
Does anyone know where such results might be available? Failing that, do any Stack Overflow users have access to one or both products, and the time and inclination to run the benchmarks and share the results? Results for any of the versions of either card would be interesting.

CLBenchmark.com now has some results for the Xeon Phi, and a complete set for the K20c.
Here is a side-by-side comparison.

Here is a comparison of the Xeon Phi with a GTX Titan.
http://clbenchmark.com/compare.jsp?config_0=14470292&config_1=15887974
The Xeon Phi basically gets completely destroyed in 10 of the 12 benchmarks and is on par for the other 2. So the 300 watt 22 nm Phi part does not fare well against the 250 watt 28 nm GPU.
The Phi seems to have major trouble utilizing its bandwidth capacity, and vectorizing the code appears to be another issue.

Here is a benchmark comparing sparse matrix multiplication performance:
http://uk.arxiv.org/abs/1302.1078
It partly answers my question, but I would rather see more than one algorithm, and I would like to see how portable OpenCL performance is. I will still accept any answer that can provide that information.
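For context, and assuming the kernel benchmarked in that paper is the usual sparse matrix-vector product (SpMV) in CSR format, this is roughly the operation in question, sketched in plain C; the names and layout below are generic, not taken from the paper or from SHOC/Rodinia:

    #include <stddef.h>

    /* Illustrative sparse matrix-vector multiply y = A*x with A stored in CSR.
     * This is the memory-bound, irregular-access kernel whose performance
     * portability across the Phi and GPUs is at issue. */
    void spmv_csr(size_t n_rows,
                  const int *row_ptr,   /* n_rows+1 entries: row i occupies [row_ptr[i], row_ptr[i+1]) */
                  const int *col_idx,   /* column index of each nonzero */
                  const double *vals,   /* value of each nonzero */
                  const double *x,      /* dense input vector */
                  double *y)            /* dense output vector */
    {
        for (size_t i = 0; i < n_rows; ++i) {
            double sum = 0.0;
            for (int j = row_ptr[i]; j < row_ptr[i + 1]; ++j)
                sum += vals[j] * x[col_idx[j]];
            y[i] = sum;
        }
    }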

The SHOC benchmark suite for Xeon Phi is on GitHub here:
Intel Xeon Phi SHOC Benchmark Suite
Plenty of benchmark postings are starting to go public and become "googlable", but here is the standard Intel communication on benchmarking Xeon Phi versus a dual-socket E5-2670:
Intel Xeon Phi Performance Doc.
When looking to compare performance of the Xeon Phi to a regular Xeon, or any other platform, make sure you're taking into account the power envelope of the platform (dual-socket Xeon) and whether the application was already tuned for a Xeon or not. One of the big sells on Xeon Phi is that you typically get Xeon improvements in addition to Xeon Phi improvements. Pretty sweet.

Related

Intel DAAL library compatible with KNC?

I am looking for a definitive answer as to whether the Intel DAAL libraries are compatible with the x100 Knights Corner Xeon Phi co-processor.
I have searched high and low on the internet and can't tell either way, and I can't seem to make it work on my x100 Xeon Phi.
Okay, found this. Only the KNL is mentioned in the list of supported Xeon Phi processors; it is not explicit, though, that the KNC is not supported.
From: DAAL supports KNL

OpenCL vs OpenMP: how much performance difference when dealing with LBM problems?

I would like to find a suitable GPU acceleration package for Lattice Boltzmann Method (LBM) or normal Navier-Stokes CFD.
CUDA is device-dependent, so it is already out of consideration for me.
OpenCL is around 3 times faster than OpenMP when doing CFD, according to https://arxiv.org/abs/1704.05316
But there is no comparison on LBM.
OpenCL is 2 times harder to code.
I am now considering OpenCL and OpenMP; please tell me how much performance difference there will be between these two on LBM problems.
I have implemented LBM in OpenCL; see my master's thesis. From testing my code on various GPUs and CPUs, and by comparing performance with other multi-CPU implementations, I can say that LBM on 1 GPU is about as fast as on ~2000-7000 CPU cores. The performance benefit really is massive, as LBM efficiency on CPUs is extremely poor for all CPU codes (~10-50%). On the GPU, LBM is solely bottlenecked by memory bandwidth, which is orders of magnitude larger than on CPUs.
Also, on the Nvidia A100/V100 I get 97%/100% hardware efficiency (8800/5250 MLUPs/s for D3Q19 and FP32), so you can't say you would have a performance disadvantage compared to CUDA.
I have verified that my code runs on Nvidia/AMD/Intel GPUs and Intel CPUs; it even runs on the Mali-G72 GPU of my smartphone.
So yes, I definitely recommend going with OpenCL for LBM.
Update: My LBM source code is now available on GitHub.
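To put the bandwidth argument above in rough numbers: a D3Q19 FP32 lattice update streams on the order of 19 loads plus 19 stores of 4 bytes each, so the attainable update rate is essentially memory bandwidth divided by ~152 bytes. The sketch below is back-of-the-envelope only; the bandwidth figure (an A100 40 GB data-sheet value) and the exact byte count (flags and extra fields vary by implementation) are assumptions of mine, which is why it does not reproduce the efficiency figures quoted above exactly.

    #include <stdio.h>

    /* Roofline estimate for bandwidth-bound LBM (D3Q19, FP32). */
    int main(void) {
        const double bandwidth_GBs    = 1555.0;        /* assumed: A100 40 GB HBM2 data-sheet bandwidth */
        const double bytes_per_update = 2.0 * 19 * 4;  /* one load + one store per direction, 4-byte floats */
        double mlups = bandwidth_GBs * 1.0e9 / bytes_per_update / 1.0e6;
        printf("theoretical peak: ~%.0f MLUPs\n", mlups);  /* ~10230 MLUPs under these assumptions */
        return 0;
    }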

Advice about inversion of large sparse matrices

Just got a Windows box set up with two 64-bit Intel Xeon X5680 3.33 GHz processors (6 cores each) and 12 GB of RAM. I've been using SAS on some large data sets, but it's just too slow, so I want to set up R to do parallel processing. I want to be able to carry out matrix operations, e.g., multiplication and inversion. Most of my data are not huge, in the 3-4 GB range, but one file is around 50 GB. It's been a while since I used R, so I looked around on the web, including the CRAN HPC task view, to see what was available. I think a foreach loop and the bigmemory package will be applicable. I came across this post: Is there a package for parallel matrix inversion in R, which had some interesting suggestions. I was wondering if anyone has experience with the HiPLAR packages. It looks like HiPLARM adds functionality to the Matrix package and HiPLARb adds new functions altogether. Which of these would be recommended for my application? Furthermore, there is a reference to the PLASMA library. Is this of any help? My matrices have a lot of zeros, so I think they could be considered sparse. I didn't see any examples of how to pass data from R to PLASMA, and looking at the PLASMA docs, it says it does not support sparse matrices, so I'm thinking that I don't need this library. Am I on the right track here? Any suggestions on other approaches?
EDIT: It looks like HiPLAR and the pbdR package will not be helpful. I'm leaning more toward bigmemory, although it looks like I/O may be a problem: http://files.meetup.com/1781511/bigmemoryRandLinearAlgebra_BryanLewis.pdf. This article talks about a package vam for virtual associative matrices, but it must be proprietary. Would the ff package be of any help here? My R skills are just not current enough to know what direction to pursue. Pretty sure I can read this in using bigmemory, but not sure the processing will be very fast.
If you want to use HiPLAR (MAGMA and PLASMA libraries in R), it is only available for Linux at the moment. For this and many other things, I suggest switching your OS to the penguin.
That being said, Intel MKL optimization can do wonders for these sorts of operations. For most practical uses, it is the way to go. Python built with MKL optimization, for example, can process large matrices about 20x faster than IDL, which was designed specifically for image processing. R has similarly shown vast improvements when built with MKL optimization. You can also install R Open from Revolution Analytics, which includes MKL optimization, but I am not sure that it has quite the same effect as building it yourself using Intel tools: https://software.intel.com/en-us/articles/build-r-301-with-intel-c-compiler-and-intel-mkl-on-linux
I would definitely consider the type of operations you are looking to perform. GPU processes are those that lend themselves well to high parallelism (many of the same little computations running at once, as with matrix algebra), but they are limited by bus speeds. Intel MKL optimization is similar in that it can help use all of your CPU cores, but it is really optimized for Intel CPU architecture; hence, it should provide basic memory optimization too. I think that is the simplest route. HiPLAR is certainly the future, as it is CPU-GPU by design, especially with highly parallel heterogeneous architectures making their way into consumer systems, though I think most consumer systems today cannot fully utilize this.
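As a concrete illustration of the kind of call that benefits, here is a minimal C example of a dense matrix multiply through the CBLAS interface; an MKL- or OpenBLAS-backed R build ends up dispatching to the same dgemm routine for %*% on double matrices. The header and link line depend on which BLAS you install, so treat the build details as assumptions.

    #include <stdio.h>
    #include <cblas.h>   /* provided by MKL, OpenBLAS, or any other CBLAS implementation */

    /* C = A * B for two 2x2 row-major matrices via dgemm. */
    int main(void) {
        const int n = 2;
        double A[] = {1, 2,
                      3, 4};
        double B[] = {5, 6,
                      7, 8};
        double C[4] = {0};

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n,
                    1.0, A, n,
                         B, n,
                    0.0, C, n);

        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 19 22 / 43 50 */
        return 0;
    }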
Cheers,
Adam

Inverse FFT in shader language?

Does anyone know of an implementation of the inverse FFT in HLSL/GLSL/Cg ...?
It would save me much work.
Best,
heinrich
Do you already have an FFT implementation? You may already be aware, but the inverse can be computed by cyclically reversing the order of the N inputs (index 0 stays in place; index k maps to N-k), taking the forward FFT of that, and dividing the result by N.
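To make that identity concrete, here is a plain-C sketch (not shader code) using a naive recursive radix-2 FFT; ifft_via_fft cyclically reverses the input (index 0 stays put), runs the forward transform, and divides by N. The function names are mine and purely illustrative.

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    /* Naive recursive radix-2 decimation-in-time FFT; n must be a power of two. */
    static void fft(double complex *x, int n) {
        if (n < 2) return;
        double complex odd[n / 2];
        for (int i = 0; i < n / 2; i++) odd[i] = x[2 * i + 1];
        for (int i = 0; i < n / 2; i++) x[i] = x[2 * i];
        for (int i = 0; i < n / 2; i++) x[i + n / 2] = odd[i];
        fft(x, n / 2);
        fft(x + n / 2, n / 2);
        const double pi = acos(-1.0);
        for (int k = 0; k < n / 2; k++) {
            double complex w = cexp(-2.0 * pi * I * k / n);
            double complex e = x[k], o = x[k + n / 2];
            x[k]         = e + w * o;
            x[k + n / 2] = e - w * o;
        }
    }

    /* Inverse FFT via the forward FFT: cyclically reverse the inputs
     * (index k -> N-k, index 0 fixed), transform, then divide by N. */
    static void ifft_via_fft(double complex *x, int n) {
        for (int i = 1, j = n - 1; i < j; i++, j--) {
            double complex t = x[i]; x[i] = x[j]; x[j] = t;
        }
        fft(x, n);
        for (int i = 0; i < n; i++) x[i] /= n;
    }

    int main(void) {
        double complex v[4] = {1, 2, 3, 4};
        fft(v, 4);
        ifft_via_fft(v, 4);   /* recovers 1, 2, 3, 4 (up to rounding) */
        for (int i = 0; i < 4; i++) printf("%g%+gi\n", creal(v[i]), cimag(v[i]));
        return 0;
    }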
DirectX 11 comes with an FFT example for compute shaders (see the DX11 August SDK Release Notes). As PereAllenWebb points out, this can also be used for the inverse FFT.
Edit: If you just want a fast FFT, you could try CUFFT, which runs on the GPU. It's part of the CUDA SDK. ACML from AMD also has an FFT, which is currently not GPU-accelerated, but that will likely be added soon.
I implemented a 1D FFT on 7800GTX hardware back in 2005. This was before CUDA etc so I had to resort to using Cg and manually implementing the FFT.
I have two FFT implementations. One is a radix-2 decimation-in-time FFT and the other a Stockham autosort FFT. The Stockham version would perform around 2-4x faster than a CPU (at the time a 3 GHz single-core P4) for larger sizes (> 8192), but for smaller sizes the CPU was faster, as it doesn't have to shift data to/from the GPU.
If you're interested in the shader code feel free to contact me and I'll send it over by email. It was from a personal project so not covered by any commercial copyright. I would imagine that CUDA (and similar) implementations would massively outperform my implementation, however from a learning perspective you can't get better than to write or study the code yourself!
Maybe you could take a look at OpenCL which is a standard for general purpose computing on graphics (and other) hardware.
The Wikipedia article contains an OpenCL example for a standard FFT:
http://en.wikipedia.org/wiki/OpenCL#Example
If you are on a Mac with OS X 10.6, you just need to install the developer tools to get started with OpenCL development.
I also heard that hardware vendors already provide basic OpenCL driver support on Windows.

Intel MKL vs. AMD Math Core Library

Does anybody have experience programming for both the Intel Math Kernel Library and the AMD Core Math Library (ACML)? I'm building a personal computer for high performance statistical computations and am debating which components to buy. An appeal of ACML is that it is free, but I am in academia so the MKL is not that expensive. But I'd be interested in hearing thoughts on:
Which provides a better API?
Which provides better performance, on average, per dollar, including licensing and hardware costs?
Is ACML-GPU a factor I should consider?
Intel MKL and ACML have similar APIs, but MKL has a richer set of supported functionality, including BLAS (and CBLAS), LAPACK, FFTs, vector and statistical math, sparse direct and iterative solvers, sparse BLAS, and so on. Intel MKL is also optimized for both Intel and AMD processors and has an active user forum you can turn to for help or guidance. An independent assessment of the two libraries is posted here: http://www.advancedclustering.com/company-blog/high-performance-linpack-on-xeon-5500-v-opteron-2400.html
• Shane Corder, Advanced Clustering, (also carried by HPCWire: Benchmark Challenge: Nehalem Versus Istanbul): “In our recent testing and through real world experience, we have found that the Intel compilers and Intel Math Kernel Library (MKL) usually provide the best performance. Instead of just settling on Intel's toolkit we tried various compilers including: Intel, GNU compilers, and Portland Group. We also tested various linear algebra libraries including: MKL, AMD Core Math Library (ACML), and libGOTO from the University of Texas. All of the testing showed we could achieve the highest performance when using both the Intel Compilers and Intel Math Library--even on the AMD system--so these were used as the base of our benchmarks.” [Benchmark testing showed 4-core Nehalem X5550 2.66GHz at 74.0GFs vs. Istanbul 2435 2.6GHz at 99.4GFs; Istanbul only 34% faster despite 50% more cores]
Hope this helps.
In fact, there are two versions of LAPACK routines in ACML. The ones without trailing underscore (_) are the C-version routines, which as Victor said, don't require workspace arrays and you can just pass values instead of references for the parameters. The ones with the underscore however are just vanilla Fortran routines. Do a "dumpbin /exports" on libacml_dll.dll and you'll see.
I have used ACML for its BLAS/LAPACK routines, so this will probably not answer your question, but I hope it's useful for someone. Comparing them to vanilla BLAS/LAPACK, their performance was a factor of 2-3 better in my particular use case. I used it for dense nonsymmetric complex matrices, for both linear solves and eigensystem computations. You should know that the function declarations are not identical to the vanilla routines. This required a substantial amount of preprocessor macros to allow me to freely switch between the two. In particular, the LAPACK routines in ACML do not require work arrays. This is a major convenience if ACML is the only library you will use.
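To illustrate the work-array point, here is what inverting a dense matrix looks like through the vanilla Fortran LAPACK interface from C, including the usual workspace query; this is the boilerplate that the C-style routines reportedly let you skip. The prototypes are declared by hand, as is common when linking directly against a Fortran LAPACK; the exact link flags depend on your library.

    #include <stdio.h>
    #include <stdlib.h>

    /* Vanilla Fortran LAPACK prototypes: everything is passed by reference. */
    extern void dgetrf_(int *m, int *n, double *a, int *lda, int *ipiv, int *info);
    extern void dgetri_(int *n, double *a, int *lda, int *ipiv,
                        double *work, int *lwork, int *info);

    int main(void) {
        int n = 2, lda = 2, info;
        int ipiv[2];
        double a[4] = {4, 7, 2, 6};   /* column-major: [[4, 2], [7, 6]] */

        dgetrf_(&n, &n, a, &lda, ipiv, &info);               /* LU factorization */

        double wkopt;
        int lwork = -1;
        dgetri_(&n, a, &lda, ipiv, &wkopt, &lwork, &info);   /* workspace query */
        lwork = (int)wkopt;
        double *work = malloc(lwork * sizeof *work);

        dgetri_(&n, a, &lda, ipiv, work, &lwork, &info);     /* a now holds the inverse */
        printf("info=%d  inv = [%g %g; %g %g]\n", info, a[0], a[2], a[1], a[3]);
        free(work);
        return 0;
    }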
