How to profile an OpenMP code natively on Intel MIC?

I have an OpenMP code written in C. I executed the code on an Intel MIC on Stampede. I want to profile the code to find its hotspots so that I can optimize it further. I tried to use the profiler gprof, but I read somewhere that gprof cannot be used on the MIC directly. I then tried perf by following a tutorial. I could get to a certain step, but when I reach the perf annotate step and execute the code, it gives me the error ")" unexpected. So I do not know how to proceed with profiling my code. Can anybody please help?
This is the perf tutorial I was following: sandsoftwaresound.net/perf/perf-tutorial-hot-spots/ .

80% of optimization for the Xeon Phi is the same as for the host (Xeon). Use gprof, printf, compiler options, and the rest of your toolkit, and carry your optimization as far as you can while executing your code on the host only. After you can do no more, then focus on specific Xeon Phi optimizations.
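If you go the printf/timer route on the host first, a minimal sketch of manual hotspot timing with OpenMP wall-clock timers might look like the following (stage_a and stage_b are hypothetical placeholders for your own candidate hotspots; you can combine this with a gprof run by compiling with something like gcc -fopenmp -pg):

#include <stdio.h>
#include <omp.h>

/* hypothetical stand-ins for the real compute phases of your code */
static void stage_a(void) { /* ... */ }
static void stage_b(void) { /* ... */ }

int main(void)
{
    double t0 = omp_get_wtime();
    stage_a();                       /* first candidate hotspot  */
    double t1 = omp_get_wtime();
    stage_b();                       /* second candidate hotspot */
    double t2 = omp_get_wtime();

    /* crude but effective: print per-phase wall-clock times */
    printf("stage_a: %.3f s\n", t1 - t0);
    printf("stage_b: %.3f s\n", t2 - t1);
    return 0;
}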
As you are on Stampede, I assume you are using the Intel compiler. The compiler has a lot of diagnostic capabilities to profile your code and even provide suggestions. I'd provide you with more specific URLs but am on vacation with limited bandwidth.
Though this isn't specific to your question, here are some other suggestions. If you aren't already using the Intel compiler, you'll most likely get a substantial boost from it; Intel compilers are danged good at optimizations, especially on Intel architectures. Also, you should use Intel MKL where possible. All of MKL's routines are optimized for the different IA architectures, and the ones most relevant to HPC are optimized specifically for MIC.
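As a concrete illustration of the MKL suggestion, here is a minimal sketch of delegating a hand-rolled matrix multiply to MKL's optimized dgemm (the multiply wrapper is hypothetical; link against MKL using the link line Intel's link-line advisor suggests for your setup):

#include <mkl.h>

/* C = A * B for n x n row-major matrices, delegated to MKL */
void multiply(int n, const double *a, const double *b, double *c)
{
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n,
                1.0, a, n,    /* alpha, A, lda */
                b, n,         /* B, ldb        */
                0.0, c, n);   /* beta, C, ldc  */
}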

You have a few options.
The heavyweight approach is to use Intel VTune. First, add -g to your compiler flags.
I use VTune from the host command line quite a bit; here is the command I use to profile an application on the MIC. (This is executed on the host machine; VTune on the host uses ssh to launch the application on the MIC.)
amplxe-cl -collect knc-hotspots -source-search-dir=/mysrc/dir -search-dir=/mybin/dir -- ssh mic0 /home/me/myapp
This assumes the app on the MIC is at /home/me/myapp, with the source directory and binary search directory on the host. (With VTune update 15 at least, I need to specify both of these separately in order to get the VTune GUI to show me symbol info.)
Once your app has finished, run the VTune GUI on the host with amplxe-gui and open your result set.
There are also some simplified open-source profiling tools developed by Intel that support the MIC, Speedometer and Overhead; you can find information about them here.
Hopefully this is enough info to get you started.

Related

Can I check OpenCL kernel syntax at compilation time?

I'm working on some OpenCL code within a larger project. The code only gets compiled at run-time, but I don't want to deploy a version and start it up just for that. Is there some way for me to have the syntax of those kernels checked, or even compile them, at least under some restrictions, to make it easier to catch errors earlier?
I will be targeting AMD and/or NVIDIA GPUs.
The type of program you are looking for is an "offline compiler" for OpenCL kernels; knowing this will hopefully help with your search. They exist for many OpenCL implementations, so you should check availability for the specific implementation you are using; otherwise, a quick web search suggests there are some generic open-source ones which may or may not fit the bill for you.
If your build machine is also your deployment machine (i.e. your target OpenCL implementation is available on your build machine), you can of course also put together a very basic offline compiler yourself by simply wrapping clBuildProgram() and friends in a basic command-line utility.
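As a rough sketch of that wrapper idea (not a definitive implementation): the hypothetical clcheck.c below reads a kernel source file, builds it for the first device of the first platform it finds, and prints the build log, so syntax errors show up without deploying the full application. Error handling is trimmed for brevity.

/* clcheck.c - offline syntax check for an OpenCL kernel source file */
/* build:  gcc clcheck.c -lOpenCL -o clcheck                          */
/* usage:  ./clcheck mykernels.cl                                     */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s kernel.cl\n", argv[0]); return 1; }

    /* read the whole kernel source file into memory */
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    char *src = malloc(len + 1);
    fread(src, 1, len, f);
    src[len] = '\0';
    fclose(f);

    /* pick the first platform and the first device on it */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL);

    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, (const char **)&src, NULL, &err);

    /* the actual compilation step - this is where syntax errors surface */
    err = clBuildProgram(prog, 1, &device, "", NULL, NULL);

    /* print the compiler's build log whether the build succeeded or not */
    size_t log_size = 0;
    clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);
    char *log = malloc(log_size + 1);
    clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, log_size, log, NULL);
    log[log_size] = '\0';
    fprintf(stderr, "%s\n", log);

    return err == CL_SUCCESS ? 0 : 2;
}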

Difference between Jstack and gcore when generating Core Dumps?

We all know that core dumps are an essential diagnostic tool for analysing various processes in Unix. I know that jstack and gcore are both used for generating javacore files or core dumps, but my doubt is whether gcore is mainly used for processes and jstack for threads.
From an operating-system perspective, processes and threads, though interrelated (a process comprises threads), are quite different from each other w.r.t. memory/speed/execution. So is it that gcore will diagnose the process and jstack will analyse the threads in that process?
gcore acts at the OS level and gives you a dump of the native code that is currently running. From a Java point of view, it is not really understandable.
jstack gets you the stack trace at the VM level (the Java stack) of all the threads your application has. From that you can find out what Java code is really being executed at a given point.
In practice, gcore is almost never used (too low level, native code...). Only a really strange issue with a native library, or something like that, might need this kind of tool.
There is also jmap, which can generate an hprof file containing the heap data from your VM. A tool like the Memory Analyzer Tool can open the hprof, and you can drill down into what was going on (on the memory side).
If your VM crashes because of an OutOfMemoryError, you can also set a parameter (-XX:+HeapDumpOnOutOfMemoryError) to get the hprof when the event occurs. It helps to understand why (too many users, a DB query that fetches too much data...).
Last thing: you can add a debug option when you start your VM, so that you can connect to it and debug the running process. It can help if you have some strange issue that you are not able to reproduce in your local environment.

Why is Intel Haswell XEON CPU sporadically miscomputing FFTs and ART?

Over the last few days I observed behaviour of my new workstation that I couldn't explain. Some research on this problem suggests there might be a bug in the Intel Haswell architecture as well as in the current Skylake generation.
Before writing about the possible bug, let me give you an overview of the hardware used, the program code and the problem itself.
Workstation hardware specification
INTEL Xeon E5-2680 V3 2500MHz 30M Cache 12Core
Supermicro SC745 BTQ -R1K28B-SQ
4 x 32GB ECC Registered DDR4-2133 RAM
INTEL SSD 730 Series 480 GB
NVIDIA Tesla C2075
NVIDIA TITAN
Operating system & program code in question
I'm currently running Ubuntu 15.04 64-bit Desktop version, with the latest updates and kernel installed. Besides using this machine to develop CUDA kernels and such, I recently tested a pure C program.
The program does a sort of modified ART on quite large input data sets. The code executes some FFTs and takes quite some time to finish its calculation. I can't currently post or link to any source code as this is ongoing research that cannot be published. If you're not familiar with ART, here is a simple explanation of what it does: ART is a technique used to reconstruct the data received from a computer tomography machine into visible images for diagnosis. Our version of the code reconstructs data sets of sizes like 2048x2048x512. So far, nothing too special or rocket science involved. After some hours of debugging and fixing errors, the code was tested against reference results and we can confirm that it works as it is supposed to. The only library the code uses is the standard math.h. There are no special compile parameters and no additional libraries that might bring in extra problems.
Observing the problem
The code implements ART using a technique to minimize the number of projections needed for reconstructing the data. So let's assume we can reconstruct one slice of data using 25 projections. The code is started with exactly the same input data on 12 cores. Please note that the implementation is not based on multithreading; currently 12 instances of the program are launched. I know this isn't the best way to do it, proper thread management is heavily advised, and this is already on the list of improvements :)
So when we run at least two instances of the program (every instance working on a separate data slice), the results of some projections are wrong in a random fashion. To give you an idea of the results, please see Table1. Please note that the input data is always the same.
Running only one instance of the code, involving one core of the CPU, the results are all correct. Even performing several runs involving one CPU core, the results remain correct. Only involving two or more cores generates the result pattern seen in Table1.
Identifying the problem
Okay, this took quite some hours to get an idea of what is actually going wrong. So we went through the whole code; most such problems start with a minor implementation mistake. But, well, no (of course we cannot prove the absence of bugs nor guarantee it). To verify our code, we used two different machines:
(Machine1) Intel Core i5 Quad-Core (Model from late 2009)
(Machine2) Virtual Machine running on Intel XEON 6core SandyBridge CPU
Surprisingly, both Machine1 and Machine2 always produce correct results. Even using all CPU cores, the results remain correct. Not even one wrong result in over 50 runs on each machine. The code was compiled on every target machine without optimization options or any specific compiler settings.
So, reading the news led to the following findings:
Ars Technica - Skylake CPU freezes during complex workload
PCWorld - How to test your PC for the Skylake bug
Intel Community - Simple instruction for freezing a Skylake processor
So the folks over at Prime95 and the Mersenne community seem to have been the first to discover and identify this nasty bug. The referenced postings and news support the suspicion that the problem only exists under heavy workload. Based on my observations, I can confirm this behavior.
The question(s)
Have you / the community observed this problem on Haswell CPUs as well as on Skylake CPUs?
Since gcc applies AVX(2) optimization by default (whenever possible), would turning off this optimization help?
How can I compile my code and ensure that any optimization that might be affected by this bug is turned off? So far I have only read about a problem with the AVX2 instruction set on Haswell / Skylake architectures.
Solutions?
Okay, I can turn off all AVX2 optimizations, but this slows down my code. Intel might release a BIOS update to mainboard manufacturers that would modify the microcode in Intel CPUs. As it seems to be a hardware bug, it might get interesting even just by updating the CPU's microcode. I think this is a valid option, as Intel CPUs use CISC-to-RISC translation mechanisms controlled by microcode.
EDIT: Techreport.com - Errata prompts Intel to disable TSX in Haswell, early Broadwell CPUs Will check the microcode version in my CPU.
EDIT2: As of now (19.01.2016 15:39 CET) Memtest86+ v4.20 is running and testing the memory. As this seems to take quite some time to finish, I'll update the post tomorrow with results.
EDIT3: As of now (21.01.2016 09:35 CET) Memtest86+ finished two runs and passed. Not even one memory error. Updated the microcode of the CPU from revision=0x2d to revision=0x36. Currently preparing the source code for release here. The problem with the wrong results persists. As I'm not the author of the code in question, I have to double-check that I don't post code I'm not allowed to. I'm also using the workstation and maintaining it.
EDIT4: (22.01.2016) (12:15 CET) Here is the Makefile used to compile the source code:
# VARIABLES ==================================================================
CC = gcc
CFLAGS = --std=c99 -Wall
#LDFLAGS = -lm -lgomp -fast -s -m64
LDFLAGS = -lm
OBJ = ArtReconstruction2Min.o
# RULES AND DEPENDENCIES ====================================================
# linking all object files
all: $(OBJ)
	$(CC) -o ART2Min $(OBJ) $(LDFLAGS)
# every .o file depends on the corresponding .c file; the -g option enables debugging information
%.o: %.c
	$(CC) -c -g $< $(CFLAGS)
# MAKE CLEAN =================================================================
clean:
	rm -f *.o
	rm -f main
and the gcc -v output:
gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.9/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.9.2-10ubuntu13' --with-bugurl=file:///usr/share/doc/gcc-4.9/README.Bugs --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.9 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.9 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.9-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.9-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.9-amd64 --with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.9.2 (Ubuntu 4.9.2-10ubuntu13)
EDIT: Problem solved. I have to shout out a huge sorry to the community and a big thank you for your hints. Sorry to user anonymous, who seems to be involved in kernel development. What happened? We spent another two days debugging and fiddling around with the program code. No implementation problems were found. BUT: the main code involves another helper program, which calculates weights for the ART algorithm on demand. After debugging and testing, it turned out that this helper program messed up when running at least 4 processes. So this was NOT a kernel / hardware problem, but a software (memory access) problem.
Lessons learned:
Debug every tool that is involved in the calculation process.
The microcode was outdated. SuperMicro has been informed about this.
Ubuntu 15.04 possibly needs additional tools so that all cores of the CPU run at full speed. Achieved this by installing Ubuntu 14.04 - all cores now run at 2.5 GHz.
I owe some beers if we ever meet up at a conference.
So after three days of thinking, testing and fiddling around with the machine, I made the following observations today:
Ubuntu 15.04 runs the CPU at 420 - 650 MHz per core. At first I thought this was an energy-saving option, so I followed various guides to set the speed to the maximum (2.50 GHz). It didn't work. Checked with cpufreq-utils.
The results still remained wrong after several tests on this machine. Other (i5, i7, Xeon) machines produced correct results.
I read that other users had experienced issues with Ubuntu 15.04 and the CPU frequency. So I decided to plug in an SSD and install Ubuntu 14.04. I checked the CPU frequency again... and it showed 2.50 GHz, as I expected.
I started the reconstruction algorithm again (which was now roughly 4-5 times faster than on Ubuntu 15.04) and waited for the results. Okay. The results are correct now! I double-checked, started 9 processes and compared results. Still correct.
So I can only assume that there might be a problem in Ubuntu 15.04 / the kernel using SpeedStep on this CPU. The CPU in 15.04 ran all the time between 420 - 650 MHz, while the minimum CPU speed is expected to be 1.20 GHz and the maximum CPU speed 3.30 GHz. If somebody wants to check, I can offer the source code and example data leading to this problem.
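For anyone who wants to reproduce the frequency check without cpufreq-utils, here is a minimal sketch that reads the current frequency of core 0 straight from sysfs (the path is assumed to be the standard Linux cpufreq location):

#include <stdio.h>

int main(void)
{
    /* scaling_cur_freq reports the current frequency of the core in kHz */
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", "r");
    if (!f) { perror("fopen"); return 1; }

    long khz = 0;
    if (fscanf(f, "%ld", &khz) == 1)
        printf("cpu0: %.2f MHz\n", khz / 1000.0);
    fclose(f);
    return 0;
}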
Sorry for suspecting this to be a CPU bug.
EDIT: after some more testing, the problem is only solved for some scenarios but not yet for all. I'll do more testing.
The Skylake-S/U prime95 erratum is in the AVX (not AVX2) unit. It is fixed in microcodes 0x56 (probably) and 0x6a (for sure). Such an erratum in Haswell is unlikely, but possible (especially on post-2014 Intel, where "validation" became an unwelcome cost instead of a tenet of quality).
Haswell has errata linked to the AVX unit, although HSE58 is rather unlikely to be at play (it only slows down the AVX unit). However, do try to place a few MFENCE instructions before the AVX2 computations. If this fixes it, report back immediately: it would mean we need to MFENCE all IRETs in the kernel (HSE105).
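For illustration, a minimal sketch of what placing an MFENCE before the AVX2 work can look like in C, using the compiler intrinsic rather than inline assembly (avx2_compute is a hypothetical stand-in for the actual computation):

#include <immintrin.h>   /* provides _mm_mfence() */

void avx2_compute(double *data, long n);   /* hypothetical AVX2 hotspot */

void run_with_fence(double *data, long n)
{
    _mm_mfence();            /* serialize all prior loads and stores */
    avx2_compute(data, n);   /* the AVX2 computation under suspicion */
}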
Your processor has signature 0x306f2. Ensure you have microcode revision 0x36 or later, this microcode is in Intel's "Linux microcode update pack" from 2015-11-06.
EDIT: this wasn't really an answer, so I should have made it a comment instead. I apologise. Since the microcode update was not sufficient to fix the issue, it could still be a new erratum, an old but not-yet-worked-around erratum, or something else entirely (such as a bug in the code or a gcc code-generation bug).

Are the compiler and shell an internal part of Unix?

I had this question on my exam. In the diagrams I saw, we have: hardware, kernel, the system call interface to the kernel, then (compilers, shells, system libraries), and on top some applications. Does the scope of the OS include only the kernel, with everything else being just additional functionality we choose to install, or does a Unix OS include everything from the list I gave above?
An OS has more or less two definitions:
Academic: the OS is software that provides an abstraction layer between hardware and software.
Pragmatic: the OS is the software that comes with the hardware when we buy it.
The compiler and shell don't fall under definition 1; they can fall under definition 2. And usually, users who are interested in a compiler or a shell prefer to consider the OS as an abstraction layer (the academic definition).
Simple answer: no. They are not an internal part of Unix but additional functionality that helps make the operating system more usable.
The OS scope applies primarily to the kernel only.
Whilst you need a compiler to build the kernel, you don't necessarily require one for the general day-to-day use of the system. Most operating systems don't ship the compiler by default; instead, the kernel and applications are built on one machine, and the resulting binaries are packaged and distributed either with the computer directly (Windows/Unix) or via the internet for others to download and install (Linux/BSD).
Likewise with the shell. Although all operating systems ship with a default one (sh/bash/dash on Linux/Unix systems, Command Prompt/PowerShell on Windows), most general users can go their entire lives without using it.
Having said that, if you were to delete the shell, you'll almost certainly find your system won't boot up. This is because a lot of core start-up scripts rely on the shell to stop / start the services presenting interfaces between the user and the kernel.
In summary:
You need a compiler to build the kernel and applications, but not for running the OS.
You need a shell to execute applications (which also includes the compiler).

intel_iommu, what is it?

One of my customers had a problem with a Xeon E5 machine: one GPU (I believe it was an NVIDIA one) was hanging, and they solved it by adding
intel_iommu=igfx_off
to the GRUB loader.
What is this value and what does it do? I read around but couldn't figure it out in simple terms.
Quoting from the "Intel-IOMMU.txt" file included in the Linux kernel documentation:
"If you encounter issues with graphics devices, you can try adding option intel_iommu=igfx_off to turn off the integrated graphics engine. If this fixes anything, please ensure you file a bug reporting the problem."
Apparently the GPU in this case was not working properly with the DMAR (DMA Remapping) feature provided by the Intel chipset. Using the "igfx_off" parameter allows the GPU to access the physical memory directly without going through the DMAR.
The purpose of the DMAR feature is to enable things like direct assignment of hardware to virtualized guests. If you have to use the "igfx_off" parameter then you probably won't be able to use this GPU in such a direct-assigned virtualization scenario.
