I had to replace mpich2 with OpenMPI because OpenFOAM depends on OpenMPI.
Earlier (when using mpich2), my code used the gethostname() function to get the name of the machine for debugging purposes. However, this function does not seem to be a standard MPI function, and it no longer works with the OpenMPI libraries. Is there another function for getting the host name in OpenMPI, or in the MPI standard? I am using mpicc for compiling and mpirun for running the code.
Thanks,
Sourabh
gethostname() is defined in unistd.h, which was included by mpi.h in your previous MPI implementation. That is not a feature you should rely on: you should always explicitly include the headers that define the symbols you use. Clearly you were relying on it without realizing it.
However, if your MPI code is supposed to run on POSIX systems only, it is safe to add
#include <unistd.h>
gethostname() is part of the POSIX.1-2001 standard.
However, the portable MPI solution is MPI_Get_processor_name(), as shown in the comment by High Performance Mark.
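For reference, here is a minimal sketch showing both calls side by side; the POSIX buffer size of 256 is an arbitrary illustrative choice:

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>   /* gethostname(), POSIX only */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Portable MPI way: the buffer must hold MPI_MAX_PROCESSOR_NAME chars. */
    char mpi_name[MPI_MAX_PROCESSOR_NAME];
    int  mpi_name_len;
    MPI_Get_processor_name(mpi_name, &mpi_name_len);

    /* POSIX way: independent of the MPI implementation. */
    char posix_name[256];
    gethostname(posix_name, sizeof(posix_name));

    printf("MPI name: %s, POSIX name: %s\n", mpi_name, posix_name);

    MPI_Finalize();
    return 0;
}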
Related
I'm working on some OpenCL code within a larger project. The code only gets compiled at run time - but I don't want to deploy a version and start it up just for that. Is there some way for me to have the syntax of those kernels checked, or even to compile them, at least under some restrictions, to make it easier to catch errors earlier?
I will be targeting AMD and/or NVIDIA GPUs.
The type of program you are looking for is an "offline compiler" for OpenCL kernels - knowing the term will hopefully help with your search. Offline compilers exist for many OpenCL implementations, so you should check availability for the specific implementation you are using; otherwise, a quick web search suggests there are some generic open-source ones which may or may not fit the bill for you.
If your build machine is also your deployment machine (i.e. your target OpenCL implementation is available on your build machine), you can of course also put together a very basic offline compiler yourself by simply wrapping clBuildProgram() and friends in a basic command line utility.
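To make that last suggestion concrete, here is a minimal sketch of such a wrapper, assuming the first available platform and its default device; the single-argument interface and bare-bones error handling are illustrative choices, not part of any standard tool (link with -lOpenCL, or -framework OpenCL on macOS):

/* Tiny offline syntax checker: ./clcheck kernel.cl */
#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s kernel.cl\n", argv[0]); return 1; }

    /* Slurp the kernel source into a null-terminated string. */
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *src = malloc(len + 1);
    fread(src, 1, len, f);
    src[len] = '\0';
    fclose(f);

    /* First platform / default device is enough for a syntax check. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, (const char **)&src,
                                                NULL, &err);

    /* The interesting part: compile and dump the build log on failure. */
    err = clBuildProgram(prog, 1, &device, "", NULL, NULL);
    if (err != CL_SUCCESS) {
        size_t log_size;
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG,
                              0, NULL, &log_size);
        char *log = malloc(log_size);
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG,
                              log_size, log, NULL);
        fprintf(stderr, "%s\n", log);
        return 1;
    }
    puts("build OK");
    return 0;
}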
I want to know: if a shell script is created and tested on Ubuntu, will it always, without fail, also run on RHEL, provided the correct shell is invoked to run the script?
Ways in which the execution may differ when used on different distributions and different kernels:
Differences in the version and configuration of the Linux kernel - this may affect presence and format of the contents of files such as those in /proc and /sys, or the presence of particular device drivers.
Differences in the version of the shell used - /bin/sh may be Bash on one system and Dash on another, or Bash 3.x on one system and Bash 4.x on the other.
Differences in the installed programs your script invokes (and, if you got your package dependencies wrong, whether those programs are even present - what's "essential" on one distribution may be "optional" on another).
In short, different distributions have the same issues as different versions of one distribution, but more so.
It depends on which shell/interpreter it was written for, and on the version of that shell. For example, a bash script written for bash 4.4 may not work with bash 2.0, and so on. It is not really tied to the distribution or kernel version you use, but to the shell you use.
So, without details, it's not possible to assert whether a script that works on Ubuntu will work on RHEL. If you use the same shell and the same version on both machines, then yes, it's going to work as expected (barring some very odd cases).
How can my MPI program detect whether it was launched as a standalone application or via mpirun?
Considering the answer and comments by semiuseless and Hristo Iliev, there is no general and portable way to do this. As a workaround, you can check for environment variables that are set by mpirun. See e.g.:
http://www.open-mpi.org/faq/?category=running#mpi-environmental-variables
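As an illustration, such a check might look like the sketch below. OMPI_COMM_WORLD_SIZE is one of the variables documented in the Open MPI FAQ above, but the exact set of variables varies between versions, so verify the name against your installation:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Open MPI's mpirun exports OMPI_* variables into each process's
     * environment; a directly launched (singleton) process won't have them. */
    if (getenv("OMPI_COMM_WORLD_SIZE") != NULL)
        puts("launched via mpirun");
    else
        puts("launched as a standalone application");

    MPI_Finalize();
    return 0;
}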
There is no MPI-standard way to tell the difference between an MPI application that was launched directly and one that was launched as a single rank with mpirun. See "Singleton MPI_Init" for more on this kind of MPI job.
The environment-variable check in Douglas's answer is a reasonable hack... but it is not portable to any other MPI implementation.
I have C source code with MPI calls.
Can I get a sequential program from this source by linking with some MPI stub library? Where can I get such a library?
Most correctly written MPI programs should not depend on the number of processes they use to get a correct answer - e.g., if you run them on one process (mpirun -np 1 ./a.out) they should still work. So you shouldn't need a stub library - just use MPI. (If for some reason you just don't want extraneous libraries kicking around, it's certainly possible to write stubs and link against them - I did this back in the day when setting up MPI on my laptop was a huge PITA; you could use something like the sketch after the next paragraph as a starting point and add any functionality you need. But these days, fiddling with a stub library is probably more work than just using an existing MPI implementation.)
If your MPI program does not currently work correctly on one processor, a stub library probably won't help; you'll need to find the special cases that it's not handling and fix them.
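For completeness, here is a minimal sketch of what such a stub header could look like for single-process execution; the typedefs are deliberately crude and only a handful of calls are covered, so treat it purely as a starting point:

/* mpi_stub.h - hypothetical single-process MPI stubs (sketch only).
 * Include this instead of mpi.h and compile without any MPI library. */
#ifndef MPI_STUB_H
#define MPI_STUB_H

typedef int MPI_Comm;
#define MPI_COMM_WORLD 0
#define MPI_SUCCESS    0

static int MPI_Init(int *argc, char ***argv)       { (void)argc; (void)argv; return MPI_SUCCESS; }
static int MPI_Finalize(void)                      { return MPI_SUCCESS; }
static int MPI_Comm_size(MPI_Comm comm, int *size) { (void)comm; *size = 1; return MPI_SUCCESS; }
static int MPI_Comm_rank(MPI_Comm comm, int *rank) { (void)comm; *rank = 0; return MPI_SUCCESS; }

/* Point-to-point calls have no meaningful single-process semantics;
 * real stub libraries (e.g. PETSc's mpiuni) abort or copy buffers here. */

#endif /* MPI_STUB_H */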
I don't think this is possible in general. Unlike OpenMP, programs using MPI don't necessarily run, or produce the same result, when you simply take away the MPI part.
PETSc contains a stub MPI library that works for one-process (i.e. serial) execution:
http://www.mcs.anl.gov/petsc/petsc-3.2/include/mpiuni/mpi.h
I want to use #include statements in my OpenCL kernels, but it appears that Apple's OpenCL compiler caches kernels; so if you change the contents of an included file, but not the file doing the including, the program will not change between runs.
I've coded up an example which illustrates this:
http://github.com/enjalot/adventures_in_opencl/tree/master/experiments/inc/
If you compile and run it, it should work fine. Then, if you comment out the struct definition in inc.cl (or change anything in lvl2.cl), it will still run just fine.
Using the NVIDIA compiler on Ubuntu you get the expected behavior.
So is there someway to force clBuildProgram to recompile the kernel?
I got an answer from the perfoptimization-dev@apple.com mailing list:
sudo killall cvmsServ
It doesn't seem very graceful, but oh well.
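Another workaround, sketched below, is to resolve the #include yourself at build time by concatenating the included file's text into the source you hand to clCreateProgramWithSource; since the full source text then changes whenever inc.cl changes, the cache should treat it as a new program. The read_file() helper and the file names are illustrative, and this relies on an assumption about how the cache keys programs rather than on documented behavior:

#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

/* Hypothetical helper: read a whole file into a malloc'd string. */
static char *read_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *buf = malloc(n + 1);
    fread(buf, 1, n, f);
    buf[n] = '\0';
    fclose(f);
    return buf;
}

cl_program build_with_inline_include(cl_context ctx)
{
    /* inc.cl and lvl2.cl are the example files from the question;
     * lvl2.cl is assumed to have its #include line removed. */
    char *inc  = read_file("inc.cl");
    char *body = read_file("lvl2.cl");
    const char *parts[2] = { inc, body };

    /* Multiple source strings are concatenated by the OpenCL runtime. */
    cl_int err;
    cl_program prog = clCreateProgramWithSource(ctx, 2, parts, NULL, &err);
    free(inc);
    free(body);
    return prog;   /* caller then calls clBuildProgram() as usual */
}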