I want to run an mpi program on multiple hosts on two sites (Rennes and Nancy in this example). I would like to provide one set of arguments to hosts on site Rennes and another set to hosts on site Nancy. I am trying to do this with the following command:
mpirun -configfile mpi_cfg.txt
where mpi_cfg.txt contains:
-machinefile conf/rennes/workernodes.txt parallel_wan_test conf/rennes/running.cfg
-machinefile conf/nancy/workernodes.txt parallel_wan_test conf/nancy/running.cfg
The problem is that it launches the program correctly for the line corresponding to Rennes, but for the Nancy line, instead of launching on the Nancy hosts, it launches on the Rennes hosts with the arguments intended for Nancy.
Could somebody please point out the right way to do this?
Thanks in advance
If you really want to do this with just this one file, I think you're stuck. MPI is going to read the first line and then try it, regardless of what the second line says.
You can still automate with something like this:
1) have two files, mpi_nancy_cfg.txt and mpi_rennes_cfg.txt
2) then, in a bash shell:
mpirun -configfile mpi_$(hostname -s)_cfg.txt
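For example, assuming hostname -s returns rennes or nancy on the respective frontends (that naming is an assumption), each file would simply contain the corresponding line from the question:
mpi_rennes_cfg.txt:
-machinefile conf/rennes/workernodes.txt parallel_wan_test conf/rennes/running.cfg
mpi_nancy_cfg.txt:
-machinefile conf/nancy/workernodes.txt parallel_wan_test conf/nancy/running.cfg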
I'm trying to get nginx to serve more than one connection at a time, with a FastCGI backend.
This stackoverflow answer might contain the answer, but neglects to say where that option could be configured. All the options I see are in config files. Where would I put command line options like "-c 2"? It's not nginx -c, that's config. I don't see anyplace that looks like it would take command line options.
Ok, it looks like I don't need the above linked answer. The setting is
FCGI_CHILDREN
And the reason I had a bit of trouble finding it is that this setting is not in nginx's config, it's in fcgiwrap's config. That is (on my machine) in /etc/init.d/fcgiwrap. Change FCGI_CHILDREN to something larger than 1.
FCGI_CHILDREN="5"
Just changing that to something greater than one allowed me to run more than one request at a time.
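For example, something along these lines applies the change (a sketch assuming a Debian-style init script; the path and service name are taken from above but may differ on your system):
sudo sed -i 's/^FCGI_CHILDREN=.*/FCGI_CHILDREN="5"/' /etc/init.d/fcgiwrap
sudo service fcgiwrap restart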
The linked answer mentions
if you are using spawn-fcgi to launch fcgiwrap then you need to use -f "/usr/bin/fcgiwrap -c 5"
but I did not have to do that.
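For reference, if you do go the spawn-fcgi route, that option becomes part of the spawn-fcgi invocation, roughly like this hypothetical line (the socket path and user are assumptions):
spawn-fcgi -s /var/run/fcgiwrap.socket -u www-data -f "/usr/bin/fcgiwrap -c 5"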
I have a ROCKS cluster with 1 frontend and 2 nodes (compute-0-1, compute-0-4). I can run my code on the frontend only, but when I try to run it on the nodes in various ways:
(output from the frontend console)
it always returns:
mpirun was unable to launch the specified application as it could not find an executable
machine_file is located in the default path (I also tried putting it in my project's path) and contains:
compute-0-1
compute-0-4
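For context, the kind of invocation I am attempting looks roughly like this (the program name is a placeholder):
mpirun -np 2 -machinefile machine_file ./my_program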
What am I doing wrong?
I'm trying to use a Windows computer to SSH into a Mac server, run a program, and transfer the output data back to my Windows. I've been able to successfully do this manually using Putty.
Now, I'm attempting to automate the process using Plink. I've added Plink to my Windows Path, so if I open cmd and type in a command, I can successfully log in and pass commands to the server:
However, I'd like to automate this using R, to streamline the data analysis process. Based on some searching, the internet seems to think that the shell command is best suited to this task. Unfortunately, it doesn't seem to find Plink, though passing commands through shell to the terminal is working:
If I try the same thing but manually set the path to Plink using shell, no output is returned and the commands do not seem to run (e.g. TESTFOLDER is not created):
Does anyone have any ideas for why Plink is unavailable when I try to call it from R? Alternately, if there are other ideas for how this could be accomplished in R, that would also be appreciated.
Thanks in advance,
-sam
I came here looking for an answer to this question, so I only have so much to offer, but I think I managed to get PLINK's initial steps to work in R using the shell function...
This is what worked for me:
NOT in R:
Install PLINK and add its location to your PATH.
Download the example files from PLINK's tutorial (http://pngu.mgh.harvard.edu/~purcell/plink/tutorial.shtml) and put them in a folder whose path contains NO spaces (unless you know something I don't, in which case, space it up).
Then, in R:
## Set your working directory as the path to the PLINK program files: ##
setwd("C:/Program Files/plink-1.07-dos")
## Use shell to check that you are now in the right directory: ##
shell("cd")
## At this point, the command "plink" should at least be recognized
# (though you may get a different error)
shell("plink")
## Open the PLINK example files ##
# FYI mine are in "C:/PLINK/", so replace that accordingly...
shell("plink --file C:\\PLINK\\hapmap1")
## Make a binary PED file ##
# (provide the full path, not just the file name)
shell("plink --file C:\\PLINK\\hapmap1 --make-bed --out C:\\PLINK\\hapmap1")
... and so on.
That's all I've done so far. But with any luck, mirroring the structure and general format of those lines of code should allow you to do what you like with PLINK from within R.
Hope that helps!
PS. The PLINK output should just print in your R console when you run the lines above.
All the best,
- CC.
Just saw Caitlin's response and it reminded me I hadn't ever updated with my solution. My approach was kind of a workaround, rather than solving my specific problem, but it may be useful to others.
After adding Plink to my PATH, I created a batch script in Windows that contained all my Plink commands, then called the batch script from R using the shell command:
So, in R:
shell('BatchScript.bat')
The batch script contained all my commands that I wanted to use in Plink:
:: transfer file to phosphorus
pscp C:\Users\Sam\...\file zipper@144.**.**.208:/home/zipper/
:: open connection to Dolphin using plink
plink -ssh zipper@144.**.**.208 Batch_Script_With_Remote_Machine_Commands.bat
:: transfer output back to local machine
pscp zipper@144.**.**.208:/home/zipper/output/ C:\Users\Sam\..\output\
Hope that helps someone!
I'm trying to understand how Open MPI's mpirun handles a script file associated with an external program, in this case an R process (doMPI/Rmpi).
I can't imagine that I have to copy my script to each host before running something like:
mpirun --prefix /home/randy/openmpi -H clust1,clust2 -n 32 R --slave -f file.R
But apparently it doesn't work until I copy the script 'file.R' to the cluster nodes and then run mpirun. And when I do that, the results are written on the cluster, whereas I expected them to be returned to the working directory on localhost.
Is there another way to send an R job from localhost to multiple hosts, including the script to be evaluated?
Thanks!
I don't think it's surprising that mpirun doesn't know details of how scripts are specified to commands such as "R", but the Open MPI version of mpirun does include the --preload-files option to help in such situations:
--preload-files <files>
Preload the comma separated list of files to the current working
directory of the remote machines where processes will be
launched prior to starting those processes.
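On the command line that would look something like the following (untested; it just adds the option to the command from the question):
mpirun --prefix /home/randy/openmpi --preload-files file.R -H clust1,clust2 -n 32 R --slave -f file.R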
Unfortunately, I couldn't get it to work, which may be because I misunderstood something, but I suspect it isn't well tested: very few people use that option, since it is quite painful to do parallel computing without a distributed file system in the first place.
If --preload-files doesn't work for you either, I suggest that you write a little script that calls scp repeatedly to copy the script to the cluster nodes. There are some utilities that do that, but none seem to be very common or popular, which I again think is because most people prefer to use a distributed file system. Another option is to set up an sshfs file system.
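A minimal sketch of such a copy script, assuming password-less SSH to the hosts from the question and that the remote working directory is the home directory:
for host in clust1 clust2; do
  scp file.R "${host}:"   # copy the script into the remote home directory
done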
I'm trying to run a command-line MPI program from within Xcode, which requires that I run my program like so:
/usr/bin/mpiexec -np 4 {my binary}
I'm trying to edit the scheme, using the Xcode 4 docs and xcodebuild -showBuildSettings from the command line as my guide to locate the proper variables. I have a scheme that runs mpiexec and passes the following arguments:
-np 4 $CONFIGURATION_BUILD_DIR/$TARGET_NAME
Which, if I go by their values in xcodebuild, should give me this:
/Users/<excluding full path>/MyProject/build/Debug/MyTarget
However, when running inside Xcode, I get this:
build/Debug/MyTarget
which is not enough to allow mpiexec to locate my binary. Prefixing it like so:
-np 2 ${PROJECT_DIR}/${TARGET_BUILD_DIR}/${TARGET_NAME}
results in the full path only up to the first space in the path name and then nothing more, which tells me there may be some issue with space escaping.
What is the proper way to identify the absolute path of my built executable using Xcode schemes and arguments?
It works fine if I do this:
-np 2 "${PROJECT_DIR}/${TARGET_BUILD_DIR}/${TARGET_NAME}"
Perhaps it just needed the double quotes to handle the space in the path?
I'm leaving this unaccepted for a while in case someone posts a proper answer; this feels like a hack to me.