How "make" command locates makefile - unix

I am trying to understand how the "make" command works (I have just started using it). I have a ".sh" file containing a script that executes the "make" command, as shown below:
source /somepath/environment-setup-cortexa9hf-vfp-neon-poky-linux-gnueabi
make arch=arm toolchainPrefix=arm-poky-linux-gnueabi- xeno=off mode=Debug all
The directory where the script file is located contains a file named "makefile", but nothing in the script above refers to this "makefile". After executing the script file, everything in "makefile" gets executed automatically. Can someone explain, in a few words, how a command like "make xyz all" works?
Thanks

As is often the case with UNIX systems, the command works largely by convention. make (the GNU version of make, at least) searches the working directory for files called GNUmakefile, makefile, and Makefile, in that order, or you can use the -f (or --file) option to point it at a specific file.
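For example, you could name the file explicitly yourself; with GNU make the following is equivalent to the call in the script above, since make would have picked up "makefile" on its own anyway:

# same effect as the original call, but naming the makefile explicitly with -f
make -f makefile arch=arm toolchainPrefix=arm-poky-linux-gnueabi- xeno=off mode=Debug all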

Related

How to "source" (not run line by line) an R script from the Windows command line

I am creating an R script, myscript.R, which manipulates an Excel file by means of the XLConnect package.
The point is that it refers to several external files: the Excel file itself and another R file (a library of functions), so I need to set the working directory to the location of the script (so that relative paths to the external files work properly).
I am using the following in my script
new_wd <- dirname(sys.frame(1)$ofile)
setwd(new_wd)
When I source the script from RStudio, it gets the job done. The problem is that the script is to be used by non-programmers, non-RStudio users, so I created a .bat file (which I want to turn into an .exe one):
"C:\Program Files\R\R-4.0.3\bin\Rscript.exe" "C:\my\path\to\myscript.R"
It executes the script line by line but sys.frame(1) only works when sourcing.
How could I solve it?
Thanks
I have found a solution and it works properly.
From the CMD command line or from a .bat file, one can add the -e argument to the command, so that the R code to run is passed directly:
absolute\path\to\Rscript.exe -e "source('relative/path/to/myscript.R')"
(Use forward slashes in the path given to source(), so that R does not treat the backslashes as escape characters.)
It worked for me.
Besides, as Compo commented, I think there's no need for a .exe file, since a .bat does the job.
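For instance, the .bat file could then consist of a single line combining the paths from the question with the -e form (paths are the same placeholders as above):

"C:\Program Files\R\R-4.0.3\bin\Rscript.exe" -e "source('C:/my/path/to/myscript.R')"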

How to save output when running job on cluster using SLURM

I want to run an R script using SLURM. I have created the R script, "test.R" as shown:
print("Running the test script")
write.csv(head(mtcars), "mtcars_data_test.csv")
I created a bash script, "submit.sh", to run this R script:
#!/bin/bash
#sbatch --job-name=test.job
#sbatch --output=.out/abc.out
Rscript /home/abc/job_sub_test/test.R
And I submitted the job on the cluster
sbatch submit.sh
I am not sure where my output is saved. I looked in the home directory, but there is no output file there.
Edit
I also set the working directory in test.R, but nothing changed.
setwd("/home/abc")
print("Running the test script")
write.csv(head(mtcars), "mtcars_data_test.csv")
When I run the script without SLURM (Rscript test.R), it works fine and saves the output according to the set path.
Slurm will set the job working directory to the directory which was the working directory when the sbatch command was issued.
Assuming the /home directory is mounted on all compute nodes, you can change the working directory explicitly with cd in the submission script, or with setwd() in the R script. But that should not be necessary.
Three possibilities:
either the job did not start at all because of a configuration or hardware issue; you can find that out with the sacct command, looking at the State column (see the example invocation after this list); or
the file was indeed created, but on the compute node, on a filesystem that is not shared; in that case the best option is to SSH to the compute node (which you can find out with sacct) and look for the file there; or
the script crashed and the file was not created at all; in that case you should look into the output file of the job (.out/abc.out). Beware that the .out directory must exist before the job starts, and that, as it starts with a ., it is hidden and will show up in ls only with the -a option.
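As an illustration of the sacct check mentioned in the first possibility (the job ID 12345 is just a placeholder):

# show the job's state and the node it ran on
sacct -j 12345 --format=JobID,JobName,State,NodeList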
The --output argument to sbatch is relative to the folder you submitted the job from. setwd inside the R script wouldn't affect it, because Slurm has already parsed that argument and started piping output to the file by the time the R script is running.
First, if you want the output to go to /home/abc/.out/, make sure you're in your home directory when you submit the script, or specify the full path in the --output argument.
Second, the .out folder has to exist; I tested this and Slurm does not create it if it doesn't.
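Putting that advice together, a submission script along these lines should leave both the CSV and the log where you expect them (a sketch only; adjust the paths to your setup, and note the upper-case #SBATCH prefix that sbatch expects for its directives):

#!/bin/bash
#SBATCH --job-name=test.job
#SBATCH --output=/home/abc/.out/abc.out

# make the job's working directory explicit; the .out directory must already exist
cd /home/abc
Rscript /home/abc/job_sub_test/test.R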

Equivalent of $MODULE_DIR$ in unix terminals

I am trying to run a program on a Debian dedicated server, using the following command:
java -cp bin:lib/* rs.Server false 43594
However, it gives me a file-not-found error (even though the files are present). I fixed this error in IntelliJ by picking the $MODULE_DIR$ option. Is there an equivalent to this in unix terminals?
The problem looks to be that the directory you are in when you run the command is wrong. You either need to cd to the directory containing the bin and lib directories, or specify the full path to those directories on the command line.
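For example, assuming the project lives in /path/to/project (a placeholder path):

# either run the command from the project directory...
cd /path/to/project && java -cp "bin:lib/*" rs.Server false 43594

# ...or spell out absolute paths on the classpath
java -cp "/path/to/project/bin:/path/to/project/lib/*" rs.Server false 43594

The quotes keep the shell from expanding the wildcard, so that Java itself expands lib/* to the jars in that directory.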

How do I create a directory with a file in it, in one step?

In the terminal, is there a way to create a directory with a file in it in one step?
Currently I do this in 2 steps:
1. mkdir foo
2. touch foo/bar.txt
Apparently, touch foo/bar.txt on its own doesn't work (it fails if the directory foo doesn't exist yet).
With only standard unix tools, the most direct way to create a directory and a file in this directory is
mkdir foo && touch foo/bar.txt
Unix is built around the philosophy of simple, single-purpose tools, with the shell as the glue to combine them. So to create a directory and a file, you instruct the shell to run the directory-creation utility and then the file-creation utility.
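If you do this often, you could wrap the two steps in a small shell function (a sketch; the name mkfile is made up here, not a standard tool):

# create the parent directory if needed, then the file itself
mkfile() {
    mkdir -p -- "$(dirname -- "$1")" && touch -- "$1"
}

mkfile foo/bar.txt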
I won't swear that there isn't some bizarre way of using a standard tool that lets you do it with a single command. (In fact, there is: unpack an archive — except that you'll need to provide that archive as a file, with predefined owner, date and other metadata, or else use another command to build an archive.) But whatever it is would be convoluted.

Qt qmake - how to stop it adding rules to delete the target

I am trying to add a unit test to a group of other tests. All the tests are in their own subdirectories, each with its own .pro file and the .cpp file which contains the tests themselves. Running qmake in one of the subdirectories creates a Makefile, and then running make runs the compiler to build the TARGET. The tests are actually run by the 'check' target, i.e. with 'make check'.
The test I'm trying to add is different, but it is trying to pretend to behave the same way.
It is different because it is a perl script and so doesn't need to be compiled. It does, however, need to be run - so 'make check' needs to work.
I had a .pro file working for the most part - 'qmake', 'make', 'make check', and 'make clean' would work, but 'make distclean' removed my script (since qmake assumes the target can be regenerated by compiling something).
So, the question is, how do I stop it from removing my script?
Perhaps there's some other approach I should be taking. I had tried the 'subdirs' TEMPLATE, but that does more than just remove the line in Makefile that deletes the TARGET.
Ideas?
Using Ubuntu Linux with Qt 4.6.0.
I would look into the custom target capabilities for your script. Maybe something like this:
check.commands = <scriptname>
check.depends = <any dependencies>
QMAKE_EXTRA_TARGETS += check
Doing things this way will run the check command when the dependencies change, but as long as you don't specify check.target, it shouldn't remove anything. (If your script does produce output, then perhaps that should be in check.target.) Also, since it is specified as an "extra" target, qmake shouldn't create commands to delete your script in a distclean.
This is assuming that your script is in its own subdirectory (which you state), and is the only "check" command that needs run in that subdirectory (kind of implied by the question, but not directly stated).
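For instance, if the script were called run_tests.pl (a made-up name) and lived next to the .pro file, the relevant lines might read:

# run the perl script for 'make check'; re-run it when the script changes
check.commands = ./run_tests.pl
check.depends = run_tests.pl
QMAKE_EXTRA_TARGETS += check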
