I have raw microarray expression data files with the extension .CEL, but for a few files the extension is somehow " .CEL" (i.e. name, space, dot, CEL). I have written a simple shell script that renames the files correctly in an Ubuntu terminal, but I have no idea how to use it from within R. I even tried executing the shell script with the system() command, but it did not work for me.
The shell script I have written is as follows:
# strip all whitespace from each matching file name
for file in *.CEL; do
  mv "$file" "${file//[[:space:]]}"
done
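One likely culprit: system() runs its command through /bin/sh, which on Ubuntu does not support the ${file//...} bash substitution, so passing the loop inline will fail; invoking the script file through bash instead (e.g. system("bash rename.sh"), where rename.sh is whatever you named the script) avoids that. Alternatively, the rename can be done directly in R, without the shell (a sketch):
# Sketch: rename "name .CEL" files from within R, no shell needed
cel_files <- list.files(pattern = "[[:space:]]+\\.CEL$")     # names with whitespace before .CEL
file.rename(cel_files, gsub("[[:space:]]+", "", cel_files))  # strip the whitespace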
I am creating an R script, myscript.R, which manipulates an Excel file by means of the XLConnect package.
The point is that it refers to several external files: the Excel file itself and another R file (a library of functions), so I need to set the working directory to the location of the script (so that relative paths to the external files work properly).
I am using the following in my script
new_wd <- dirname(sys.frame(1)$ofile)
setwd(new_wd)
When I source the script from RStudio it gets the job done. The problem is that the script is to be used by non-programmers, non-RStudio users, so I created a .bat file (which I want to turn into an .exe one):
"C:\Program Files\R\R-4.0.3\bin\Rscript.exe" "C:\my\path\to\myscript.R"
It executes the script line by line, but sys.frame(1) only works when the script is sourced.
How could I solve it?
Thanks
I have found a solution and it works properly.
From the CMD command line, or from a .bat file, you can pass the -e argument to Rscript.exe so that you can run R code directly:
"absolute\path\to\Rscript.exe" -e "source('relative/path/to/myscript.R')"
(Note the forward slashes inside the R string: backslashes there would be treated by R as escape characters.)
It worked for me.
Besides, as Compo commented, I think there's no need for a .exe file, since a .bat does the job.
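For what it's worth, another way to make the script find its own location (a sketch, not from the original answer): when a script is run with Rscript, its path is passed on the command line as a --file= argument, which commandArgs() can read:
# Sketch: derive the script's directory under Rscript,
# where sys.frame(1)$ofile is not available
args <- commandArgs(trailingOnly = FALSE)
file_arg <- grep("^--file=", args, value = TRUE)
if (length(file_arg) > 0) setwd(dirname(sub("^--file=", "", file_arg[1])))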
I want to run an R script using SLURM. I have created the R script, test.R, as shown below:
print("Running the test script")
write.csv(head(mtcars), "mtcars_data_test.csv")
I created a bash script, submit.sh, to run this R script:
#!/bin/bash
#SBATCH --job-name=test.job
#SBATCH --output=.out/abc.out
Rscript /home/abc/job_sub_test/test.R
And I submitted the job on the cluster
sbatch submit.sh
I am not sure where my output is saved. I looked in the home directory, but there is no output file.
Edit
I also set the working directory in test.R, but nothing changed:
setwd("/home/abc")
print("Running the test script")
write.csv(head(mtcars), "mtcars_data_test.csv")
When I run the script without Slurm (Rscript test.R), it works fine and saves the output to the path I set.
Slurm sets the job's working directory to the directory that was current when the sbatch command was issued.
Assuming the /home directory is mounted on all compute nodes, you can explicitly change the working directory with cd in the submission script, or with setwd() in the R script. But that should not be necessary.
Three possibilities:
either the job did not start at all because of a configuration or hardware issue, which you can find out with the sacct command, looking at the State column;
or the file was indeed created, but on the compute node, on a filesystem that is not shared; in that case the best option is to SSH to the compute node (which you can find with sacct) and look for the file there;
or the script crashed and the file was not created at all; in that case look into the output file of the job (.out/abc.out). Beware that the .out directory must exist before the job starts, and that, as its name starts with a dot, it will be hidden, revealed by ls only with the -a argument.
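For the first check, something like this from R, or the equivalent sacct call directly in a shell (the job id 12345 is a placeholder):
# query Slurm's accounting database for the job's state (replace 12345 with your job id)
system("sacct -j 12345 --format=JobID,JobName,State,ExitCode")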
The --output argument to sbatch is relative to the folder you submitted the job from. setwd() inside the R script would not affect it, because Slurm has already parsed that argument and started piping output to the file by the time the R script runs.
First, if you want the output to go to /home/abc/.out/, make sure you are in your home directory when you submit the script, or give the full path in the --output argument.
Second, the .out folder has to exist; I tested this and Slurm does not create it if it doesn't.
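A simple way to make the data file's location independent of where the job happens to run (a sketch; the directory is just an example) is to write to an absolute path in test.R:
# write to an explicit absolute path so the result does not depend
# on the job's working directory (adjust the directory to your setup)
out_dir <- "/home/abc/job_sub_test"
write.csv(head(mtcars), file.path(out_dir, "mtcars_data_test.csv"))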
The jar file I am using outputs a bunch of text files, which I would like to use for text mining. Rather than executing the jar separately and then feeding its output to my R script, I want the R script to launch the jar as well. Is there a way to do this?
Figured out how to do this. I used the system() command like so:
system("java -jar tika-app-1.14.jar -t -i test.pdf")
I am trying to understand how the make command works (I have just started using it). I have an .sh file containing a script that runs make, as shown below:
source /somepath/environment-setup-cortexa9hf-vfp-neon-poky-linux-gnueabi
make arch=arm toolchainPrefix=arm-poky-linux-gnueabi- xeno=off mode=Debug all
The directory where the script file is located contains a file named "makefile", but the script above does not mention this "makefile" anywhere. Yet after executing the script, everything in "makefile" runs automatically. Can someone explain in a few words how the "make xyz all" command works?
Thanks
As is often the case with UNIX tools, make works to some degree by convention: make (the GNU version at least) searches the working directory for files called GNUmakefile, makefile, and Makefile, in that order, or you can point it at a specific file with the -f (or --file) option. That is why your makefile is picked up without being named in the script. In your command, the name=value arguments (arch=arm, toolchainPrefix=arm-poky-linux-gnueabi-, xeno=off, mode=Debug) override variables defined in the makefile, and the remaining argument, all, is the target to build.
I am using R on a remote Linux server. I want to create an R script file, and to open and edit existing scripts. However, I can only use the R command line. I have no idea how to create an R script file as I would in the RGui, or how to open such files for editing.
There are several non-graphical editors you can use from the command line, such as nano or vim.
To create an executable R script, simply add #!/usr/bin/env Rscript at the top of your .R file and make the file executable.
Also see Run R script from command line
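A minimal executable script might look like this (hello.R is a hypothetical name; make it executable with chmod +x hello.R and run it as ./hello.R):
#!/usr/bin/env Rscript
# hello.R - a minimal executable R script
args <- commandArgs(trailingOnly = TRUE)
cat("Hello,", if (length(args) > 0) args[1] else "world", "\n")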