send output to file from within shell script - unix

I'm creating a script for users to run. I need to redirect the output to a file I'm creating from inside the script (hostname-date).
I have all the pieces except for how to copy the output of the script from inside the same script. All the examples I can find call the script and > it into the log, but this isn't an option.
-Alex

Add the following at the top of your script:
exec &> output.txt
This makes both stdout and stderr of every subsequent command in the script go into the file output.txt.

exec in bash lets you permanently redirect a file descriptor (say, stdout) to a file for the remainder of the script.
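A minimal sketch of that approach, building the hostname-date file name the question asks for (the exact date format is an assumption):

```shell
#!/bin/bash
# Build the log file name from the host name and the current date
# (the exact naming scheme is assumed from the question).
logfile="$(hostname)-$(date +%Y-%m-%d).log"

# From here on, stdout and stderr of every command go to the log file.
exec > "$logfile" 2>&1

echo "this line ends up in the log file"
```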

Another option: a shell that calls a shell.
Have the first script build the file name (hostname-date)
and call the second script, redirecting its output to that file.
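A sketch of that wrapper idea (inner.sh is a placeholder name; here it is generated on the fly so the example is self-contained):

```shell
#!/bin/bash
# Create a throwaway inner script to stand in for the real one.
cat > inner.sh <<'EOF'
#!/bin/bash
echo "work happens here"
EOF

# Build the hostname-date file name, then run the inner script
# with both stdout and stderr redirected into it.
logfile="$(hostname)-$(date +%Y-%m-%d)"
bash inner.sh > "$logfile" 2>&1
```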

Related

How to save output when running job on cluster using SLURM

I want to run an R script using SLURM. I have created the R script, "test.R" as shown:
print("Running the test script")
write.csv(head(mtcars), "mtcars_data_test.csv")
I created a bash script to run this R script "submit.sh"
#!/bin/bash
#sbatch --job-name=test.job
#sbatch --output=.out/abc.out
Rscript /home/abc/job_sub_test/test.R
And I submitted the job on the cluster
sbatch submit.sh
I am not sure where my output is saved. I looked in the home directory but no output file.
Edit
I also set my working directory in test.R, but nothing changed:
setwd("/home/abc")
print("Running the test script")
write.csv(head(mtcars), "mtcars_data_test.csv")
When I run the script without Slurm (Rscript test.R), it works fine and saves the output according to the set path.
Slurm will set the job working directory to the directory which was the working directory when the sbatch command was issued.
Assuming the /home directory is mounted on all compute nodes, you can explicitly change the working directory with cd in the submission script, or setwd() in the R script. But that should not be necessary.
Three possibilities:
- the job did not start at all because of a configuration or hardware issue; you can find that out with the sacct command, looking at the State column;
- the file was indeed created, but on the compute node, on a filesystem that is not shared; in that case the best option is to SSH to the compute node (which you can find with sacct) and look for the file there; or
- the script crashed and the file was not created at all; in that case look into the output file of the job (.out/abc.out). Beware that the .out directory must exist before the job starts, and that, as its name starts with a ., it is hidden and revealed in ls only with the -a argument.
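Checking the job state might look like this (12345 is a placeholder job id; the fields listed are standard sacct format fields):

```shell
# State shows whether the job ran at all; ExitCode whether the script failed.
sacct -j 12345 --format=JobID,JobName,State,ExitCode
```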
The --output argument to sbatch is relative to the folder you submitted the job from. setwd inside the R script wouldn't affect it, because Slurm has already parsed that argument and started piping output to the file by the time the R script is running.
First, if you want the output to go to /home/abc/.out/ make sure you're in your homedir when you submit the script, or specify the full path to the --output argument.
Second, the .out folder has to exist; I tested this and Slurm does not create it if it doesn't.
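One likely culprit in the script shown above: Slurm only honours directives written as uppercase #SBATCH; lowercase #sbatch lines are treated as ordinary comments and ignored. A corrected submission script might look like:

```shell
#!/bin/bash
#SBATCH --job-name=test.job
#SBATCH --output=/home/abc/.out/abc.out

Rscript /home/abc/job_sub_test/test.R
```

With a full path in --output, the log location no longer depends on the directory the job was submitted from (the .out directory still has to exist beforehand).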

Short cut for invoking shell script in unix

I have the two files below to start and stop my Spring Boot application. Is it possible to install this as a service in Unix, so that I can just type start app or stop app to start or stop my application from any location?
startApplication.sh
stopApplication.sh
You can always define an alias in your bash configuration. Open the file (no sudo needed; it is your own file):
vim ~/.bashrc
Go to the end of the file and add this line:
alias start-app='bash /<path-to-script>/startApplication.sh'
Save, exit, and reload it with the source command:
source ~/.bashrc
Now if you type start-app in your terminal, it will execute your script. Create one for stop-app too.
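Put together, the lines added to ~/.bashrc might look like this (the /opt/app path is a placeholder for wherever the scripts actually live):

```shell
# Aliases for starting and stopping the application from anywhere.
alias start-app='bash /opt/app/startApplication.sh'
alias stop-app='bash /opt/app/stopApplication.sh'
```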

Added Alias to .bashrc but no results

I added an alias (alias homedir='cd /export/home/file/myNmae') to .bashrc in my home directory and restarted the session. When I run the alias it says homedir: command not found.
Please advise.
This is because .bashrc is not sourced every time; it is only sourced for interactive non-login shells.
From the bash man page.
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
When a login shell exits, bash reads and executes commands from the files ~/.bash_logout and /etc/bash.bash_logout, if the files exist.
When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc.
I found the solution: I added the alias to the .profile file instead, restarted the session, and it worked.
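The usual way to make aliases from .bashrc available in login shells too is to source .bashrc from ~/.profile (or ~/.bash_profile). A sketch:

```shell
# In ~/.profile: if running bash, pull in ~/.bashrc so aliases
# defined there are also available in login shells.
if [ -n "$BASH_VERSION" ] && [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```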

Is it possible to add custom commands to tmux?

I have some commands in mind that I don't want to create keybinds for and would prefer to use command mode for them. For example, I want something like:
<C-a>:restart-guard
that would run a script to execute some commands in my guard window.
Is this possible?
You can't define user-defined commands directly,
but you can always call a tmux script with so (the shortest alias of source-file) or a program with ru (the shortest alias of run-shell).
For so, you need to give the path to the command, or have the tmux server start in the folder where your custom commands are.
Here is a simple example, you put your restart-guard script in ~/.tmux/commands
you start tmux using a script:
#!/bin/bash
cd ~/.tmux/commands
tmux
then inside tmux, do
<C-a>:so restart-guard
I am currently looking for a way to keep the directory where you started tmux, rather than the ~/.tmux/commands directory, when starting.
That is unfortunately not possible with tmux at the moment.
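As an alternative sketch that avoids depending on the server's working directory, run-shell (ru) takes a shell command, so the script can be given by full path (path assumed from the example above):

```
<C-a>:ru ~/.tmux/commands/restart-guard
```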

Permissions when iterating over files in directory Unix TCSH scripting

I'm writing a script that will print the file names of every file in a subdirectory of my home directory. My code is:
foreach file (`~/.garbage`)
echo "$file"
end
When I try to run my script, I get the following error:
home/.garbage: Permission denied.
I've tried setting permissions to 755 for the .garbage directory and my script, but I can't get over this error. Is there something I'm doing incorrectly? It's a tcsh script.
Why not just use ls ~/.garbage
or if you want each file on a separate line, ls -1 ~/.garbage
Backticks will try to execute whatever is inside them. You are getting this error because you are putting a directory name inside backticks, and a directory cannot be executed.
You can use ls ~/.garbage in backticks as Mark mentioned, or use ~/.garbage/* unquoted and rely on the shell to expand the glob for you. If you want only the file name from a full path, use the basename command or some sed/awk magic.
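A corrected version of the loop from the question, as a sketch (it assumes ~/.garbage exists and is non-empty): the glob does the iteration and basename strips the directory part.

```tcsh
#!/bin/tcsh
# Iterate over the glob instead of trying to execute the directory
# in backticks; basename turns each full path into just the file name.
foreach file (~/.garbage/*)
    echo `basename "$file"`
end
```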
