unix command line execute with . (dot) vs. without - unix

At a unix command line, what's the difference between executing a program by simply typing its name, vs. executing a program by typing a . (dot) followed by the program name? e.g.:
runme
vs.
. runme

. name sources the file called name into the current shell. So if a file contains this
A=hello
then if you source it, you can afterwards refer to a variable called A, which will contain hello. But if you execute the file (given proper execute permissions and a #!/interpreter line), such things won't work, since the variables and other things the script sets only affect the subshell it runs in.
Sourcing a binary file makes no sense: the shell wouldn't know how to interpret the binary content (remember, sourcing inserts the contents of that file into the current shell, much like the good old #include <file> mechanism in C). Example:
head -c 10 /dev/urandom > foo.sh; . foo.sh # don't do this at home!
bash: �ǻD$�/�: file or directory not found
Executing a binary file, on the other hand, makes a lot of sense. So normally you just name the file you want to execute, and in special cases, like the A=hello case above, you source it.
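A minimal sketch of the difference, using /tmp/runme as a hypothetical script name:

```shell
# Create a script that only sets a variable.
printf 'A=hello\n' > /tmp/runme
chmod +x /tmp/runme

# Sourcing runs it in the current shell, so A survives:
. /tmp/runme
echo "$A"            # hello

# Executing it runs in a child process; the variable
# never reaches the parent shell:
unset A
sh /tmp/runme
echo "${A:-<unset>}" # <unset>
```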

Using "source" or "." causes the commands to run in the current process. Running the script as an executable gives it its own process.
This matters most if you are trying to set environment variables in the current shell (which you can't do from a separate process) or want to abort the script without aborting your shell (which you can only do in a separate process).

The first executes the command. The second is shorthand for including a shell script inside another.

This syntax is used to "load" and parse a script. It's most useful when you have a script that has common functionality to a bunch of other scripts, and you can just "dot include" it. See http://tldp.org/LDP/abs/html/internal.html for details (scroll down to the "dot" command).

Running "runme" will create a new process which will go on its merry little way and not affect your shell.
Running ". runme" will allow the script "runme" to change your environment variables, change directories, and do all sorts of other things that you might want it to do for you. It can only do this because it's being interpreted by the shell process that's already running for you. As a consequence, if you're running bash as your login shell, you can only use the "." notation with a bash script, not (for example) a binary or a C shell script.

Related

Setup Unix Environment from Current Directory

I have a SunOS system, which is what keeps this situation from being simple.
I have some scripts that run from different paths (unavoidable) and the system has a path structure that has the "System Environment" in the path, which I can then extract from the path. I have a simple script which is called before or sourced from every other script to get the Environment and set several other common variables. The problem is, now that there are 3 different areas that may be calling this script, it doesn't properly extract the Environment from the path.
Here are simple examples of the 3 paths that might exist:
/dir1/dir2/${ENV}/bin/script1.ksh
/dir1/dir2/${ENV}/services/service_name/script2.ksh
/dir1/dir2/${ENV}/services/service_name/log/script3.ksh
I'd like to have 1 script that would be able to get ${ENV}, no matter which one of the paths was provided, as opposed to my current strategy of 3 separate ones.
Here is how I currently get the first ${ENV}:
#!/bin/ksh
export BASE_DIR=${0%/*/*}
export ENV=${BASE_DIR##*/}
2nd Script:
#!/bin/ksh
export CURR_DIR=$( cd -- "$(dirname -- "$(command -v -- "$0")")" && pwd)
export BASE_DIR=${CURR_DIR%/*/*}
export ENV=${BASE_DIR##*/}
As I stated, this is a SunOS system, so it has an old limited version of KSH. No set -A or substitution.
Any ideas on the best strategy to limit my repetitiveness of scripts?
Thanks.
It looks from your example that your ${ENV} directory is a fixed depth from root, in which case you can easily get the name of the directory by starting from the other end:
export ENV=`pwd | sed -e "s%\(/dir1/dir2/\)\([^/]*\).*%\2%"`
I'm using '%' as the delimiter so I can match '/' without escaping. Without knowing specifics about what version of SunOS/Solaris you're using I can't be certain how compliant your sed is, but Bruce Barnett includes this usage in his tutorials, which are very closely aligned with late SunOS and early Solaris versions.
If your scripts are all called by the same user, then you might want to include the above in that user's .profile, then the ENV variable will be accessible to all scripts owned/executed by that user.
UPDATE: Lee E. McMahon's "SED -- A Non-Interactive Text Editor" - written in 1978 - includes pattern grouping using escaped parentheses, so it should work for you on SunOS. :)
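As a quick check that the sed expression is depth-independent, it can be run against all three example paths from the question (here "prod" stands in for the actual ${ENV} value):

```shell
for p in /dir1/dir2/prod/bin/script1.ksh \
         /dir1/dir2/prod/services/service_name/script2.ksh \
         /dir1/dir2/prod/services/service_name/log/script3.ksh
do
    # The %-delimited s command keeps only the path component
    # that follows /dir1/dir2/, regardless of what comes after it.
    echo "$p" | sed -e "s%\(/dir1/dir2/\)\([^/]*\).*%\2%"
done
# prints "prod" three times
```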

Using execlp after jailing a process

Basically I want to execute a shell command inside a jailed process. When I try the code below (both as a normal user and as root), it produces no output:
if (!(pid = fork())) {
    chroot("./jail_folder");
    chdir("/");
    execl("/bin/ls", "ls", (char *)NULL);
}
I tried the perror() function and it gave me a "No such file or directory" error.
Is it possible to run a shell command in a jailed process? If so, how do we do that?
Yes, it is possible, but you have to make the program accessible inside the jail (typically that means copying the desired program plus all its libraries into the jail; symlinking out of the jail won't work, hard-linking is OK). Otherwise it's no surprise that if you confine a program to a part of the directory tree that has no /bin, it can't access /bin/ls.
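A sketch of populating the jail on a Linux-like system (assumes ldd is available and that ./jail_folder is the jail root from the question):

```shell
JAIL=./jail_folder
mkdir -p "$JAIL/bin"
cp /bin/ls "$JAIL/bin/"

# Copy every shared library ls links against, preserving each
# library's absolute path inside the jail.
ldd /bin/ls | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }' |
while read -r lib; do
    mkdir -p "$JAIL$(dirname "$lib")"
    cp "$lib" "$JAIL$(dirname "$lib")/"
done
```

After this, the chroot'd execl("/bin/ls", ...) has something to execute and libraries to load.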

How can I check syntax for Make but be sure I am not executing?

We work with Make files and want to create a precommit check in HG to check Makefile syntax. Originally, our check was just going to be
make -n FOO.mk
However, we realized that if a Makefile were syntactically correct but required some environment variable to be set, the test could fail.
Any ideas? Our default is to resort to writing our own python scripts to check for a limited subset of common Makefile mistakes.
We are using GNUmake.
$ make --dry-run > /dev/null
$ echo $?
0
The output is of no value to me so I always redirect to /dev/null (often stderr too) and rely on exit code. The man page https://linux.die.net/man/1/make explains:
-n, --just-print, --dry-run, --recon
Print the commands that would be executed, but do not execute them.
A syntax error would result in the sample output:
$ make --dry-run > /dev/null
Makefile:11: *** unterminated variable reference. Stop.
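The same check can be scripted for a pre-commit hook. A minimal sketch, generating a deliberately broken makefile and relying only on make's exit status:

```shell
# A makefile whose recipe contains an unterminated variable reference.
printf 'all:\n\techo ${UNTERMINATED\n' > /tmp/broken.mk

if make -n -f /tmp/broken.mk > /dev/null 2>&1; then
    echo "syntax OK"
else
    echo "syntax error detected" >&2
fi
```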
It is not a good idea to have makefiles depend on environment variables. Precisely because of the issue you mentioned.
Variables from the Environment:
... use of variables from the environment is not recommended. It is not wise for makefiles to depend for their functioning on environment variables set up outside their control, since this would cause different users to get different results from the same makefile. This is against the whole purpose of most makefiles.
References to an environment variable in a recipe need a $$ prefix, so it is not that hard to find the direct references: look for the pattern [$][$][{] or the pattern [$][$][A-Z]. A pretty simple perl filter (or sed script) finds them all.
To find the indirect ones I would try running the recipes with only PATH set, HOME set to /dev/null, and SHELL set to /bin/false. Make's SHELL macro is not taken from the environment's $SHELL, so to get the recipes to actually run you'll have to set SHELL=/bin/sh in the makefile. That should shake out enough failures to help you find the dependencies.
What you do about the results is another issue.
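A sketch of that direct-reference search, using grep in place of a perl/sed filter (demo.mk is a made-up makefile for illustration):

```shell
# A recipe line that leaks an environment variable into the build.
printf 'deploy:\n\tscp build.tar $${DEPLOY_HOST}:/tmp\n' > /tmp/demo.mk

# Flag lines containing $${ or $$ followed by an uppercase letter.
grep -nE '[$][$][{]|[$][$][A-Z]' /tmp/demo.mk
```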

Is there a way to wrap arbitrary commands located under a subdirectory in a shell script

I have a bunch of customizations and would like to run my test program in a pristine environment.
Sure, I could use a tiny shell script to wrap it and pass arguments along, but it would be cool and useful if I could invoke a pre (and possibly post) script only for commands located under certain subdirectories. The shell I'm using is zsh.
I don't know what you include in your “pristine environment”.
If you want to isolate yourself from the whole system, then maybe chroot is what you're after. You can set up a complete new system, with its own /etc, /bin and so on, but sharing the kernel, networking and other non-filesystem stuff with your running system. Root's cooperation is required (the chroot system call is reserved to root).
If you want to isolate yourself from your dot files, run the program with a different value for the HOME environment variable:
HOME=~/test-environment /path/to/test-program
HOME=~/test-environment zsh
If this is specifically about zsh's configuration files, you can set the ZDOTDIR environment variable before starting it to tell zsh to run its own dot files from a directory other than $HOME (or zsh --no-rcs to not load any dot file).
If by pristine environment you mean a fully controlled set of environment variables, then the env program does this.
env -i PATH=$PATH HOME=$HOME program args
will run program args with only the environment variables you specified.
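For instance, a variable exported in the parent is gone under env -i unless it is explicitly passed through:

```shell
export SECRET=visible

# Without env -i the child inherits SECRET; with env -i it does not.
sh -c 'echo "${SECRET:-cleared}"'                     # visible
env -i PATH="$PATH" sh -c 'echo "${SECRET:-cleared}"' # cleared
```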

Set environment variables using SSH

I am trying to execute unix commands over SSH from cygwin. My set of commands would navigate to a certain directory, source a particular file, and then, based on the variables sourced from that file, launch an application. But somehow the variables are not getting sourced, as echo does not return any values. Could someone let me know what I am missing here?
Contents of the environment variables file (myenv) are
export TEST_DATA="DATA1:DATA2"
and I am executing the following command
$ ssh kunal@kspace "ls; cd /disk1/kunal/env; . ./myenv; echo $TEST_DATA; "
Double quotes do not inhibit expansion. Use single quotes instead.
That is, the variable is being expanded on your side, not on the server you're SSHing to.
Because $TEST_DATA is in double quotes, its value is being interpolated by your local shell (the one you called ssh in). If you use single quotes it will pass the literal $TEST_DATA across as part of the ssh command to be executed on the far side.
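The effect is easy to reproduce locally, with sh -c standing in for the remote shell:

```shell
TEST_DATA=local-value

# Double quotes: $TEST_DATA is expanded by YOUR shell before the
# child ever runs, so the child's own assignment is ignored.
sh -c "TEST_DATA=remote-value; echo $TEST_DATA"   # prints local-value

# Single quotes: the literal $TEST_DATA reaches the child, which
# expands it after performing its own assignment.
sh -c 'TEST_DATA=remote-value; echo $TEST_DATA'   # prints remote-value
```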
