How to create options in KSH script - unix

I am creating a KSH interface script that will call other scripts based on the user's input. The other scripts are Encrypt and Decrypt, and each of them receives parameters. I have seen someone invoke a script using "-" plus the first letter of an option name. How do I do this in my script? For example, if my script is called menu and the user typed in: menu -e UserID Filename.txt, the script would run and the encrypt script would be executed along with the associated parameters. So far my script takes the encrypt/decrypt option as a plain word parameter. Here is my script:
#!/bin/ksh
# I want this parameter to become an option
action=$1
if [ "$1" = "" ]
then
    print "Parameters not satisfied"
    exit 1
fi
# check for action commands
if [ "$1" = "encrypt" ]
then
    dest=$2
    fileName=$3
    ./Escript "$dest" "$fileName"
elif [ "$1" = "decrypt" ]
then
    outputF=$2
    encryptedF=$3
    ./Dscript "$outputF" "$encryptedF"
else
    print "Parameters not satisfied. Please enter encrypt or decrypt plus the required arguments"
fi
Thanks for the help!

There isn't any kind of automatic way to turn a parameter into another script to run; what you're doing is pretty much how you would do it. Check the parameter and, based on its contents, run one of the two scripts.
You can structure it somewhat more nicely using case, and you can pass the later parameters directly through to the other script using "$@", with a shift to strip off the first parameter. Something like:
[ $# -ge 1 ] || { echo "Not enough parameters" >&2; exit 1; }
command=$1
shift
case $command in
    -e|--encrypt) ./escript "$@" ;;
    -d|--decrypt) ./dscript "$@" ;;
    *) echo "Unknown option $command" >&2; exit 1 ;;
esac
This also demonstrates how you can implement both short and long options, by providing two different strings to match against in a single case statement (-e and --encrypt), in case that's what you were asking about. You can also use globs, like -e*) to allow any option starting with -e such as -e, -encrypt, -elephant, though this may not be what you're looking for.
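For instance, a glob arm dropped into the same case statement (a sketch reusing the escript call from above):
case $command in
    -e*) ./escript "$@" ;; # matches -e, -encrypt, -elephant, ...
esac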

Related

Recursive function based on 2 conditions

I want to write a recursive function that exits on two conditions.
Let's say I want to make a directory and ask the user for input. They may enter something like this:
Valid: /existing-dir1/existing-DIR2/non-existent-dir1/non-existent-dir2
Invalid: /existing-dir1/existing-FILE1/non-existent-dir1/non-existent-dir2
To loop through the filename, I have the function dirname(), which takes /foo/bar and returns /foo. I also have the function exist() to check whether a filename exists, and isdir() to check whether it is a file or a directory.
Basically, I need to loop recursively from the end of the filename, ignore non-existent nodes, and check whether any node is a file - which is invalid. The recursion ends when one of the two conditions happens, whichever comes first:
A file is found
dirname() returns /
I am not familiar with recursion, and two conditions is a bit too much for me. I am using POSIX script, but code samples in C++ / Java / C# are all good.
Edit: I know I can do mkdir -p and check its status code, but it will create the directory. Nonetheless, I want to do it with recursion for the purpose of learning.
In JS, you might write the recursion like this:
const isValid = (path) =>
    path === '/'
        ? 'valid'
        : exist(path) && !isdir(path)
            ? 'invalid'
            : isValid(dirname(path))
You might be able to skip the exist check depending upon how your isdir function works.
This should show the logic, but I'm not sure how to write this in your POSIX shell environment.
I solved it myself. Written in POSIX script, but it is quite easy to read and port to other languages:
RecursivelyCheckFilename ()
{
    if [ -e "$1" ] # if it exists
    then
        if [ -d "$1" ] # if it is a directory
        then
            if [ "$1" = "/" ]
            then
                return 0
            else
                RecursivelyCheckFilename "$(dirname -- "$1")"
                return $? # return the value returned by the recursive call
            fi
        else
            return 1
        fi
    else
        RecursivelyCheckFilename "$(dirname -- "$1")"
        return $?
    fi
}
There are still a few issues with filenames ending in a trailing slash, which I must find a way to deal with. But the code above is how I want it to work.
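A minimal pre-processing sketch for that (my own assumption, not part of the solution above): strip trailing slashes before the first call, keeping a lone / intact:
path=$1
while [ "$path" != "/" ] && [ "$path" != "${path%/}" ]; do
    path=${path%/} # drop one trailing slash per iteration
done
RecursivelyCheckFilename "$path"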

Zsh returning `<function>:<linenumber> = not found`

I used to have the following tmux shortcut function defined in a separate script and aliased, which worked fine but was messy. I decided to move it to my .zshrc, where it naturally belongs, and encountered a problem I wasn't able to figure out.
function t () {
    re='^[0-9]+$'
    if [ "$1" == "kill" ]
    then
        tmux kill-session -t $2
    elif [[ "$1" =~ "$re" ]]
    then
        tmux attach-session -d -t $1
    fi
}
I source my .zshrc, call the function, and get:
t:1: = not found
I know the function is defined:
╭─bennett#Io [~] using
╰─○ which t
t () {
    re='^[0-9]+$'
    if [ "$1" == "kill" ]
    then
        tmux kill-session -t $2
    elif [[ "$1" =~ "$re" ]]
    then
        tmux attach-session -d -t $1
    fi
}
I'm assuming this is complaining about the first line of the function. I've tried shifting the first line of the function down several lines, which doesn't change anything except which line the error message refers to. Any clue what's going on? I haven't found anything relating to this specific issue on SO.
The command [ (or test) only supports a single = to check for equality of two strings. Using == will result in a "= not found" error message. (See man 1 test)
zsh has the [ builtin mainly for compatibility reasons. It tries to implement POSIX where possible, with all the quirks this may bring (See the Zsh Manual).
Unless you need a script to be POSIX compliant (e.g. for compatibility with other shells), I would strongly suggest using conditional expressions, that is [[ ... ]], instead of [ ... ]. They have more features, do not require quotes or other workarounds for possibly empty values, and even allow the use of arithmetic expressions.
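Applied to the function from the question, that would look something like this (a sketch):
function t () {
    re='^[0-9]+$'
    if [[ "$1" == "kill" ]]
    then
        tmux kill-session -t "$2"
    elif [[ "$1" =~ $re ]]
    then
        tmux attach-session -d -t "$1"
    fi
}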
Wrapping the first conditional in a second set of square-brackets seemed to resolve the issue.
More information on single vs double brackets here:
Is [[ ]] preferable over [ ] in bash scripts?

Why can't I read user input properly inside a UNIX while loop?

I'm using the Bourne shell on UNIX, and am running into the following problem:
#!/bin/sh
while read line
do
    echo $line
    if [ $x = "true" ]
    then
        echo "something"
        read choice
        echo $choice
    else
        echo "something"
    fi
done <file.txt
The problem I have here is that the read command will not wait for user input - it just plows on through instead of waiting for what the user types in. How can I make it wait for user input?
It is because you are asking the program to read from the file file.txt:
done <file.txt
Also, it looks like you have a typo here:
if [ $x = "true" ]
     ^^
which should be "$line". Also note the double quotes; without them your program will break if the line read from the file contains a space.
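That is, assuming the test was meant to use the variable just read (the question never defines x):
if [ "$line" = "true" ]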
The redirection of standard input by the <file.txt at the end of while ... done <file.txt affects the whole while loop, including the read choice as well as the read line. It's not just failing to stop - it's consuming a line of your input file too.
Here's one way to solve the problem...
You can save the original standard input by using the somewhat obscure (even by shell standards):
exec 3<&0
which opens file descriptor 3 to refer to the original file descriptor 0, which is the original standard input. (File descriptors 0, 1 and 2 are standard input, output and error respectively.) And then you can redirect just the input of read choice to come from file descriptor 3 by doing read choice <&3.
Complete working script (I wasn't sure where x was supposed to come from, so I just bodged it to make it work):
#!/bin/sh
x=true # to make the example work
exec 3<&0
while read line
do
    echo $line
    if [ $x = "true" ]
    then
        echo "something"
        read choice <&3
    else
        echo "something"
    fi
done <file.txt
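An alternative sketch (my addition, not part of the script above): on systems where the script has a controlling terminal, you can read the user's answer from /dev/tty directly instead of saving a descriptor:
read choice </dev/tty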
I don't do much shell scripting, but I think 'read choice' should be 'read $choice'

'tee' and exit status

Is there an alternative to tee which captures standard output and standard error of the command being executed and exits with the same exit status as the processed command?
Something like the following:
eet -a some.log -- mycommand --foo --bar
Where "eet" is an imaginary alternative to "tee" :) (-a means append and -- separates the captured command). It shouldn't be hard to hack such a command, but maybe it already exists and I'm not aware of it?
This works with Bash:
(
set -o pipefail
mycommand --foo --bar | tee some.log
)
The parentheses are there to limit the effect of pipefail to just the one command.
From the bash(1) man page:
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
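For the question's example, that would be something like (a sketch; mycommand and some.log are the asker's placeholders):
(
    set -o pipefail
    mycommand --foo --bar 2>&1 | tee -a some.log
)
status=$? # non-zero if mycommand (or tee itself) failed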
I stumbled upon a couple of interesting solutions at Capture Exit Code Using Pipe & Tee.
There is the $PIPESTATUS array available in Bash (note that it must be saved immediately, because the next command overwrites it):
false | tee /dev/null
status=${PIPESTATUS[0]}
[ $status -eq 0 ] || exit $status
And the simplest prototype of "eet" in Perl may look as follows:
open MAKE, "command 2>&1 |" or die;
open (LOGFILE, ">>some.log") or die;
while (<MAKE>) {
    print LOGFILE $_;
    print;
}
close MAKE; # to get $?
my $exit = $? >> 8;
close LOGFILE;
Here's an eet. Works with every Bash I can get my hands on, from 2.05b to 4.0.
#!/bin/bash
tee_args=()
while [[ $# -gt 0 && $1 != -- ]]; do
    tee_args=("${tee_args[@]}" "$1")
    shift
done
shift
# now ${tee_args[*]} has the arguments before --,
# and $* has the arguments after --
# redirect standard out through a pipe to tee
exec > >(tee "${tee_args[@]}")
# do the *real* exec of the desired program
exec "$@"
(pipefail and $PIPESTATUS are nice, but I recall them being introduced in 3.1 or thereabouts.)
This is what I consider to be the best pure-Bourne-shell solution to use as the base upon which you could build your "eet":
# You want to pipe command1 through command2:
exec 4>&1
exitstatus=`{ { command1; echo $? 1>&3; } | command2 1>&4; } 3>&1`
# $exitstatus now has command1's exit status.
I think this is best explained from the inside out – command1 will execute and print its regular output on stdout (file descriptor 1), then once it's done, echo will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor three.
While command1 is running, its stdout is being piped to command2 (echo's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor one – because we want file descriptor one clear for when we bring the echo output on file descriptor three back down into file descriptor one so that the command substitution (the backticks) can capture it.
The final bit of magic is that first exec 4>&1 we did as a separate command – it opens file descriptor four as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it – but, since command2's output is going to file descriptor four as far as the command substitution is concerned, the command substitution doesn't capture it – however, once it gets "out" of the command substitution, it is effectively still going to the script's overall file descriptor one.
(The exec 4>&1 has to be a separate command to work with many common shells. In some shells it works if you just put it on the same line as the variable assignment, after the closing backtick of the substitution.)
(I use compound commands ({ ... }) in my example, but subshells (( ... )) would also work. The subshell will just cause a redundant forking and awaiting of a child process, since each side of a pipe and the inside of a command substitution already normally implies a fork and await of a child process, and I don't know of any shell being coded to recognize that it can skip one of those forks because it's already done or is about to do the other.)
You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the echo's output jumps over command2 so that command2 doesn't catch it, and then command2's output jumps over and out of the command substitution just as echo lands just in time to get captured by the substitution so that it ends up in the variable, and command2's output goes on its way to the standard output, just as in a normal pipe.
Also, as I understand it, at the end of this command, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out.
A caveat is that it is possible that command1 will at some point end up using file descriptors three or four, or that command2 or any of the later commands will use file descriptor four, so to be more hygienic, we would do:
exec 4>&1
exitstatus=`{ { command1 3>&-; echo $? 1>&3; } 4>&- | command2 1>&4; } 3>&1`
exec 4>&-
Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit the file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- makes sure that command1 will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.
Almost no programs use pre-opened file descriptors three and four directly, so you almost never have to worry about it, but the latter form is probably best to keep in mind and use for general-purpose cases.
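Putting it together for the question's eet use case (a sketch; mycommand and some.log are the asker's placeholders):
exec 4>&1
exitstatus=`{ { mycommand --foo --bar 2>&1 3>&-; echo $? 1>&3; } 4>&- | tee -a some.log 1>&4; } 3>&1`
exec 4>&-
exit $exitstatus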
mycommand --foo --bar 2>&1 | tee -a some.log; (exit ${PIPESTATUS[0]})
KornShell, all in one line:
foo; RET_VAL=$?; if test ${RET_VAL} != 0; then echo ${RET_VAL}; echo "Error occurred!" > /tmp/out.err; exit 2; fi | tee >> /tmp/out.err; if test ${RET_VAL} != 0; then exit ${RET_VAL}; fi
#!/bin/sh
logfile="$1"
shift
exec 2>&1
exec "$@" | tee "$logfile"
Hopefully this works for you.
Assuming Bash or Z shell (zsh),
my_command >>my_log 2>&1
N.B. The sequence of redirection and duplication of standard error onto standard output is significant!
I didn't realise you wanted to see the output on screen as well. This will of course direct all output to the file my_log.

ksh: how to probe stdin?

I want my ksh script to have different behaviors depending on whether there is something incoming through stdin or not:
(1) cat file.txt | ./script.ksh (then do "cat <&0 >./tmp.dat" and process tmp.dat)
vs. (2) ./script.ksh (then process $1 which must be a readable regular file)
Checking whether stdin is a terminal ([ -t 0 ]) is not helpful, because my script is called from another script.
Doing "cat <&0 >./tmp.dat" and checking tmp.dat's size hangs waiting for an EOF from stdin if stdin is "empty" (the 2nd case).
How do I just check whether stdin is "empty" or not?
EDIT: You are running on HP-UX
Tested [ -t 0 ] on HP-UX and it appears to be working for me. I have used the following setup:
/tmp/x.ksh:
#!/bin/ksh
/tmp/y.ksh
/tmp/y.ksh:
#!/bin/ksh
test -t 0 && echo "terminal!"
Running /tmp/x.ksh prints: terminal!
Could you confirm the above on your platform, and/or provide an alternate test setup more closely reflecting your situation? Is your script ultimately spawned by cron?
EDIT 2
If desperate, and if Perl is available, define:
stdin_ready() {
    TIMEOUT=$1; shift
    perl -e '
        my $rin = "";
        vec($rin, fileno(STDIN), 1) = 1;
        select($rout = $rin, undef, undef, '$TIMEOUT') < 1 && exit 1;
    '
}
stdin_ready 1 || echo 'stdin not ready in 1 second, assuming terminal'
EDIT 3
Please note that the timeout may need to be significant if your input comes from sort, ssh etc. (all these programs can spawn and establish the pipe with your script seconds or minutes before producing any data over it.) Also, using a hefty timeout may dramatically penalize your script when there is nothing on the input to begin with (e.g. terminal.)
If potentially large timeouts are a problem, and if you can influence the way in which your script is called, then you may want to force the callers to explicitly instruct your program whether stdin should be used, via a custom option or in the standard GNU or tar manner (e.g. script [options [--]] FILE ..., where FILE can be a file name, a - to denote standard input, or a combination thereof, and your script would only read from standard input if - were passed in as a parameter.)
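A sketch of that convention (hypothetical; 'process' stands in for your real logic):
#!/bin/ksh
for f in "$@"; do
    if [ "$f" = "-" ]; then
        cat <&0 >./tmp.dat # "-" was passed: read standard input, as in the question
        process ./tmp.dat
    else
        process "$f" # a readable regular file
    fi
done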
This strategy works for bash, and would likely work for ksh. Poll 'tty':
#!/bin/bash
set -a
if [ "$( tty )" == 'not a tty' ]
then
    STDIN_DATA_PRESENT=1
else
    STDIN_DATA_PRESENT=0
fi
if [ ${STDIN_DATA_PRESENT} -eq 1 ]
then
    echo "Input was found."
else
    echo "Input was not found."
fi
Why not solve this in a more traditional way, and use the command line argument to indicate that the data will be coming from stdin?
For an example, consider the difference between:
echo foo | cat -
and
echo foo > /tmp/test.txt
cat /tmp/test.txt
