DRYer prompt and assignment in Zsh - zsh

I would like to do something that accomplishes the same thing that this does:
[[ -z "$TICKET_NUMBER" ]] && read "TICKET_NUMBER?Ticket Number? "
but that is more condensed, along these lines (but which actually works):
: ${TICKET_NUMBER:=$(read "TICKET_NUMBER?Ticket: ")}
I've looked in the Zsh docs for read to see if there's a way to have it write the value it reads to standard output, but nothing looks like it can do that.
The ideal would be some command that passes the value directly with as little ceremony and repetition as possible. Imagine a get_value command:
: ${TICKET_NUMBER:=$(get_value "Ticket: ")}

The last argument of the previous command is stored in the _ parameter, so you can reuse the variable name you just passed to test's -v operator:
test -v TICKET_NUMBER || read "$_?Ticket? "

This is a little ugly, but it works:
: ${TICKET_NUMBER:=$(read "?Ticket: "; echo "$REPLY")}
It's certainly not the ideal I'm going for, but it might be one step closer.
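For completeness, here is a minimal sketch of the imagined get_value helper (the name and the whole function are assumptions, not an existing command). It relies on the same behaviour as the REPLY answer above: zsh's read prints the ?prompt on standard error, and the helper prints the value on standard output so $(...) can capture it.
# Hypothetical helper: prompt for a value and print it on stdout.
get_value() {
    local reply
    read "reply?$1" || return 1   # zsh read: text after ? is shown as the prompt
    print -r -- "$reply"
}

: ${TICKET_NUMBER:=$(get_value "Ticket: ")}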

When to use spawn.with_shell and when spawn is only needed?

I'm confused about when I should use awful.spawn and when to use awful.spawn.with_shell. To me these look and work the same.
The only difference I see is that in awful.spawn you can set client rules and make a callback.
I would appreciate any examples or rules on when to use each one.
awful.spawn.with_shell really does not do more than spawning the given command with a shell: https://github.com/awesomeWM/awesome/blob/c539e0e4350a42f813952fc28dd8490f42d934b3/lib/awful/spawn.lua#L370-L371
function spawn.with_shell(cmd)
    if cmd and cmd ~= "" then
        cmd = { util.shell, "-c", cmd }
        return capi.awesome.spawn(cmd, false)
    end
end
So, why would one want that? Some things are done by shells. For example, output redirections (echo hi > some_file), command sequences (echo 1; echo 2) and pipes (echo hello | grep ell) are all handled by a shell. None of these work when starting a process directly.
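To make that concrete, awful.spawn.with_shell("echo hi > some_file") amounts to running roughly the following (a sketch; the actual shell binary is whatever util.shell is configured to, typically the user's shell):
sh -c 'echo hi > some_file'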
Why would one not want a shell? For example, argument escaping is far more complicated when a shell is involved. When you want to, say, print a pipe symbol (no idea why one would need that), then awful.spawn({"echo", "|"}) just works, while with a shell you need to escape the pipe symbol the appropriate number of times. I guess that awful.spawn.with_shell("echo \\\|") would work, but I am not sure, and that is exactly the point.
Also, a shell that does nothing is an extra process and is a tiny bit slower than without a shell, but this difference is really unimportant.

Parsing variable in loop incorrectly [duplicate]

I want to run certain actions on a group of lexicographically named files (01-09 before 10). I have to use a rather old version of FreeBSD (7.3), so I can't use yummies like echo {01..30} or seq -w 1 30.
The only working solution I found is printf "%02d " {1..30}. However, I can't figure out why I can't use $1 and $2 instead of 1 and 30. When I run my script (bash ~/myscript.sh 1 30), printf says {1..30}: invalid number.
AFAIK, variables in bash are typeless, so why won't printf accept an integer argument as an integer?
Bash supports C-style for loops:
s=1
e=30
for ((i=s; i<=e; i++)); do printf "%02d " "$i"; done
The syntax you attempted doesn't work because brace expansion happens before parameter expansion, so when the shell tries to expand {$1..$2}, it's still literally {$1..$2}, not {1..30}.
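A quick interactive check of that ordering (assuming the positional parameters are set to 1 and 30, as in the question):
$ set -- 1 30
$ echo {$1..$2}
{1..30}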
The answer given by @Kent works because eval goes back to the beginning of the parsing process. I tend to suggest avoiding habitual use of it, as eval can introduce hard-to-recognize bugs -- if your command were whitelisted to be run by sudo and $1 were, say, '$(rm -rf /; echo 1)', the C-style-for-loop example would safely fail, and the eval example... not so much.
Granted, 95% of the scripts you write may not be accessible to folks executing privilege escalation attacks, but the remaining 5% can really ruin one's day; following good practices 100% of the time avoids being in sloppy habits.
Thus, if one really wants to pass a range of numbers to a single command, the safe thing is to collect them in an array:
a=( )
for ((i=s; i<=e; i++)); do a+=( "$i" ); done
printf "%02d " "${a[@]}"
I guess you are looking for this trick:
#!/bin/bash
s=1
e=30
printf "%02d " $(eval echo {$s..$e})
Ok, I finally got it!
#!/bin/bash
#BSD-only iteration method
#for day in `jot $1 $2`
for ((day=$1; day<$2; day++))
do
    echo $(printf %02d $day)
done
I initially wanted to use the loop counter as a "day" in file names, but now I see that in my exact case it's easier to iterate over plain numbers (1, 2, 3, etc.) and convert them to the lexicographic form inside the loop. While using jot, remember that $1 is the count of numbers and $2 is the starting point.
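For the record, FreeBSD's jot can also do the zero-padding itself, since -w accepts a printf-style format (a sketch; check jot(1) for the exact argument order of count and start):
jot -w %02d 30 1   # prints 01 through 30, one per line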

Capturing and testing output command in ZSH

I have tried countless ways to get what I want, but nothing seems to work. I always end up with something like 2:not found.
I want to capture the output of a command, and then test if it equals "!", like so:
function test() {
    local testv=$(command) 2>/dev/null
    if [ $(#testv) == "!" ]; then
        echo "Exclamation mark!"
    else
        echo "No exclamation mark."
    fi
}
How should I rewrite the code above to avoid the error test:2: = not found?
This should work:
if [ $testv = '!' ]; then
There were several problems here:
$(...) runs a command and substitutes its output; you want to substitute a variable value, so use $var or ${var}.
I have no idea what the # was doing there. ${#var} will get the length of $var, but that's not what you want here.
The test command (which [ is a synonym for) doesn't understand ==, so use = (if you're a C programmer that'll look wrong, but this is shell not C).
I don't think this is a problem in a script, but for interactive input "!" doesn't do what you expect. I used '!' to make sure the exclamation mark couldn't be interpreted as a history option.
Alternately, you could use [[ ]] instead of [ ], since it understands == (and has somewhat cleaner syntax in general):
if [[ $testv == '!' ]]; then
BTW, I'm trusting from the tag that this script is running in zsh; if not, the syntax will be a bit different (basic shells don't have [[ ]], and anything other than zsh will do unwanted parsing on the value of $testv unless it's in double-quotes). If you're not sure (or want it to be portable), here's a version that should work in any posix-compliant shell:
if [ "$testv" = '!' ]; then
Try with this:
local testv=$(command 2>/dev/null)
since it's the output of the command you want to redirect.
(I have no idea what you mean by $(#testv) though.)

Unix ksh loop and put the result into variable

I have a simple thing to do, but I'm novice in UNIX.
So, I have a file and on each line I have an ID.
I need to go through the file and put all ID's into one variable.
I've tried something like I would in Java, but it does not work.
for variable in `cat myFile.txt`
do
    param=`echo "${param} ${variable}"`
done
It does not seem to add all the values to param.
Thanks.
I'd use:
param=$(<myFile.txt)
The parameter has white space (actually newlines) between the names. When used without quotes, the shell will expand those to spaces, as in:
cat $param
If used with quotes, the file names will remain on separate lines, as in:
echo "$param"
Note that the Korn shell special-cases the '$(<file)' notation and does not fork and execute any command.
Also note that your original idea can be made to work more simply:
param=
for variable in `cat myFile.txt`
do
    param="${param} ${variable}"
done
This introduces a blank at the front of the parameter; it seldom matters. Interestingly, you can avoid the blank at the front by having one at the end, using param="${param}${variable} ". This also works without messing things up, though it looks as though it jams things together. Also, the '${var}' notation is not necessary, though it does no harm either.
And, finally for now, it is better to replace the back-tick command with '$(cat myFile.txt)'. The difference becomes crucial when you need to nest commands:
perllib=$(dirname $(dirname $(which perl)))/lib
vs
perllib=`dirname \`dirname \\\`which perl\\\`\``/lib
I know which I prefer to type (and read)!
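Assuming myFile.txt contains one ID per line (id1, id2 and id3 in this sketch), here is the quoting difference described above:
$ printf '%s\n' id1 id2 id3 > myFile.txt
$ param=$(<myFile.txt)
$ echo $param
id1 id2 id3
$ echo "$param"
id1
id2
id3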
Try this:
param=`cat myFile.txt | tr '\n' ' '`
The tr command translates all occurrences of \n (new line) to spaces. Then we assign the result to the param variable.
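Note that the final newline also becomes a space, so the result carries a trailing blank (shown here with the same three-ID myFile.txt as in the sketch above):
$ param=`cat myFile.txt | tr '\n' ' '`
$ echo "[$param]"
[id1 id2 id3 ]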
param="$(< myFile.txt)"
or
while read line
do
    param="$param$line"$'\n'
done < myFile.txt
awk
var=$(awk '1' ORS=" " file)
ksh
while read -r line
do
    t="$t $line"
done < file
echo $t

Commenting out a set of lines in a shell script

I was wondering if there is a way to comment out a set of lines in a shell script.
How could I do that? We can use /* */ in other programming languages.
This is most useful when I am converting/using/modifying another script
and I want to keep the original lines instead of deleting.
It seems a cumbersome job to find and prefix # to all the lines which are not used.
Let's say there are 100 consecutive lines in the script that are not to be used.
I want to comment them all out in one go. Is that possible?
The most versatile and safe method is putting the comment into a void quoted
here-document, like this:
<<"COMMENT"
This long comment text includes ${parameter:=expansion}
`command substitution` and $((arithmetic++ + --expansion)).
COMMENT
Quoting the COMMENT delimiter above is necessary to prevent parameter
expansion, command substitution and arithmetic expansion, which would happen
otherwise, as Bash manual states and POSIX shell standard specifies.
In the case above, not quoting COMMENT would result in the variable parameter being
assigned the text expansion (if it was empty or unset), in the command command
substitution being executed, and in the variable arithmetic being incremented and the
variable expansion decremented.
Comparing other solutions to this:
Using if false; then comment text fi requires the comment text to be
syntactically correct Bash code, whereas natural comments often are not, if
only because of possible unbalanced apostrophes. The same goes for the : || { comment text }
construct.
Putting comments into a single-quoted void command argument, as in :'comment
text', has the drawback of inability to include apostrophes. Double-quoted
arguments, as in :"comment text", are still subject to parameter expansion,
command substitution and arithmetic expansion, the same as unquoted
here-document contents and can lead to the side-effects described above.
Using scripts and editor facilities to automatically prefix each line in a
block with '#' has some merit, but doesn't exactly answer the question.
if false
then
    ...code...
fi
false always returns false so this will always skip the code.
You can also put multi-line comments using:
: '
comment1comment1
comment2comment2
comment3comment3
comment4comment4
'
As per the Bash Reference Manual's section on Bourne Shell builtins:
: (a colon)
: [arguments]
Do nothing beyond expanding arguments and performing redirections. The return status is zero.
Thanks to Ikram for pointing this out in the post Shell script put multiple line comment
You can use a 'here' document with no command to send it to.
#!/bin/bash
echo "Say Something"
<<COMMENT1
your comment 1
comment 2
blah
COMMENT1
echo "Do something else"
Wikipedia Reference
Text editors have an amazing feature called search and replace. You don't say what editor you use, but since shell scripts tend to be *nix, and I use VI, here's the command to comment lines 20 to 50 of some shell script:
:20,50s/^/#/
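And to undo it later, the reverse substitution (assuming the # added above is still the first character on each of those lines):
:20,50s/^#//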
: || {
your code here
your code here
your code here
your code here
}
What if you just wrap your code into function?
So this:
cd ~/documents
mkdir test
echo "useless script" > about.txt
Becomes this:
CommentedOutBlock() {
    cd ~/documents
    mkdir test
    echo "useless script" > about.txt
}
As per this site:
#!/bin/bash
foo=bar
: '
This is a test comment
Author foo bar
Released under GNU
'
echo "Init..."
# rest of script
Depending on the editor you're using, there are shortcuts to comment out a block of lines.
Another workaround would be to put your code in an "if (0)" conditional block ;)
This Perl one-liner comments out lines 1 to 3 of the file orig.sh inclusive (where the first line is numbered 0), and writes the commented version to cmt.sh.
perl -n -e '$s=1;$e=3; $_="#$_" if $i>=$s&&$i<=$e;print;$i++' orig.sh > cmt.sh
Obviously you can change the boundary numbers as required.
If you want to edit the file in place, it's even shorter:
perl -i -n -e '$s=1;$e=3; $_="#$_" if $i>=$s&&$i<=$e;print;$i++' orig.sh
Demo
$ cat orig.sh
a
b
c
d
e
f
$ perl -n -e '$s=1;$e=3; $_="#$_" if $i>=$s&&$i<=$e;print;$i++' orig.sh > cmt.sh
$ cat cmt.sh
a
#b
#c
#d
e
f
