When there are no files inside the folder, the script below still goes inside the for loop. I'm not sure what I can modify so that it doesn't enter the loop. Also, when there are no files inside the directory, the exit status should be success, because a wrapper script checks the exit status of this script.
FILESRAW="/exp/test1/folder"
for fspec in "$FILESRAW"/* ; do
echo "$fspec"
if [[ -f ${fspec} ]] ; then
..... processing logic
else
... processing logic
fi
done
If using bash, you can set nullglob:
shopt -s nullglob
If you have hidden files, also set dotglob:
shopt -s dotglob
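Applied to the loop from the question, a minimal sketch might look like this. With nullglob set, the pattern expands to nothing in an empty directory, so the loop body is never entered and the script falls through to a successful exit:

shopt -s nullglob   # empty directory: the pattern expands to nothing
shopt -s dotglob    # optional: also match hidden files
FILESRAW="/exp/test1/folder"
for fspec in "$FILESRAW"/* ; do
    echo "$fspec"
    # ... processing logic
done
exit 0   # the wrapper script sees success even when the loop never ran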
With ksh:
#!/bin/ksh
set -o noglob
for file in /path/*
do
....
done
Another option is to iterate over the output of dir:
for fspec in `dir $FILESRAW` ; do
To exit if $FILESRAW is empty:
[ $( ls "$FILESRAW" | wc -l ) -eq 0 ] && exit 0
If this test precedes the loop, it will prevent execution from reaching the for loop if $FILESRAW is empty.
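Putting it together with the question's loop (a sketch; the processing logic is elided as in the question):

FILESRAW="/exp/test1/folder"
[ $( ls "$FILESRAW" | wc -l ) -eq 0 ] && exit 0   # empty directory: succeed before the loop
for fspec in "$FILESRAW"/* ; do
    echo "$fspec"
    # ... processing logic
done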
When $FILESRAW is empty, "$FILESRAW"/* expands to the literal string "/exp/test1/folder/*". As ghostdog74 points out, you can change this behavior by setting nullglob with
shopt -s nullglob
If you want hidden files, set dotglob as well:
shopt -s dotglob
Alternately, you could use ls instead of globbing. This has the advantage of working with very full directories (using a pipe, you won't hit the maximum argument-list limit):
ls "$FILESRAW" | while read file; do
echo "$file"
This becomes messier if you want hidden files, since you'll need to exclude . and .. to emulate globbing behavior:
ls -a "$FILESRAW" | egrep -v '^(\.|\.\.)$' | while read file; do
echo "$file"
If you are using ksh, try putting this in front of the for loop so that it won't go inside it:
set -o noglob
I had the same problem, and I was able to resolve it by doing this.
I'm trying to run this command called codemaker which takes a filename as input but then writes the output to stdout instead of back to the file, so I have to redirect stdout to that file. That works fine, but I want to do this for a whole bunch of files at once, so I came up with this (based on https://stackoverflow.com/a/845928/65387):
ctouch() {
xargs -t -i -0 sh -c 'codemaker "$1" > "$1"' -- {} <<<"${(ps:\0:)@}"
}
But I can't quite get the syntax right. It looks like it's treating everything as a single arg still:
❯ ctouch foo.h bar.cc
sh -c 'codemaker "$1" > "$1"' -- 'foo.h bar.cc'$'\n'
Whereas I just want to run 2 commands:
codemaker foo.h > foo.h
codemaker bar.cc > bar.cc
How do I make an alias/function for that?
(And no, I'm not sure about that <<<"${(ps:\0:)@}" bit either. Really hard to Google. I want the usual "$@" to expand with null separators to feed to xargs.)
I don't see a compelling reason to use xargs in your case. You just create additional processes unnecessarily (one for xargs, plus one shell process per argument).
A simpler solution (and IMO easier to understand) would be to do it with this zsh-function:
ctouch() {
    for f    # "for" without an "in" list iterates over the function's arguments
    do
        codemaker $f >$f
    done
}
I think this is a lot easier to just do with printf.
ctouch() {
printf -- '%s\0' "$@" | xargs -t -i -0 sh -c 'codemaker "$1" > "$1"' -- {}
}
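Calling ctouch foo.h bar.cc then runs the equivalent of the two commands the question asked for:

codemaker foo.h > foo.h
codemaker bar.cc > bar.cc

Note that -i is a deprecated spelling of -I {} in GNU xargs; both run one command per argument.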
I noticed that tab completion for the source command in Zsh tries to complete a LOT of files. Maybe everything in $PATH? I tried using a blank .zshrc file to make sure it wasn't anything in there.
ubuntu% source d
zsh: do you wish to see all 109 possibilities (16 lines)?
I did find this file that seems to control that: /usr/share/zsh/functions/Completion/Zsh/_source
#compdef source .
if [[ CURRENT -ge 3 ]]; then
  compset -n 2
  _normal
else
  if [[ -prefix */ && ! -o pathdirs ]]; then
    _files
  elif [[ $service = . ]]; then
    _files -W path
  else
    _files -W "(. $path)"
  fi
fi
If I change the line in that last "else" statement from _files -W "(. $path)" to _files, it works the way I want it to. The tab completion only looks at files & directories in the current dir.
It doesn't seem like altering this file is the best way to go. I'd rather change something in my .zshrc file. But my knowledge of Zsh completions is a bit lacking and the searching I've done thus far hasn't led me to an answer for this.
Maybe everything in $PATH?
Yes, that is correct. It offers those because source will search the current dir and your $PATH for any file name you pass it.
To apply your change without modifying the original file, add this to your .zshrc file after calling compinit:
compdef '
  if [[ CURRENT -ge 3 ]]; then
    compset -n 2
    _normal
  else
    _files
  fi
' source
This tells the completion system to use the inline function you specified for the command source (instead of the default function).
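If you prefer a named function over an inline string, the equivalent would be (the function name _source_cwd is made up for this example):

_source_cwd() {
  if [[ CURRENT -ge 3 ]]; then
    compset -n 2
    _normal
  else
    _files   # complete from the current directory only
  fi
}
compdef _source_cwd source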
Alternatively, to see file completions for the current dir only, you can type
$ source ./<TAB>
I have a text file that lists a large number of file paths, one per line. I need to copy all these files from the source directory to a destination directory.
Currently, the command line I tried is
while read line; do cp "$line" dest_dir; done < my_file.txt
This seems to be a bit slow. Is there a way to parallelise this whole thing or speed it up?
You could try GNU Parallel as follows:
parallel --dry-run -a fileList.txt cp {} destinationDirectory
If you like what it says, remove the --dry-run.
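GNU Parallel also lets you cap the number of concurrent copies with -j, and if it is not installed, GNU xargs can do something similar with -P (a sketch, assuming GNU tools and the same file and directory names as above):

parallel -j 8 -a fileList.txt cp {} destinationDirectory
xargs -a fileList.txt -I{} -P 8 cp {} destinationDirectory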
You could do something like the following (in your chosen shell)
#!/bin/bash
BATCHSIZE=2
# **NOTE**: check that it exists with -f and points at the right place. You might not need this; depends on your own taste for risk.
ln -s `which cp` /tmp/myuniquecpname
# **NOTE**: this sort of thing can have limits in some shells
for i in `cat test.txt`
do
    BASENAME="`basename $i`"
    echo doing /tmp/myuniquecpname $i test2/$BASENAME
    /tmp/myuniquecpname $i test2/$BASENAME &
    COUNT=`ps -ef | grep /tmp/myuniquecpname | grep -v grep | wc -l`
    # **NOTE**: maybe need to put a timeout on this loop
    until [ $COUNT -lt $BATCHSIZE ]; do
        COUNT=`ps -ef | grep /tmp/myuniquecpname | grep -v grep | wc -l`
        echo waiting...
        sleep 1
    done
done
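If your shell is bash anyway, a simpler way to get the same batching effect, without symlinking cp or counting ps output, is to background the copies and use wait (a sketch, using the same file names as the answer above):

#!/bin/bash
BATCHSIZE=2
while read -r line; do
    cp "$line" "test2/$(basename "$line")" &
    # once BATCHSIZE copies are in flight, wait for the batch to finish
    [ "$(jobs -r | wc -l)" -ge "$BATCHSIZE" ] && wait
done < test.txt
wait   # wait for any copies still running after the loop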
What is the most elegant way in zsh to test whether a file is a readable regular file?
I understand that I can do something like
if [[ -r "$name" && -f "$name" ]]
...
But it requires repeating "$name" twice. I know that we can't combine conditions (-rf $name), but maybe some other feature in zsh could be used?
By the way, I considered also something like
if ls ${name}(R.) >/dev/null 2>&1
...
But in this case, the shell would complain "no matches found" when $name does not fulfil the criterion. Setting NULL_GLOB wouldn't help here either, because it would just replace the pattern with an empty string, and the expression would always be true.
In very new versions of zsh (works for 5.0.7, but not 5.0.5) you could do this
setopt EXTENDED_GLOB
if [[ -n $name(#qNR.) ]]
...
$name(#qNR.) matches files with name $name that are readable (R) and regular (.). N enables NULL_GLOB for this match; that is, if no files match the pattern, it does not produce an error but is removed from the argument list. -n checks that the match is in fact non-empty. EXTENDED_GLOB is needed to enable the (#q...) type of extended globbing, which in turn is needed because parentheses usually have a different meaning inside conditional expressions ([[ ... ]]).
Still, while it is indeed possible to write something that uses $name only once, I would advise against it. It is rather more convoluted than the original solution and thus harder to understand (i.e. needs thinking) for the next guy who reads it (your future self counts as "next guy" after at most half a year). Also, this solution works only on zsh, and there only on new versions, while the original would run unaltered on bash.
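If repeating $name is the only concern, a tiny helper function keeps the portable test while each call site writes the name only once (the function name is made up for this example; it works in both zsh and bash):

is_readable_file() {
    [[ -r $1 && -f $1 ]]
}
if is_readable_file "$name"; then
    ...
fi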
How about making small(?) shell functions, as you mentioned?
tests-raw () {
    setopt localoptions no_ksharrays
    local then="$1"; shift
    local f="${@[-1]}" t=
    local -i ret=0
    set -- "${@[1,-2]}"
    for t in "${@[@]}"; do
        if test "$t" "$f"; then
            ret=$?
            "$then"
        else
            return $?
        fi
    done
    return $ret
}
and () tests-raw continue "${@[@]}";
or () tests-raw break "${@[@]}";
# examples
name=/dev/null
if and -r -c "$name"; then
echo 'Ok, it is a readable+character special file.'
fi
#>> Ok, it is...
and -r -f ~/.zshrc ; echo $? #>> 0
or -r -d ~/.zshrc ; echo $? #>> 0
and -r -d ~/.zshrc ; echo $? #>> 1
# Supporting combined flags like `and -rd ~/.zshrc` could also be made possible.
I feel this is somewhat overkill though.
I'm trying to write a (sh, Bourne shell) script that processes lines as they are written to a file. I'm attempting to do this by feeding the output of tail -f into a while read loop. This tactic seems to be proper based on my research on Google, as well as this question dealing with a similar issue but using bash.
From what I've read, it seems that I should be able to break out of the loop when the file being followed ceases to exist. It doesn't. In fact, it seems the only way I can break out of this is to kill the process in another session. tail does seem to be working fine otherwise as testing with this:
touch file
tail -f file | while read line
do
echo $line
done
Data I append to file in another session appears just fine from the loop processing written above.
This is on HP-UX version B.11.23.
Thanks for any help/insight you can provide!
If you want to break out when your file no longer exists, just do it:
test -f file || break
Placing this in your loop should break out of it.
The remaining problem is how to break out of the read line, as it blocks.
You could do this by applying a timeout, like read -t 5 line. Then the read returns every 5 seconds, and in case the file no longer exists, the loop will break. Attention: write your loop so that it can handle the case where the read times out but the file is still present.
EDIT: It seems that on timeout read returns false, so you can combine the test with the timeout; the result would be:
tail -f test.file | while read -t 3 line || test -f test.file; do
    # ... some stuff with $line
done
I don't know about HP-UX tail, but GNU tail has the --follow=name option, which follows the file by name (re-opening it every few seconds instead of reading from the same file descriptor, which would not detect that the file was unlinked) and exits when the filename used to open the file is unlinked:
tail --follow=name test.txt
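Combined with the loop from the question, the pipeline then ends on its own when the file is unlinked (assuming GNU tail):

tail --follow=name test.txt | while read line
do
    echo "$line"
done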
Unless you're using GNU tail, there is no way it'll terminate of its own accord when following a file. The -f option is really only meant for interactive monitoring--indeed, I have a book that says that -f "is unlikely to be of use in shell scripts".
But for a solution to the problem, I'm not wholly sure this isn't an over-engineered way to do it, but I figured you could send the tail to a FIFO, then have a function or script that checked the file for existence and killed off the tail if it'd been unlinked.
#!/bin/sh
sentinel ()
{
    while true
    do
        if [ ! -e "$1" ]
        then
            kill "$2"
            rm "/tmp/$1"
            break
        fi
        sleep 1   # don't busy-wait while polling for the file
    done
}

touch "$1"
mkfifo "/tmp/$1"
tail -f "$1" >"/tmp/$1" &
sentinel "$1" $! &
cat "/tmp/$1" | while read line
do
    echo "$line"
done
Did some naïve testing, and it seems to work okay, and not leave any garbage lying around.
I've never been happy with this answer but I have not found an alternative either:
kill $(ps -o pid,cmd --no-headers --ppid $$ | grep tail | awk '{print $1}')
Get all processes that are children of the current process, look for the tail, print out the first column (tail's pid), and kill it. Sin-freaking-ugly indeed, such is life.
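Where procps pkill is available, the same thing can be written more compactly; this is an assumption on my part, not part of the original answer:

pkill -P $$ tail   # kill any child of this shell whose name matches tail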
The following approach backgrounds the tail -f file command and echoes its process id, with a custom string prefix (here tailpid: ), into the while loop. There, the line with the custom string prefix triggers another (backgrounded) while loop that checks every 5 seconds whether file still exists. If not, tail -f file gets killed and the subshell containing the backgrounded while loop exits.
# cf. "The Heirloom Bourne Shell",
# http://heirloom.sourceforge.net/sh.html,
# http://sourceforge.net/projects/heirloom/files/heirloom-sh/ and
# http://freecode.com/projects/bournesh
/usr/local/bin/bournesh -c '
touch file
(tail -f file & echo "tailpid: ${!}" ) | while IFS="" read -r line
do
    case "$line" in
    tailpid:*) while sleep 5; do
            #echo hello;
            if [ ! -f file ]; then
                IFS=" "; set -- ${line}
                kill -HUP "$2"
                exit
            fi
        done &
        continue ;;
    esac
    echo "$line"
done
echo exiting ...
'