Renaming multiple files using parameter - unix

I have a rename script, rename.sh, shown below. I want to introduce a variable so that I can pass a date argument when executing the script, like ./rename.sh 20151103, such that 20151103 replaces 20140306 in the script.
for f in *.CDR*; do
echo mv "$f" "${f/-20140306/-0-20140306}"
done
I'm thinking of automating this as I don't want to manually edit the script each time I'm doing a rename. Any other method will be highly welcome.

#!/bin/bash
pattern="$1"
for f in *.CDR*; do
echo mv "$f" "${f/-${pattern}/-0-${pattern}}"
done
Explanation:
The #!-line says we're running this as a bash script.
The script will populate the variables $1, $2 etc. with the arguments handed to it on the command line. These are called the positional parameters ($0 usually holds the name of the script).
We take $1, because we know that should contain the pattern we're replacing, and assign it to the variable $pattern. In much more complex scripts, here is where we would handle command line switches (with getopts, but that's an answer for another day).
We quote $1, just because. (It's good practice to quote user input, just to be sure no shell-globbing characters, such as *, get expanded.)
The rest is the script like you had from before, but with the string 20140306 replaced by ${pattern}. I'm using ${pattern} rather than $pattern here for readability only. In general, you need to use ${a} rather than $a if you, for example, interpolate a string like "${a}nospaceafter".
Then it should just be a matter of making the script executable before testing it:
$ chmod +x rename.sh
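A quick dry run might then look like this (the filenames here are made up for illustration; remove the echo once the output looks right):
$ ls
A-20151103.CDR.gz  B-20151103.CDR.gz
$ ./rename.sh 20151103
mv A-20151103.CDR.gz A-0-20151103.CDR.gz
mv B-20151103.CDR.gz B-0-20151103.CDR.gz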

This is one of the methods you can consider:
#!/bin/bash
input=$1
for f in *.CDR*; do
echo mv "$f" "${f/-$input/-0-$input}"
done

Related

Parsing variable in loop incorrectly [duplicate]

I want to run certain actions on a group of lexicographically named files (01-09 before 10). I have to use a rather old version of FreeBSD (7.3), so I can't use yummies like echo {01..30} or seq -w 1 30.
The only working solution I found is printf "%02d " {1..30}. However, I can't figure out why can't I use $1 and $2 instead of 1 and 30. When I run my script (bash ~/myscript.sh 1 30) printf says {1..30}: invalid number
AFAIK, variables in bash are typeless, so why can't printf accept an integer argument as an integer?
Bash supports C-style for loops:
s=1
e=30
for ((i=s; i<=e; i++)); do printf "%02d " "$i"; done
The syntax you attempted doesn't work because brace expansion happens before parameter expansion, so when the shell tries to expand {$1..$2}, it's still literally {$1..$2}, not {1..30}.
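You can see that ordering in an interactive shell (a quick sketch):
$ set -- 1 30
$ echo {$1..$2}
{1..30}
Brace expansion ran first, found no valid range, and left the braces alone; only then were $1 and $2 substituted.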
The answer given by @Kent works because eval goes back to the beginning of the parsing process. I tend to suggest avoiding making habitual use of it, as eval can introduce hard-to-recognize bugs -- if your command were whitelisted to be run by sudo and $1 were, say, '$(rm -rf /; echo 1)', the C-style-for-loop example would safely fail, and the eval example... not so much.
Granted, 95% of the scripts you write may not be accessible to folks executing privilege escalation attacks, but the remaining 5% can really ruin one's day; following good practices 100% of the time avoids being in sloppy habits.
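To make that risk concrete, here is a harmless demonstration with date standing in for something destructive (exact output will vary):
$ set -- '$(date)' 30
$ eval echo {$1..$2}
{Wed Nov  4 12:00:00 UTC 2015..30}
The embedded $(date) really executed during eval's re-parse; the C-style loop, by contrast, just reports an arithmetic error.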
Thus, if one really wants to pass a range of numbers to a single command, the safe thing is to collect them in an array:
a=( )
for ((i=s; i<=e; i++)); do a+=( "$i" ); done
printf "%02d " "${a[#]}"
I guess you are looking for this trick:
#!/bin/bash
s=1
e=30
printf "%02d " $(eval echo {$s..$e})
Ok, I finally got it!
#!/bin/bash
#BSD-only iteration method
#for day in `jot $1 $2`
for ((day=$1; day<=$2; day++))
do
printf "%02d\n" "$day"
done
I initially wanted to use the loop variable as a "day" in file names, but now I see that in my exact case it's easier to iterate over plain numbers (1, 2, 3, etc.) and convert them to the lexicographic form inside the loop. When using jot, remember that $1 is the count of numbers and $2 is the starting point.
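On FreeBSD, jot can also do the zero-padding itself; if I remember its -w option correctly (it takes a printf-style format), something like this prints the padded sequence directly:
jot -w '%02d' 30 1    # prints 01 through 30, one per line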

Make zsh complete arguments from a file

zsh is great but its completion system is very diverse, and the documentation lacks good examples. Is there a template for completion for a specific application? The completion would get its match data from a file, one entry per line.
I tried modifying an older example of mine that takes match data "live":
~ % cat .zsh/completers/_jazzup
#compdef jazz_up
_arguments "2: :(`mpc lsplaylists|sed -e 's# #\\\\ #g'`)"
I could supply cat my_file there instead of the mpc invocation and so on, but would there be a more elegant way to do this simple task? Also, that completion is placement-specific: can you provide an example where zsh would attempt to complete at any point after the program name is recognized?
The match data will contain whitespace and so on; the completion should escape the whitespace. Example of that data:
Foo bar
Barbaric
Get it (42)
Now if that completion were configured for a command Say, we should get this kind of behaviour from zsh:
$ Say Fo<TAB>
$ Say Foo\ bar
$ Say Ge<TAB>
$ Say Get\ it\ \(42\)
Simple completion needs are better addressed with _describe; it pairs an array holding completion options with a description for them (you can use multiple array/description pairs; check the manual).
(_arguments is great but too complex.)
[...]
First create a file
echo "foo\nbar\nbaz\nwith spac e s\noh:noes\noh\:yes" >! ~/simple-complete
Then create a file _simple somewhere in your $fpath:
#compdef simple
# you may wish to modify the expansion options here
# PS: 'f' is the flag making one entry per line
cmds=( ${(uf)"$(< ~/simple-complete)"} )
# main advantage here is that it is easy to understand, see alternative below
_describe 'a description of the completion options' cmds
# this is the equivalent _arguments command... too complex for what it does
## _arguments '*:foo:(${cmds})'
then
function simple() { echo $* }
autoload _simple # do not forget BEFORE the next cmd!
compdef _simple simple # binds the completion function to a command
simple [TAB]
it works. Just make sure the completion file _simple is placed somewhere in your fpath.
Notice that : in the option list is supposed to be used for separating an option from its (individual) description (oh:noes). So that won't work with _describe unless you quote it (oh\:yes). The commented-out _arguments example will not use the : as a separator.
Without changing anything further in .zshrc (I already have autoload -Uz compinit; compinit), I added the following as /usr/local/share/zsh/site-functions/_drush:
#compdef drush
_arguments "1: :($(/usr/local/bin/aliases-drush.php))"
Where /usr/local/bin/aliases-drush.php just prints a list of strings, each string being a potential first argument for the command drush. You could use ($(< filename)) to complete from filename.
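For instance, a minimal completion that takes its candidates from a file might look like this (mycmd and ~/mycmd-args are hypothetical names; entries containing whitespace would still need the escaping discussed in the previous answer):
#compdef mycmd
_arguments "1: :($(< ~/mycmd-args))"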
I based this on https://unix.stackexchange.com/a/458850/9452 -- it's surprising how simple this is at the end of the day.

Program fails to move file

I'm trying to move files from one place to another directory... My program will read Log_Deleter and use the parameters given on each line to find and move the files.
When I execute the script, it seems to run fine (no errors), but none of the files are moved... I'm not sure why it's not moving the files or displaying any error...
Can someone please identify the error?
my attempt:
#!/bin/ksh
while read -r line ; do
v=$line
set -- $v
cd /
$(find "$1" -type f -name "$2" -mtime +"$3" -exec mv {} "$4" \;)
done < Log_Deleter.txt
Log_Deleter.txt
/usr/IBM/WebSphere/AppServer/profiles/AppSrvSIT1/logs/Server1 'SystemOut_*' 5 /backup/Abackuptest1
/usr/IBM/WebSphere/AppServer/profiles/AppSrvSIT1/logs/Server2 'SystemOut_*' 5 /backup/Abackuptest2
Thanks for your help!
Find is looking for files that have a literal ' in the name. You need to remove the single quotes from $2 before invoking find. Try:
#!/bin/ksh
while read -r path name mtime dest ; do
name=$( echo $name | tr -d "'" )
find "$path" -type f -name "$name" -mtime +"$mtime" -exec mv {} "$dest" \;
done < Log_Deleter.txt
The problem is that you are trying to match a file whose name actually has the single quotes in it.
Barring other problems, I think your script will probably work once you take the quotes out of Log_Deleter.txt.
The quotes are only meaningful when the shell is parsing command input. This is not what the read builtin does. And even when reading command input, once the quotes get into a variable they stay there forever unless reparsed at the shell's CLI layer via eval.
The shell is not exactly a macro processor. It's a complicated hybrid that is a little bit CLI, a little bit programming language, and a little bit macro processor.
And, speaking of eval, it's not necessary to wrap the find in an eval-like construct. Simplify your script to run find directly and you will find it easier to debug and understand.
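Putting both suggestions together, a sketch of the simplified script (assuming the single quotes have been removed from Log_Deleter.txt):
#!/bin/ksh
while read -r dir name days dest ; do
find "$dir" -type f -name "$name" -mtime +"$days" -exec mv {} "$dest" \;
done < Log_Deleter.txt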

Unix ksh loop and put the result into variable

I have a simple thing to do, but I'm a novice in UNIX.
So, I have a file and on each line I have an ID.
I need to go through the file and put all ID's into one variable.
I've tried something like I would in Java, but it does not work.
for variable in `cat myFile.txt`
do
param=`echo "${param} ${variable}"`
done
It does not seem to add all the values to param.
Thanks.
I'd use:
param=$(<myFile.txt)
The parameter will have white space (actually newlines) between the names. When the variable is used without quotes, the shell splits on that whitespace, so the names come out separated by single spaces, as in:
cat $param
If used with quotes, the file names will remain on separate lines, as in:
echo "$param"
Note that the Korn shell special-cases the '$(<file)' notation and does not fork and execute any command.
Also note that your original idea can be made to work more simply:
param=
for variable in `cat myFile.txt`
do
param="${param} ${variable}"
done
This introduces a blank at the front of the parameter; it seldom matters. Interestingly, you can avoid the blank at the front by having one at the end, using param="${param}${variable} ". This also works without messing things up, though it looks as though it jams things together. Also, the '${var}' notation is not necessary, though it does no harm either.
And, finally for now, it is better to replace the back-tick command with '$(cat myFile.txt)'. The difference becomes crucial when you need to nest commands:
perllib=$(dirname $(dirname $(which perl)))/lib
vs
perllib=`dirname \`dirname \\\`which perl\\\`\``/lib
I know which I prefer to type (and read)!
Try this:
param=`cat myFile.txt | tr '\n' ' '`
The tr command translates all occurrences of \n (new line) to spaces. Then we assign the result to the param variable.
Lovely.
param="$(< myFile.txt)"
or
while read line
do
param="$param$line"$'\n'
done < myFile.txt
awk
var=$(awk '1' ORS=" " file)
ksh
while read -r line
do
t="$t $line"
done < file
echo $t
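For example, given a file containing id1 and id2 on separate lines (made-up data), the awk version yields:
$ var=$(awk '1' ORS=" " file)
$ echo "$var"
id1 id2
(The ksh loop builds the same space-separated string, with a leading blank.)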

Why did my use of the read command not do what I expected?

I wreaked some havoc on my computer when I played with the commands suggested by vezult [1]. I expected the one-liner to ask for the file names to be removed. However, it immediately removed my files in a folder:
> find ./ -type f | while read x; do rm "$x"; done
I expected it to wait for my input on stdin [2]. I cannot understand its action. How does the read command work, and where do you use it?
What happened there is that read reads from stdin. When you put it at the end of a pipe, it reads from that pipe.
So the output of your find becomes
file1
file2
and so on; read reads that and replaces x successively with file1 then file2, and so your loop becomes
rm "file1"
rm "file2"
and sure enough, that rm's every file starting at the current directory ".".
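You can watch that happen with a harmless stand-in for rm:
$ printf 'file1\nfile2\n' | while read x; do echo rm "$x"; done
rm file1
rm file2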
A couple of hints.
You didn't need the "/".
It's better and safer to say
find . -type f
because should you happen to type ". /" (i.e., dot SPACE slash), find will start at the current directory and then go look starting at the root directory. That trick, given the right privileges, would delete every file in the computer. "." is already the name of a directory; you don't need to add the slash.
The find or rm commands will do this
It sounds like what you wanted to do was go through all the files in all the directories starting at the current directory ".", and have it ASK if you want to delete it. You could do that with
find . -type f -exec rm -i {} \;
or
find . -type f -ok rm {} \;
and not need a loop at all. You can also do
rm -r -i *
and get nearly the same effect, except that it will try to delete directories too. If the directory is empty, that'll even work.
Another thought
Come to think of it, unless you have a LOT of files, you could also do
rm -i `find . -type f`
Now the find in backquotes will become a bunch of file names on the command line, and the '-i' interactive flag on rm will ask the yes or no question.
Charlie Martin gives you a good dissection and explanation of what went wrong with your specific example, but doesn't address the general question of:
When should you use the read command?
The answer to that is - when you want to read successive lines from some file (quite possibly the standard output of some previous sequence of commands in a pipeline), possibly splitting the lines into several separate variables. The splitting is done using the current value of '$IFS', which normally means on blanks and tabs (newlines don't count in this context; they separate lines). If there are multiple variables in the read command, then the first word goes into the first variable, the second into the second, ..., and the residue of the line into the last variable. If there's only one variable, the whole line goes into that variable.
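A short illustration of that splitting (works in ksh and bash alike):
$ echo 'alpha beta gamma delta' | while read a b rest; do echo "a=$a b=$b rest=$rest"; done
a=alpha b=beta rest=gamma delta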
There are many uses. This is one of the simpler scripts I have that uses the split option:
#!/bin/ksh
#
# @(#)$Id: mkdbs.sh,v 1.4 2008/10/12 02:41:42 jleffler Exp $
#
# Create basic set of databases
MKDUAL=$HOME/bin/mkdual.sql
ELEMENTS=$HOME/src/sqltools/SQL/elements.sql
cat <<! |
mode_ansi with log mode ansi
logged with buffered log
unlogged
stores with buffered log
!
while read dbs logging
do
if [ "$dbs" = "unlogged" ]
then bw=""; cw=""
else bw="-ebegin"; cw="-ecommit"
fi
sqlcmd -xe "create database $dbs $logging" \
$bw -e "grant resource to public" -f $MKDUAL -f $ELEMENTS $cw
done
The cat command with a here-document has its output sent to a pipe, so the output goes into the while read dbs logging loop. The first word goes into $dbs and is the name of the (Informix) database I want to create. The remainder of the line is placed into $logging. The body of the loop deals with unlogged databases (where begin and commit do not work), then runs a program sqlcmd (completely separate from the Microsoft newcomer of the same name; it's been around since about 1990) to create a database and populate it with some standard tables and data - a simulation of the Oracle 'dual' table, and a set of tables related to the 'table of elements'.
Other scripts that use the read command are bigger (by far), but generally read lines containing one or more file names and some other attributes of relevance, and then apply an appropriate transform to the files using the attributes.
Osiris JL: file * | grep 'sh.*script' | sed 's/:.*//' | xargs wgrep read
esqlcver:read version letter
jlss: while read directory
jlss: read x || exit
jlss: read x || exit
jlss: while read file type link owner group perms
jlss: read x || exit
jlss: while read file type link owner group perms
kb: while read size name
mkbod: while read directory
mkbod:while read dist comp
mkdbs:while read dbs logging
mkmsd:while read msdfile master
mknmd:while read gfile sfile version notes
publictimestamp:while read name type title
publictimestamp:while read name type title
Osiris JL:
'Osiris JL: ' is my command line prompt; I ran this in my 'bin' directory. 'wgrep' is a variant of grep that only matches entire words (to avoid words like 'already'). This gives some indication of how I've used it.
The 'read x || exit' lines are for an interactive script that reads a response from standard input, but exits if the command gets EOF (for example, if standard input comes from /dev/null).
