How can I rename a directory by interchanging the digits and word in the directory name?
e.g.
FRA-DEV_007583-K4C-rdf-1
FRA-DEV_007583-K4C-source-8
FRA-DEV_007584-K4C-rdf-19
FRA-DEV_007584-K4C-rdf-8
output should be
FRA-DEV_007583-K4C-1-rdf
FRA-DEV_007583-K4C-8-source
FRA-DEV_007584-K4C-19-rdf
FRA-DEV_007584-K4C-8-rdf
If you have all those files in the same directory, with no other files in there, you could use this script:
#!/bin/bash
nums=(`ls $1 | cut -d- -f5`)       # trailing digits, e.g. 1
words=(`ls $1 | cut -d- -f4`)      # word, e.g. rdf
files=(`ls $1 | cut -d- -f1-3`)    # prefix, e.g. FRA-DEV_007583-K4C
complete_files=(`ls $1`)
len=${#complete_files[@]}
for (( i=0; i<len; i++ ))
do
    newname=${files[$i]}-${nums[$i]}-${words[$i]}
    mv "$1${complete_files[$i]}" "$1$newname"
done
Save this script as rename.sh in a directory OUTSIDE of the one where your files are. Then execute: bash rename.sh path/to/your/files/ (don't forget the final slash), and make a backup first just in case.
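The same swap can also be done with shell parameter expansion alone, which avoids parsing ls and survives spaces in names. A sketch, assuming every name ends in exactly one "-word-digits" pair; the helper name swap_fields is made up:

```shell
#!/bin/bash
# swap_fields DIR: rename every entry PREFIX-word-NUM under DIR
# to PREFIX-NUM-word using parameter expansion only.
swap_fields() {
    local d name num rest word prefix
    for d in "${1:-.}"/*-*-*; do
        [ -e "$d" ] || continue       # glob matched nothing
        name=${d##*/}                 # FRA-DEV_007583-K4C-rdf-1
        num=${name##*-}               # 1
        rest=${name%-*}               # FRA-DEV_007583-K4C-rdf
        word=${rest##*-}              # rdf
        prefix=${rest%-*}             # FRA-DEV_007583-K4C
        mv "$d" "${d%/*}/$prefix-$num-$word"
    done
}
```

Call it as swap_fields path/to/your/files; no trailing slash is needed, and the quoting keeps names with spaces intact.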
Related
I have been trying to count all the empty folders in a certain directory (sub-directories excluded). I used the code below, but I don't know how to distinguish empty folders from folders that contain files.
echo "$(ls -l | egrep -l $1/* | wc -l)"
The $1 will be the user's argument on the command line, for example: ./script.sh ~/Desktop/backups/March2021.
Edit - I'm not allowed to use the find command.
Edit 2 - ls -l * | awk '/total 0/{print last}{last=$0}' | wc -l runs, but it lists all folders whether they contain files and data or are empty.
What about this:
grep -v "." *
I mean the following: "." matches any single character (I'm not sure the syntax is correct), so basically you look for every file that does not contain any character at all.
You should not parse ls (it breaks on directory or file names containing newlines), so this solution is only for the assignment:
ls -d */ */* | cut -d/ -f1 | sort | uniq -u | wc -l
Explanation:
ls -d */ shows all directories. This is combined with ls -d */* which will also show contents in the directories.
The resulting output will show all directories.
Empty directories will be shown only once, so you want to look for unique lines.
With the cut you only see the name of the directory, not the files in the directory.
The sort could be skipped here, the ls will give sorted output. When you change the solution to find (next assignment?) the sort might be needed.
uniq can look for lines that occur once. The flag -u removes all lines that have duplicates, so it will show the unique lines in the output.
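If plain globs are allowed, the same count can be computed without parsing ls at all. A sketch assuming Bash; the function name count_empty_dirs is invented:

```shell
#!/bin/bash
# Count directories directly under the given path (default ".") that
# contain no entries at all - no find, no ls parsing.
shopt -s nullglob dotglob    # empty globs expand to nothing; include dotfiles

count_empty_dirs() {
    local d entries count=0
    for d in "${1:-.}"/*/; do
        entries=("$d"*)              # zero elements when the dir is empty
        if (( ${#entries[@]} == 0 )); then
            count=$((count + 1))
        fi
    done
    echo "$count"
}
```

count_empty_dirs ~/Desktop/backups/March2021 prints the count; because of dotglob, a directory containing only hidden files counts as non-empty.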
I'm trying to list some files, but I only want the file names, in order of file date. I've tried a few commands but they don't seem to work.
I know that using this code I can list only the file names:
ls -f *
And I know that using this command I can list the files sorted by date:
ls -ltr *
So I have tried using this command to list the file names only, sorted by file date, but it doesn't sort by date:
ls -ltr -f *
That last command simply lists the file names, but sorted by file name, not date.
Any ideas how I can do this with a simple ls command?
FYI, once I get this working my ultimate goal is to only list the most recently created 10 file names, using something like this:
ls -ltr -f * | tail -10
You could try the following command:
ls -ltr | awk '{ print $9 }' | tail -n +2
It extracts the file names (the 9th column) from the ls -ltr output; tail -n +2 drops the "total" line at the top.
According to the manual for ls, the -f flag is used to,
-f do not sort, enable -aU, disable -ls --color
One way of extracting only files would be,
ls -p | grep -v /
The option -p is used to append a '/' to directory names, so we can grep for lines not containing a '/'.
To extract 10 most recently used files you could do the following
ls -ptr * | grep -v / | tail -10
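For the stated end goal, a newest-first variant of the pipeline above would be (still a sketch that, like anything built on ls output, breaks on file names containing newlines):

```shell
# Names of the 10 most recently modified files, newest first;
# -p marks directories with a trailing /, which grep -v filters out
ls -tp | grep -v / | head -n 10
```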
I have thousands of files named "DOCUMENT.PDF" and I want to rename them based on a numeric identifier in the path. Unfortunately, I don't seem to have access to the rename command.
Three examples:
/000/000/002/605/950/ÐÐ-02605950-00001/DOCUMENT.PDF
/000/000/002/591/945/ÐÐ-02591945-00002/DOCUMENT.PDF
/000/000/002/573/780/ÐÐ-02573780-00002/DOCUMENT.PDF
To be renamed as, without changing their parent directory:
2605950.pdf
2591945.pdf
2573780.pdf
Use a for loop, and then use the mv command:
for file in */DOCUMENT.PDF    # adjust the glob to match where the files live
do
    num=$(echo "$file" | awk -F "/" '{print $(NF-1)}' | cut -d "-" -f2)
    mv "$file" "$(dirname "$file")/$num.pdf"
done
You could do this with globstar in Bash 4.0+:
cd _your_base_dir_
shopt -s globstar
for file in **/DOCUMENT.PDF; do # loop picks only DOCUMENT.PDF files
# here, we assume that the serial number is extracted from the 7th component in the directory path - change it according to your need
# and we don't strip out the leading zero in the serial number
new_name=$(dirname "$file")/$(cut -f7 -d/ <<< "$file" | cut -f2 -d-).pdf
echo "Renaming $file to $new_name"
# mv "$file" "$new_name" # uncomment after verifying
done
See this related post that talks about a similar problem: How to recursively traverse a directory tree and find only files?
I want to write a script that adds '0' at the end of the file names that don't have it.
This is what I wrote:
#!/bin/bash
for file in $1
do
echo $file
ls "$file" | grep "0$"
if [ "$?" = "1" ]
then
    : # rename should happen here, but this is where I'm stuck
fi
done
I don't know how to target the files in a way that I can rename them.
for file in *[!0]; do mv "$file" "${file}0"; done
For each name that does not end 0, rename it so it does. Note that this handles names with spaces etc in them.
I want to give the script a directory, and it will rename the files in it that do not end in 0. How can I use this in a way I can tell the script which directory to work with?
So, make the trivial necessary changes, working with a single directory (and not rejecting the command line if more than one directory is specified; just quietly ignoring the extras):
for file in "${1:?}"/*[!0]; do mv "$file" "${file}0"; done
The "${1:?}" notation ensures that $1 is set and is not empty, generating an error message if it isn't. You could alternatively write "${1:-.}" instead; that would fall back to the current directory when no directory is given. The glob then generates the list of file names in that directory that do not end with a 0 and renames them so that they do. If you have Bash, you can use shopt -s nullglob so that you won't run into problems if there are no files without the 0 suffix in the directory.
You can generalize to handle any number of arguments (all supposed to be directories, defaulting to the current directory if no directory is specified):
for dir in "${@:-.}"
do
for file in "$dir"/*[!0]; do mv "$file" "${file}0"; done
done
Or (forcing directories):
for dir in "${@:-.}"
do
(cd "$dir" && for file in *[!0]; do mv "$file" "${file}0"; done)
done
This has the merit of reporting which arguments are not directories, or are inaccessible directories.
There are endless variations of this sort that could be made; some of them might even be useful.
Now, I want to do the same but, instead of the file ending with '0', the script should rename files that do not end with '.0' so that they do end with '.0'?
This is slightly trickier because of the revised ending. Simply using [!.][!0] is insufficient. For example, if the list of files includes 30, x.0, x0, z.9, and z1, then echo *[!.][!0] only lists z1 (omitting 30, x0 and z.9 which do not end with .0).
I'd probably use something like this instead:
for dir in "${@:-.}"
do
(
cd "$dir" &&
for file in *
do
case "$file" in
(*.0) : skip it;;
(*) mv "$file" "${file}.0";;
esac
done
)
done
The other alternative lists more glob patterns:
for dir in "${@:-.}"
do
(cd "$dir" && for file in *[!.][!0] *.[!0] *[!.]0; do mv "$file" "${file}.0"; done)
done
Note that this rapidly gets a lot trickier if you want to look for files not ending .00 — there would be 7 glob expressions (but the case variant would work equally straightforwardly), and shopt -s nullglob becomes increasingly important (or you need [ -f "$file" ] && mv "$file" "${file}.0" instead of the simpler move command).
I've got a file like "index.txt" that contains a few lines of book titles, like:
book 1.pdf
book 1.opf
book2.epub
book3.opf
and so on, 1 title = 1 line.
I'd like to do this in bash:
rm -rf from $dir IF $file IS NOT in index.txt
How can I do this?
You can use the below command.
find <dir> -name "*" | grep -vFf index.txt | xargs rm -rf
find : lists all the files in the specified directory.
grep -vFf : inverse grep with fixed-string patterns read from a file (here index.txt); it passes through only the files that are not listed there.
xargs rm -rf : deletes each file that is not found in the list, i.e. the output of the previous grep command.
Edit:
When file names contain white space, use the command below.
find <dir> -name "*" | grep -vFf index.txt |sed 's/^/"/;s/$/"/' | xargs rm -rf
sed adds quotes around each file name.
you can remove the files listed in a txt file using
#!/bin/sh
# assumes $dir is set to the target directory
while read line
do
    rm "$dir/$line"
done < list.txt
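A loop-based sketch combining the two answers above, safe for names containing spaces (the function name delete_unlisted and its two arguments are made up for illustration):

```shell
#!/bin/sh
# delete_unlisted DIR INDEXFILE: remove every regular file in DIR whose
# name does not appear as a whole line in INDEXFILE. The index file
# itself is never deleted, even if it lives inside DIR.
delete_unlisted() {
    for f in "$1"/*; do
        [ -f "$f" ] || continue       # skip directories and such
        [ "$f" = "$2" ] && continue   # never delete the index itself
        if ! grep -qxF "$(basename "$f")" "$2"; then
            rm -- "$f"
        fi
    done
}
```

grep -x matches whole lines and -F treats each title as a fixed string, so names containing dots are not misread as regular expressions.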