I am trying to work out a way to delete all folders but keep one, even if it is nested.
./release/test-folder
./release/test-folder2
./release/feature/custom-header
./release/feature/footer
If I run something like:
shopt -s extglob
rm -rf release/!(test-folder2)/
or
find ./release -type d -not -regex ".*test-folder2.*" -delete
it works OK, but not when the path to keep is nested, like feature/footer:
both commands match release/feature and it gets deleted.
Can you suggest any other option that would keep the folder, no matter how nested it is?
This is not the best solution, but it works.
# $1 = root path
# $2 = pattern
findex(){
# create temp dir
T=$(mktemp -d)
# find all dirs inside "$1"
# and save it in file "a"
find "$1" -type d >$T/a
# filter file "a" by pattern "$2"
# and save it in file "b"
cat $T/a | grep "$2" >$T/b
# For each path in the file b
# add paths of the parent directories
# and save it in file "c"
cat $T/b | while read P; do
echo $P
while [[ ${#1} -lt ${#P} ]]; do
P=$(dirname "$P")
echo $P
done
done >$T/c
# make list in file "c" unique
# and save it in file "d"
cat $T/c | sort -u >$T/d;
# find from list "a" all the paths
# that are missing in the list "d"
awk 'NR==FNR{a[$0];next} !($0 in a)' $T/d $T/a
# remove temporary directory
rm -rf $T
}
# find all dirs inside ./path except those matching "pattern"
# (and their parents) and remove them (-rf, since the listed paths are directories)
findex ./path "pattern" | xargs -L1 rm -rf
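For example, with the directory layout from the question and test-folder2 as the folder to keep, running the function on its own (without the xargs part) should print only the directories that would be removed, roughly:
findex ./release "test-folder2"
# ./release/test-folder
# ./release/feature
# ./release/feature/custom-header
# ./release/feature/footer
# ./release and ./release/test-folder2 are kept (output order may differ)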
Test it
findex(){
T=$(mktemp -d)
find "$1" -type d >$T/a
cat $T/a | grep "$2" >$T/b
cat $T/b | while read P; do
echo $P
while [[ ${#1} -lt ${#P} ]]; do
P=$(dirname "$P")
echo $P
done
done >$T/c
cat $T/c | sort -u >$T/d;
# save result in file "e"
awk 'NR==FNR{a[$0];next} !($0 in a)' $T/d $T/a >$T/e
# output path of temporary directory
echo $T
}
cd $TMPDIR
for I in {000..999}; do
mkdir -p "./test/${I:0:1}/${I:1:1}/${I:2:1}";
done
T=$(findex ./test "5")
cat $T/a | wc -l # => 1111 dirs total
cat $T/d | wc -l # => 382 dirs to keep (matched + their parents)
cat $T/e | wc -l # => 729 dirs to delete
rm -rf $T ./test
I have 60 subdirs in a directory; the directory's name is, for example, test/queues.
The subdirs:
test/queues/subdir1
test/queues/subdir2
test/queues/subdir3
(...)
test/queues/subdir60
I want a command that gives me the number of files in each subdirectory, listed separately, for example:
test/queues/subdir1 - 45 files
test/queues/subdir2 - 76 files
test/queues/subdir3 - 950 files
(...)
test/queues/subdir60 - 213 files
Through my research, I only found the command ls -lat test/queues/* | wc -l, but it gives me the total number of files across all of these subdirs. For example, it returns only 4587, which is the total number of files in all 60 subdirs. I want the output to list the number of files in each folder separately.
How can I do that?
Use a loop to count the lines for every subdirectory individually:
for d in test/queues/*/
do
echo "$d" - $(ls -lat "$d" | wc -l)
done
Note that the output of ls -lat some_directory will contain a few additional lines like
total 123
drwxr-xr-x 1 user group 0 Feb 26 09:51 ../
drwxr-xr-x 1 user group 0 Jan 25 12:35 ./
If your ls command supports these options, you can use:
for d in test/queues/*/
do
echo "$d" - $(ls -A1 "$d" | wc -l)
done
You can apply ls | wc -l in a loop over all subdirs:
for x in *; do echo "$x => $(ls "$x" | wc -l)"; done
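If you also want the exact "name - N files" format asked for, a small tweak of the same loop does it (a sketch; it simply counts whatever ls prints for each directory):
for x in test/queues/*/
do
printf '%s - %d files\n' "${x%/}" "$(ls "$x" | wc -l)"
done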
If you want to restrict the output to directories that are one level deep and you only want a count of regular files, you could do:
find . -maxdepth 1 -type d -exec sh -c '
printf "%s\t" "$0"; find "$0" -maxdepth 1 -type f | wc -l' {} \; \
| column -t
You can get the "name - %d files" format with:
find . -maxdepth 1 -type d -exec sh -c '
printf "%s - %d files\n" "$0" \
"$(find "$0" -maxdepth 1 -type f | wc -l)"' {} \;
Using find and awk:
find test/queues -maxdepth 2 -mindepth 2 -printf "%h\n" | awk '{ map[$0]++ } END { for (i in map) { print i" - "map[i]} }'
Use -maxdepth and -mindepth to ensure that we only look at entries exactly two levels below test/queues, i.e. the contents of each subdirectory. Print only the leading (parent) directory of each entry through printf "%h". Pipe the output into awk and create a map array with the directories as the index, incrementing the count for each line. At the end, loop through the map array, printing the directories and the counts.
-printf is a GNU extension; on systems where find does not support it, use -exec dirname instead:
find test/queues -maxdepth 2 -mindepth 2 -exec dirname {} \; | awk '{ map[$0]++ } END { for (i in map) { print i" - "map[i]} }'
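To see what awk receives, run the find part on its own: it prints one line per entry directly inside each subdirectory, containing just that entry's parent directory (%h, or dirname), so awk only has to count how often each name repeats. A hypothetical sample:
find test/queues -maxdepth 2 -mindepth 2 -printf "%h\n"
# test/queues/subdir1
# test/queues/subdir1
# test/queues/subdir2
# ...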
I have a requirement to cut a file name into two parts.
So my file name is : 'SIC_ETL_MAIN_0.1.zip'
I want to cut the file name into parts and load them into two variables separately.
Expected Output:
SIC_ETL_MAIN - var1
0.1 - var2
Using grep:
$echo SIC_ETL_MAIN_0.1.zip | grep -o '[A-Z_]*[A-Z]'
SIC_ETL_MAIN
$echo SIC_ETL_MAIN_0.1.zip | grep -o '[0-9\.]*[0-9]'
0.1
$
Edit: Variable assignment
$var1=$(echo SIC_ETL_MAIN_0.1.zip | grep -o '[A-Z_]*[A-Z]')
$var2=$(echo SIC_ETL_MAIN_0.1.zip | grep -o '[0-9\.]*[0-9]')
$echo "Var1=${var1} Var2=${var2}"
Var1=SIC_ETL_MAIN Var2=0.1
$
If your shell happens to be bash (or another shell with this kind of parameter expansion):
a=SIC_ETL_MAIN_0.1.zip
b=${a%_*}
c=${a##*_}
d=${c%.*}
echo "$a | $b | $c | $d" # will output
SIC_ETL_MAIN_0.1.zip | SIC_ETL_MAIN | 0.1.zip | 0.1
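If you are in bash anyway, a regex match is another option. A minimal sketch, assuming the name always has the form NAME_VERSION.zip:
a=SIC_ETL_MAIN_0.1.zip
if [[ $a =~ ^(.+)_([0-9.]+)\.zip$ ]]; then
var1=${BASH_REMATCH[1]}   # SIC_ETL_MAIN
var2=${BASH_REMATCH[2]}   # 0.1
fi
echo "Var1=${var1} Var2=${var2}"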
There are lots of files in a directory, and the output needs to be grouped and sorted like below: first executable files without any file extension, then sql files ending with "body", then sql files ending with "spec", then other sql files, then "sh" files, then "txt" files.
abc
1_spec.sql
1_body.sql
2_body.sql
other.sql
a1.sh
a1.txt
find . -maxdepth 1 -type f ! -name "*.*"
find . -type f -name "*body*.sql"
find . -type f -name "*spec*.sql"
I'm finding it difficult to combine all of these and sort the groups in the required order.
With ls, grep and sort you could do something like this script I hacked together:
#!/bin/sh
ls | grep -v '\.[a-zA-Z0-9]*$' | sort
ls | grep '_body\.sql$' | sort
ls | grep '_spec\.sql$' | sort
ls | grep -vE '_body\.sql$|_spec\.sql$' | grep '\.sql$' | sort
ls | grep '\.sh$' | sort
ls | grep '\.txt$' | sort
normal ls:
$ ls -1
1_body.sql
1_spec.sql
2_body.sql
a1.sh
a1.txt
abc
bar.sql
def
foo.sh
other.sql
script
$
sorting script:
$ ./script
abc
def
script
1_body.sql
2_body.sql
1_spec.sql
bar.sql
other.sql
a1.sh
foo.sh
a1.txt
$
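If you prefer to build on the find commands from the question instead of ls, roughly the same grouping can be written like this (a sketch; -maxdepth 1 keeps it to the current directory, and the paths come out with a leading ./):
#!/bin/sh
find . -maxdepth 1 -type f ! -name "*.*" | sort
find . -maxdepth 1 -type f -name "*_body.sql" | sort
find . -maxdepth 1 -type f -name "*_spec.sql" | sort
find . -maxdepth 1 -type f -name "*.sql" ! -name "*_body.sql" ! -name "*_spec.sql" | sort
find . -maxdepth 1 -type f -name "*.sh" | sort
find . -maxdepth 1 -type f -name "*.txt" | sort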
I'm trying to find a better way to determine which files in a given directory contain all of a given set of search strings. The way I'm currently doing it seems awkward.
For example, if I want to find which files contain "aaa", "bbb", "ccc", and "ddd" I would do:
grep -l "aaa" * > out1
grep -l "bbb" `cat out1` > out2
grep -l "ccc" `cat out2` > out3
grep -l "ddd" `cat out3` > out4
cat out4
rm out1 out2 out3 out4
As you can see, this seems clumsy. Any better ideas?
EDIT: I'm on a Solaris 10 machine
You can use xargs to chain the grep calls together:
grep -l "aaa" * | xargs grep -l "bbb" | xargs grep -l "ccc" | xargs grep -l "ddd"
Something along these lines may help:
for file in * ; do
matchall=1
for pattern in aaa bbb ccc ddd ; do
grep "$pattern" "$file" >/dev/null || { matchall=0; break ; }
done
if [ "$matchall" -eq "1" ]; then echo "maching all : $file" ; fi
done
(you can add patterns by replacing aaa bbb ccc ddd with something like $(cat patternfile))
For those interested: it 1) loops over each file, and 2) for each file it assumes all patterns will match and loops over the patterns; as soon as a pattern doesn't appear in the file, that inner loop is exited, the name of that file is not printed, and it goes on to check the next file. I.e., it only prints a file that has gone through all the patterns without anything setting "matchall" to 0.
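The same loop can also read the search strings from a file, one per line; unlike $(cat patternfile), this works even when a pattern contains spaces. A sketch, where patterns.txt is a hypothetical file name:
for file in * ; do
matchall=1
while IFS= read -r pattern ; do
grep "$pattern" "$file" >/dev/null || { matchall=0; break ; }
done < patterns.txt
if [ "$matchall" -eq "1" ]; then echo "matching all : $file" ; fi
done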
The Unix cut command takes a list of fields, but not the order that I need it in.
$ echo 1,2,3,4,5,6 | cut -d, -f 1,2,3,5
1,2,3,5
$ echo 1,2,3,4,5,6 | cut -d, -f 1,3,2,5
1,2,3,5
However, I would like a Unix shell command that will give me the fields in the order that I specify.
Use:
pax> echo 1,2,3,4,5,6 | awk -F, 'BEGIN {OFS=","}{print $1,$3,$2,$5}'
1,3,2,5
or:
pax> echo 1,2,3,4,5,6 | awk -F, -vOFS=, '{print $1,$3,$2,$5}'
1,3,2,5
Or just use the shell:
$ set -f
$ string="1,2,3,4,5"
$ IFS=","
$ set -- $string
$ echo $1 $3 $2 $5
1 3 2 5
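Note that set -f and the changed IFS stay in effect in the current shell afterwards; running the same thing in a subshell keeps those changes contained (a minimal sketch):
string="1,2,3,4,5"
( set -f; IFS=","; set -- $string; echo "$1 $3 $2 $5" )
# prints: 1 3 2 5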
The awk-based solution is elegant. Here is a Perl-based solution:
echo 1,2,3,4,5,6 | perl -e '@order=(1,3,2,5);@a=split/,/,<>;for(@order){print $a[$_-1];}'
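If you want the output comma-separated with a trailing newline, an array slice keeps the same idea short (a sketch):
echo 1,2,3,4,5,6 | perl -e '@o=(1,3,2,5); @a=split /,/, <>; print join(",", @a[map { $_ - 1 } @o]), "\n";'
# prints: 1,3,2,5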