How to compare two folders' permissions on Unix?

Given two folders, /folder1 and /folder2, each containing some files and subfolders, I used the following command to compare them, including subfolders:
diff -buf /folder1 /folder2
which found no differences in terms of folder and file structure.
However, I found that there are some permission differences between the files in these two folders. Is there a simple way/command to compare the permissions of each file under these two folders (including subfolders) on Unix?
Thanks.

If you have the tree command installed, it can do the job very simply using a similar procedure to the one that John C suggested:
cd a
tree -dfpiug > ../a.list
cd ../b
tree -dfpiug > ../b.list
cd ..
diff a.list b.list
Or, you can just do this on one line:
diff <(cd a; tree -dfpiug) <(cd b; tree -dfpiug)
The options given to tree are as follows:
-d only scans directories (omit to compare files as well)
-f displays the full path
-p displays permissions (e.g., [drwxrwsr-x])
-i removes tree's normal hierarchical indent
-u displays the owner's username
-g displays the group name
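Applied to the question's folders (dropping -d, per the note above, so that files are compared as well):
diff <(cd /folder1; tree -fpiug) <(cd /folder2; tree -fpiug)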

One way to compare permissions on your two directories is to capture the output of ls -al to a file for each directory and diff those.
Say you have two directories called a and b.
cd a
ls -alrt > ../a.list
cd ../b
ls -alrt > ../b.list
cd ..
diff a.list b.list
If you find that this gives you too much noise due to file sizes and datestamps, you can use awk to filter out some of the columns returned by ls, e.g.:
ls -al | awk '{printf "%s %s %s %s %s %s\n", $1,$2,$3,$4,$5,$9}'
Or, if you are lucky (GNU coreutils ls supports this), you might be able to remove the timestamp using:
ls -lh --time-style=+
Either way, just capture the results to two files as described above and use diff or sdiff to compare the results.
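If GNU find is available, a recursive variant of the same idea is possible; this is a sketch, assuming GNU findutils for -printf (%M prints symbolic permissions, %u the owner, %g the group, %p the path):
cd a
find . -printf '%M %u %g %p\n' | sort -k4 > ../a.list
cd ../b
find . -printf '%M %u %g %p\n' | sort -k4 > ../b.list
cd ..
diff a.list b.list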

find /dirx/ -ls | awk '{ print $5" "$5" "$11 }'    # the owner twice, then the path
find /dirx/ -ls | awk '{ print $6" "$6" "$11 }'    # the group twice, then the path
find /dirx/ -ls | awk '{ print $5" "$6" "$11 }'    # owner and group, then the path
(in find -ls output, $3 is the permission string, $5 the owner, $6 the group and $11 the path)
Then you can redirect the output to a file for diff, or just investigate it by piping to less (or more).
You can also pipe to grep, or "ungrep" (grep -v), to narrow the results.
diff is not very useful if the directory contents are not the same.
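One way to work around the differing path prefixes when comparing two trees is to cd into each directory and compare relative listings; a sketch, assuming bash for the process substitution:
diff <(cd /folder1 && find . -ls | awk '{ print $3" "$5" "$6" "$11 }' | sort -k4) <(cd /folder2 && find . -ls | awk '{ print $3" "$5" "$6" "$11 }' | sort -k4)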

Related

find directories

I have been trying to count all the empty folders in a certain directory, sub-directories excluded. I used the code below, but I don't know how to tell empty folders apart from folders that contain files.
echo "$(ls -l | egrep -l $1/* | wc -l)"
$1 will be the user's argument on the command line, for example: ./script.sh ~/Desktop/backups/March2021.
Edit: I'm not allowed to use the find command.
Edit 2: ls -l * | awk '/total 0/{print last}{last=$0}' | wc -l runs, but it lists all folders, whether they contain files and data or are empty.
What about this:
grep -v "." *
I mean the following: "." matches any character (I'm not sure the syntax is correct), so basically you look for every file that does not contain even a single character.
You should not parse ls (directory and file names may contain newlines), so this solution is only for the assignment:
ls -d */ */* | cut -d/ -f1 | sort | uniq -u | wc -l
Explanation:
ls -d */ shows all directories. This is combined with ls -d */*, which also shows the contents of the directories.
In the combined output, every directory appears at least once.
Empty directories appear only once, so you want to look for unique lines.
With cut you only see the name of the directory, not the files inside it.
The sort could be skipped here, since ls gives sorted output. When you change the solution to find (next assignment?) the sort might be needed.
uniq can look for lines that occur only once. The -u flag removes all lines that have duplicates, so only the unique lines remain in the output.
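For comparison, a glob-only sketch that avoids parsing ls entirely (assuming Bash; the script name count_empty.sh is hypothetical):
#!/bin/bash
# Count empty directories directly under $1; sub-directories are not descended into.
shopt -s nullglob dotglob    # unmatched globs expand to nothing; hidden entries count too
count=0
for d in "$1"/*/; do         # trailing slash: match directories only
    entries=("$d"*)          # everything inside this directory
    [ "${#entries[@]}" -eq 0 ] && count=$((count + 1))
done
echo "$count"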

Stop tar when it finds n matches using wildcards

I'm trying to extract from a huge tar file some files given in a list that uses wildcards. I'm using a loop to read the list, but moving from one element of the list to the next takes too long; I'm guessing this is because tar tries to match each element against the whole tar file. I want the loop to continue with the next element once the current one has had 2 matches.
while read line; do
    tar --wildcards -xzvf file.tar.gz "$line"
done < "$file"
One line of the list looks like this:
dataset/0113947.*
I went aggressive and killed the tar process as soon as it found two files. Here is my solution:
file=list.txt
while read line; do
    tar --wildcards --checkpoint=10000 --checkpoint-action=exec='sh stop.sh dummy.txt 1' -xzvf my_file.tar.gz "$line" > dummy.txt
done < "$file"
where stop.sh checks whether dummy.txt already has at least two lines and, if so, kills the process:
n=$(wc -l < "$1")
if [ "$n" -gt 1 ]; then
    kill $(ps aux | grep "[t]ar --wildcards" | cut -d " " -f 4)
fi
I had to use cut to recover the process ID because the single quotes needed for awk were troublesome inside the exec string.
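A sketch of a slightly more robust stop.sh, assuming pkill (from the procps/pgrep toolset) is available; matching the full command line avoids counting ps columns:
#!/bin/sh
# Kill the extracting tar once the log file reaches two lines.
n=$(wc -l < "$1")
if [ "$n" -gt 1 ]; then
    pkill -f 'tar --wildcards'    # -f matches against the full command line
fi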

Rename files based on pattern in path

I have thousands of files named "DOCUMENT.PDF" and I want to rename them based on a numeric identifier in the path. Unfortunately, I don't seem to have access to the rename command.
Three examples:
/000/000/002/605/950/ДД-02605950-00001/DOCUMENT.PDF
/000/000/002/591/945/ДД-02591945-00002/DOCUMENT.PDF
/000/000/002/573/780/ДД-02573780-00002/DOCUMENT.PDF
To be renamed as, without changing their parent directory:
2605950.pdf
2591945.pdf
2573780.pdf
Use a loop and the mv command, assuming the list of paths is in file.txt:
while IFS= read -r file; do
    num=$(printf '%s\n' "$file" | awk -F "/" '{print $(NF-1)}' | cut -d "-" -f2)
    mv "$file" "$(dirname "$file")/$num.pdf"
done < file.txt
You could do this with globstar in Bash 4.0+:
cd _your_base_dir_
shopt -s globstar
for file in **/DOCUMENT.PDF; do # loop picks only DOCUMENT.PDF files
# here, we assume that the serial number is extracted from the 7th component in the directory path - change it according to your need
# and we don't strip out the leading zero in the serial number
new_name=$(dirname "$file")/$(cut -f7 -d/ <<< "$file" | cut -f2 -d-).pdf
echo "Renaming $file to $new_name"
# mv "$file" "$new_name" # uncomment after verifying
done
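The desired names in the question also have the leading zero stripped; a sketch of how to do that inside the loop (assuming the extracted serial is purely numeric) is to force base-10 arithmetic:
num=$(cut -f7 -d/ <<< "$file" | cut -f2 -d-)
new_name=$(dirname "$file")/$((10#$num)).pdf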
See this related post that talks about a similar problem: How to recursively traverse a directory tree and find only files?

Print labels using awk

On my FreeBSD 10.1 I'm writing a little piece of code that basically calls ls and automatically breaks the results down into something like this:
directory:
2.4M .git
528K src
380K dist
184K test
file:
856K CONDUCT.md
20K README.md
........
Only directories and regular files need to be listed; you don't have to list . and .., but you do have to list hidden files, and the directories and files are each sorted from largest to smallest separately.
The challenge is to complete it as a one-line command without using $(cmd), &&, ||, >, >>, <, ;, & and within 12 pipes (back quotes count as well).
Currently my progress is:
ls -Alh | sort -d -h -r |
awk 'BEGIN {print "Directories:"}
NR>1 {if(substr($1,1,1)~"d")print" "$5" "$9}'
which prints correctly only up to the last directory entry. But since awk runs its actions once for every record, I can't find a way to print "file:" only once and then print the remaining output.
Well, you may have to store the files in an array and print at the end:
ls -Alh | sed 1d |
sort -h -k5r |
awk 'BEGIN {print "Directories:"}
/^d/ {print "\t"$5"\t"$9}
/^-/ {f[n++] = "\t"$5"\t"$9}
END {print "Files:";
for (i = 0; i < n; ++i) print f[i]}'
One additional problem you'll need to work out: files and dirs may have spaces in the name, and the simple $9 will be insufficient for that case.
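A sketch of one way to handle that, assuming the name always starts at field 9 of the ls -Alh output: rebuild it from the remaining fields (with the caveat that runs of spaces inside a name collapse to single spaces):
ls -Alh | sed 1d |
sort -h -k5r |
awk 'function name(  i, s) {s = $9; for (i = 10; i <= NF; i++) s = s " " $i; return s}
BEGIN {print "Directories:"}
/^d/ {print "\t"$5"\t"name()}
/^-/ {f[n++] = "\t"$5"\t"name()}
END {print "Files:"; for (i = 0; i < n; ++i) print f[i]}'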

Diff files present in two different directories

I have two directories with the same list of files. I need to compare all the files present in both the directories using the diff command. Is there a simple command line option to do it, or do I have to write a shell script to get the file listing and then iterate through them?
You can use the diff command for that:
diff -bur folder1/ folder2/
This will output a recursive diff that ignores spaces, with a unified context:
b flag means ignoring whitespace
u flag means a unified context (3 lines before and after)
r flag means recursive
If you are only interested to see the files that differ, you may use:
diff -qr dir_one dir_two | sort
Option "q" will only show the files that differ, not the content that differs, and "sort" will arrange the output alphabetically.
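The output then looks like this (file names here are illustrative):
Files dir_one/config.ini and dir_two/config.ini differ
Only in dir_two: extra.log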
Diff has an option -r which is meant to do just that.
diff -r dir1 dir2
diff can not only compare two files, it can, by using the -r option, walk entire directory trees, recursively checking differences between subdirectories and files that occur at comparable points in each tree.
$ man diff
...
-r --recursive
Recursively compare any subdirectories found.
...
Another nice option is the über-diff-tool diffoscope:
$ diffoscope a b
It can also emit diffs as JSON, HTML, Markdown, ...
If you specifically don't want to compare the contents of files and only want to check which ones are not present in both directories, you can compare lists of files generated by another command.
diff <(find DIR1 -printf '%P\n' | sort) <(find DIR2 -printf '%P\n' | sort) | grep '^[<>]'
-printf '%P\n' tells find to not prefix output paths with the root directory.
I've also added sort to make sure the order of files will be the same in both calls of find.
The grep at the end removes information about identical input lines.
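An equivalent sketch using comm, which is made for comparing sorted lists (column 1 holds entries only in DIR1, column 2 those only in DIR2):
comm -3 <(find DIR1 -printf '%P\n' | sort) <(find DIR2 -printf '%P\n' | sort)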
If it's GNU diff then you should just be able to point it at the two directories and use the -r option.
Otherwise, try using
for i in $(\ls -d ./dir1/*); do diff ${i} dir2; done
N.B. As pointed out by Dennis in the comments section, you don't actually need to do the command substitution on the ls. I've been doing this for so long that I'm pretty much doing this on autopilot and substituting the command I need to get my list of files for comparison.
Also, I forgot to add that I use '\ls' to temporarily disable my alias of ls to GNU ls, so that I don't get the colour formatting info in the listing returned by GNU ls.
When working with git/svn, or with multiple git/svn checkouts on disk, this has been one of the most useful things for me over the past 5-10 years:
diff -burN /path/to/directory1 /path/to/directory2 | grep '+++'
or:
git diff /path/to/directory1 | grep '+++'
It gives you a snapshot of the different files that were touched without having to page through the output with less or more. Then you just diff the individual files.
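In the git case, git diff --name-only gives the same list of touched files directly, without the grep:
git diff --name-only /path/to/directory1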
In practice, the question often arises together with some constraints. In that case, the following solution template may come in handy:
cd dir1
find . \( -name '*.txt' -o -iname '*.md' \) | xargs -I{} diff -u '{}' '../dir2/{}'
(after the cd, dir2 must be reachable as ../dir2, or be given as an absolute path)
Here is a script to show differences between files in two folders. It works recursively. Change dir1 and dir2 at the end.
(
  search() {
    for i in "$1"/*; do
      [ -f "$i" ] && (diff "$1/${i##*/}" "$2/${i##*/}" || echo "files: $1/${i##*/} $2/${i##*/}")
      [ -d "$i" ] && search "$1/${i##*/}" "$2/${i##*/}"
    done
  }
  search "dir1" "dir2"
)
Try this:
diff -rq /path/to/folder1 /path/to/folder2
