I am using macOS Big Sur 11.4 and recently switched to zsh, but I ran into some trouble using wildcards in it. Suppose I have a directory with the files
1 2 3 1file 2file file1 file2 file3 and I want to list the files not starting with digits. In bash it works fine as follows:
Steves-Mac:test hengyuan$ cd test/dir3/
Steves-Mac:dir3 hengyuan$ ls
1 1file 2 2file 3 file1 file2 file3
Steves-Mac:dir3 hengyuan$ ls [[:digit:]]*
1 1file 2 2file 3
Steves-Mac:dir3 hengyuan$ ls [![:digit:]]*
file1 file2 file3
However, I got the following results in zsh:
➜ dir3 ls
1 1file 2 2file 3 file1 file2 file3
➜ dir3 ls [[:digit:]]*
1 1file 2 2file 3
➜ dir3 ls [![:digit:]]*
zsh: event not found: [
Why did I get these strange results, and how can I fix them? Thank you.
In zsh, ! invokes history expansion. You can use [^...] instead, which means the same thing as [!...]. You can also escape the exclamation mark ([\!...]) or use the special sequence !", which disables history expansion for the rest of the current command line.
So these are all equivalent:
ls [^[:digit:]]*
ls [\![:digit:]]*
ls !" [![:digit:]]*
To completely disable history expansion you can run setopt nobanghist.
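For example, after turning off history expansion, the original bash-style pattern works unchanged (a quick check in an interactive zsh session, assuming the same test directory as above):
➜ dir3 setopt nobanghist
➜ dir3 ls [![:digit:]]*
file1 file2 file3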
I have 3 files:
File1, File2, File3
I want to copy the content of File1 to File2 and File3 in a single command.
Is it possible with the cat command?
If yes, how; if not, which command should I use for this task?
Maybe this command can help you:
cat file1.txt >> file2.txt && cat file1.txt >> file3.txt
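Note that >> appends to the target files. If file2.txt and file3.txt should end up containing exactly the content of file1.txt, overwrite with > instead:
cat file1.txt > file2.txt && cat file1.txt > file3.txt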
Use tee:
$ cat file1 | tee file2 > file3
man tee:
NAME
tee - read from standard input and write to standard output and files
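Since tee accepts several output files at once, you can also write both copies in one step without the redirection (a sketch; stdout is discarded because tee additionally echoes its input there):
tee file2 file3 < file1 > /dev/null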
file1 contains
a,b,c,d,e
1,2,3,4,5
0,0,0,1,2
file2 contains
12,12,11,a,b,c,d,e,f,22,33,22
11,22,22,1,2,3,4,5,33,22,33,ww
I would like the entire line from file2 to be printed whenever a pattern from file1 is found in it.
So far I have tried
grep -f file1 file2
grep -F
but they do not seem to work.
$ grep -Ff file1 file2
12,12,11,a,b,c,d,e,f,22,33,22
11,22,22,1,2,3,4,5,33,22,33,ww
Because your patterns appear to be fixed strings, not regular expressions, I added the -F flag.
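If short patterns ever match in the middle of unrelated fields, grep's -w flag restricts matches to whole words (a sketch with the same file names; -w is supported by GNU and BSD grep, though it is not required by POSIX):
grep -Fwf file1 file2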
Given two folders, /folder1 and /folder2, each containing some files and subfolders.
I used the following command to compare the file differences, including subfolders:
diff -buf /folder1 /folder2
which found no differences in terms of folder and file structure.
However, I found that there are some permission differences between the files in these two folders. Is there a simple way/command to compare the permissions of each file under these two folders (including subfolders) on Unix?
Thanks.
If you have the tree command installed, it can do the job very simply using a similar procedure to the one that John C suggested:
cd a
tree -dfpiug > ../a.list
cd ../b
tree -dfpiug > ../b.list
cd ..
diff a.list b.list
Or, you can just do this on one line:
diff <(cd a; tree -dfpiug) <(cd b; tree -dfpiug)
The options given to tree are as follows:
-d only scans directories (omit to compare files as well)
-f displays the full path
-p displays permissions (e.g., [drwxrwsr-x])
-i removes tree's normal hierarchical indent
-u displays the owner's username
-g displays the group name
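If tree is not installed, GNU find can produce a comparable listing (a sketch, assuming GNU findutils: %M prints the symbolic permissions, %u and %g the owner and group names, %p the path):
diff <(cd a; find . -printf '%M %u %g %p\n' | sort) <(cd b; find . -printf '%M %u %g %p\n' | sort)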
One way to compare permissions on your two directories is to capture the output of ls -al to a file for each directory and diff those.
Say you have two directories called a and b.
cd a
ls -alrt > ../a.list
cd ../b
ls -alrt > ../b.list
cd ..
diff a.list b.list
If you find that this gives you too much noise due to file sizes and datestamps, you can use awk to filter out some of the columns returned by ls, e.g.:
ls -al | awk '{ printf "%s %s %s %s %s %s\n", $1, $2, $3, $4, $5, $9 }'
Or if you are lucky you might be able to remove the timestamp using:
ls -lh --time-style=+
Either way, just capture the results to two files as described above and use diff or sdiff to compare the results.
find /dirx/ -ls | awk '{ print $5" "$5" "$11 }'   # owner twice
find /dirx/ -ls | awk '{ print $6" "$6" "$11 }'   # group twice
find /dirx/ -ls | awk '{ print $5" "$6" "$11 }'   # owner and group
Then you can redirect to a file for diff, or just investigate by piping to less (or more).
You can also pipe through grep, or "ungrep" with grep -v, to narrow the results.
diff is not very useful if the directory contents are not the same.
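Putting those pieces together, a minimal permission comparison of two trees might look like this (a sketch; the field numbers assume the usual find -ls column layout, where $3 is the permission string, $5 the owner, $6 the group and $11 the path):
diff <(cd /folder1 && find . -ls | awk '{ print $3, $5, $6, $11 }' | sort) <(cd /folder2 && find . -ls | awk '{ print $3, $5, $6, $11 }' | sort)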
I am using the md5sum command to get a checksum for each file.
I want to list only the files whose content does not appear in any other file.
For example:
$ md5sum file1 file2 file3 file4
c8675a129a538248bf9b0f8104c8e817 file1
9d3df2c17bfa06c6558cfc9d2f72aa91 file2
9d3df2c17bfa06c6558cfc9d2f72aa91 file3
2e7261df11a2fcefee4674fc500aeb7f file4
I want the output to contain only the files whose checksum does not match any other, i.e. file1 and file4:
c8675a129a538248bf9b0f8104c8e817 file1
2e7261df11a2fcefee4674fc500aeb7f file4
Only a file whose content is not the same as any other file's should be listed.
Thanks in advance.
You can say:
md5sum file1 file2 file3 file4 | uniq -u -w33
in order to get the unique files.
Quoting man uniq:
-u, --unique
only print unique lines
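Note that uniq only compares adjacent lines, so this works here only because the duplicate checksums happen to sit next to each other. For arbitrary input, sort by hash first (a sketch):
md5sum file1 file2 file3 file4 | sort | uniq -u -w33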
EDIT: You seem to be looking for alternatives. Try
md5sum ... | sed ':a;$bb;N;/^\(.\).*\n\1[^\n]*$/ba;:b;s/^\(.\).*\n\1[^\n]*\n*//;ta;/./P;D'
Try this (bash):
find -type f -exec md5sum '{}' ';' | sort | uniq --all-repeated=separate -w 33 | cut -c 35-
Explanation:
Find all files, calculate their MD5 sums, find duplicates by comparing the sums, and print the names. Here -w 33 makes uniq compare only the first 33 characters of each line (the 32-character hash plus the following space), and cut -c 35- strips the hash and separator so only the file names remain.
I append multiple data files into a single data file using the cat command. How can I assign that single file's contents to a new file?
I am using the command:
cat file1 file2 file3 > Newfile.txt
AnotherFile=`cat Newfile.txt`
sort $AnotherFile | uniq -c
It shows an error like "cannot open AnotherFile".
How can I assign this new file's contents to another file?
Original answer to original question
Well, the easiest way is probably cp:
cat file1 file2 file3 > Newfile.txt
cp Newfile.txt AnotherFile.txt
Failing that, you can use:
cat file1 file2 file3 > Newfile.txt
AnotherFile=$(cat Newfile.txt)
echo "$AnotherFile" > AnotherFile.txt
Revised answer to revised question
The original question had echo "$AnotherFile" as the third line; the revised question has sort $AnotherFile | uniq -c as the third line.
Assuming that sort $AnotherFile is not sorting all the contents of the files mentioned in the list created from concatenating the original files (that is, assuming that file1, file2 and file3 do not contain just lists of file names), then the objective is to sort and count the lines found in the source files.
The whole job can be done in a single command line:
cat file1 file2 file3 | tee Newfile.txt | sort | uniq -c
Or (more usually):
cat file1 file2 file3 | tee Newfile.txt | sort | uniq -c | sort -n
which lists the lines in increasing order of frequency.
If you really do want to sort the contents of the files listed in file1, file2, file3 but only list the contents of each file once, then:
cat file1 file2 file3 | tee Newfile.txt | sort -u | xargs sort | sort | uniq -c
It looks weird having three sort-related commands in a row, but there is justification for each step. The sort -u ensures each file name is listed once. The xargs sort converts a list of file names on standard input into a list of file names on the sort command line. The output of this is the sorted data from each batch of files that xargs produces. If there are so few files that xargs doesn't need to run sort more than once, then the following plain sort is redundant. However, if xargs has to run sort more than once, then the final sort has to deal with the fact that the first lines from the second batch produced by xargs sort probably come before the last lines produced by the first batch produced by xargs sort.
This becomes a judgement call based on knowledge of the data in the original files. If the files are small enough that xargs won't need to run multiple sort commands, omit the final sort. A heuristic would be "if the sum of the sizes of the source files is smaller than the maximum command line argument list, don't include the extra sort".
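On most systems that limit can be inspected with getconf (a sketch; ARG_MAX is the POSIX name for the maximum combined size of arguments and environment passed to a new process):
getconf ARG_MAX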
You can probably do that in one go:
# Write to two files at once. Both files have a constantly varying
# content until cat is finished.
cat file1 file2 file3 | tee Newfile.txt > Anotherfile.txt
# Save the output filename, just in case you need it later
filename="Anotherfile.txt"
# This reads the contents of Newfile into a variable called AnotherText
AnotherText=`cat Newfile.txt`
# This is the same as "cat Newfile.txt"
echo "$AnotherText"
# This saves AnotherText into Anotherfile.txt
echo "$AnotherText" > Anotherfile.txt
# This too, using cp and the saved name above
cp Newfile.txt "$filename"
If you want to create the second file all in one go, this is a common pattern:
# During this process the contents of tmpfile.tmp is constantly changing
{ slow process creating text } > tmpfile.tmp
# Very quickly create a complete Anotherfile.txt
mv tmpfile.tmp Anotherfile.txt
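A slightly safer variant of the same pattern uses mktemp, so concurrent runs cannot collide on the temporary name (a sketch; the relative template keeps the temporary file on the same filesystem as the target, so the final mv stays atomic):
tmpfile=$(mktemp Anotherfile.XXXXXX) &&
cat file1 file2 file3 > "$tmpfile" &&
mv "$tmpfile" Anotherfile.txt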
Make the file and redirect to it in append mode:
touch Newfile.txt
cat files* >> Newfile.txt