How to move a number of files to a directory only if files with the same names exist in another directory, in one line? - unix

Imagine that I have three directories like this:
Directory One: file1 file2 file3 file8
Directory Two (tags): file1 file3
Directory Three: empty
I want to check whether each file exists in Directory Two and, if it does, move that file from Directory One to Directory Three, in one line if possible.
Final desired output:
Directory One: file2 file8
Directory Two (tags): file1 file3
Directory Three: file1 file3
Thanks in advance.

for file in directory2/*; do mv directory1/"$(basename "$file")" directory3/; done
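A slightly more defensive sketch of the same idea (not from the original answer): guard each mv with an existence check, so names that appear in directory2 but not in directory1 are silently skipped instead of producing an error.

```shell
# For every name present in directory2, move the matching file
# from directory1 to directory3. Names with no counterpart in
# directory1 are skipped, so mv never errors out.
for file in directory2/*; do
    name=$(basename "$file")
    if [ -e "directory1/$name" ]; then
        mv "directory1/$name" directory3/
    fi
done
```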

Related

Extracting with subfolders but without parent folders

I have a zipfile structured like this:
- root
  - subfolder
    - subsubfolder
      - file1
      - file2
      - file3
      - file4
Using 7zip, I would like to extract the archive in such a way, that the resulting structure is:
- subsubfolder
  - file1
  - file2
  - file3
  - file4
My problem is, that neither 7z e nor 7z x accomplish this. When I use
7z x archive.zip root/subfolder
the structure doesn't change, as the files retain their full paths, and when I use
7z e archive.zip root/subfolder
the files retain no path at all, leaving me with a flat structure and an empty subsubfolder.
Edit: Unzip or any other kind of zip-tool I can use in Linux would also be fine.
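One possible approach (not from the original thread, and assuming bsdtar from libarchive is installed, which is common on Linux): bsdtar can extract zip archives and supports --strip-components, which drops leading path components on extraction.

```shell
# Extract only the root/subfolder subtree from the zip, dropping
# the first two path components ("root/subfolder/") so that
# subsubfolder lands at the top level of the current directory.
bsdtar -xf archive.zip --strip-components 2 root/subfolder
```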

Copy content of a file to multiple files using CAT command in UNIX

I have 3 files:
File1, File2, File3
I want to copy the content of File1 to File2 and File3 in a single command.
Is it possible with the cat command?
If yes, how? If not, which command is used for this task?
Maybe this can help:
cat file1.txt >> file2.txt && cat file1.txt >> file3.txt
Use tee:
$ cat file1 | tee file2 > file3
man tee:
NAME
tee - read from standard input and write to standard output and files
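Since tee accepts several file arguments, the intermediate redirection isn't strictly necessary; a variant that treats both targets the same way (and avoids the extra cat) would be:

```shell
# Copy file1 into file2 AND file3 in one command.
# tee duplicates its stdin into every file argument; the copy
# echoed to stdout is discarded.
tee file2 file3 < file1 > /dev/null
```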

How to find unique files using the md5sum command?

I am using the md5sum command to get a checksum for each file.
I want the output to exclude any file whose content also appears in another file.
for example
$ md5sum file1 file2 file3 file4
c8675a129a538248bf9b0f8104c8e817 file1
9d3df2c17bfa06c6558cfc9d2f72aa91 file2
9d3df2c17bfa06c6558cfc9d2f72aa91 file3
2e7261df11a2fcefee4674fc500aeb7f file4
I want the output to contain only the non-matching files, that is,
file1 and file4:
c8675a129a538248bf9b0f8104c8e817 file1
2e7261df11a2fcefee4674fc500aeb7f file4
I need only those files whose content is not the same as any other file's.
Thanks In Advance
You can say:
md5sum file1 file2 file3 file4 | uniq -u -w33
in order to get the unique files.
Quoting man uniq:
-u, --unique
only print unique lines
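One caveat not mentioned in the answer: uniq only compares adjacent lines, so if the duplicate checksums don't happen to sit next to each other in md5sum's output, sort first. A sketch reproducing the expected output:

```shell
# Sort by checksum so duplicates become adjacent, then keep only
# lines whose first 33 characters (the 32-char MD5 digest plus
# one space) appear exactly once.
md5sum file1 file2 file3 file4 | sort | uniq -u -w33
```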
EDIT: You seem to be looking for alternatives. Try
md5sum ... | sed ':a;$bb;N;/^\(.\).*\n\1[^\n]*$/ba;:b;s/^\(.\).*\n\1[^\n]*\n*//;ta;/./P;D'
Try this: BASH
find -type f -exec md5sum '{}' ';' | sort | uniq --all-repeated=separate -w 33 | cut -c 35-
Explanation:
Find all files, calculate their MD5SUM, find duplicates by comparing the MD5SUM, print the names

Appending multiple files into one file

I append multiple data files into a single data file using the cat command. How can I assign that single file value into a new file?
I am using the command:
cat file1 file2 file3 > Newfile.txt
AnotherFile=`cat Newfile.txt`
sort $AnotherFile | uniq -c
It shows an error like "cannot open AnotherFile".
How do I assign this new file's contents to another file?
Original answer to original question
Well, the easiest way is probably cp:
cat file1 file2 file3 > Newfile.txt
cp Newfile.txt AnotherFile.txt
Failing that, you can use:
cat file1 file2 file3 > Newfile.txt
AnotherFile=$(cat Newfile.txt)
echo "$AnotherFile" > AnotherFile.txt
Revised answer to revised question
The original question had echo "$AnotherFile" as the third line; the revised question has sort $AnotherFile | uniq -c as the third line.
Assuming that sort $AnotherFile is not sorting all the contents of the files mentioned in the list created from concatenating the original files (that is, assuming that file1, file2 and file3 do not contain just lists of file names), then the objective is to sort and count the lines found in the source files.
The whole job can be done in a single command line:
cat file1 file2 file3 | tee Newfile.txt | sort | uniq -c
Or (more usually):
cat file1 file2 file3 | tee Newfile.txt | sort | uniq -c | sort -n
which lists the lines in increasing order of frequency.
If you really do want to sort the contents of the files listed in file1, file2, file3 but only list the contents of each file once, then:
cat file1 file2 file3 | tee Newfile.txt | sort -u | xargs sort | sort | uniq -c
It looks weird having three sort-related commands in a row, but there is justification for each step. The sort -u ensures each file name is listed once. The xargs sort converts a list of file names on standard input into a list of file names on the sort command line. The output of this is the sorted data from each batch of files that xargs produces. If there are so few files that xargs doesn't need to run sort more than once, then the following plain sort is redundant. However, if xargs has to run sort more than once, then the final sort has to deal with the fact that the first lines from the second batch produced by xargs sort probably come before the last lines produced by the first batch produced by xargs sort.
This becomes a judgement call based on knowledge of the data in the original files. If the files are small enough that xargs won't need to run multiple sort commands, omit the final sort. A heuristic would be "if the sum of the sizes of the source files is smaller than the maximum command line argument list, don't include the extra sort".
You can probably do that in one go:
# Write to two files at once. Both files have a constantly varying
# content until cat is finished.
cat file1 file2 file3 | tee Newfile.txt > Anotherfile.txt
# Save the output filename, just in case you need it later
filename="Anotherfile.txt"
# This reads the contents of Newfile into a variable called AnotherText
AnotherText=`cat Newfile.txt`
# This is the same as "cat Newfile.txt"
echo "$AnotherText"
# This saves AnotherText into Anotherfile.txt
echo "$AnotherText" > Anotherfile.txt
# This too, using cp and the saved name above
cp Newfile.txt "$filename"
If you want to create the second file all in one go, this is a common pattern:
# During this process the contents of tmpfile.tmp is constantly changing
{ slow process creating text } > tmpfile.tmp
# Very quickly create a complete Anotherfile.txt
mv tmpfile.tmp Anotherfile.txt
Create the file, then redirect to it in append mode:
touch Newfile.txt
cat files* >> Newfile.txt

Merging file names into one text file

I have 8 files that need to be merged into one text file, with each of the file names being on a separate line.
The output should be as follows:
file.txt:
output1/transcripts.gtf
output2/transcripts.gtf
output3/transcripts.gtf
and so on...
I have read several other suggestions and I know it should be an easy fix. I have tried dir and awk, but have only gotten results with all the file names on one line. I am using Unix.
How about this?
ls -1 output*/*.gtf > file.txt
or, if your subdirectories are nested more deeply and you want all files with names ending in ".gtf":
find . -type f -name "*.gtf" -print | cut -b 3- > file.txt
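Another option worth mentioning (an alternative, not from the original answers): since the shell glob already matches the wanted paths, printf can print them one per line without parsing ls output.

```shell
# The shell expands the glob into one argument per path;
# printf then prints each argument on its own line, which is
# safe even for filenames containing spaces.
printf '%s\n' output*/transcripts.gtf > file.txt
```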
