I would like to concatenate a number of text files into one large file in the terminal. I know I can do this using the cat command. However, I would like the filename of each file to precede the "data dump" for that file. Does anyone know how to do this?
What I currently have:
file1.txt = bluemoongoodbeer
file2.txt = awesomepossum
file3.txt = hownowbrowncow
cat file1.txt file2.txt file3.txt
desired output:
file1
bluemoongoodbeer
file2
awesomepossum
file3
hownowbrowncow
I was looking for the same thing and found this suggestion:
tail -n +1 file1.txt file2.txt file3.txt
Output:
==> file1.txt <==
<contents of file1.txt>
==> file2.txt <==
<contents of file2.txt>
==> file3.txt <==
<contents of file3.txt>
If there is only a single file, the header will not be printed. With the GNU utilities, you can use -v to always print a header.
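For example, with GNU tail you can force the header even for a single file:
$ tail -v -n +1 file1.txt
==> file1.txt <==
bluemoongoodbeer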
I used grep for something similar:
grep "" *.txt
It does not give you a 'header', but prefixes every line with the filename.
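With the sample files from the question, that looks like:
$ grep "" *.txt
file1.txt:bluemoongoodbeer
file2.txt:awesomepossum
file3.txt:hownowbrowncow
(If only one file matches, grep omits the prefix; with GNU grep you can add -H to force it.)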
This should do the trick as well:
$ find . -type f -print -exec cat {} \;
./file1.txt
Content of file1.txt
./file2.txt
Content of file2.txt
Here is the explanation for the command-line arguments:
find = the Linux `find` command, which finds filenames; see `man find` for more info
. = in the current directory
-type f = only files, not directories
-print = print each found filename
-exec = additionally execute another command
cat = the `cat` command, which displays file contents; see `man cat`
{} = placeholder for the currently found filename
\; = tells `find` that the -exec command ends here
You can further combine searches with boolean operators such as -and or -or. find -ls is nice, too.
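For example, to dump only .txt and .log files (the two patterns here are just for illustration):
find . -type f \( -name '*.txt' -or -name '*.log' \) -print -exec cat {} \;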
When there is more than one input file, the more command concatenates them and also includes each filename as a header.
To concatenate to a file:
more *.txt > out.txt
To concatenate to the terminal:
more *.txt | cat
Example output:
::::::::::::::
file1.txt
::::::::::::::
This is
my first file.
::::::::::::::
file2.txt
::::::::::::::
And this is my
second file.
This should do the trick:
for filename in file1.txt file2.txt file3.txt; do
echo "$filename"
cat "$filename"
done > output.txt
or to do this for all text files recursively:
find . -type f -name '*.txt' -print | while IFS= read -r filename; do
echo "$filename"
cat "$filename"
done > output.txt
find . -type f -print0 | xargs -0 -I % sh -c 'echo %; cat %'
This will print the full filename (including path), then the contents of the file. It is also very flexible, as you can use -name "expr" for the find command, and run as many commands as you like on the files.
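For instance, here is a sketch that limits the search to .txt files and runs an extra echo to leave a blank separator line after each file (the pattern and the extra echo are illustrative):
find . -type f -name '*.txt' -print0 | xargs -0 -I % sh -c 'echo %; cat %; echo'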
And the missing awk solution is:
$ awk '(FNR==1){print ">> " FILENAME " <<"}1' *
FNR==1 is true on the first line of each input file, so the header is printed once per file; the trailing 1 is awk shorthand for "print every line".
This is how I normally handle formatting like that:
for i in *; do echo "$i"; echo ; cat "$i"; echo ; done ;
I generally pipe the cat into a grep for specific information.
I like this option:
for x in ./*.php; do echo "$x"; grep -i 'menuItem' "$x"; done
Output looks like this:
./debug-things.php
./Facebook.Pixel.Code.php
./footer.trusted.seller.items.php
./GoogleAnalytics.php
./JivositeCode.php
./Live-Messenger.php
./mPopex.php
./NOTIFICATIONS-box.php
./reviewPopUp_Frame.php
$('#top-nav-scroller-pos-<?=$activeMenuItem;?>').addClass('active');
gotToMenuItem();
./Reviews-Frames-PopUps.php
./social.media.login.btns.php
./social-side-bar.php
./staticWalletsAlerst.php
./tmp-fix.php
./top-nav-scroller.php
$activeMenuItem = '0';
$activeMenuItem = '1';
$activeMenuItem = '2';
$activeMenuItem = '3';
./Waiting-Overlay.php
./Yandex.Metrika.php
You can use this simple command instead of a for loop:
ls -ltr | awk '{print $9}' | xargs head
(This parses the output of ls, so it can misbehave with filenames that contain spaces.)
If the files all have the same name or can be matched by find, you can do (e.g.):
find . -name create.sh | xargs tail -n +1
to find each file, show its path, and cat it.
If you like colors, try this:
for i in *; do echo; echo $'\e[33;1m'"$i"$'\e[0m'; cat "$i"; done | less -R
or:
tail -n +1 * | grep -e $ -e '==.*'
or: (with package 'multitail' installed)
multitail *
Here is a really simple way. You said you want to cat, which implies you want to view the entire file. But you also need the filename printed.
Try this
head -n99999999 *
or
head -n99999999 file1.txt file2.txt file3.txt
Hope that helps
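If you have GNU head, a negative count avoids the magic number: -n -NUM means "all but the last NUM lines", so -n -0 prints everything, headers included:
head -n -0 *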
If you want to replace those ugly ==> <== markers with something else:
tail -n +1 *.txt | sed -e 's/==>/\n###/g' -e 's/<==/###/g' >> "files.txt"
explanation:
tail -n +1 *.txt - output all files in folder with header
sed -e 's/==>/\n###/g' -e 's/<==/###/g' - replace ==> with new line + ### and <== with just ###
>> "files.txt" - output all to a file
find . -type f -exec cat {} \; -print
(Because -print comes after -exec here, each filename is printed after its file's contents.)
AIX 7.1 ksh
Glomming onto those who've already mentioned head, since it works for some of us:
$ r head
head file*.txt
==> file1.txt <==
xxx
111
==> file2.txt <==
yyy
222
nyuk nyuk nyuk
==> file3.txt <==
zzz
$
My need was just to read the first line; as noted, if you want more than 10 lines you'll have to add options (head -9999, etc.).
Sorry for posting a derivative answer; I don't have sufficient street cred to comment on someone else's answer.
I made a combination of:
cat /sharedpath/{unique1,unique2,unique3}/filename > newfile
and
tail -n +1 file1 file2
into this:
tail -n +1 /sharedpath/{folder1,folder2,...,folder_n}/file.extension | cat > /sharedpath/newfile
The result is a new file that contains the content from each subfolder (unique1, unique2, ...) named in the braces, separated by subfolder name.
(Note: unique1 = folder1, and so on.)
In my case file.extension has the same name in all subfolders.
If you want the result in the same format as your desired output you can try:
for file in file{1..3}.txt; do
    echo "$file" | cut -d '.' -f 1
    cat "$file"
done
Result:
file1
bluemoongoodbeer
file2
awesomepossum
file3
hownowbrowncow
You can put an echo before and after the cat so you have spacing between the lines as well:
$ for file in file{1..3}.txt; do echo "$file" | cut -d '.' -f 1; echo; cat "$file"; echo; done
Result:
file1

bluemoongoodbeer

file2

awesomepossum

file3

hownowbrowncow
This method will print the filename and then the file contents:
tail -f file1.txt file2.txt
(Note that -f follows the files for new data, so the command will not exit on its own; press Ctrl-C to stop it, or use tail -n +1 for a one-shot dump.)
Output:
==> file1.txt <==
contents of file1.txt ...
contents of file1.txt ...
==> file2.txt <==
contents of file2.txt ...
contents of file2.txt ...
For tasks like this I usually use the following command:
$ cat file{1..3}.txt >> result.txt
It's a very convenient way to concatenate files if the number of files is quite large.
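If you are unsure what the braces expand to, echo shows it:
$ echo file{1..3}.txt
file1.txt file2.txt file3.txt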
First I created each file: echo 'information' > file1.txt, and likewise for each of file[123].txt.
Then I printed each file to make sure the information was correct:
tail file?.txt
Then I did this: tail file?.txt >> Mainfile.txt. This created Mainfile.txt, storing the information from each file in one main file.
cat Mainfile.txt confirmed it was okay:
==> file1.txt <==
bluemoongoodbeer
==> file2.txt <==
awesomepossum
==> file3.txt <==
hownowbrowncow
Related
I would like to print the first 2 rows from all the files in a directory, along with each file name.
All the files have the .gz extension; there are around 100 files in the directory.
sample_jan.csv.gz
10,Jan,100
30,Jan,300
50,Jan,500
sample_feb.csv.gz
10,Feb,200
20,Feb,400
40,Feb,800
60,Feb,1200
Expected Output:
Filename:sample_jan.csv.gz
10,Jan,100
30,Jan,300
Filename:sample_feb.csv.gz
10,Feb,200
20,Feb,400
I tried the command below, but the filename appears blank:
zcat sample_jan.csv.gz | awk 'FNR==1{print "Filename:" FILENAME} FNR<3' > Output.txt
Filename:-
10,Jan,100
30,Jan,300
I tried the command below, but the filename appears wrong:
awk 'FNR==1{print "Filename:" FILENAME} FNR<3' <(gzip -dc sample_jan.csv.gz) > Output.txt
Filename:/dev/fd/63
10,Jan,100
30,Jan,300
Looking for your suggestions; I don't have Perl or Python.
You can use this one-liner:
for file in *.gz; do echo "Filename: $file"; zcat "$file" | head -2 ; done
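Your first attempt printed a blank filename because awk was reading zcat's output from a pipe, so FILENAME is just "-"; the second printed /dev/fd/63 because that is the path of the process substitution. With the loop above and your sample data, the output should be:
Filename: sample_jan.csv.gz
10,Jan,100
30,Jan,300
Filename: sample_feb.csv.gz
10,Feb,200
20,Feb,400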
#!/bin/bash
delete_file () {
for file in processor_list.txt currnet_username.txt unique_username.txt
do
if [ -e $file ] ;then
rm $file
fi
done
}
delete_file
ps -elf > processor_list.txt ; chmod 755 processor_list.txt
awk '{print $3}' processor_list.txt > currnet_username.txt ; chmod 755 currnet_username.txt
sort -u currnet_username.txt > unique_username.txt ;chmod 755 unique_username.txt
while read line ; do
if [ -e $line.txt ] ;then
rm $line.txt
fi
grep $line processor_list.txt >$line.sh ;chmod 755 $line.sh
awk '{if($4 == "$line") print $0;}' $line.sh > ${line}1.txt ; #mv ${line}1.txt $line.txt;chmod 755 $line.txt
done < unique_username.txt
I'm a beginner at Unix shell scripting. Please advise; I am not getting the expected results in ${line}1.txt.
For example, I have two UIDs, kplus and kplustp. My requirement is to find the string "kplus" in the ps -elf output, create a file with the same name (kplus.txt), and redirect or move whatever grep finds into it.
But I am getting both kplus and kplustp data in kplus.txt. I need only the kplus rows, matched on the UID column of ps -elf, in kplus.txt.
This is the wrong way to read a shell variable in awk:
awk '{if($4 == "$line") print $0;}' $line.sh
Use:
awk '{if($4 == var) print $0;}' var="$line" $line.sh
Or shorten to
awk '$4==var' var="$line" $line.sh
The default action is {print $0} when no action is specified.
If you need to search for the literal text $line, escape the $ in the regex:
awk '$4==/\$line/' $line.sh
or as a plain string it works directly:
awk '$4=="$line"' $line.sh
I have some files in a directory and its subdirectories. I need to search all the files and print the file name and the content between 2 matching patterns in each file.
For e.g. lets say my file looks like below.
File1.txt:
Pattern1
ABCDEFGHI
Pattern2
dafoaf
fafaf
dfadf
afadf
File2.txt
Pattern1
XXXXXXXXX
Pattern2
kdfaf
adfdaf
fdafad
I need to get the following output:
File1.txt:
ABCDEFGHI
File2.txt:
XXXXXXXXX
and so on for all the files under the directory and its subdirectories, separated by a new line.
This might work for you:
find . \
-type f \
-exec awk 'FNR==1 {print FILENAME ":"} /Pattern1/ {p=1; next} /Pattern2/ {p=0} p==1 {print $0} END {print ""}' {} \;
Note that this prints the FILENAME even if Pattern1 was not found!
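If your find supports the {} + form, one awk process can handle many files at once; FNR==1 still resets for each file, so every header is printed:
find . -type f -exec awk 'FNR==1 {print FILENAME ":"} /Pattern1/ {p=1; next} /Pattern2/ {p=0} p==1' {} +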
This should work for you.
Create this shell script as my_grep.sh:
#!/bin/sh
# record the line numbers of the two Pattern matches for this file
grep -nH "Pattern" $1 > temp
if [ `grep -c $1 temp` -eq 2 ]; then
limits=`grep $1 temp | cut -f2 -d:`
lower_limit=`echo $limits | cut -f1 -d" "`
upper_limit=`echo $limits | cut -f2 -d" "`
echo "$1:"
# print only the lines strictly between the two matches
head -`expr $upper_limit - 1` $1 | tail -`expr $upper_limit - $lower_limit - 1`
fi
Use the find command to locate the files and run this shell script on each:
$ find ./test -type f -exec ./my_grep.sh {} \;
./test/File1.txt:
ABCDEFGHI
./test/File2.txt:
XXXXXXXXX
I want to grep a version number in one file and replace it in another file. I want to grep 4.3.0.5 in File1 and use it to replace 4.3.0.2 in File2. I have the command below to get the number, but how can I cut/replace it in the second file?
File1 :
App :4.3.0.5 (or) App: 4.3.0.5-SNAPSHOT
File2: Before editing
grid_application_distribution_url=nexus://com.abcd.efge.ce/App/4.3.0.2/tar.gz/config
File2: after editing (desired result):
If $VERSION in File1 is WITHOUT the word SNAPSHOT, then in File2:
grid_application_distribution_url=nexus://com.abcd.efge.ce/App/4.3.0.5/tar.gz/config
If $VERSION has SNAPSHOT, then the line in File2 should be:
grid_application_distribution_url=nexus-snapshot://com.abcd.efge.ce/App/4.3.0.5/tar.gz/config
VER=$(awk -F: '/^App/{sub(/ .*$/, "", $2); print $2}' /path/file1.txt)
echo $VER
if ($VER ~ /SNAPSHOT/)
/usr/bin/ssh -t -t server2.com "sub("=nexus:", ":=nexus-snapshot") /path/file2" && sub(/[^\/]+\/tar\.gz/, $VER"/tar.gz") /path/file2
Something like this is all you need:
awk -F': +' 'NR==FNR{v=$2;next} {sub(/[^/]+\/tar.gz/,v"/tar.gz")} 1' File1 File2 > tmp && mv tmp File2
NR==FNR is only true while the first file is being read, so v is taken from File1 and the substitution is applied to File2.
This awk script can do the job (an enhancement of the answer above from @EdMorton).
Splitting the command in two, as the OP requested:
VER=$(awk -F' *: *' '/^App/{print $2}' file1)
awk -v v="$VER" '{
split(v, arr, "-");
sub(/[^\/]+\/tar\.gz/, arr[1]"/tar.gz");
if (arr[2] ~ /SNAPSHOT/)
sub("=nexus:", ":=nexus-snapshot")
}1' file2 > tmpFile
mv tmpFile file2
You can try this:
VERSION=$(grep "App:" /path/File1 | awk '{print $2}')
sed -i "s/4.3.0.2/$VERSION/" File2
It will look for "4.3.0.2" and replace it with the value in $VERSION; File2 is updated in place.
If you want the file to stay the same, delete the -i flag:
sed "s/4.3.0.2/$VERSION/" File2
You will get the result on stdout.
As indicated in the comments, 4.3.0.2 will not be this exact value every time. Adapted for the format X.Y.Z.W:
sed "s/\/[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\(\/tar.gz\)/\/$VERSION\1/" File2
I'm searching through a number of directories for "searchstring", and then running a script on each $file:
for file in `find $dir -name ${searchstring}'*'`;
do
echo $file >> $debug
script.sh $file >> $output
done
My $debug file yields the following:
/root/0007_searchstring/out/filename_20120105_020000.log
/root/0006_searchstring/out/filename_20120105_010000.log
/root/0005_searchstring/out/filename_20120105_013000.log
...
(each filename ends in _yyyymmdd_hhmmss.log)
Is there a way to get find to order by filename or by mtime? Should I pipe find to sort first? Make an array and then sort it, as per this question?
If you want to ignore the directory path and just use the file name, then you should be able to use:
for file in `find $dir -name ${searchstring}'*' | sort --field-separator=/ --key=4`;
Use 'ls -t' if you need to regenerate the list based on timestamp, or 'sort -n' if the list is fairly static.
To sort by modification time, you can use stat with find:
$ find . -exec stat -c '%Y %n' {} \; | sort -n | cut -d ' ' -f 2-
You can pipe the output of find through sort to sort by filename:
find "$dir" -name "${searchstring}*" | sort | while IFS= read -r file
do
echo "$file" >> $debug
script.sh "$file" >> $output
done
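If the filenames might contain spaces or other odd characters, a more defensive variant (assuming GNU find and sort, and bash for read -d '') is:
find "$dir" -name "${searchstring}*" -print0 | sort -z | while IFS= read -r -d '' file
do
    echo "$file" >> "$debug"
    script.sh "$file" >> "$output"
done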