Unix - how to merge filenames into content with cat and basename?

I want to merge files (one line each) with the filename at the beginning of each line.
'cat' merges the files and 'basename -a' gives the filenames, but I don't know how to combine the two.
$ echo "content1" > f1.txt
$ echo "content2" > f2.txt
$ cat f*.txt > all.txt
$ cat all.txt
content1
content2
$ basename -a f*.txt
f1.txt
f2.txt
I would like this result:
$ cat all.txt
f1.txt content1
f2.txt content2

Just use grep -H, then post-process to change the delimiter:
$ for i in 1 2; do echo content$i > f$i.txt; done
$ grep -H . *.txt
f1.txt:content1
f2.txt:content2
$ grep -H . *.txt | sed 's/:/ /'
f1.txt content1
f2.txt content2
or,
$ awk '{printf "%s\t%s\n", FILENAME, $0}' *.txt
f1.txt content1
f2.txt content2

Use a loop. Iterate over every *.txt file, echo its name to the output file without a newline (done with echo -n), append its contents to the output file, and finally append a newline. Note that >> appends; using > would overwrite.
rm -f all.txt
for f in *.txt; do echo -n "$f " >> all.txt; cat "$f" >> all.txt; echo >> all.txt; done
If your input files already end with a newline, you can skip the final echo and just do:
for f in *.txt; do echo -n "$f " >> all.txt; cat "$f" >> all.txt; done
If you're using tcsh instead of bash, then you can use foreach, but you can't write the whole loop in a single command. Normally you would write this in a script:
foreach f (*.txt)
echo -n "$f " >> all.txt
cat "$f" >> all.txt
end
To get this in a single command line you need something like this instead:
printf 'foreach f (*.txt)\n echo -n "$f " >> all.txt\n cat "$f" >> all.txt\n end' | tcsh

Related

awk change shell variable

I would like to modify several shell variables within awk:
echo "$LINE_IN" | awk '/pattern1/ {print $0; WRITTEN=1; REC=$REC+1}' >> $FILE1
I tried wrapping it in eval, but it still does not work:
eval $( echo "$LINE_IN" | awk '/pattern1/ {print $0; WRITTEN=1; REC=$REC+1}' >> $FILE1 )
Any suggestions?
I would like to use a ksh script, thanks!
Count the hits when you are finished:
echo "${LINE_IN}" | grep -E 'pattern1' > "${FILE1}"
REC=$(wc -l < "${FILE1}")
if (( REC > 0 )); then
WRITTEN=1
fi
If you really want to use awk, you must let awk write the results to stdout and parse that output:
echo "${LINE_IN}" | awk '/echo/ {print $0 > "x3"; WRITTEN=1; REC++}
END { print "WRITTEN=" WRITTEN; print "REC=" REC}'
WRITTEN=1
REC=6
And when you want the variables actually set in the calling shell, wrap it in a process substitution:
source <(echo "${LINE_IN}" | awk '/echo/ {print $0 > "x3"; WRITTEN=1; REC++}
END { print "WRITTEN=" WRITTEN; print "REC=" REC}')
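Since the question mentions ksh, where source may not be available, the same wrapping can be done with eval instead (a minimal sketch of the same awk program as above):
eval "$(echo "${LINE_IN}" | awk '/echo/ {print $0 > "x3"; WRITTEN=1; REC++}
END { print "WRITTEN=" WRITTEN; print "REC=" REC}')"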
Note: Get used to using lowercase variable names like written, file and rec.

using sed or awk to double quote comma separate and concatenate a list

I have the following list in a text file:
10.1.2.200
10.1.2.201
10.1.2.202
10.1.2.203
I want to wrap each value in "double quotes", comma-separate them, and join them into one string.
Can this be done in sed or awk?
Expected output:
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203","10.1.2.204"
The easiest is something like this (in pseudo code):
Read a line;
Put the line in quotes;
Keep that quoted line in a stack or string;
At the end (or while constructing the string), join the lines together with a comma.
Depending on the language, that is fairly straightforward to do:
With awk:
$ awk 'BEGIN{OFS=","}{s=s ? s OFS "\"" $1 "\"" : "\"" $1 "\""} END{print s}' file
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
Or, with less of a 'wall of quotes', define a quote character:
$ awk 'BEGIN{OFS=",";q="\""}{s=s ? s OFS q$1q : q$1q} END{print s}' file
With sed:
$ sed -E 's/^(.*)$/"\1"/' file | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/,/g'
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
(Perl and Ruby have a join function, so it is easiest to push the elements onto an array and then join it.)
Perl:
$ perl -lne 'push @a, "\"$_\""; END{print join(",", @a)}' file
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
Ruby:
$ ruby -ne 'BEGIN{@arr=[]}; @arr.push "\"#{$_.chomp}\""; END{puts @arr.join(",")}' file
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
Here is another alternative:
sed 's/.*/"&"/' file | paste -sd,
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
awk -F'\n' -v RS="\0" -v OFS='","' -v q='"' '{NF--}$0=q$0q' file
should work for the given example.
Tested with gawk:
kent$ cat f
10.1.2.200
10.1.2.201
10.1.2.202
10.1.2.203
kent$ awk -F'\n' -v RS="\0" -v OFS='","' -v q='"' '{NF--}$0=q$0q' f
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
$ awk '{o=o (NR>1?",":"") "\""$0"\""} END{print o}' file
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"

Unix script to recursively search a directory and subdirectories to grep and print content between 2 patterns in a file

I have some files in a directory and its subdirectories. I need to search all the files and print the file name and the content between 2 matching patterns in the file.
For example, let's say my files look like below.
File1.txt:
Pattern1
ABCDEFGHI
Pattern2
dafoaf
fafaf
dfadf
afadf
File2.txt
Pattern1
XXXXXXXXX
Pattern2
kdfaf
adfdaf
fdafad
I need to get the following output:
File1.txt:
ABCDEFGHI
File2.txt:
XXXXXXXXX
and so on for all the files under the directory and subdirectories, separated by a new line.
This might work for you:
find . \
-type f \
-exec awk 'FNR==1 {print FILENAME ":"} /Pattern1/ { p=1 ; next } /Pattern2/ {p=0} p==1 {print $0} END {print ""}' \{\} \;
Note, this prints the FILENAME, even if Pattern1 was not found!
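If you only want the filename printed when Pattern1 is actually present, a variant like the following (a sketch, assuming Pattern1 occurs once per file) prints it on the first match instead:
find . -type f -exec awk '/Pattern1/ {print FILENAME ":"; p=1; next} /Pattern2/ {p=0} p' \{\} \;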
This will work for you:
Create this shell script as my_grep.sh
#!/bin/sh
grep -nH "Pattern" $1 >>temp
if [ `grep -c $1 temp` -eq 2 ]; then
limits=`grep $1 temp | cut -f2 -d:`
lower_limit=`echo $limits | cut -f1 -d" "`
upper_limit=`echo $limits | cut -f2 -d" "`
echo "$1:"
head -`expr $upper_limit - 1` $1 | tail -`expr $upper_limit - $lower_limit - 1`
fi
Use the find command to search for files and run this shell script on each one:
$ find ./test -type f -exec ./my_grep.sh {} \;
./test/File1.txt:
ABCDEFGHI
./test/File2.txt:
XXXXXXXXX

Concatenate multiple files but include filename as section headers

I would like to concatenate a number of text files into one large file in terminal. I know I can do this using the cat command. However, I would like the filename of each file to precede the "data dump" for that file. Anyone know how to do this?
what I currently have:
file1.txt = bluemoongoodbeer
file2.txt = awesomepossum
file3.txt = hownowbrowncow
cat file1.txt file2.txt file3.txt
desired output:
file1
bluemoongoodbeer
file2
awesomepossum
file3
hownowbrowncow
I was looking for the same thing, and found this suggestion:
tail -n +1 file1.txt file2.txt file3.txt
Output:
==> file1.txt <==
<contents of file1.txt>
==> file2.txt <==
<contents of file2.txt>
==> file3.txt <==
<contents of file3.txt>
If there is only a single file then the header will not be printed. If using GNU utils, you can use -v to always print a header.
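For example, GNU tail's -v flag forces the header even for a single file:
$ tail -v -n +1 file1.txt
==> file1.txt <==
<contents of file1.txt>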
I used grep for something similar:
grep "" *.txt
It does not give you a 'header', but prefixes every line with the filename.
This should do the trick as well:
$ find . -type f -print -exec cat {} \;
./file1.txt
Content of file1.txt
./file2.txt
Content of file2.txt
Here is the explanation for the command-line arguments:
find = linux `find` command finds filenames, see `man find` for more info
. = in current directory
-type f = only files, not directories
-print = show found file
-exec = additionally execute another linux command
cat = linux `cat` command, see `man cat`, displays file contents
{} = placeholder for the currently found filename
\; = tells the `find` command that the -exec command ends here
You can further combine searches through boolean operators like -and or -or. find -ls is nice, too.
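For instance, a sketch with GNU find that limits the search to .txt files while skipping the output file itself (the file names here are just illustrative):
find . -type f -name '*.txt' -not -name 'all.txt' -print -exec cat {} \;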
When there is more than one input file, the more command concatenates them and also includes each filename as a header.
To concatenate to a file:
more *.txt > out.txt
To concatenate to the terminal:
more *.txt | cat
Example output:
::::::::::::::
file1.txt
::::::::::::::
This is
my first file.
::::::::::::::
file2.txt
::::::::::::::
And this is my
second file.
This should do the trick:
for filename in file1.txt file2.txt file3.txt; do
echo "$filename"
cat "$filename"
done > output.txt
or to do this for all text files recursively:
find . -type f -name '*.txt' -print | while read filename; do
echo "$filename"
cat "$filename"
done > output.txt
find . -type f -print0 | xargs -0 -I % sh -c 'echo %; cat %'
This will print the full filename (including path), then the contents of the file. It is also very flexible, as you can use -name "expr" for the find command, and run as many commands as you like on the files.
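For example, restricting it to .txt files and adding a blank line after each one (the echoed header format is just an illustration):
find . -type f -name '*.txt' -print0 | xargs -0 -I % sh -c 'echo "==> % <=="; cat %; echo'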
And the missing awk solution is:
$ awk '(FNR==1){print ">> " FILENAME " <<"}1' *
This is how I normally handle formatting like that:
for i in *; do echo "$i"; echo ; cat "$i"; echo ; done ;
I generally pipe the cat into a grep for specific information.
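For example (a sketch; 'pattern' is just a placeholder):
for i in *; do echo "$i"; echo ; cat "$i" | grep -i 'pattern'; echo ; done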
I like this option
for x in $(ls ./*.php); do echo $x; cat $x | grep -i 'menuItem'; done
Output looks like this:
./debug-things.php
./Facebook.Pixel.Code.php
./footer.trusted.seller.items.php
./GoogleAnalytics.php
./JivositeCode.php
./Live-Messenger.php
./mPopex.php
./NOTIFICATIONS-box.php
./reviewPopUp_Frame.php
$('#top-nav-scroller-pos-<?=$activeMenuItem;?>').addClass('active');
gotToMenuItem();
./Reviews-Frames-PopUps.php
./social.media.login.btns.php
./social-side-bar.php
./staticWalletsAlerst.php
./tmp-fix.php
./top-nav-scroller.php
$activeMenuItem = '0';
$activeMenuItem = '1';
$activeMenuItem = '2';
$activeMenuItem = '3';
./Waiting-Overlay.php
./Yandex.Metrika.php
You can use this simple command instead of a for loop:
ls -ltr | awk '{print $9}' | xargs head
If the files all have the same name or can be matched by find, you can do (e.g.):
find . -name create.sh | xargs tail -n +1
to find, show the path of, and cat each file.
If you like colors, try this:
for i in *; do echo; echo $'\e[33;1m'$i$'\e[0m'; cat $i; done | less -R
or:
tail -n +1 * | grep -e $ -e '==.*'
or: (with package 'multitail' installed)
multitail *
Here is a really simple way. You said you want to cat, which implies you want to view the entire file. But you also need the filename printed.
Try this
head -n99999999 * or head -n99999999 file1.txt file2.txt file3.txt
Hope that helps
If you want to replace those ugly ==> <== with something else:
tail -n +1 *.txt | sed -e 's/==>/\n###/g' -e 's/<==/###/g' >> "files.txt"
explanation:
tail -n +1 *.txt - output all files in the folder with headers
sed -e 's/==>/\n###/g' -e 's/<==/###/g' - replace ==> with new line + ### and <== with just ###
>> "files.txt" - output all to a file
find . -type f -exec cat {} \; -print
AIX 7.1 ksh
... glomming onto those who've already mentioned head works for some of us:
$ r head
head file*.txt
==> file1.txt <==
xxx
111
==> file2.txt <==
yyy
222
nyuk nyuk nyuk
==> file3.txt <==
zzz
$
My need is to read the first line; as noted, if you want more than 10 lines, you'll have to add options (head -9999, etc).
Sorry for posting a derivative comment; I don't have sufficient street cred to comment/add to someone's comment.
I made a combination of:
cat /sharedpath/{unique1,unique2,unique3}/filename > newfile
and
tail -n +1 file1 file2
into this:
tail -n +1 /sharedpath/{folder1,folder2,...,folder_n}/file.extension | cat > /sharedpath/newfile
The result is a new file that contains the content from each subfolder (unique1, unique2, ...) listed inside the {} brackets, each preceded by the subfolder's path in the header.
Note: unique1 = folder1.
In my case, file.extension has the same name in all subfolders.
If you want the result in the same format as your desired output you can try:
for file in `ls file{1..3}.txt`; \
do echo $file | cut -d '.' -f 1; \
cat $file ; done;
Result:
file1
bluemoongoodbeer
file2
awesomepossum
file3
hownowbrowncow
You can put echo -e before and after the cut so you have the spacing between the lines as well:
$ for file in `ls file{1..3}.txt`; do echo $file | cut -d '.' -f 1; echo -e; cat $file; echo -e ; done;
Result:
file1
bluemoongoodbeer
file2
awesomepossum
file3
hownowbrowncow
This method will print the filename and then the file contents; note that -f keeps following the files and never exits, so use tail -n +1 instead if you just want a one-shot dump:
tail -f file1.txt file2.txt
Output:
==> file1.txt <==
contents of file1.txt ...
contents of file1.txt ...
==> file2.txt <==
contents of file2.txt ...
contents of file2.txt ...
For solving such tasks I usually use the following command:
$ cat file{1..3}.txt >> result.txt
It's a very convenient way to concatenate files if the number of files is quite large.
First I created each file: echo 'information' > file1.txt for each file[123].txt.
Then I printed each file to make sure the information was correct:
tail file?.txt
Then I did this: tail file?.txt >> Mainfile.txt. This created Mainfile.txt, storing the information from each file in one main file.
cat Mainfile.txt confirmed it was okay.
==> file1.txt <==
bluemoongoodbeer
==> file2.txt <==
awesomepossum
==> file3.txt <==
hownowbrowncow

Insert copyright message into multiple files

How would you insert a copyright message at the very top of every file?
#!/bin/bash
for file in *; do
echo "Copyright" > tempfile;
cat "$file" >> tempfile;
mv tempfile "$file";
done
Recursive solution (finds all .txt files in all subdirectories):
#!/bin/bash
for file in $(find . -type f -name \*.txt); do
echo "Copyright" > copyright-file.txt;
echo "" >> copyright-file.txt;
cat $file >> copyright-file.txt;
mv copyright-file.txt $file;
done
Use caution; if spaces exist in file names you might get unexpected behaviour.
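A whitespace-safe sketch of the same recursive idea, assuming bash and GNU find (-print0 and read -d '' are not POSIX):
find . -type f -name '*.txt' -print0 | while IFS= read -r -d '' file; do
echo "Copyright" > copyright-file.txt
echo "" >> copyright-file.txt
cat "$file" >> copyright-file.txt
mv copyright-file.txt "$file"
done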
sed
echo "Copyright" > tempfile
sed -i.bak "1i $(<tempfile)" file*
Or shell
#!/bin/bash
shopt -s nullglob
for file in *; do
if [ -f "$file" ];then
echo "Copyright" > tempfile
cat "$file" >> tempfile;
mv tempfile "$file";
fi
done
To do it recursively, if you have bash 4.0:
#!/bin/bash
shopt -s nullglob
shopt -s globstar
for file in /path/**
do
if [ -f "$file" ];then
echo "Copyright" > tempfile
cat "$file" >> tempfile;
mv tempfile "$file";
fi
done
or using find
find /path -type f | while read -r file
do
echo "Copyright" > tempfile
cat "$file" >> tempfile;
mv tempfile "$file";
done
You may use this simple script
#!/bin/bash
# Usage: script.sh file
cat copyright.tpl $1 > tmp
mv $1 $1.tmp # optional
mv tmp $1
The file list may be managed via the find utility, for example:
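For example, assuming the script above is saved as script.sh and made executable:
find /path -type f -name '*.txt' -exec ./script.sh {} \;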
Working on Mac OS X:
#!/usr/bin/env bash
for f in `find . -iname "*.ts"`; do # just for *.ts files
echo -e "/*\n * My Company \n *\n * Copyright © 2018 MyCompany. All rights reserved.\n *\n *\n */" > tmpfile
cat $f >> tmpfile
mv tmpfile $f
done
