Create newline with awk Command? - unix

I'm trying to edit a script in GeekTool (a Mac app for desktop widgets). Could someone help me make this statement print only 10 or so words per line?
curl -s www.brainyquote.com/quotes_of_the_day.html | egrep '(div class="bqQuoteLink")| (ahref)' | sed -n '19p; 20p;' | sed -e 's/<[^>]*>//g'
Right now everything is printed out on one big line.
I think I could use the awk command, although I am unsure how to apply it to this output.
Any help would be appreciated!

Try running it through:
| fold -s
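fold -s breaks lines at blanks instead of mid-word; the width defaults to 80 columns, and -w sets a different one. For example, tacked onto the end of the pipeline above:
| fold -s -w 60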

Add this to the end of your command:
| awk '{ print $1, $2, $3, $4, $5, $6, $7, $8, $9 }'
That prints only the first nine words of each line (the commas keep them separated by spaces in the output), assuming they're whitespace-delimited.
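If you want roughly ten words per line, as asked, rather than only the first nine, a small awk loop can insert a newline after every tenth field instead. A sketch, appended to the original pipeline:
| awk '{ for (i = 1; i <= NF; i++) printf "%s%s", $i, (i % 10 == 0 || i == NF ? "\n" : " ") }'
Each field is printed followed by a space, except every tenth field and the last one, which are followed by a newline.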

Related

How to print the last but one record of a file using sed? [duplicate]

I have a file that has the following as the last three lines. I want to retrieve the penultimate line, i.e. 100.000;8438; 06:46:12.
.
.
.
99.900; 8423; 06:44:41
100.000;8438; 06:46:12
Number of patterns: 8438
I don't know the line number. How can I retrieve it using a shell script? Thanks in advance for your help.
Try this:
tail -2 yourfile | head -1
A short sed one-liner inspired by https://stackoverflow.com/a/7671772/5287901
sed -n 'x;$p'
Explanation:
-n quiet mode: don't automatically print the pattern space
x: exchange the pattern space and the hold space (the hold space now stores the current line, and the pattern space holds the previous line, if any)
$: on the last line, p: print the pattern space (the previous line, which at that point is the penultimate line).
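As a quick sanity check with three made-up input lines:
printf '%s\n' first second third | sed -n 'x;$p'
This prints "second", the line before the last.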
Use this
tail -2 <filename> | head -1
ed and sed can do it as well.
str='
99.900; 8423; 06:44:41
100.000;8438; 06:46:12
Number of patterns: 8438
'
printf '%s' "$str" | sed -n -e '${x;1!p;};h' # print last line but one
printf '%s\n' H '$-1p' q | ed -s <(printf '%s' "$str") # same
printf '%s\n' H '$-2,$-1p' q | ed -s <(printf '%s' "$str") # print last line but two
From: Useful sed one-liners by Eric Pement
# print the next-to-the-last line of a file
sed -e '$!{h;d;}' -e x # for 1-line files, print blank line
sed -e '1{$q;}' -e '$!{h;d;}' -e x # for 1-line files, print the line
sed -e '1{$d;}' -e '$!{h;d;}' -e x # for 1-line files, print nothing
You don't need all of them, just pick one.
tail +2 <filename>
This prints everything from the second line to the last line.
To clarify what has already been said: suppose the part of the pipeline before tail and head produces a list like
snap-e8317883
snap-9c7227f7
snap-5402553f
snap-3e7b2c55
snap-246b3c4f
snap-546a3d3f
snap-2ad48241
snap-d00150bb
Then the full command
ec2thisandthat | sort -k 5 | grep 2012- | awk '{print $2}' | tail -2 | head -1
returns the second-to-last entry:
snap-2ad48241
tac <file> | sed -n '2p'
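For completeness, an awk take on the same idea (not from the original answers, just a sketch): remember the previous line and print it when the input ends.
awk '{ prev = curr; curr = $0 } END { print prev }' yourfile
For a one-line file this prints an empty line, much like the first of the Pement one-liners above.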

Count and display size of incoming stdin (count lines)

Is there a program that produces output like "wc -l" but updates the counter as more data arrives?
Here is what I want it for:
tail -f log/production.log | grep POST | wc -l
But wc -l should be replaced with something that keeps updating.
tail -f log/production.log | grep --line-buffered POST | awk '{printf "\r%d", ++i} END {print ""}'
This prints the line count after every line of input. The carriage return \r makes each line number overwrite the last, so you only see the most recent one.
Use grep --line-buffered to make grep flush its output after each line rather than every 4KB. Or you can combine the grep and awk into one:
tail -f log/production.log | awk '/POST/ {printf "\r%d", ++i} END {print ""}'
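If the pv (pipe viewer) utility happens to be installed, its line mode gives a similar running count; this is an extra option rather than part of the original answer:
tail -f log/production.log | grep --line-buffered POST | pv -l > /dev/null
pv -l counts lines instead of bytes and keeps updating the total on stderr, while stdout is discarded so the matched lines themselves are not shown.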

Unix - Need to cut a file which has multiple blanks as delimiter - awk or cut?

I need to get the records from a text file in Unix. The delimiter is multiple blanks. For example:
2U2133 1239
1290fsdsf 3234
From this, I need to extract
1239
3234
The delimiter for all records will always be 3 blanks.
I need to do this in a Unix script (.scr) and write the output to another file or use it as input to a do-while loop. I tried the following:
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then
int_1=0
else
int_2=0
fi
done < awk -F' ' '{ print $2 }' ${Directoty path}/test_file.txt
test_file.txt is the input file and file1.txt is a lookup file. But the above is not working; it gives me syntax errors near awk -F.
I tried writing the output to a file. The following worked in command line:
more test_file.txt | awk -F' ' '{ print $2 }' > output.txt
This works and writes the records to output.txt on the command line. But the same command does not work in the Unix script (it is a .scr file).
Please let me know where I am going wrong and how I can resolve this.
Thanks,
Visakh
The job of replacing multiple delimiters with just one is left to tr:
cat <file_name> | tr -s ' ' | cut -d ' ' -f 2
tr translates or deletes characters, and is perfectly suited to prepare your data for cut to work properly.
The manual states:
-s, --squeeze-repeats
replace each sequence of a repeated character that is
listed in the last specified SET, with a single occurrence
of that character
It depends on the version or implementation of cut on your machine. Some versions support an option, usually -i, that means 'ignore blank fields' or, equivalently, allow multiple separators between fields. If that's supported, use:
cut -i -d' ' -f 2 data.file
If not (and it is not universal — and maybe not even widespread, since neither GNU nor MacOS X have the option), then using awk is better and more portable.
You need to pipe the output of awk into your loop, though:
awk -F' ' '{print $2}' ${Directory_path}/test_file.txt |
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory_path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then int_1=0
else int_2=0
fi
done
The only residual issue is whether the while loop is in a sub-shell and therefore not modifying your main shell script's variables, just its own copy of those variables.
With bash, you can use process substitution:
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory_path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then int_1=0
else int_2=0
fi
done < <(awk -F' ' '{print $2}' ${Directory_path}/test_file.txt)
This leaves the while loop in the current shell, but arranges for the output of the command to appear as if from a file.
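A small illustration of why that matters (the count is hypothetical, it just shows that variables assigned in the loop survive):
count=0
while read -r readline; do
    count=$((count + 1))
done < <(awk -F' ' '{print $2}' "${Directory_path}/test_file.txt")
echo "$count"   # still reflects the increments, because no subshell was involved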
The blank in ${Directory path} is not normally legal — unless it is another Bash feature I've missed out on; you also had a typo (Directoty) in one place.
Other ways of doing the same thing aside, the error in your program is this: You cannot redirect from (<) the output of another program. Turn your script around and use a pipe like this:
awk -F' ' '{ print $2 }' ${Directory path}/test_file.txt | while read readline
etc.
Besides, the use of "readline" as a variable name may or may not get you into problems.
In this particular case, you can use the following line
sed 's/  */\t/g' <file_name> | cut -f 2
to get your second column (the pattern matches a whole run of blanks, so the three-blank delimiter collapses to a single tab).
In bash you can start from something like this:
for n in $(cut -d ' ' -f 4 "${Directory_path}/test_file.txt")
do
grep -c "$n" "${Directory_path}"/file*.txt
done
This should have been a comment, but since I cannot comment yet, I am adding this here.
This is from an excellent answer here: https://stackoverflow.com/a/4483833/3138875
tr -s ' ' <text.txt | cut -d ' ' -f4
tr -s '<character>' squeezes multiple repeated instances of <character> into one.
It's not working in the script because of the typo "Directoty" (instead of "Directory") in the last line of your script.
Cut isn't flexible enough. I usually use Perl for that:
cat file.txt | perl -F'   ' -ane 'print $F[1]."\n"'
Instead of the triple space after -F you can put any Perl regular expression. You access fields as $F[n], where n is the field number (counting starts at zero). This way there is no need for sed or tr.
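For what it's worth, plain awk copes with runs of blanks out of the box: the default field separator (and the equivalent -F' ') splits fields on any run of blanks and tabs, which is exactly the situation here. A quick check with the sample data from the question:
printf '2U2133   1239\n1290fsdsf   3234\n' | awk '{print $2}'
This prints 1239 and then 3234.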

Output on a single line

The following code works as expected, but I cannot format the output.
It will print something like this:
mysql
test
someDB
I want the output on a single line
mysql test someDB
I tried using sed in the script but it did not work.
#!/bin/sh
for dbName in `mysqlshow -uroot -pPassWord | awk '{print $2}'`
do
echo "$dbName" | egrep -v 'Databases|information_schema';
done
Whenever you want to combine all lines of output into one, you can also use xargs:
e.g.
find
.
./zxcv
./fdsa
./treww
./asdf
./ewr
becomes:
find |xargs echo
. ./zxcv ./fdsa ./treww ./asdf ./ewr
You can use tr to get your output onto one line:
<output from somewhere> | tr "\n" " "
To do a variation combining naumcho's and rsp's answers that will work for small numbers of results:
echo $(mysqlshow -uroot -pPassWord | awk '{print $2}' | egrep -v 'Databases|information_schema')
The newlines are most likely generated by the echo command; the following should do the same without them (not tested):
mysqlshow -uroot -pPassWord | awk '{print $2}' | egrep -v 'Databases|information_schema'
and has the added bonus of spawning just one grep process instead of three.
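If the output still comes out one name per line, appending the tr trick from the earlier answer collapses it (a sketch, not tested against a real mysqlshow):
mysqlshow -uroot -pPassWord | awk '{print $2}' | egrep -v 'Databases|information_schema' | tr '\n' ' '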

How to keep a file's format if you use the uniq command (in shell)?

In order to use the uniq command, you have to sort your file first.
But in the file I have, the order of the information is important, so how can I keep the original order of the file but still get rid of duplicate content?
Another awk version:
awk '!_[$0]++' infile
This awk keeps the first occurrence. Same algorithm as other answers use:
awk '!($0 in lines) { print $0; lines[$0]; }'
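A quick way to see the idiom in action (made-up input; seen is just a more descriptive name for the array):
printf '%s\n' apple banana apple cherry banana | awk '!seen[$0]++'
This prints apple, banana and cherry, each once, in their original order.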
Here's one that only needs to store duplicated lines (as opposed to all lines) using awk:
sort file | uniq -d | awk '
FNR == NR { dups[$0] }
FNR != NR && (!($0 in dups) || !lines[$0]++)
' - file
There's also the "line-number, double-sort" method.
nl -n ln | sort -u -k 2 | sort -k 1n | cut -f 2-
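My annotated reading of that one-liner, with a file argument added (an assumption, since the original reads stdin):
nl -n ln infile |   # prefix every line with its line number
sort -u -k 2 |      # sort by content (fields 2 onward) and drop duplicates
sort -k 1n |        # put the survivors back in original order by line number
cut -f 2-           # strip the line numbers again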
You can run uniq -d on the sorted version of the file to find the duplicate lines, then run some script that says:
if this_line is in duplicate_lines {
    if not i_have_seen[this_line] {
        output this_line
        i_have_seen[this_line] = true
    }
} else {
    output this_line
}
Using only uniq and grep:
Create d.sh:
#!/bin/sh
sort "$1" | uniq > "$1_uniq"
while IFS= read -r line; do
    grep -m1 "$line" "$1_uniq" >> "$1_out"
    grep -v "$line" "$1_uniq" > "$1_uniq2"
    mv "$1_uniq2" "$1_uniq"
done < "$1"
rm "$1_uniq"
Example:
./d.sh infile
You could use some horrible O(n^2) thing, like this (Pseudo-code):
file2 = EMPTY_FILE
for each line in file1:
    if not line in file2:
        file2.append(line)
This is potentially rather slow, especially if implemented at the Bash level. But if your files are reasonably short, it will probably work just fine, and would be quick to implement (not line in file2 is then just grep -v, and so on).
Otherwise you could of course code up a dedicated program, using some more advanced data structure in memory to speed it up.
for line in $(sort file1 | uniq); do
    grep -n -m1 "$line" file1 >> out
done
sort -n out
First do the sort; then, for each unique value, grep for the first match (-m1) and preserve the line numbers; finally, sort the output numerically (-n) by line number. You could then remove the line numbers with sed or awk, as shown below.
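For that last step, something like this would strip the numbers that grep -n added (assuming the usual number:line output format):
sort -n out | sed 's/^[0-9]*://'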
