Use awk to replace word in file - unix

I have a file with some lines:
a
b
c
d
I would like to cat this file into an awk command to produce something like this:
letter is a
letter is b
letter is c
letter is d
using something like this:
cat file.txt | awk 'letter is $1'
But it's not printing out as expected:
$ cat raw.txt | awk 'this is $1'
a
b
c
d

At the moment, you have no { action } block, so your condition evaluates the two empty variables this and is, concatenating them with the first field $1, and checks whether the result is true (a non-empty string). It is, so the default action prints each line.
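To make that concrete: a bare pattern with no action defaults to { print }, so the broken command behaves as if you had written the condition out explicitly (a sketch; this and is are uninitialized, hence empty strings):
awk '("" "" $1) != "" { print }' raw.txt
Every non-empty line satisfies the condition, which is why all four lines come back unchanged.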
It sounds like you want to do this instead:
awk '{ print "letter is", $1 }' raw.txt
Although in this case, you might as well just use sed:
sed 's/^/letter is /' raw.txt
This command matches the start of each line and adds the string.
Note that I'm passing the file as an argument, rather than using cat with a pipe.

Not sure if you wanted sed or awk, but this is in awk:
$ awk '{print "letter is " $1}' file
letter is a
letter is b
letter is c
letter is d
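If you prefer explicit formatting, printf is an equivalent alternative (a minor variation, not from the original answer):
$ awk '{printf "letter is %s\n", $1}' file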

Related

Transposing multiple columns in multiple rows keeping one column fixed in Unix

I have a file that looks like the one below:
1234|A|B|C|10|11|12
2345|F|G|H|13|14|15
3456|K|L|M|16|17|18
I want the output as
1234|A
1234|B
1234|C
2345|F
2345|G
2345|H
3456|K
3456|L
3456|M
I have tried the script below.
awk -F"|" '{print $1","$2","$3","$4}' file.dat | awk -F"," '{OFS=RS;$1=$1}1'
But the output is generated as below.
1234
A
B
C
2345
F
G
H
3456
K
L
M
Any help is appreciated.
What about a single simple awk process such as this:
$ awk -F\| '{print $1 "|" $2 "\n" $1 "|" $3 "\n" $1 "|" $4}' file.dat
1234|A
1234|B
1234|C
2345|F
2345|G
2345|H
3456|K
3456|L
3456|M
No messing with RS and OFS. (In your pipeline, the second awk sets the output field separator to a newline and rebuilds each record, which puts every field, key included, on its own line and loses the pairing.)
If you want to do this dynamically, then you could pass in the number of fields that you want, and then use a loop starting from the second field.
In the script, you might first check that the number of fields is equal to or greater than the number you pass in (in this case n=4):
awk -F\| -v n=4 '
NF >= n {
    for (i = 2; i <= n; i++) print $1 "|" $i
}
' file
Output
1234|A
1234|B
1234|C
2345|F
2345|G
2345|H
3456|K
3456|L
3456|M
Or with Perl, splitting on | and pairing the first field with each of the next three:
$ perl -lne'($a,@b)=((split/\|/)[0..3]);foreach (@b){print join"|",$a,$_}' file.dat
1234|A
1234|B
1234|C
2345|F
2345|G
2345|H
3456|K
3456|L
3456|M

Unix Command for counting number of words which contains letter combination (with repeats and letters in between)

How would you count the number of words in a text file which contain all of the letters a, b, and c? These letters may occur more than once in the word, and the word may contain other letters as well. (For example, "cabby" should be counted.)
Using sample input which should return 2:
abc abb cabby
I tried both:
grep -E "[abc]" test.txt | wc -l
grep 'abcdef' testCount.txt | wc -l
both of which return 1 instead of 2.
Thanks in advance!
You can use awk and the return value of the sub function. If a substitution is made, sub returns the number of substitutions done (at most 1), so a return value greater than 0 means the letter was present in the word.
$ echo "abc abb cabby" |
awk '{
for(i=1;i<=NF;i++)
if(sub(/a/,"",$i)>0 && sub(/b/,"",$i)>0 && sub(/c/,"",$i)>0) {
count+=1
}
}
END{print count}'
2
We require the return value to be greater than 0 for all three letters. The for loop iterates over every word on every line, incrementing the counter when all three letters are found in the word.
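If you would rather not modify the fields, index() can test the same condition (a sketch against the same sample input; index returns 0 when the substring is absent):
$ echo "abc abb cabby" |
awk '{
    for (i = 1; i <= NF; i++)
        if (index($i, "a") && index($i, "b") && index($i, "c"))
            count++
}
END { print count + 0 }'
2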
I don't think you can get around using multiple invocations of grep. Thus I would go with (GNU grep):
<file grep -ow '\w+' | grep a | grep b | grep c
Output:
abc
cabby
The first grep puts each word on a line of its own.
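Since the question asks for the count rather than the words themselves, append wc -l:
<file grep -ow '\w+' | grep a | grep b | grep c | wc -l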
Try this; it will work:
sed 's/ /\n/g' test.txt |grep a |grep b|grep c
$ cat test.txt
abc abb cabby
$ sed 's/ /\n/g' test.txt |grep a |grep b|grep c
abc
cabby
Hope this helps.

Adding Double quotes to last value

I have the data in below format.
abc ssg,"-149,684.58","-149,469.05",-215.53
efg sfg,-80.99,-77.46,-3.53
hij sf,"4,341.23","4,131.90",209.33
kilm mm,"2,490,716.13","-180,572.48","9,223.06"
I want to add double quotes to the last value on each line when it doesn't already have them, using perl or unix tools.
the output should look as below:
abc ssg,"-149,684.58","-149,469.05","-215.53"
efg sfg,-80.99,-77.46,"-3.53"
hij sf,"4,341.23","4,131.90","209.33"
kilm mm,"2,490,716.13","-180,572.48","9,223.06"
This might work for you:
gawk -F ',' -v Q='"' 'BEGIN {OFS=FS} $NF !~ Q {gsub(/.*/,Q $NF Q,$NF); print ; next} 1' INPUTFILE
-F ',' sets the input field separator
-v Q='"' sets the Q variable to ", which helps avoid some escaping problems
BEGIN {OFS=FS} sets the output field separator to the same as the input
$NF !~ Q if the last field does not match Q (i.e. "), then
gsub(/.*/,Q $NF Q,$NF) replaces the last field with a "-delimited copy
print prints the modified line
next skips the remaining rule(s) and moves on to the next input line
1 executes the default print action for every other line (where the last field already contains ")
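A shorter sketch of the same idea (not from the original answer): assign the quoted value to the last field directly and let the field assignment rebuild the record. Like the gawk command above, it splits on every comma, which happens to be safe here because any already-quoted last field still contains a " character:
awk 'BEGIN { FS = OFS = "," } $NF !~ /"/ { $NF = "\"" $NF "\"" } 1' INPUTFILE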

Is Awk and multiple file processing possible?

I need to process two file contents. I was wondering if we can pull it off using a single nawk statement.
File A contents:
AAAAAAAAAAAA 1
BBBBBBBBBBBB 2
CCCCCCCCCCCC 3
File B contents:
XXXXXXXXXXX 3
YYYYYYYYYYY 2
ZZZZZZZZZZZ 1
I would like to compare if $2 (2nd field ) in file A is the reverse of $2 in file B.
I was wondering how to write rules in nawk for multi-file processing.
How would we distinguish A's $2 from B's $2?
EDIT: I need to compare $2 of A's first line (which is 1) with $2 of B's last line (which is 1 again). Then compare $2 of line 2 in A with $2 of the NR-1 th line of B. And so on.
You can do something like this -
[jaypal:~/Temp] cat f1
AAAAAAAAAAAA 1
BBBBBBBBBBBB 2
CCCCCCCCCCCC 3
DDDDDDDDDDDD 4
[jaypal:~/Temp] cat f2
AAAAAAAAAAA 5
XXXXXXXXXXX 3
YYYYYYYYYYY 2
ZZZZZZZZZZZ 1
Solution:
awk '
NR==FNR {a[i++]=$2; next}
{print (a[--i] == $2 ? "Match " $2 FS a[i] : "Do not match " $2 FS a[i])}' FileB FileA
Match 1 1
Match 2 2
Match 3 3
Do not match 4 5
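A note on how this works: NR==FNR is true only while the first file named on the command line (FileB) is being read, so its second column is stored in the array a. For each line of FileA the index is then decremented, so line 1 of FileA is compared with the last line of FileB, line 2 with the second-to-last, and so on.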
You can make awk process files serially, but you can't easily make it process two files in parallel. You probably can achieve the effect with careful use of getline but 'careful' is the operative term.
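For what it's worth, here is a rough sketch of that getline route (filenames as above; it pairs the files line by line, so FileB would need to be reversed first, e.g. with tac):
awk '{
    # getline var < file returns > 0 while lines remain in FileB
    if ((getline bline < "FileB") > 0) {
        split(bline, b)
        print ($2 == b[2] ? "Match" : "Do not match"), $2, b[2]
    }
}' FileA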
I think in this case, with simple two-column files, I'd be inclined to use:
paste "File A" "File B" |
awk '{ process fields $1, $2 from File A and fields $3, $4 from file B }'
You would need to make sure the two files are in the appropriate order, etc.
If your input is more complex, then this may not work so well, though you can choose the character that separates the data from the two files with paste -d'|' ... to use a pipe to separate the two records, and awk -F'|' '{ ... }' to read $1 as the info from File A and $2 as the info from File B.
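Concretely, that variant might look like this (a sketch; it assumes neither file contains a '|'):
paste -d'|' FileA FileB |
awk -F'|' '{
    split($1, a, " "); split($2, b, " ")   # a[] holds File A fields, b[] File B fields
    print (a[2] == b[2] ? "Match" : "Do not match"), a[2], b[2]
}'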
Have you thought about doing something like the following?
diff --brief <(awk '{print $2}' A) <(tac B | awk '{print $2}')
tac reverses the lines of file B and then you can compare the two columns.

How to interleave lines from two text files

What's the easiest/quickest way to interleave the lines of two (or more) text files? Example:
File 1:
line1.1
line1.2
line1.3
File 2:
line2.1
line2.2
line2.3
Interleaved:
line1.1
line2.1
line1.2
line2.2
line1.3
line2.3
Sure, it's easy to write a little Perl script that opens them both and does the task. But I was wondering if it's possible to get away with less code, maybe a one-liner using Unix tools?
paste -d '\n' file1 file2
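paste normally joins corresponding lines with a tab; making the delimiter a newline instead puts each pair of lines on two separate output lines, which is exactly the interleaving asked for.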
Here's a solution using awk:
awk '{print; if(getline < "file2") print}' file1
produces this output:
line 1 from file1
line 1 from file2
line 2 from file1
line 2 from file2
...etc
Using awk can be useful if you want to add some extra formatting to the output, for example if you want to label each line based on which file it comes from:
awk '{print "1: "$0; if(getline < "file2") print "2: "$0}' file1
produces this output:
1: line 1 from file1
2: line 1 from file2
1: line 2 from file1
2: line 2 from file2
...etc
Note: this code assumes that file1 is at least as long as file2.
If file1 contains more lines than file2 and you want to output blank lines for file2 after it finishes, add an else clause to the getline test:
awk '{print; if(getline < "file2") print; else print ""}' file1
or
awk '{print "1: "$0; if(getline < "file2") print "2: "$0; else print"2: "}' file1
@Sujoy's answer points in a useful direction. You can add line numbers, sort, and strip the line numbers:
(cat -n file1 ; cat -n file2 ) | sort -n | cut -f2-
Note (of interest to me): this needs a little more work to get the ordering right if, instead of static files, you use the output of commands that may run slower or faster than one another. In that case you need to add/sort/remove another tag in addition to the line numbers:
(cat -n <(command1...) | sed 's/^/1\t/' ; cat -n <(command2...) | sed 's/^/2\t/' ; cat -n <(command3) | sed 's/^/3\t/' ) \
| sort -n | cut -f2- | sort -n | cut -f2-
With GNU sed:
sed 'R file2' file1
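The R command reads one line from file2 per cycle and queues it to be printed after the current line of file1, interleaving the two files.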
Output:
line1.1
line2.1
line1.2
line2.2
line1.3
line2.3
Here's a GUI way to do it: Paste them into two columns in a spreadsheet, copy all cells out, then use regular expressions to replace tabs with newlines.
cat file1 file2 | sort -t. -k 2.1
Here it's specified that the separator is "." and that we are sorting on the first character of the second field.
