I have a basic CSV file containing key/value records: the first two columns form the key and the third column is the value.
Example file1:
12389472,1,136-7402
23247984,1,136-7402
23247984,2,136-7402
34578897,1,136-7402
In another file I have a list of keys whose value needs to be changed in the first file; I'm trying to change their value to 136-7425.
Example file2:
23247984,1
23247984,2
Here's what I'm currently doing:
/usr/xpg4/bin/awk '{FS=",";OFS=","}NR==FNR{a[$1,$2]="136-7425";next}{$3=a[$1,$2]}1' file2 file1 > output
This works, but it leaves the value blank for keys not found in file2. I'd like to change the value only for keys present in file2 and keep the current value for the rest.
Can anyone point out what I'm doing wrong? Or perhaps there's an easier way to accomplish this.
Thanks!
Looks like you're just zapping the third field for keys that don't exist in file2, since a[$1,$2] is empty for those. Also set FS in a BEGIN block so it's in effect before the first record is split. Try this:
awk 'BEGIN{FS=OFS=","} NR==FNR{a[$1,$2]="136-7425";next} ($1,$2) in a{$3=a[$1,$2]} 1' file2 file1 > output
or, alternatively:
awk 'BEGIN{FS=OFS=","} NR==FNR{seen[$1,$2]++;next} seen[$1,$2]{$3="136-7425"} 1' file2 file1 > output
FYI, an array named seen[] is also commonly used to remove duplicates from input, e.g.:
awk '!seen[$0]++' file
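That idiom prints each line only the first time it occurs. If uniqueness should instead be judged on the first two CSV fields, as in your files, a minimal variation (my adaptation, not from the original answer) is:
awk -F, '!seen[$1,$2]++' file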
This line should work for you:
awk -F, -v OFS="," 'NR==FNR{a[$1,$2]=1;next}a[$1,$2]{$3="136-7425"}7' file2 file1
(The trailing 7 is just an always-true pattern, like the more conventional 1, that triggers the default print action.)
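For the sample files above, all three commands should produce:
12389472,1,136-7402
23247984,1,136-7425
23247984,2,136-7425
34578897,1,136-7402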
I have a bunch of different files. We have used "|" as the delimiter. All files contain a column titled CARDNO, but not necessarily in the same location in every file. I have a function called data_mask that I want to apply to CARDNO in all of the files, changing it into NEWCARDNO.
I know that if I pass in the column number of CARDNO I can do this pretty simply, say it's the 3rd column in a 5 column file with something like:
awk -v column=$COLNUMBER '{print $1, $2, FUNCTION($column), $4, $5}' FILE
However, if all of my files have hundreds of columns and the column is somewhere arbitrary in each file, this is incredibly tedious. I am looking for a way to do something along these lines:
awk -v column=$COLNUMBER '{print #All columns before $column, FUNCTION($column), #All columns after $column}' FILE
My function takes a string as input and changes it into a new one; it takes the value of the column as input, not the column number. Please suggest a Unix command that can pass the column value to the function and produce the desired output.
Thanks in advance
If I understand your problem correctly, the first row of the file is a header and one of those columns is named CARDNO. If that's the case, you can just search the header for that column and process accordingly:
awk 'BEGIN{FS=OFS="|"; c=1}
     # on the header line, locate the CARDNO column and rename it
     (NR==1){ while($c != "CARDNO" && c<=NF) c++
              if(c>NF) exit
              $c="NEWCARDNO" }
     # on every other line, apply the masking function to that column
     (NR!=1){ $c=FUNCTION($c) }
     {print}' <file>
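Note that FUNCTION above is a placeholder: awk cannot call an external shell function such as data_mask directly, so you would reimplement the masking in awk itself. A minimal sketch, assuming a hypothetical mask() that replaces every digit with X (both the function name and the masking rule are invented for illustration):
awk 'BEGIN{FS=OFS="|"; c=1}
     # hypothetical stand-in for data_mask: X out every digit
     function mask(s) { gsub(/[0-9]/, "X", s); return s }
     NR==1 { while ($c != "CARDNO" && c <= NF) c++
             if (c > NF) exit
             $c = "NEWCARDNO" }
     NR>1  { $c = mask($c) }
     {print}' file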
As per the comments: if there is no header in the file but you know, per file, which column number it is, then you can simply do:
awk -v c="$column" 'BEGIN{FS=OFS="|"}{$c=FUNCTION($c)}1' <file>
I need to find duplicate entries across two different columns and keep only one of each duplicate pair, plus all unique entries. For me, if A123 is in the first column and it shows up later in the second column, it's a duplicate. I also know for sure that A123 will always be paired with B123, either as A123,B123 or B123,A123. I only need to keep one of the pair, and it doesn't matter which.
Ex: My input file would contain:
A123,B123
A234,B234
C123,D123
B123,A123
B234,A234
I'd like the output to be:
A123,B123
A234,B234
C123,D123
The best I can do is to extract the unique entries with:
awk -F',' 'NR==FNR{x[$1]++;next}; !x[$2]' file1 file1
or get only the duplicates with:
awk -F',' 'NR==FNR{x[$1]++;next}; x[$2]' file1 file1
Any help would be greatly appreciated.
This can be shorter!
First print if the second field is not yet present in the array; then add the first field to the array. Only one pass over the input file is necessary:
awk -F, '!x[$2];{x[$1]++}' file1
This awk one-liner works for your example (note that the for (x in a) loop in the END block doesn't preserve input order):
awk -F, '!($2 in a){a[$1]=$0}END{for(x in a)print a[x]}' file
The conventional, idiomatic awk solution:
$ awk -F, '!seen[$1>$2 ? $1 : $2]++' file
A123,B123
A234,B234
C123,D123
By convention we always use seen (rather than x or anything else) as the array name when it represents a set whose indices we want to check for prior occurrence, and using a ternary expression to produce the larger of the two possible key values as the index ensures that the order in which they appear in the input doesn't matter.
The above doesn't depend on your particular situation where every $2 is paired with a specific $1; it simply prints unique individual occurrences across a pair of fields. If you wanted it to work on the pair of fields combined (and assuming you have more fields, so just using $0 as the index wouldn't work), that'd be:
awk -F, '!seen[$1>$2 ? $1 FS $2 : $2 FS $1]++' file
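For example, given a hypothetical three-field variant of the input (the third field is invented here for illustration):
A123,B123,x
B123,A123,y
the pair-based version prints only A123,B123,x, since both rows normalize to the same B123,A123 index.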
I have a text file that has 110132 lines and looks like this:
b3694658:heccc 238622
b3769025:heccc 238622
b3694659:heccc 238623
b3769026:heccc 238623
b3694660:heccc 238624
b3769027:heccc 238624
b3694661:heccc 238625
b3769028:heccc 238625
Notice that every 2nd line duplicates the heccc 238622 etc. entry of the line before it. I want output that only has the 2nd occurrence of each duplicate, so it would look like this:
b3769025:heccc 238622
b3769026:heccc 238623
b3769027:heccc 238624
b3769028:heccc 238625
Thanks for your help!
If you were simply looking to output unique lines, this would do it:
sort textfile | uniq
Note, though, that every complete line in your sample is already unique, so plain sort | uniq will just print all of them (sorted); the duplication here is only in the second field.
uniq -f1 file.txt
almost does it: -f1 makes uniq skip the first field when comparing, so adjacent lines sharing the second field count as duplicates. By default, though, uniq keeps the first line of each group; since you want the second occurrence, reverse the file around it, e.g. tac file.txt | uniq -f1 | tac (tac is GNU coreutils).
See how the -f and -s options of the uniq command work?
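In the spirit of the awk answers elsewhere on this page, a seen[]-style one-liner should also work here; it prints a line only when its second whitespace-separated field has already been seen, i.e. the 2nd and later occurrences:
awk 'seen[$2]++' file.txt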
Sorry to ask this; it might be a trivial question. I tried an awk script as well, but I am new to it.
I have a list of ids in a file, ids.txt:
1xre23
223dsf
234ewe
and a log file with FIX messages that might contain those ids.
Sample log file abc.log:
35=D^A54=1xre23^A22=s^A120=GBP^A
35=D^A54=abcd23^A22=s^A120=GBP^A
35=D^A54=234ewe^A22=s^A120=GBP^A
35=D^A54=xyzw23^A22=s^A120=GBP^A
35=D^A54=223dsf^A22=s^A120=GBP^A
I want to check how many ids matched in that log file.
There are almost 10K ids, and the log file is around 300MB.
The sample output I am looking for is:
35=D^A54=1xre23^A22=s^A120=GBP^A
35=D^A54=234ewe^A22=s^A120=GBP^A
35=D^A54=223dsf^A22=s^A120=GBP^A
Try something like this with the grep command:
grep -w -f ids.txt abc.log
Output:
35=D^A54=1xre23^A22=s^A120=GBP^A
35=D^A54=234ewe^A22=s^A120=GBP^A
35=D^A54=223dsf^A22=s^A120=GBP^A
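Since the ids are fixed strings rather than regular expressions, telling grep so with -F may speed this up considerably for ~10K patterns against a 300MB log:
grep -Fw -f ids.txt abc.log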
If you'd like to use awk, this should do:
awk -F"[=^]" 'FNR==NR {a[$0];next} $4 in a' ids.txt abc.log
35=D^A54=1xre23^A22=s^A120=GBP^A
35=D^A54=234ewe^A22=s^A120=GBP^A
35=D^A54=223dsf^A22=s^A120=GBP^A
This stores the ids from ids.txt as indices of array a.
If the fourth field (split on = and ^) is one of those ids, the line is printed.
You can also do it the other way around:
awk 'FNR==NR {a[$0];next} {for (i in a) if (i ~ $0) print i}' abc.log ids.txt
35=D^A54=1xre23^A22=s^A120=GBP^A
35=D^A54=234ewe^A22=s^A120=GBP^A
35=D^A54=223dsf^A22=s^A120=GBP^A
Store all lines from abc.log as indices of array a.
Then, for each id in ids.txt, test whether a stored line matches that id (the id is used as a regular expression).
If yes, print that line. Note that for (i in a) makes the output order unpredictable, and this nested scan is far slower than the $4 in a lookup when you have ~10K ids and a 300MB log.
I would like to compare two unsorted files, file1 and file2. I would like to compute file2 - file1 [the difference], irrespective of line order.
diff is not working.
I got the solution by using comm:
comm -13 file1 file2
will give you the desired output: -13 suppresses the lines unique to file1 and the lines common to both, leaving only the lines present in file2 but not in file1.
The files need to be sorted first anyway.
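If you don't want to keep sorted copies around, comm combines nicely with the same process-substitution trick shown for diff below:
comm -13 <(sort file1) <(sort file2)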
Well, you can just sort the files first, and diff the sorted files.
sort file1 > file1.sorted
sort file2 > file2.sorted
diff file1.sorted file2.sorted
You can also filter the output to report lines in file2 that are absent from file1:
diff -u file1.sorted file2.sorted | grep '^+[^+]'
(the [^+] keeps the +++ header line out of the output).
As indicated in the comments, you in fact do not need to create sorted copies of the files; instead, you can use process substitution and say:
diff <(sort file1) <(sort file2)
There are 3 basic commands to compare files in unix:
cmp: compares two files byte by byte and reports the first mismatch on the screen; if there is no mismatch, it produces no output.
syntax: cmp file1 file2
comm: finds the lines that are available in one file but not in the other (both files must be sorted).
diff: reports the differences between two files, line by line.
Easiest way: sort the files with sort(1), then use diff(1).