Pattern match and create multiple files (Linux/Unix)

I have a pipe-delimited file with over 20M rows. The 4th column holds a date field. I need to take the partial value (YYYYMM) from that date field and write each matching row to a new file, appending the value to the file name. Thanks for all your inputs.
Inputfile.txt
XX|1234|PROCEDURES|20160101|RC
XY|1634|PROCEDURES|20160115|RC
XM|1245|CODES|20170124|RC
XZ|1256|CODES|20170228|RC
OutputFile_201601.txt
XX|1234|PROCEDURES|20160101|RC
XY|1634|PROCEDURES|20160115|RC
OutputFile_201701.txt
XM|1245|CODES|20170124|RC
OutputFile_201702.txt
XZ|1256|CODES|20170228|RC

Using awk:
$ awk -F\| '{f="outputfile_" substr($4,1,6) ".txt"; print >> f; close(f)}' file
$ ls outputfile_201*
outputfile_201601.txt outputfile_201701.txt outputfile_201702.txt
Explained:
$ awk -F\| '                                # pipe as delimiter
{
    f="outputfile_" substr($4,1,6) ".txt"   # form output filename
    print >> f                              # append record to file
    close(f)                                # close output file
}' file
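Closing the output file after every record keeps only one descriptor open, but it costs an open/close cycle per row, which adds up over 20M rows. If the input is sorted on the date column (or can be sorted first), a variant that closes a file only when the YYYYMM key changes avoids almost all of that overhead. A minimal sketch, assuming the outputfile_* naming from above:

sort -t'|' -k4,4 file |
awk -F'|' '{
    f = "outputfile_" substr($4,1,6) ".txt"   # form output filename from YYYYMM
    if (f != prev) {                          # month changed:
        if (prev) close(prev)                 # close the finished file
        prev = f
    }
    print >> f                                # append record to current file
}'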

Related

Count file lines along with file names in Unix

I have 3 files in a temp dir, as below:
test1.txt -- It has 4 lines
test2.txt -- It has 5 lines
test3.txt -- It has 6 lines
I need to count the lines in each file and write the count along with the file name to a separate file (LIST.txt), like below:
'test1.txt','4'
'test2.txt','5'
'test3.txt','6'
Code tried:
FileDir=/temp/test*.txt
for file in ${FileDir}
do
    filename=$(basename $file) & awk 'END {print NR-1}' ${file} >> /temp/LIST.txt
done
This is not giving me the name; it only gives me the line counts.
Also, how do I get the output of those 2 commands separated by ','?
The & in your attempt runs the assignment as a background job, and $filename is never printed anyway, which is why only the counts appear. Perhaps this would suit?
FileDir=/temp/test*.txt
for file in ${FileDir}
do
    awk 'END{print FILENAME "," NR}' "$file"
done > LIST.txt
cat LIST.txt
/temp/test1.txt,4
/temp/test2.txt,5
/temp/test3.txt,6
Remove "/temp/" and include single quotes:
cd /temp
FileDir=test*.txt
for file in ${FileDir}
do
    awk 'END{q="\047"; print q FILENAME q "," q NR q}' "$file"   # \047 is a single quote
done > ../LIST.txt
cd ../
cat LIST.txt
'test1.txt','4'
'test2.txt','5'
'test3.txt','6'
An alternative approach:
FileDir=/temp/test*.txt
for file in ${FileDir}
do
    awk 'END{q="\047"; n = split(FILENAME, a, "/"); print q a[n] q "," q NR q}' "$file"   # a[n] is the basename
done > LIST.txt
cat LIST.txt
'test1.txt','4'
'test2.txt','5'
'test3.txt','6'
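For this particular task, plain wc(1) can also produce the counts without awk; a shell-only sketch under the same assumptions (files in /temp, quoted CSV output):

cd /temp
for file in test*.txt
do
    # wc -l < file prints only the count, with no filename attached;
    # printf %d strips the padding some wc implementations add
    printf "'%s','%d'\n" "$file" "$(wc -l < "$file")"
done > ../LIST.txt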

Compare two fields from two different files using awk

I have two files where I want to compare certain fields and produce an output. I also have a variable:
echo ${CURR_SNAP}
123
File1
DOMAIN1|USER1|LE1|ORG1|ACCES1|RSCTYPE1|RSCNAME1
DOMAIN2|USER2|LE2|ORG2|ACCES2|RSCTYPE2|RSCNAME2
DOMAIN3|USER3|LE3|ORG3|ACCES3|RSCTYPE3|RSCNAME3
DOMAIN4|USER4|LE4|ORG4|ACCES4|RSCTYPE4|RSCNAME4
File2
ORG1|PRGPATH1
ORG3|PRGPATH3
ORG5|PRGPATH5
ORG6|PRGPATH6
ORG7|PRGPATH7
The output I am expecting is below, where the last column is the CURR_SNAP value and the matching condition is that the 4th column of File1 matches the 1st column of File2:
DOMAIN1|USER1|LE1|ORG1|ACCES1|RSCTYPE1|123
DOMAIN3|USER3|LE3|ORG3|ACCES3|RSCTYPE3|123
I tried the piece of code below, but it looks like I am not doing it correctly:
awk -v CURRSNAP="${CURR_SNAP}" '{FS="|"} NR==FNR {x[$0];next} {if(x[$1]==$4) print $1"|"$2"|"$3"|"$4"|"$5"|"$6"|"CURRSNAP}' File2 File1
With awk (note that in your attempt, {FS="|"} sets the separator only after each line has already been split, and x[$0] stores whole lines as keys, so x[$1]==$4 never compares the fields you intend):
#! /bin/bash
CURR_SNAP="123"
awk -F'|' -v OFS='|' -v curr_snap="$CURR_SNAP" '{
    if (FNR == NR)
    {
        # this stores the ORG* as an index
        # here you can store other values if needed
        orgs_arr[$1]=1
    }
    else if (orgs_arr[$4] == 1)
    {
        # overwrite $7 with the CURR_SNAP value
        $7=curr_snap
        print
    }
}' file2 file1
Since your expected output doesn't include RSCNAME*, I have overwritten $7 (the RSCNAME* column) with $CURR_SNAP. If you want to display the RSCNAME* column as well, remove $7=curr_snap and change the print statement to print $0, curr_snap, as sketched below.
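That variant might look like this (a sketch, not tested against your real data):

awk -F'|' -v OFS='|' -v curr_snap="$CURR_SNAP" '
FNR == NR { orgs_arr[$1] = 1; next }    # file2: remember the ORG keys
orgs_arr[$4] { print $0, curr_snap }    # file1: keep all 7 fields, append snap
' file2 file1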
I wouldn't use awk at all. This is what join(1) is meant for (plus sed to append the extra column):
$ join -14 -21 -t'|' -o 1.1,1.2,1.3,1.4,1.5,1.6 File1 File2 | sed "s/$/|${CURR_SNAP}/"
DOMAIN1|USER1|LE1|ORG1|ACCES1|RSCTYPE1|123
DOMAIN3|USER3|LE3|ORG3|ACCES3|RSCTYPE3|123
It does require that the files be sorted on the common field, as your examples are; if they are not, see the sketch below.
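If the files were not sorted, bash process substitution could sort them on the fly; a sketch:

join -1 4 -2 1 -t'|' -o 1.1,1.2,1.3,1.4,1.5,1.6 \
    <(sort -t'|' -k4,4 File1) <(sort -t'|' -k1,1 File2) |
sed "s/$/|${CURR_SNAP}/"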
You can do this in awk with two rules. For the first file (where NR==FNR), use string concatenation to collect fields 1 through NF-1, assigning the result to an array indexed by $4. For the second file (where NR>FNR), the second rule tests whether array[$1] has content and, if so, prints the stored string with "|" CURR_SNAP appended (CURR_SNAP is shortened to c and the array to a in the example below), e.g.
CURR_SNAP=123
awk -F'|' -v c="$CURR_SNAP" '
NR==FNR {
    for (i=1; i<NF; i++)
        a[$4] = i>1 ? a[$4] "|" $i : a[$4] $1
}
NR>FNR {
    if (a[$1])
        print a[$1] "|" c
}
' file1 file2
Example Use/Output
With the filenames set to match yours, pasting the command above into a console produces:
DOMAIN1|USER1|LE1|ORG1|ACCES1|RSCTYPE1|123
DOMAIN3|USER3|LE3|ORG3|ACCES3|RSCTYPE3|123
Look things over and let me know if you have further questions.

Replace column in header of a large .txt file (Unix)

I need to replace the date in the header of a large file. The header has multiple columns, using | (pipe) as separator, like this:
A|B05|1|xxc|2018/06/29|AC23|SoOn
I need the same header but with the date (5th column) updated: A|B05|1|xxc|2018/08/29|AC23
Any solutions for me? I tried awk and sed, but both gave me errors beyond my understanding. I'm new to this and I really want to understand the solution, so could you please help me?
You can use the command below, which replaces the 5th column of every line with the content of the newdate variable:
awk -v newdate="2018/08/29" 'BEGIN{FS=OFS="|"}{ $5 = newdate }1' infile > outfile
Explanation
awk -v newdate="2018/08/29" '   # call awk, and set variable newdate
BEGIN{
    FS=OFS="|"                  # set input and output field separators
}
{
    $5 = newdate                # assign the fifth field the content of variable newdate
}1                              # 1 at the end triggers the default action,
                                # i.e. print the current line/row/record ($0)
' infile > outfile
If you want to skip the first line, in case you have a header, then use FNR>1:
awk -v newdate="2018/08/29" 'BEGIN{FS=OFS="|"}FNR>1{ $5 = newdate }1' infile > outfile
If you want to replace the 5th column in the 1st row only, then use FNR==1:
awk -v newdate="2018/08/29" 'BEGIN{FS=OFS="|"}FNR==1{ $5 = newdate }1' infile > outfile
If you still have a problem, frame your question with sample input and expected output so that it is easy to interpret.
Short sed solution:
sed -Ei '1s~\|[0-9]{4}/[0-9]{2}/[0-9]{2}\|~|2018/08/29|~' file
-i - modify the file in-place
1s - substitute only in the 1st (header) line
~ - used as the s command delimiter so the / characters in the date need no escaping
[0-9]{4}/[0-9]{2}/[0-9]{2} - date pattern
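Note that sed -i still rewrites the whole file even though only the first line changes. Because the replacement date here has exactly the same byte length as the original, the header can also be patched in place without copying the body; a sketch with dd, assuming the 33-byte header line from the sample (the technique only works when old and new headers are the same length):

# overwrite the first 33 bytes (header plus newline) in place, leaving the rest untouched
printf 'A|B05|1|xxc|2018/08/29|AC23|SoOn\n' | dd of=file conv=notrunc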

How to split and replace strings in columns using awk

I have a tab-delim text file with only 4 columns as shown below:
GT:CN:CNL:CNP:CNQ:FT .:2:a:b:c:PASS .:2:c:b:a:PASS .:2:d:c:a:FAIL
If the string "FAIL" is found in a specific column from column 2 to column N (the strings within a column are separated by ":"), then the second element in that column needs to be replaced with "-1". Sample output is shown below:
GT:CN:CNL:CNP:CNQ:FT .:2:a:b:c:PASS .:2:c:b:a:PASS .:-1:d:c:a:FAIL
Any help using awk?
With any awk:
$ awk 'BEGIN{FS=OFS="\t"} {for (i=2;i<=NF;i++) if ($i~/:FAIL$/) sub(/:[^:]+/,":-1",$i)} 1' file
GT:CN:CNL:CNP:CNQ:FT .:2:a:b:c:PASS .:2:c:b:a:PASS .:-1:d:c:a:FAIL
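Spelled out with comments, the one-liner reads:

awk 'BEGIN{FS=OFS="\t"}                # columns are tab-separated
{
    for (i=2; i<=NF; i++)              # check every column after the first
        if ($i ~ /:FAIL$/)             # the column ends in :FAIL
            sub(/:[^:]+/, ":-1", $i)   # replace its first ":element" with ":-1"
} 1' file                              # 1 prints the (possibly modified) record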
In order to split in awk you can use the split() function:
split(string, array, separator)
string is the string you want to split, array is the array to split it into, and separator is the character to split on. For example:
string="hello:world"
result=`echo $string | awk '{ split($1, ARR, ":"); printf("%s ", ARR[1]) }'`
In this case result would be "hello", because we split the string on the ":" character and printed the first element of ARR; printing the second element instead (printf("%s ", ARR[2])) would return "world".
With gawk:
awk '{$0=gensub(/[^:]*(:[^:]*:[^:]*:[^:]*:FAIL)/, "-1\\1", "g", $0)};1' File
With sed:
sed 's/[^:]*\(:[^:]*:[^:]*:[^:]*:FAIL\)/-1\1/g' File
If you are using GNU awk, you can take advantage of the RT feature [1] and split the records at tabs and newlines:
awk '$NF == "FAIL" { $2 = "-1"; } { printf "%s", $0 RT }' RS='[\t\n]' FS=':' infile
Output:
GT:CN:CNL:CNP:CNQ:FT .:2:a:b:c:PASS .:2:c:b:a:PASS .:-1:d:c:a:FAIL
[1] The record separator that follows the current record.
Your requirements are somewhat vague, but I'm pretty sure this does what you want with bog standard awk (no gnu-awk extensions):
awk '/FAIL/{$2=-1}1' ORS=\\t RS=\\t FS=: OFS=: input

Remove all lines from file with duplicate value in field, including the first occurrence

I would like to remove all the lines in my data file that contain a value in column 2 that is repeated in column 2 in other lines.
I've sorted by the value in column 2, but can't figure out how to use uniq for just the values in one field as the values are not necessarily of the same length.
Alternatively, I can remove lines containing the duplicate using an awk one-liner like
awk -F"[,]" '!_[$2]++'
but this retains the line with the first occurrence of the repeated value in col 2.
As an example, if my data is
a,b,c
c,b,a
d,e,f
h,i,j
j,b,h
I would like to remove ALL lines (including the first) where b occurs in the second column.
Like this:
d,e,f
h,i,j
Thanks for any advice!!
If the order is not important then the following should work:
awk -F, '
!seen[$2]++ {
    line[$2] = $0
}
END {
    for (val in seen)
        if (seen[val] == 1)
            print line[val]
}' file
Output
h,i,j
d,e,f
Solution with grep:
grep -v -E '\b,b,\b' text.txt
Content of the file:
$ cat text.txt
a,b,c
c,b,a
d,e,f
h,i,j
j,b,h
a,n,b
b,c,f
$ grep -v -E '\b,b,\b' text.txt
d,e,f
h,i,j
a,n,b
b,c,f
Hope it helps
Some different awk:
awk -F, '
BEGIN {f=0}
FNR==NR {_[$2]++; next}
f==0 {
    f=1
    for (j in _)
        if (_[j] > 1)
            delete _[j]
}
$2 in _
' file file
Explanation
The awk passes through the file twice - that's why it appears twice at the end. On the first pass (when FNR==NR) I count the number of times each column 2 appears in array _[]. At the end of the first pass, I then delete all elements of _[] where that element has been seen more than once. Then, on the second pass, I print lines whose second field appears in _[].
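If the original line order matters and the file fits in memory, a single-pass variant can buffer the lines instead; a sketch:

awk -F, '
{
    count[$2]++           # tally each column-2 value
    line[NR] = $0         # remember every line in input order
    key[NR]  = $2         # and the key it carries
}
END {
    for (i=1; i<=NR; i++)          # replay in original order,
        if (count[key[i]] == 1)    # printing only lines whose key is unique
            print line[i]
}' file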
