There are 3 files in a directory. How can I print the first file's 1st line, the second file's 3rd line, and the third file's 4th line using a UNIX command?
I tried cat filename.txt | sed -n 1p, but that works on only one file. How can I handle all three files at once?
Using awk: at the beginning of each file, f is incremented to track which file we're dealing with; we then pair that with the required record number within each file (FNR):
$ awk 'FNR==1 {f++} f==1&&FNR==1 || f==2&&FNR==3 || f==3&&FNR==4' 1 2 3
11
23
34
Contents of the first file (the others are similar):
$ cat 1
11
12
13
14
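If the required line numbers change, a hedged generalization of the same idea passes them in as a variable (the numbers 1 3 4 and the file names 1 2 3 are just the sample values from above):
$ awk -v lines='1 3 4' 'BEGIN{split(lines, want, " ")} FNR==1{f++} FNR==want[f]' 1 2 3
11
23
34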
I need to merge several files, removing redundant lines among files, while keeping redundant lines within files. A schematic representation of my files is the following:
File1.txt
1
2
3
3
4
5
6
File2.txt
6
7
8
8
9
File3.txt
9
10
10
11
The desired output would be:
1
2
3
3
4
5
6
7
8
8
9
10
10
11
I would prefer a solution in awk, bash, or R. I searched the web for solutions and, though there were plenty of them (please find some examples below), they all removed duplicated lines regardless of whether the duplicates occurred within or across files.
Thanks in advance.
Arturo
Examples of previous solutions that remove redundant lines both within and across files:
https://unix.stackexchange.com/questions/50103/merge-two-lists-while-removing-duplicates
https://unix.stackexchange.com/questions/457320/combine-text-files-and-delete-duplicate-lines
https://unix.stackexchange.com/questions/350520/awk-combine-two-big-files-and-remove-duplicated-lines
https://unix.stackexchange.com/questions/257467/merging-2-files-and-keeping-the-one-duplicate
With your shown samples, could you please try the following. This will NOT remove redundant lines within a file, but will remove lines that already appeared in an earlier file.
awk '
FNR==1{
  for(key in current){
    total[key]
  }
  delete current
}
!($0 in total)
{
  current[$0]
}
' file1.txt file2.txt file3.txt
Explanation: a detailed, commented version of the above.
awk '                    ##Start the awk program.
FNR==1{                  ##On the first line of each file, do the following.
  for(key in current){   ##Traverse the current array.
    total[key]           ##Place each index of current into total (lines seen in any previous file).
  }
  delete current         ##Delete the current array.
}
!($0 in total)           ##If the current line is NOT present in total, print it.
{
  current[$0]            ##Place the current line into the current array.
}
' file1.txt file2.txt file3.txt   ##Input_file names.
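As a quick sanity check, running the command on the three sample files from the question reproduces the desired output:
$ awk 'FNR==1{for(k in current) total[k]; delete current} !($0 in total){print} {current[$0]}' File1.txt File2.txt File3.txt
1
2
3
3
4
5
6
7
8
8
9
10
10
11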
Here's a trick building on https://stackoverflow.com/a/15385080/3358272, using diff and its group-format options. There is likely a presumption of sorted input here; untested otherwise.
out=$(mktemp -p .)
tmpout=$(mktemp -p .)
trap 'rm -f "${out}" "${tmpout}"' EXIT
for F in "$@" ; do
{ cat "${out}" ;
diff --changed-group-format='%>' --unchanged-group-format='' "${out}" "${F}" ;
} > "${tmpout}"
mv "${tmpout}" "${out}"
done
cat "${out}"
Output:
$ ./question.sh F*
1
2
3
3
4
5
6
7
8
8
9
10
10
11
$ diff <(./question.sh F*) Output.txt
(Per markp-fuso's comment, if File3.txt had two 9s, this would preserve both.)
I have a file that contains arbitrary lines and the keyword END:
line 1
line 2
...
line 23
END
line 25
....
line 40
END
and I want to split it into multiple files based on the keyword END, keeping END inside each file, like so:
file 1
line 1
line 2
...
line 23
END
file 2
line 25
.....
line 40
END
I tried:
csplit -k file_name '/END/' '{*}'
but I do not get the correct output.
Add an offset of 1 to the regex so the matching END is included in the current file. I also added ^ and $ to anchor the regex.
csplit -k file -f file --elide-empty-files '/^END$/1' '{*}'
-f file Sets the output filename prefix
--elide-empty-files This is a GNU extension that suppresses empty output files (here, an otherwise-empty file02)
Output:
$ head file0*
==> file00 <==
line 1
line 2
...
line 23
END
==> file01 <==
line 25
....
line 40
END
With awk, write every line to the current chunk file and start a new chunk after each END:
$ awk '{f= FILENAME"."(c+1); print > f} /^END$/{close(f); c++}' file
$ head file.*
==> file.1 <==
line 1
line 2
...
line 23
END
==> file.2 <==
line 25
....
line 40
END
Edge case behavior:
if the input file is empty (no lines), no output is generated.
if the file contains no END marker (or only one, on its last line), a single partition that is an exact copy of the file is generated.
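If you prefer zero-padded output names in csplit's style (file00, file01, ...) instead of FILENAME-based ones, a hedged tweak (the printf format is an assumption, not part of the answer above):
$ awk '{f = sprintf("file%02d", c); print > f} /^END$/{close(f); c++}' file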
Could someone please help/advise how I could remove the first 4 lines and the last 2 lines of code in each of my 3 JavaScript files using a shell script?
I tried using this guide: UNIX - delete specific lines, but it only works for the first 4 lines. The 3 JavaScript files each have a different number of lines.
set -vx
lines2del="(1,2,3,4)"
sedCmds=${lines2del//,/d;}
sedCmds=${sedCmds/(/}
sedCmds=${sedCmds/)/}
sedCmds=${sedCmds}d
sed -i "$sedCmds" file
Any input is highly appreciated. Thanks
This might work for you (GNU sed):
sed -i '1,4d;N;$d;P;D' file
This deletes lines 1 to 4, then prints all remaining lines except the last two, which it also deletes: N keeps a two-line window in the pattern space, $d discards the final window (the last two lines), and P;D print and shift the window one line at a time.
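A quick check with a 10-line stand-in file:
$ seq 10 > file; sed '1,4d;N;$d;P;D' file
5
6
7
8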
Add the following to your lines2del:
$(( $(wc -l < file) - 2 ))   # third-last line
$(( $(wc -l < file) - 1 ))   # second-last line
$(wc -l < file)              # last line
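Putting that together, a hedged sketch that trims the first 4 and last 2 lines of each file in place with GNU sed's -i (the .js names are placeholders for your three files):
for f in one.js two.js three.js; do
  n=$(wc -l < "$f")                  # total line count of this file
  sed -i "1,4d; $((n-1)),\$d" "$f"   # drop lines 1-4 and the last two
done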
Or, demonstrating on a 10-line input (note head -n -2 relies on GNU head):
$ seq 10 | tail -n +5 | head -n -2
5
6
7
8
The same with awk: p3 trails the current line by two (so the last two lines never print), and NR>6 (4 skipped lines + the 2-line lag) delays the start until line 5:
$ seq 10 | awk '{p3=p2; p2=p1; p1=$0} NR>6{print p3}'
5
6
7
8
Or with a rotating buffer of size 6 (the 4 lines to skip plus the 2-line lag):
$ seq 10 | awk '{p[NR%6]=$0} NR>6{print p[(NR-2)%6]}'
5
6
7
8
Generalized: b lines trimmed from the beginning, a from the end:
$ seq 10 | awk -v b=4 -v a=2 'BEGIN{t=b+a} {p[NR%t]=$0} NR>t{print p[(NR-a)%t]}'
5
6
7
8
$ seq 10 | awk -v b=3 -v a=5 'BEGIN{t=b+a} {p[NR%t]=$0} NR>t{print p[(NR-a)%t]}'
4
5
I need to extract into a third file (output.txt) all rows of one file (Data.txt) where one of the columns contains a hit from a list (List.txt).
Data.txt (tab delimited)
some_data more_data other_data here yet_more_data etc
A B 2 Gee;Whiz;Hello 13 12
A B 2 Gee;Whizz;Hi 56 32
E 4 Btm;Lol 16 2
T 3 Whizz 13 3
List.txt
Gee
Whiz
Lol
Ideally, output.txt looks like:
some_data more_data other_data here yet_more_data etc
A B 2 Gee;Whiz;Hello 13 12
A B 2 Gee;Whizz;Hi 56 32
E 4 Btm;Lol 16 2
So I tried a shell script
for ids in List.txt
do
grep $ids Data.txt >> output.txt
done
except I typed everything from List.txt out in said script (cut and paste, actually).
Unfortunately it gave me an output.txt that included the last line, I assume because 'Whizz' contains 'Whiz'.
I also tried cat Data.txt | egrep -F "List.txt", and that resulted in grep: conflicting matchers specified -- I suppose that was too naive of me. The actual files: List.txt contains a sorted list of 985 words, and Data.txt has 115576 rows with 17 columns.
Some help/guidance would be much appreciated, thanks.
Try something like this, reading List.txt one word per line:
while IFS= read -r ids
do
grep "[TAB;]$ids[TAB;]" Data.txt >> output.txt
done < List.txt
But it has two drawbacks:
"Data.txt" is scanned multiple times
You can get one line multiple times.
If that is a problem, try a two-step version:
sed -e "s/.*/[TAB;]&[TAB;]/" List.txt > List_mod.txt
grep -f List_mod.txt Data.txt > output.txt
Note:
The TAB character can be inserted on the command line with Ctrl-V followed by the Tab key, and with the Tab key directly in an editor. Check that your editor does not convert tabs to a series of spaces.
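In bash, a hedged alternative avoids typing a literal Tab by using ANSI-C quoting; the same multiple-scan drawback as above applies:
tab=$'\t'   # a real tab character via bash ANSI-C quoting
while IFS= read -r id; do
  grep "[${tab};]${id}[${tab};]" Data.txt
done < List.txt > output.txt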
The UNIX tool for general text processing is "awk":
awk '
NR==FNR { list[$0]; next }
FNR==1  { print; next }    # keep the header line of Data.txt
{
  for (word in list) {
    if ($0 ~ "[\t;]" word "[\t;]") {
      print
      next
    }
  }
}
' List.txt Data.txt > output.txt
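If a list word can also occur at the very start or end of a line, a hedged variant widens the pattern (an addition, not part of the answer above; anchors inside groups work in gawk and mawk though they are not strictly POSIX):
awk '
NR==FNR { list[$0]; next }
FNR==1  { print; next }
{
  for (word in list)
    if ($0 ~ ("(^|[\t;])" word "([\t;]|$)")) { print; next }
}
' List.txt Data.txt > output.txt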
I know how to get a range of lines using awk and sed.
I also know how to print every nth line using awk and sed.
However, I don't know how to combine the two.
For example, I have a file with 1780000 lines.
For every 17800th line, I would like to print that line plus the two after it.
So if the file starts at 1 and ends at 1780000, this will print:
1
2
3
17800
17801
17802
35600
35601
35602
# ... and so on.
Does anyone know how to get a range of lines at every nth interval using awk, sed, or another unix command?
Using GNU sed:
sed -n '0~17800{N;N;p}' input
Meaning:
For every 17800th line: 0~17800
Append the next two lines: {N;N;
And print all three out: p}
We can also add the first three lines:
sed -n -e '1,3p' -e '0~17800{N;N;p}' input
Using Awk, this would be simpler:
awk 'NR%17800<3 || NR==3 {print}' input
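A hedged parameterization of the same idea, passing the interval n and window size k in as awk variables:
awk -v n=17800 -v k=3 'NR<=k || NR%n<k' input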
Building the idea up in steps, start with a small sample file:
$ cat file
1
2
3
4
5
6
7
8
9
10
First, select every 3rd line:
$ awk '!(NR%3)' file
3
6
9
Now, at every intvl-th line, print a separator followed by delta lines beginning with that line:
$ awk -v intvl=3 -v delta=2 '!(NR%intvl){print "-----"; c=delta} c&&c--' file
-----
3
4
-----
6
7
-----
9
10
$ awk -v intvl=4 -v delta=2 '!(NR%intvl){print "-----"; c=delta} c&&c--' file
-----
4
5
-----
8
9
$ awk -v intvl=4 -v delta=3 '!(NR%intvl){print "-----"; c=delta} c&&c--' file
-----
4
5
6
-----
8
9
10
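Applied to the question (interval 17800, three lines printed at each interval, plus the leading 1 2 3, no separators), a hedged one-liner:
awk -v intvl=17800 -v delta=3 '!(NR%intvl){c=delta} NR<=3 || (c&&c--)' file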
seq -f %.0f 1780000 | awk 'NR < 4 || NR % 17800 < 3' | head
output:
1
2
3
17800
17801
17802
35600
35601
35602
53400
Explanation
The NR < 4 handles the first 3 lines, because the stated requirement (for every 17800th line, print that line plus the two after it) doesn't by itself produce the 1, 2, 3 at the top of the output you gave.
Here I use head only to keep the demonstration short; remove it in your use case.
For GNU seq, you don't need -f %.0f.