I can delete duplicate lines in files using the sort -u and uniq commands. Is the same thing possible using sed or awk?
There's a "famous" awk idiom:
awk '!seen[$0]++' file
It has to keep the unique lines in memory, but it preserves the file order.
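For example, on a small hypothetical input (the file name demo.txt is made up for illustration), the idiom keeps the first occurrence of each line in the original order:
printf 'b\na\nb\nc\na\n' > demo.txt
awk '!seen[$0]++' demo.txt
This prints b, a, c: the duplicates are gone and the original order is kept.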
sort and uniq are enough if you only need to remove the duplicates:
sort filename | uniq > filename2
If the file consists of numbers, use sort -n.
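Note that without -n the sort is lexical, so for example 10 sorts before 9. A quick check on made-up input:
printf '10\n9\n9\n' | sort | uniq
printf '10\n9\n9\n' | sort -n | uniq
The first prints 10 then 9; the second prints 9 then 10.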
After sorting we can use this sed command
sed -E '$!N; /^(.*)\n\1$/!P; D' filename
If the file is unsorted, you can use it in combination with sort:
sort filename | sed -E '$!N; /^(.*)\n\1$/!P; D'
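A quick check of the combined command on made-up input:
printf 'b\na\nb\na\n' | sort | sed -E '$!N; /^(.*)\n\1$/!P; D'
This prints a and b, one line each.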
I'm trying to get a uniq -c output like the following:
100 a
99 b
45 c
etc...
I want to write it to a file in CSV format, using only unix utils, with the column order reversed:
a,100
b,99
c,45
etc...
I figure I can pipe the output into tr " " "," to get the csv format, but how do I get the order of the columns to be reversed, only using unix utils?
cat test.txt | awk '{print $2,$1}'
armnotstrong's is probably the simplest, but you can also use sed:
sed -r 's/(.+) (.+)/\2,\1/'
For these one-liners in unix, I like to use perl.
uniq -c yourfile | perl -lne 's/^(\s*)(\d*)(\s*)(.*)/$4,$2/g; print $_'
The difference is how the "," gets included in the output. If done as per #armnotstrong, the result will just be a bunch of lines separated by spaces.
In bash:
=> cat text
100 a
99 b
98 c
=> cat text | awk '{print $2","$1}'
a,100
b,99
c,98
The awk solutions don't work correctly if your text data contains spaces.
Based on #cbreezier's answer, I came up with this solution:
sed -r 's/ +([0-9]+) (.+)/\2,\1/'
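For example, with a hypothetical uniq -c line whose text contains a space:
echo '  12 hello world' | sed -r 's/ +([0-9]+) (.+)/\2,\1/'
This prints hello world,12, whereas awk '{print $2","$1}' would print hello,12 and drop the rest of the text.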
How can I grep only a certain part of a large file, for example lines 1000 to 2000, everything up to line 1000, or everything from line 1000 on?
I don't want to split the file in smaller files.
You could use sed to pre-process. EDIT: added a q per Kent's suggestion:
sed -n '1000,2000{p;2000q}' file.txt | grep 'abc'
For line 1000 through the end of the file:
sed -n '1000,$p' file.txt | grep 'abc'
As a minor improvement over the sed solution by #ravoori, refactor the grep into the sed:
sed -n '1000,2000{/pattern/p};2000q' file.txt
If you have the pattern in a variable, use double quotes around it:
sed -n '1000,2000{/'"$pattern"'/p};2000q' file.txt
Or equivalently in Awk:
awk 'NR>2000{exit} NR>=1000 && /pattern/' file.txt
or with a variable:
awk -v pat="$pattern" 'NR>2000{exit} NR>=1000 && $0~pat' file.txt
I'd suggest
head -n 2000 FILE.TXT | tail -n +1000 | grep XXX
as the neatest solution, because head does not have to read the whole huge file, just the first couple of thousand lines. It essentially achieves what the q does in the sed solution.
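As a rough sanity check of the line range on generated test data (big.txt is just a throwaway file):
seq 1000000 > big.txt
head -n 2000 big.txt | tail -n +1000 | wc -l
This prints 1001, i.e. lines 1000 through 2000 inclusive.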
I have a log file a.log and I need to extract a piece of information from it.
To locate the start and end line numbers of the pattern, I am using the following:
start=$(sed -n '/1112/=' file9 | head -1)
end=$(sed -n '/true/=' file9 | head -1)
I need to use the variables (start, end) in the following command:
sed -n '16q;12,15p' orig-data-file > new-file
so that the above command appears something like:
sed -n '($end+1)q;$start,$end'p orig-data-file > new-file
I am unable to replace the line numbers with the variables. Please suggest the correct syntax.
Thanks,
Rosy
When I figured out how to do it, what I was looking for was a way to get the line number of the first line containing the requested info, and then display the file from that line to EOF.
So, this was my way.
with
PATTERN="pattern"
INPUT_FILE="file1"
OUTPUT_FILE="file2"
the line number of the first match of $PATTERN in $INPUT_FILE can be retrieved with:
LINE=`grep -n ${PATTERN} ${INPUT_FILE} | awk -F':' '{ print $1 }' | head -n 1`
and the output file will be the text from that $LINE to EOF, this way:
sed -n ${LINE},\$p ${INPUT_FILE} > ${OUTPUT_FILE}
The point here is how variables can be used with the sed -n command:
first, without using variables:
sed -n 'N,$p' <file name>
using variables
LINE=<N>; sed -n ${LINE},\$p <file name>
Remove the single quotes, like this. Single quotes turn off shell parsing of the string, and you need shell parsing to do the variable substitutions. Note that sed cannot do arithmetic, so let the shell compute $end+1 as well:
sed -n "$((end+1))q;${start},${end}p" orig-data-file > new-file
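Putting it together with the line-number extraction from the question (assuming the markers and the data live in the same orig-data-file):
start=$(sed -n '/1112/=' orig-data-file | head -1)
end=$(sed -n '/true/=' orig-data-file | head -1)
sed -n "$((end+1))q;${start},${end}p" orig-data-file > new-file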
Is there a one-liner for sort and uniq given a filename in unix?
I googled and found the following, but it's not sorting, and I'm also not sure what the command below is doing. Are there any better ways using awk or any other unix tool?
cut -d, -f1 file | uniq | xargs -I{} grep -m 1 "{}" file
On a side note, is there one that can be used on both Windows and unix? This is not important, but just checking.
C:\Users\Chola>sort -t "#" -k2,2 email-list.txt
Input text file:
436485
422636
429228
427041
433414
425810
422636
431526
428808
If your file consists only of numbers, one per line:
sort -n FILENAME | uniq
or
sort -u -n FILENAME
(In all of the following, you can add -u to the sort command instead of piping through uniq.)
If you want to extract just one column of a file, and then sort that column numerically removing duplicates, you could do this:
cut -f7 FILENAME | sort -n | uniq
Cut assumes that there is a single tab between columns. If your file is CSV, you might be able to do this:
cut -f7 -d, FILENAME | sort -n | uniq
but that won't work if there is a , inside a quoted text field in the file (CSV protects such commas with double quotes).
If you want to sort by the column but remove only completely duplicate lines, then you can do this:
sort -k7,7n FILENAME | uniq
sort assumes that columns are separated by whitespace. Again, if you want to separate with ,, you can use:
sort -k7,7n -t, FILENAME | uniq
In order to use the uniq command, you have to sort your file first.
But in the file I have, the order of the information is important, so how can I keep the original order of the file but still get rid of the duplicate content?
Another awk version:
awk '!_[$0]++' infile
This awk keeps the first occurrence. Same algorithm as other answers use:
awk '!($0 in lines) { print $0; lines[$0]; }'
Here's one that only needs to store duplicated lines (as opposed to all lines) using awk:
sort file | uniq -d | awk '
FNR == NR { dups[$0] }
FNR != NR && (!($0 in dups) || !lines[$0]++)
' - file
There's also the "line-number, double-sort" method.
nl -n ln file | sort -u -k 2 | sort -k 1n | cut -f 2-
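A quick check on made-up stdin input:
printf 'b\na\nb\nc\na\n' | nl -n ln | sort -u -k 2 | sort -k 1n | cut -f 2-
With GNU sort this prints b, a, c: duplicates removed, original order preserved.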
You can run uniq -d on the sorted version of the file to find the duplicate lines, then run some script that says:
if this_line is in duplicate_lines {
    if not i_have_seen[this_line] {
        output this_line
        i_have_seen[this_line] = true
    }
} else {
    output this_line
}
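A minimal awk sketch of that logic (dups.txt and file are placeholder names), which ends up very close to the two-pass awk shown above:
sort file | uniq -d > dups.txt    # keep only the lines that occur more than once
awk '
    NR == FNR { dups[$0]; next }     # first pass: remember the duplicated lines
    !($0 in dups) || !seen[$0]++     # second pass: print non-dups always, dups only once
' dups.txt file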
Using only uniq and grep:
Create d.sh:
#!/bin/sh
sort "$1" | uniq > "$1_uniq"
while IFS= read -r line; do
    grep -m1 -xF -- "$line" "$1_uniq" >> "$1_out"
    grep -vxF -- "$line" "$1_uniq" > "$1_uniq2"
    mv "$1_uniq2" "$1_uniq"
done < "$1"
rm "$1_uniq"
Example:
./d.sh infile
You could use some horrible O(n^2) thing, like this (Pseudo-code):
file2 = EMPTY_FILE
for each line in file1:
    if not line in file2:
        file2.append(line)
This is potentially rather slow, especially if implemented at the Bash level. But if your files are reasonably short, it will probably work just fine, and would be quick to implement (not line in file2 is then just a negated grep, and so on; see the sketch below).
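A rough shell sketch of that idea, with the membership test done as an exact whole-line grep (file1 and file2 as in the pseudo-code):
: > file2
while IFS= read -r line; do
    # append the line only if it is not already present in file2
    grep -qxF -- "$line" file2 || printf '%s\n' "$line" >> file2
done < file1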
Otherwise you could of course code up a dedicated program, using some more advanced data structure in memory to speed it up.
sort file1 | uniq | while IFS= read -r line; do
    grep -n -m1 -xF -- "$line" file1 >> out
done
sort -n out
First do the sort; then for each unique value, grep for the first match (-m1) and preserve the line numbers; finally sort the output numerically (-n) by line number. You could then remove the line numbers with sed or awk, as shown below.
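For that last step, stripping the N: prefix that grep -n adds could look like this:
sort -n out | sed 's/^[0-9]*://'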