Snakemake: MissingInputException in snakemake pipeline - wildcard
I'm trying out a Snakemake pipeline and I'm stuck on an error I really don't understand.
I've got a directory (raw_data) containing the input files:
ll /home/nico/labo/etudes/Optimal/data/raw_data
total 41M
drwxrwxr-x 2 nico nico 4,0K mars 6 16:09 ./
drwxrwxr-x 11 nico nico 4,0K mars 6 16:14 ../
-rw-rw-r-- 1 nico nico 15M févr. 27 12:21 sampleA_R1.fastq.gz
-rw-rw-r-- 1 nico nico 19M févr. 27 12:22 sampleA_R2.fastq.gz
-rw-rw-r-- 1 nico nico 3,4M févr. 27 12:21 sampleB_R1.fastq.gz
-rw-rw-r-- 1 nico nico 4,3M févr. 27 12:22 sampleB_R2.fastq.gz
This directory contains 4 files for 2 samples.
I created a JSON config file for the Snakemake pipeline, named config_snakemake_Optimal_mapping_BaL.json:
{
    "fastqExtension": "fastq.gz",
    "fastqDir": "/home/nico/labo/etudes/Optimal/data/raw_data",
    "outputDir": "/home/nico/labo/etudes/Optimal/data/mapping_BaL",
    "logDir": "logs",
    "reference": {
        "fasta": "/home/nico/labo/references/genomes/HIV1/BaL_AY713409/BaL_AY713409.fasta",
        "index": "/home/nico/labo/references/genomes/HIV1/BaL_AY713409/BaL_AY713409.fasta.bwt"
    }
}
And finally the Snakemake file, snakefile_bwa_samtools.py:
import subprocess
from os.path import join

### Globals ---------------------------------------------------------------------

# A Snakemake regular expression matching fastq files.
SAMPLES, = glob_wildcards(join(config["fastqDir"], "{sample}_R1." + config["fastqExtension"]))
print(SAMPLES)

### Rules -----------------------------------------------------------------------

# Pipeline output files
rule all:
    input: expand(join(config["outputDir"], "{sample}.bam.bai"), sample=SAMPLES)

# Reads alignment on reference genome and BAM file creation
rule bwa_mem_to_bam:
    input:
        index = config["reference"]["index"],
        fasta = config["reference"]["fasta"],
        fq1_ID = "{sample}_R1." + config["fastqExtension"],
        fq2_ID = "{sample}_R2." + config["fastqExtension"],
        fq1 = join(config["fastqDir"], "{sample}_R1." + config["fastqExtension"]),
        fq2 = join(config["fastqDir"], "{sample}_R2." + config["fastqExtension"])
    output:
        temp(join(config["outputDir"], "{sample}.bamUnsorted"))
    version:
        subprocess.getoutput(
            "man bwa | tail -n 1 | cut -d ' ' -f 1 | cut -d '-' -f 2"
        )
    log:
        join(config["outputDir"], config["logDir"], "{sample}.bwa_mem.log")
    message:
        "Alignment of {input.fq1_ID} and {input.fq2_ID} on {input.fasta} with BWA version {version}."
    shell:
        "bwa mem {input.fasta} {input.fq1} {input.fq2} 2> {log} | samtools view -Sbh - > {output}"

# Sorting the BAM files on genomic positions
rule bam_sort:
    input:
        join(config["outputDir"], "{sample}.bamUnsorted")
    output:
        join(config["outputDir"], "{sample}.bam")
    log:
        join(config["outputDir"], config["logDir"], "{sample}.samtools_sort.log")
    version:
        subprocess.getoutput(
            "samtools --version | "
            "head -1 | "
            "cut -d' ' -f2"
        )
    message:
        "Genomic sorting of {input} with samtools version {version}."
    shell:
        "samtools sort -f {input} {output} 2> {log}"

# Indexing the BAM files
rule bam_index:
    input:
        join(config["outputDir"], "{sample}.bam")
    output:
        join(config["outputDir"], "{sample}.bam.bai")
    message:
        "Indexing {input}."
    shell:
        "samtools index {input}"
I run this pipeline:
snakemake --cores 3 --snakefile /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py --configfile /home/nico/labo/etudes/Optimal/data/snakemake_config_files/config_snakemake_Optimal_mapping_BaL.json
and I get the following error output:
['sampleB', 'sampleA']
MissingInputException in line 18 of /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py:
Missing input files for rule bwa_mem_to_bam:
sampleB_R1.fastq.gz
sampleB_R2.fastq.gz
or, depending on the moment:
['sampleB', 'sampleA']
PeriodicWildcardError in line 40 of /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py:
The value _unsorted in wildcard sample is periodically repeated (sampleB_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted). This would lead to an infinite recursion. To avoid this, e.g. restrict the wildcards in this rule to certain values.
The samples are correctly detected, as they appear in the list (first line of both outputs), and I'm surely messing something up with the wildcards in the rule bwa_mem_to_bam, but I really don't get why.
Any clue?
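(Side note for anyone debugging this kind of wildcard problem: a dry run with -n/--dryrun makes Snakemake resolve the DAG and print the planned jobs, or fail with the same exception, without running bwa at all; same invocation as above, plus the flag:

snakemake -n --cores 3 --snakefile /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py --configfile /home/nico/labo/etudes/Optimal/data/snakemake_config_files/config_snakemake_Optimal_mapping_BaL.json
)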
I had a quick look at your code.
Why didn't the first version work? Look at how you declare fq1_ID and fq1 (and likewise for the second read): you didn't assign the same string. For fq1 you prepend a directory to the file name, which is not there for fq1_ID. Because these variables are in the input section, Snakemake looks for a file with that name in the working directory (the current directory if the -d option is not set).
So removing the two fq1/2_ID entries gets rid of all the file-lookup problems.
Hugo
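(To illustrate Hugo's point with a minimal sketch, using sampleA as an example — both entries sit under input:, so Snakemake requires both paths to exist, but they name different files:

from os.path import join

fq1_ID = "sampleA_R1.fastq.gz"    # relative: resolved against the workdir -> missing
fq1 = join("/home/nico/labo/etudes/Optimal/data/raw_data",
           "sampleA_R1.fastq.gz") # absolute: the file that actually exists

Hence the MissingInputException for the bare file names.)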
Finally, I got the pipeline working by removing the fq1_ID and fq2_ID variables from the rule bwa_mem_to_bam and replacing input.fq1_ID and input.fq2_ID with input.fq1 and input.fq2 in the rule's message.
The message is less elegant, but the pipeline runs correctly. I still don't understand exactly where the mistake was; if someone can explain, I'm still listening!
The correct code for rule bwa_mem_to_bam:
rule bwa_mem_to_bam:
    input:
        index = config["reference"]["index"],
        fasta = config["reference"]["fasta"],
        fq1 = join(config["fastqDir"], "{sample}_R1." + config["fastqExtension"]),
        fq2 = join(config["fastqDir"], "{sample}_R2." + config["fastqExtension"])
    output:
        temp(join(config["outputDir"], "{sample}.bamUnsorted"))
    version:
        subprocess.getoutput(
            "man bwa | tail -n 1 | cut -d ' ' -f 1 | cut -d '-' -f 2"
        )
    log:
        join(config["outputDir"], config["logDir"], "{sample}.bwa_mem.log")
    message:
        "Alignment of {input.fq1} and {input.fq2} on {input.fasta} with BWA version {version}."
    shell:
        "bwa mem {input.fasta} {input.fq1} {input.fq2} 2> {log} | samtools view -Sbh - > {output}"
Thanks Hugo for checking my code and for your explanation; it makes sense!
I finally got a flash of insight waking up this morning (the best ideas come then) and realized that I had neglected the params part of the rule: fq1_ID and fq2_ID are not inputs but params.
I changed the code to this:
rule bwa_mem_to_bam:
    input:
        index = config["reference"]["index"],
        fasta = config["reference"]["fasta"],
        fq1 = join(config["fastqDir"], "{sample}_R1.fastq.gz"),
        fq2 = join(config["fastqDir"], "{sample}_R2.fastq.gz")
    output:
        temp(join(config["outputDir"], "{sample}_unsorted.bam"))
    params:
        fq1_ID = "{sample}_R1.fastq.gz",
        fq2_ID = "{sample}_R2.fastq.gz",
        # os.path.basename requires "import os" at the top of the Snakefile
        ref_ID = os.path.basename(config["reference"]["fasta"])
    version:
        subprocess.getoutput(
            "man bwa | tail -n 1 | cut -d ' ' -f 1 | cut -d '-' -f 2"
        )
    log:
        join(config["outputDir"], config["logDir"], "{sample}.bwa_mem.log")
    message:
        "Alignment of {params.fq1_ID} and {params.fq2_ID} on {params.ref_ID} with BWA version {version}."
    shell:
        "bwa mem {input.fasta} {input.fq1} {input.fq2} 2> {log} | samtools view -Sbh - > {output}"
And it works just fine!
snakemake --cores 3 --snakefile /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py --configfile /home/nico/labo/etudes/Optimal/data/snakemake_config_files/config_snakemake_Optimal_mapping_BaL.json
Provided cores: 3
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
2 bam_index
2 bam_sort
2 bwa_mem_to_bam
7
Alignment of sampleB_R1.fastq.gz and sampleB_R2.fastq.gz on BaL_AY713409.fasta with BWA version 0.7.12.
Alignment of sampleA_R1.fastq.gz and sampleA_R2.fastq.gz on BaL_AY713409.fasta with BWA version 0.7.12.
1 of 7 steps (14%) done
Genomic sorting of sampleB_unsorted.bam with samtools version 1.2.
Removing temporary output file /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleB_unsorted.bam.
2 of 7 steps (29%) done
Indexing sampleB.bam.
3 of 7 steps (43%) done
4 of 7 steps (57%) done
Genomic sorting of sampleA_unsorted.bam with samtools version 1.2.
Removing temporary output file /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleA_unsorted.bam.
5 of 7 steps (71%) done
Indexing sampleA.bam.
6 of 7 steps (86%) done
localrule all:
input: /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleB.bam.bai, /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleA.bam.bai
7 of 7 steps (100%) done
And I finally get my correct messages:
Alignment of sampleB_R1.fastq.gz and sampleB_R2.fastq.gz on BaL_AY713409.fasta with BWA version 0.7.12.
Alignment of sampleA_R1.fastq.gz and sampleA_R2.fastq.gz on BaL_AY713409.fasta with BWA version 0.7.12.
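(A side note on the PeriodicWildcardError seen earlier: as the message itself suggests, the wildcard can also be restricted so that {sample} never matches generated names like sampleB_unsorted. In recent Snakemake versions this is a small global directive; a sketch, assuming sample names contain no underscore:

wildcard_constraints:
    sample = "[^_]+"
)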
Related
yosys fails at ABC pass (on counter.v demo)
I hope someone can help me with this... This is my first encounter with yosys. For a start, I'm trying to run the very same demo as Clifford explained in his presentation. I downloaded the demo from the following location: https://github.com/cliffordwolf/yosys/tree/master/manual/PRESENTATION_Intro
The yosys run breaks at the ABC pass with the following message:
12. Executing ABC pass (technology mapping using ABC).
12.1. Extracting gate netlist of module `\counter' to `<abc-temp-dir>/input.blif'..
Extracted 6 gates and 12 wires to a netlist network with 4 inputs and 2 outputs.
12.1.1. Executing ABC.
Running ABC command: <yosys-exe-dir>/yosys-abc -s -f <abc-temp-dir>/abc.script 2>&1
ABC: ABC command line: "source <abc-temp-dir>/abc.script".
ABC: + read_blif <abc-temp-dir>/input.blif
ABC: + read_lib -w /home/boris/Documents/Self Learning/yosys_synthesys/mycells.lib
ABC: usage: read_lib [-SG float] [-M num] [-dnvwh] <file>
ABC:     reads Liberty library from file
ABC:     -S float : the slew parameter used to generate the library [default = 0.00]
ABC:     -G float : the gain parameter used to generate the library [default = 0.00]
ABC:     -M num : skip gate classes whose size is less than this [default = 0]
ABC:     -d : toggle dumping the parsed library into file "*_temp.lib" [default = no]
ABC:     -n : toggle replacing gate/pin names by short strings [default = no]
ABC:     -v : toggle writing verbose information [default = yes]
ABC:     -v : toggle writing information about skipped gates [default = yes]
ABC:     -h : prints the command summary
ABC:     <file> : the name of a file to read
ABC: ** cmd error: aborting 'source <abc-temp-dir>/abc.script'
ERROR: Can't open ABC output file `/tmp/yosys-abc-KDGya6/output.blif'.
[boris@E7440 yosys_synthesys]$
I have had a look at the file location mentioned in the error statement above; there is no output.blif in there:
[boris@E7440 yosys_synthesys]$ ll /tmp/yosys-abc-KDGya6/
total 12K
-rw-rw-r--. 1 boris boris 542 Jul 5 11:21 abc.script
-rw-rw-r--. 1 boris boris 526 Jul 5 11:21 input.blif
-rw-rw-r--. 1 boris boris 852 Jul 5 11:21 stdcells.genlib
By the way, here is some system/tools info that might be relevant for debugging:
Linux E7440.DELL 4.4.13-200.fc22.x86_64 #1 SMP Wed Jun 8 15:59:40 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Yosys 0.6+141 (git sha1 080f95f, gcc 5.3.1 -fPIC -Os)
UC Berkeley, ABC 1.01 (compiled Mar 8 2015 01:00:49)
The issue has been resolved. Solution: I changed the run dir from /home/boris/Documents/Self Learning/yosys_synthesys/mycells.lib to /home/boris/Documents/SelfLearning/yosys_synthesys/mycells.lib. Lesson learned: the ABC tool does not accept space characters in the path/file name.
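(A quick pre-flight check for this class of ABC failure is to test the library path for whitespace before launching yosys; a sketch in plain shell, using the path from the question:

libpath='/home/boris/Documents/Self Learning/yosys_synthesys/mycells.lib'
case "$libpath" in
  *' '*) echo "warning: '$libpath' contains spaces; ABC will mis-parse it" ;;
esac
)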
Format output of concatenating 2 variables in unix
I am coding a simple shell script that checks the space of the target path and the space utilization per directory on that target path (for example, I am checking the space of /path1/home, and also how all the folders in /path1/home consume the total space). My question is about the output it produces; it is not that pleasing to the eye (uneven spacing). See sample output lines below.
SIZE USER_FOLDER DATE_LAST_MODIFIED
83G FOLDER 1 Apr 15 03:45
34G FOLDER 10 Mar 9 05:02
26G FOLDER 11 Mar 29 13:01
8.2G FOLDER 100 Apr 1 09:42
1.8G FOLDER 101 Apr 11 13:50
1.3G FOLDER 110 Feb 16 09:30
I just want the output format to be in line with the header so it will look neat, because I will use it as a report. Here is the code I am using for this part:
ls -1 | grep -v "lost+found" | grep -v "email_body.tmp" > $v_path/Users.tmp
for user in `cat $v_path/Users.tmp | grep -v "Users.tmp"`
do
    # should be run as a more privileged user so that other folders can be read
    # (2>/dev/null discards errors like "du: cannot read directory `./marcnad/.gnupg': Permission denied")
    folder_size=`du -sh $user 2>/dev/null`
    folder_date=`ls -ltr | tr -s " " | cut -f6,7,8,9, -d" " | grep -w $user | cut -f1,2,3, -d" "`
    folder_size="$folder_size $folder_date"
    echo $folder_size >> $v_path/Users_Usage.tmp
done
echo "Summary of $v_path Disk Space Utilization per folder." >> email_body.tmp
echo "" >> email_body.tmp
echo "SIZE USER_FOLDER DATE_LAST_MODIFIED" >> email_body.tmp
for i in T G M K
do
    cat $v_path/Users_Usage.tmp | grep [0-9]$i | sort -nr -k 1 >> $v_path/email_body.tmp
done
Thanks!
EDIT: Formatting
When you print the data, use printf instead of echo:
cat $v_path/Users_Usage.tmp | while read a b c d e f
do
    printf '%-5s%-7s%-4s%-4s%-3s%-6s\n' $a $b $c $d $e $f
done
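For instance, giving the header the same field widths as the data keeps the columns aligned; the widths below are illustrative, so pick ones that fit your longest folder name:

printf '%-6s %-12s %-18s\n' 'SIZE' 'USER_FOLDER' 'DATE_LAST_MODIFIED'
printf '%-6s %-12s %-18s\n' '83G' 'FOLDER 1' 'Apr 15 03:45'
printf '%-6s %-12s %-18s\n' '8.2G' 'FOLDER 100' 'Apr 1 09:42'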
Finding common elements from one file in a column of another file and output the entire row of the latter
I need to extract from one file (Data.txt) every row in which one of the columns contains a hit from a list (List.txt), writing those rows to a third file (output.txt).
Data.txt (tab delimited):
some_data more_data other_data here yet_more_data etc
A B 2 Gee;Whiz;Hello 13 12
A B 2 Gee;Whizz;Hi 56 32
E 4 Btm;Lol 16 2
T 3 Whizz 13 3
List.txt:
Gee
Whiz
Lol
Ideally output.txt looks like:
some_data more_data other_data here yet_more_data etc
A B 2 Gee;Whiz;Hello 13 12
A B 2 Gee;Whizz;Hi 56 32
E 4 Btm;Lol 16 2
So I tried a shell script:
for ids in $(cat List.txt)
do
    grep $ids Data.txt >> output.txt
done
except I typed out everything (cut and paste, actually) from List.txt in said script. Unfortunately it gave me an output.txt including the last line; I assume because 'Whizz' contains 'Whiz'.
I also tried cat Data.txt | egrep -F "List.txt" and that resulted in grep: conflicting matchers specified -- I suppose that was too naive of me.
The actual files: List.txt contains a sorted list of 985 words; Data.txt has 115576 rows with 17 columns. Some help/guidance would be much appreciated, thanks.
Try something like this:
for ids in $(cat List.txt)
do
    grep "[TAB;]$ids[TAB;]" Data.txt >> output.txt
done
But it has two drawbacks:
Data.txt is scanned multiple times.
You can get one line multiple times.
If that is a problem, try the two-step version:
cat List.txt | sed -e "s/.*/[TAB;]\0[TAB;]/g" > List_mod.txt
grep -f List_mod.txt Data.txt > output.txt
Note: a TAB character can be inserted with Ctrl-V followed by the Tab key on the command line, and with the Tab key in an editor. You have to check that your editor does not change tabs into series of spaces.
The UNIX tool for general text processing is awk:
awk '
NR==FNR { list[$0]; next }
{
    for (word in list) {
        if ($0 ~ "[\t;]" word "[\t;]") {
            print
            next
        }
    }
}
' List.txt Data.txt > output.txt
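(Another option worth testing, if your grep supports combining these flags as GNU grep does: fixed-string, whole-word matching straight from the list file. Because ';' and tab are not word characters, 'Whiz' then no longer matches 'Whizz'; a list word could still hit other columns, though, so check the output:

grep -wFf List.txt Data.txt > output.txt
)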
How to properly grep filenames only from ls -al
How do I tell grep to only print out lines if the "filename" matches when I'm piping through ls? I want it to ignore everything on each line until after the timestamp. There must be some easy way to do this in a single command.
As you can see, without it, if I searched for the file "rwx", it would return not only the line with rwx.c, but also the first three lines, because of the permissions. I was going to use awk, but I want it to display the whole line if I search for "rwx". Any ideas?
EDIT: Thanks for the hacks below. However, it would be great to have a more bug-free method. For example, if I had a file named "rob rob", I wouldn't be able to use the stated solutions.
drwxrwxr-x 2 rob rob 4096 2012-03-04 18:03 .
drwxrwxr-x 4 rob rob 4096 2012-03-04 12:38 ..
-rwxrwxr-x 1 rob rob 13783 2012-03-04 18:03 a.out
-rw-rw-r-- 1 rob rob 4294 2012-03-04 18:02 function1.c
-rw-rw-r-- 1 rob rob 273 2012-03-04 12:54 function1.c~
-rw-rw-r-- 1 rob rob 16 2012-03-04 18:02 rwx.c
-rw-rw-r-- 1 rob rob 16 2012-03-04 18:02 rob rob
The following will list only file names, one file per row:
$ ls -1
To include dot files:
$ ls -1a
Please note that the argument is the number "1", not the letter "l".
Why don't you use grep and match the file name following the timestamp?
grep -P "[0-9]{2}:[0-9]{2} $FILENAME(\.[a-zA-Z0-9]+)?$"
The [0-9]{2}:[0-9]{2} is for the time, $FILENAME is where you'd put "rob rob" or "rwx", and the trailing (\.[a-zA-Z0-9]+)? allows for an optional extension.
Edit: @JonathanLeffler points out below that when files are older than about 6 months the time column gets replaced by a year; this is what happens on my computer, anyhow. You could use ([0-9]{2}:[0-9]{2}|(19|20)[0-9]{2}) to allow time OR year, but you may be best off using awk (?).
[foo@bar ~/tmp]$ ls -al
total 8
drwxrwxr-x 2 foo foo 4096 Mar 5 09:30 .
drwxr-xr-- 83 foo foo 4096 Mar 5 09:30 ..
-rw-rw-r-- 1 foo foo 0 Mar 5 09:30 foo foo
-rw-rw-r-- 1 foo foo 0 Mar 5 09:29 rwx.c
-rw-rw-r-- 1 foo foo 0 Mar 5 09:29 tmp
[foo@bar ~/tmp]$ export filename='foo foo'
[foo@bar ~/tmp]$ echo $filename
foo foo
[foo@bar ~/tmp]$ ls -al | grep -P "[0-9]{2}:[0-9]{2} $filename(\.[a-zA-Z0-9]+)?$"
-rw-rw-r-- 1 cha66i cha66i 0 Mar 5 09:30 foo foo
(You could additionally extend to matching the whole line if you wanted:
^                               # start of line
[d-]([r-][w-][x-]){3} +         # permissions & space (note: is there a 't' or 's'
                                # sometimes where the 'd' can be??)
[0-9]+                          # whatever that number is
[\w-]+ [\w-]+ +                 # user/group (are spaces allowed in these?)
[0-9]+ +                        # file size (modify for -h switch??)
(19|20)[0-9]{2}-                # yyyy (modify if you want to allow <1900)
(1[012]|0[1-9])-                # mm
(0[1-9]|[12][0-9]|3[012]) +     # dd
([01][0-9]|2[0-3]):[0-6][0-9] + # HH:MM (24hr)
$filename(\.[a-zA-Z0-9]+)?      # filename & optional extension
$                               # end of line
You get the point; tailor to your needs.)
Assuming that you aren't prepared to do:
ls -ld $(ls -a | grep rwx)
then you need to exploit the fact that there are 8 space-separated columns before the file name starts. Using egrep (or grep -E), you could do:
ls -al | egrep "^([^ ]+ +){8}.*rwx"
This looks for 'rwx' after the 8th column. If you want the name to start with rwx, omit the .*. If you want the name to end with rwx, add a $ at the end. Note that I used double quotes so you could interpolate a variable in place of the literal rwx.
This was tested on Mac OS X 10.7.3; the ls -l command consistently gives three columns for the date field:
-r--r--r-- 1 jleffler staff 6510 Mar 17 2003 README,v
-r--r--r-- 1 jleffler staff 26676 Mar 3 21:44 ccs.nmd
Your ls -l seems to be giving just two columns, so you'd need to change the {8} to {7} for your machine — and beware migrating between systems.
Well, if you're working with filenames that don't have spaces in them, you could do something like this: grep 'rwx\S*$'
Aside from the fact that you can use shell pattern matching instead of ls (in e.g. ksh and bash), which is probably what you should do, you can use the fact that the filename occurs at a fixed position. awk (gawk, nawk or whatever you have) is a better choice for this. If you have to use grep, it smells like homework to me; please tag it that way.
Assume the filename starting position is based on this output from ls -l on Linux (here, column 56):
56 -rwxr-xr-x 1 Administrators None 2052 Feb 28 20:29 vote2012.txt
ls -l | awk 'substr($0,56) ~ /your pattern even with spaces goes here/'
e.g.,
ls -l | awk 'substr($0,56) ~ /^val/'
will find files starting with "val".
As a simple hack, just add a space before your filename so you don't match the beginning of the output:
ls -al | grep '\srwx'
Edit: OK, this is not as robust as it should be. Here's awk:
ls -l | awk '$9 ~ /rwx/ { print $0 }'
This works for me, unlike ls -l and the others, as some folks pointed out. I like it because it's really generic and gives me the base file name, stripping the path names before the file:
ls -1 /path_name | awk -F/ '{print $NF}'
You only need one command for this:
ls -al | gawk '{print $9}'
You can use this: ls -p | grep -v /
This is super old, but I needed the answer and had a hard time finding it. I didn't really care about the one-liner part; I just needed it done. This is down and dirty and requires that you count the columns. I'm not looking for an upvote here, just leaving some options for future searchers. The helpful awk trick is here: Using awk to print all columns from the nth to the last.
If YOUR_FILENAME="rob rob" and WHERE_FILENAMES_START=8:
ls -al | while read x; do
    y=$(echo "$x" | awk -v start="$WHERE_FILENAMES_START" '{for(i=start; i<=NF; ++i) printf $i""FS; print ""}')
    [[ "$YOUR_FILENAME " = "$y" ]] && echo "$x"
done
If you save it as a bash script and swap out the vars with $2 and $1, and throw the script in your usr bin, then you'll have your clean, simple one-liner ;)
Output will be:
-rw-rw-r-- 1 rob rob 16 2012-03-04 18:02 rob rob
The question was for a one-liner, so...
ls -al | while read x; do [[ "$YOUR_FILENAME " = "$(echo "$x" | awk -v start="$WHERE_FILENAMES_START" '{for(i=start; i<=NF; ++i) printf $i""FS; print ""}')" ]] && echo "$x"; done
(lol ;P)
On another note: mathematical.coffee, your answer was rad. It didn't solve my version of this problem, so I didn't upvote, but I liked your regex breakdown :D
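(For completeness, one way to sidestep parsing ls altogether is find, which copes with names containing spaces such as "rob rob"; note that -maxdepth is a GNU/BSD extension rather than strict POSIX:

find . -maxdepth 1 -name '*rwx*' -exec ls -ld {} +
)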
Move top 1000 lines from text file to a new file using Unix shell commands
I wish to copy the top 1000 lines in a text file containing more than 50 million entries, to another new file, and also delete these lines from the original file. Is there some way to do the same with a single shell command in Unix?
head -1000 input > output && sed -i '1,+999d' input
For example:
$ cat input
1
2
3
4
5
6
$ head -3 input > output && sed -i '1,+2d' input
$ cat input
4
5
6
$ cat output
1
2
3
head -1000 file.txt > first100lines.txt
tail --lines=+1001 file.txt > restoffile.txt
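(A quick sanity check after splitting: the two part counts should add up to the original line count.

wc -l file.txt first100lines.txt restoffile.txt
)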
Out of curiosity, I found a box with a GNU version of sed (v4.1.5) and tested the (uncached) performance of two approaches suggested so far, using an 11M-line text file:
$ wc -l input
11771722 input
$ time head -1000 input > output; time tail -n +1000 input > input.tmp; time cp input.tmp input; time rm input.tmp
real 0m1.165s
user 0m0.030s
sys 0m1.130s
real 0m1.256s
user 0m0.062s
sys 0m1.162s
real 0m4.433s
user 0m0.033s
sys 0m1.282s
real 0m6.897s
user 0m0.000s
sys 0m0.159s
$ time head -1000 input > output && time sed -i '1,+999d' input
real 0m0.121s
user 0m0.000s
sys 0m0.121s
real 0m26.944s
user 0m0.227s
sys 0m26.624s
This is the Linux box I was working with:
$ uname -a
Linux hostname 2.6.18-128.1.1.el5 #1 SMP Mon Jan 26 13:58:24 EST 2009 x86_64 x86_64 x86_64 GNU/Linux
For this test, at least, it looks like sed is slower than the tail approach (27 sec vs ~14 sec).
This is a one-liner but uses four atomic commands:
head -1000 file.txt > newfile.txt; tail -n +1001 file.txt > file.txt.tmp; cp file.txt.tmp file.txt; rm file.txt.tmp
(Note the +1001: tail -n +N starts at line N, so +1000 would leave line 1000 in both files.)
Perl approach:
perl -ne 'if ($i < 1000) { print STDERR } else { print }; $i++' in > in.new 2> out && mv in.new in
(The first 1000 lines go to stderr, captured in "out"; the remainder goes to stdout and becomes the new "in".)
Using a pipe:
cat en-tl.100.en | head -10
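(For comparison, a single-pass awk variant; file names here are placeholders. The first 1000 lines go to output.txt and the rest to a temp file that then replaces the original:

awk 'NR<=1000 {print > "output.txt"; next} {print > "rest.tmp"}' input.txt && mv rest.tmp input.txt
)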