How to delete a row in a CSV file with PowerShell in R?

Good morning,
I'm new to PowerShell and I'd like to ask if somebody can help me.
I have a big CSV file, around 3.5 GB, and my goal is to load it with fread (a data.table function) into the R environment, but this function throws an error.
> n_a<-fread("C:/x/xy/xyz/name_file.csv",sep=";", fill = TRUE)
The error is:
Warning message:
In fread("C:/x/xy/xyz/name_file.csv") :
Stopped early on line 458945. Expected 29 fields but found 30. Consider fill=TRUE and comment.char=. First discarded non-empty line
I tried different ways to solve the problem (I put fill=TRUE in my call, but it doesn't work), but I couldn't fix it.
After some research I found this kind of solution (still run from within R):
>system("powershell Get-Content C:/a/b/c/file.csv | Select -Index (0..458944 + 1000000) > output.csv")
The point of using PowerShell from R is to delete a specific row and then load the file with fread.
My question is:
How can I delete a specific row of a CSV in PowerShell without specifying the length of the matrix?
Thank you in advance for every type of help.
Francesco

I'd hazard a guess that the invalid row's location is not known. In such a case, it might be sensible to read the original file and create a new file that contains only valid data. What's more, if the source data would benefit from manipulation, that can be done before reading it into R.
A file as large as 3.5 GiB is a bit on the large side to read into memory as such. Sure, it can be done in these days of 64-bit systems, but for simple row processing it's unwieldy. A scalable solution uses .NET methods and a row-by-row approach.
To process a file on a row-by-row basis, use .NET methods for efficient row reading. A StringBuilder is created to store rows that contain valid data; the others are discarded. The StringBuilder is flushed to disk every so often. Even in the age of SSDs, a write operation per row is relatively slow compared with writing in bulk, say, 10,000 rows at a time.
$sb = New-Object Text.StringBuilder
$reader = [IO.File]::OpenText("MyCsvFile.csv")
$i = 0
$MaxRows = 10000
$colonCount = 30
while($null -ne ($line = $reader.ReadLine())) {
    # Split the line on semicolons
    $elements = $line -split ';'
    # If there were $colonCount elements, add those to builder
    if($elements.count -eq $colonCount) {
        # If $line's contents need modifications, do it here
        # before adding it into the builder
        [void]$sb.AppendLine($line)
        ++$i
    }
    # Write builder contents into file every now and then
    if($i -ge $MaxRows) {
        # -NoNewline, as AppendLine already terminates every row
        add-content "MyCleanCsvFile.csv" $sb.ToString() -NoNewline
        [void]$sb.Clear()
        $i = 0
    }
}
# Close the reader once all rows have been processed
$reader.Close()
# Flush the builder after the loop if there's data
if($sb.Length -gt 0) {
    add-content "MyCleanCsvFile.csv" $sb.ToString() -NoNewline
}

This is easily done in PowerShell: read the CSV into a generic list, remove the line, and write it back:
Add-Type -AssemblyName System.Collections
[System.Collections.Generic.List[string]]$csvList = @()
$csvFile = 'C:\test\myfile.csv'
$csvList = [System.IO.File]::ReadLines( $csvFile )
$lineToDelete = 2
[void]$csvList.RemoveAt( $lineToDelete - 1 )
[System.IO.File]::WriteAllLines( $csvFile, $csvList ) | Out-Null

vonPryz's helpful answer offers the best solution, given the size of your input file.
The following works too, but will be slow - in general, due to the overhead of using a pipeline, but also because Get-Content itself is slow due to decorating each line read with additional properties (see green-lighted, but not yet implemented GitHub suggestion #7537):
# Exclude line number 458945 (0-based index 458944)
Get-Content C:/a/b/c/file.csv | Select-Object -SkipIndex 458944 > output.csv
The beneficial flip side of using the pipeline is that it acts as a memory throttle, so the above command can be used to process arbitrarily large files (though it may take a long time).

Related

Rascal MPL get line count of file without loading its contents

Is there a more efficient way than
int fileSize = size(readFileLines(fileLoc));
to get the total number of lines in a file? I presume this code has to read the entire file first, which could become costly for huge files.
I have looked into IO and Loc to see whether some of this info might be stored in conjunction with the file.
This is the way, unless you'd like to call wc -l via util::ShellExec 😁
Apart from streaming the file and saving some memory, counting lines is always linear in the size of the file, so you won't win much time.

Making writing to a file process more efficient

I am new to programming and I am running this script to clean a large text file (over 12,000 lines) and write it to another .txt file. The problem is that when I run this with a smaller file (roughly around 500 lines) it executes fast, so my conclusion was that it is taking time due to the size of the file. If someone can guide me on making this code efficient, it will be highly appreciated.
input_file = open('bNEG.txt', 'rt', encoding='utf-8')
l_p = LanguageProcessing()
sentences=[]
for lines in input_file.readlines():
    tokeniz = l_p.tokeniz(lines)
    cleaned_url = l_p.clean_URL(tokeniz)
    remove_words = l_p.remove_non_englishwords(cleaned_url)
    stopwords_removed = l_p.remove_stopwords(remove_words)
    cleaned_sentence = ' '.join(str(s) for s in stopwords_removed) + "\n"
    output_file = open('cNEG.txt', 'w', encoding='utf-8')
    sentences.append(cleaned_sentence)
    output_file.writelines(sentences)
input_file.close()
output_file.close()
EDIT: Below is the corrected code as mentioned in the answer, with a few other alterations to suit my requirements:
input_file = open('chromehistory_log.txt', 'rt', encoding='utf-8')
output_file = open('dNEG.txt', 'w', encoding='utf-8')
l_p = LanguageProcessing()
#sentences=[]
for lines in input_file.readlines():
    #print(lines)
    tokeniz = l_p.tokeniz(lines)
    cleaned_url = l_p.clean_URL(tokeniz)
    remove_words = l_p.remove_non_englishwords(cleaned_url)
    stopwords_removed = l_p.remove_stopwords(remove_words)
    #print(stopwords_removed)
    if stopwords_removed == []:
        continue
    else:
        cleaned_sentence = ' '.join(str(s) for s in stopwords_removed) + "\n"
        #sentences.append(cleaned_sentence)
        output_file.writelines(cleaned_sentence)
input_file.close()
output_file.close()
To summarize the discussion as an answer:
There are two problems here:
You open/create the output file and write the data inside the loop, for every line of the input file. Additionally, you are collecting all data in a list (sentences).
You have two possibilities:
a) Create the file before the loop, and in the loop write just "cleaned_sentence" (and drop the collecting "sentences").
b) Collect everything in "sentences" and write "sentences" at once after the loop.
The disadvantage of a) is that it is a bit slower than b) (as long as the OS does not have to swap memory for b). But the advantage is that it consumes much less memory and will work no matter how big the file is or how little memory the computer has.
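For illustration, here is a minimal sketch of option a), assuming the LanguageProcessing class and its methods from the question; both files are opened once, each cleaned sentence is written immediately, and nothing is collected in memory:
# LanguageProcessing is the helper class from the question; import or define it as in your project
l_p = LanguageProcessing()

with open('bNEG.txt', 'rt', encoding='utf-8') as input_file, \
        open('cNEG.txt', 'w', encoding='utf-8') as output_file:
    for line in input_file:  # iterates lazily; no readlines() needed
        tokeniz = l_p.tokeniz(line)
        cleaned_url = l_p.clean_URL(tokeniz)
        remove_words = l_p.remove_non_englishwords(cleaned_url)
        stopwords_removed = l_p.remove_stopwords(remove_words)
        if not stopwords_removed:  # nothing left after cleaning; skip the line
            continue
        output_file.write(' '.join(str(s) for s in stopwords_removed) + "\n")
This mirrors the EDIT in the question; the with blocks additionally guarantee that both files are closed even if one of the processing calls raises an exception.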

phpExcel Read in chunks so slow and memory errors

I'm trying to read a big Excel file of about 20 MB to import into MySQL.
I've searched across the internet and found the "chunked reading" solution; however, it is not working... or it is SO slow for me, and I'm not sure why.
This is what I'm doing:
// .....
// into MyReadFilter class.. this is the most important function:
public function readCell($column, $row, $worksheetName = '') {
    // Only read the rows and columns that were configured
    if (($row == 1) || ($row >= $this->_startRow && $row < $this->_endRow)) {
        if (in_array($column, $this->_columns)) {
            return true;
        }
    }
    return false;
}
// .....
$filter = new MyReadFilter(1, 22000);
$chunkSize = 10;
$objReader = PHPExcel_IOFactory::createReader($inputFileType);
$objReader->setReadFilter($filter);
$objReader->setReadDataOnly(false); //not sure if this should be true
for ($startRow = 2; $startRow <= 65536; $startRow += $chunkSize) {
    echo "Reading";
    $filterSubset->setRows($startRow, $chunkSize);
    $objPHPExcel = $objReader->load($inputFileName); // this line takes like 40 seconds... for 10 rows?
    echo "chunk done! ";
}
However, inside the for loop, $objReader->load() is taking like 40 seconds, and in fact, after 2 iterations I get a memory error.
If I unset $objReader inside the for loop I can make it run about 20 iterations... (although it takes like 10 minutes) and then... memory error.
I'm wondering why the load function seems to read the whole file if I'm using a filter; also, the filter strategy seems to parse all rows and return false for all rows that are not required... Is it not possible to abort reading, or to really read just the required rows?
I've tried a couple of filter classes and code snippets but got the same results...
If you're using a filter, the Reader still reads the whole file but only populates the PHPExcel object with the cells defined by the filter; and the Reader still needs to read the whole file on each pass of the filtering process, which is what makes it slow.
The Reader needs to read the whole file because of the structure of the raw spreadsheet files. Cell data is not stored with cell formatting, and cell content may also be stored separately. The Reader needs to pull all this together. You can't simply abort the Reader when the filter condition is met, because the Reader has no way of knowing that the filter has been fully satisfied... if you have a filter that limits the load to cells A1:C3, then you can't abort after B3 has been read because you don't know if cell B2 comes after that in the file, or whether there are comments associated with cell A1 further on in the file. Until the whole file has been loaded and parsed, you can't start to filter.
The main memory usage in PHPExcel is the PHPExcel object, and specifically the cells (typically about 1k/cell on 32-bit PHP). The main solution provided to reduce memory here is cell caching, which can (using SQLite caching) reduce cell memory usage to 0k/cell, though at a cost in speed.
The Reader doesn't use much more memory than the size of the (decompressed) Excel file itself, so it is normally far less of a memory problem; but this is being addressed (for XML-based spreadsheet formats) by switching from SimpleXML to XMLReader. It is dependent on the format of the file being loaded; xls format files are very different from xlsx files (xlsx will benefit from this, xls won't), and also on the developers being able to find the time to do it - but it is on the roadmap for the coming year, and work has already started.

Looking for an algorithm to do long pairwise nucleotide alignments

I am trying to scan for possible SNPs and indels by aligning scaffolds to subsequences from a reference genome (the raw reads are not available). I am using R/Bioconductor and the pairwiseAlignment function from the Biostrings package.
This was working fine for smaller scaffolds, but failed when I tried to align a 56 kbp scaffold, with the error message:
Error in QualityScaledXStringSet.pairwiseAlignment(pattern = pattern,
: cannot allocate memory block of size 17179869183.7 Gb
I am not sure if this is a bug or not; I was under the impression that the Needleman-Wunsch algorithm used by pairwiseAlignment is O(n*m), which I thought would imply a computational demand on the order of 3.1E9 operations (56K * 56K ~= 3.1E9). It also seems the Needleman-Wunsch similarity matrix should take up on the order of 3.1 GB of memory. I'm not sure if I'm misremembering big-O notation or if that is actually the memory overhead that would be needed to build the alignment, given the overhead of the R scripting environment.
Does anybody have suggestions for a better alignment algorithm to use for aligning longer sequences? An initial alignment was already done using BLAST to find the region of the reference genome to align. I am not entirely confident in BLAST's reliability for correctly placing indels, and I have not yet been able to find an API as good as the one provided by Biostrings for parsing the raw BLAST alignments.
By the way, here is a code snippet that replicates the problem:
library("Biostrings")
scaffold_set = read.DNAStringSet(scaffold_file_name) #scaffold_set is a DNAStringSet instance
scafseq = scaffold_set[[scaffold_name]] #scaf_seq is a "DNAString" instance
genome = read.DNAStringSet(genome_file_name)[[1]] #genome is a "DNAString" instance
#qstart, qend, substart, subend are all from intial BLAST alignment step
scaf_sub = subseq(scafseq, start=qstart, end=qend) #56170-letter "DNAString" instance
genomic_sub = subseq(genome, start=substart, end=subend) #56168-letter "DNAString" instance
curalign = pairwiseAlignment(pattern = scaf_sub, subject = genomic_sub)
#that last line gives the error:
#Error in .Call2("XStringSet_align_pairwiseAlignment", pattern, subject, :
#cannot allocate memory block of size 17179869182.9 Gb
The error does not happen with shorter alignments (hundreds of bases).
I have not yet found the length cutoff where the error starts happening
I use Clustal as an alignment tool. I'm not sure about the specific performance, but it has never given me issues when doing large multiple sequence alignments. Here is a script that runs over a whole directory of .fasta files and aligns them. You can modify the flags on the system call to suit your input/output needs; just look at the Clustal documentation. This is in Perl; I don't use R much for alignments. You need to edit the executable path in the script to match where Clustal is on your computer.
#!/usr/bin/perl
use warnings;
print "Please type the list file name of protein fasta files to align (end the directory path with a / or this will fail!): ";
$directory = <STDIN>;
chomp $directory;
opendir (DIR, $directory) or die $!;
my @file = readdir DIR;
closedir DIR;
my $add = "_align.fasta";
foreach $file (@file) {
    my $infile = "$directory$file";
    (my $fileprefix = $infile) =~ s/\.[^.]+$//;
    my $outfile = "$fileprefix$add";
    system "/Users/Wes/Desktop/eggNOG_files/clustalw-2.1-macosx/clustalw2 -INFILE=$infile -OUTFILE=$outfile -OUTPUT=FASTA -tree";
}

Compress EACH LINE of a file individually and independently of one another? (or preserve newlines)

I have a very large file (~10 GB) that can be compressed to < 1 GB using gzip. I'm interested in using sort FILE | uniq -c | sort to see how often a single line is repeated; however, the 10 GB file is too large to sort and my computer runs out of memory.
Is there a way to compress the file while preserving newlines (or an entirely different method altogether) that would reduce the file to a small enough size to sort, yet still leave the file in a condition that's sortable?
Or is there any other method of finding out / counting how many times each line is repeated inside a large file (a ~10 GB CSV-like file)?
Thanks for any help!
Are you sure you're running out of memory (RAM) with your sort?
My experience debugging sort problems leads me to believe that you have probably run out of disk space for sort to create its temporary files. Also recall that the disk space used for sorting is usually in /tmp or /var/tmp.
So check out your available disk space with:
df -g
(some systems don't support -g; try -m (megabytes) or -k (kilobytes))
If you have an undersized /tmp partition, do you have another partition with 10-20GB free? If yes, then tell your sort to use that dir with
sort -T /alt/dir
Note that for sort (GNU coreutils) 5.97, the help says:
-T, --temporary-directory=DIR use DIR for temporaries, not $TMPDIR or /tmp;
multiple options specify multiple directories
I'm not sure if this means you can combine a bunch of -T=/dr1 -T=/dr2 ... to get to your 10GB*sortFactor space or not. My experience was that it only used the last dir in the list, so try to use one dir that is big enough.
Also, note that you can go to whatever dir you are using for sort, and you'll see the activity of the temporary files used for sorting.
I hope this helps.
As you appear to be a new user here on S.O., allow me to welcome you and remind you of four things we do:
1) Read the FAQs.
2) Please accept the answer that best solves your problem, if any, by pressing the checkmark sign. This gives the respondent with the best answer 15 points of reputation. It is not subtracted (as some people seem to think) from your reputation points ;-)
3) When you see good Q&A, vote them up by using the gray triangles, as the credibility of the system is based on the reputation that users gain by sharing their knowledge.
4) As you receive help, try to give it too, answering questions in your area of expertise.
There are some possible solutions:
1 - Use any text processing language (perl, awk) to extract each line and save the line number and a hash for that line, and then compare the hashes (a Python sketch of this idea follows the list).
2 - Do you want to remove the duplicate lines, leaving just one occurrence per file? You could use a script (command) like:
awk '!x[$0]++' oldfile > newfile
3 - Why not split the file according to some criterion? Supposing all your lines begin with letters:
- break your original_file into smaller files, one per initial letter: grep "^a" original_file > a_file
- sort each small file: a_file, b_file, and so on
- verify the duplicates, count them, do whatever you want.
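Option 1 names perl and awk; purely as an illustration, here is a comparable sketch in Python that streams the file once, stores an MD5 hash and the first line number for every line, and counts how often each hash occurs, so memory grows with the number of distinct lines rather than with the 10 GB of text:
import hashlib
from collections import Counter

counts = Counter()   # occurrences per line hash
first_seen = {}      # line number where each hash first appeared

with open('original_file', 'rb') as f:   # file name as used in the answer
    for lineno, line in enumerate(f, start=1):
        digest = hashlib.md5(line).digest()   # 16-byte key instead of the full line
        counts[digest] += 1
        first_seen.setdefault(digest, lineno)

# Report the 20 most repeated lines by count and first line number;
# the actual text can then be pulled from the file, e.g. with sed -n '123p' original_file
for digest, n in counts.most_common(20):
    print(n, 'first seen on line', first_seen[digest])
Hash collisions are practically irrelevant for this kind of counting, and the reported line numbers let you cross-check the actual content against the original file.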
