I am using a 32-bit Perl on my OpenVMS system (so Perl can access up to 2 GB of virtual address space).
I am hitting "Out of memory!" in a large Perl script. I zeroed in on the variable causing this. However, after my tests with Devel::Size it turns out the array is using only about 13 MB of memory, and the hash is using much less than that.
My question is about memory-profiling this Perl script on VMS.
Is there a good way of doing a memory profile on VMS?
I used Devel::Size to get the size of the array and the hash. (The array is lexically scoped; the hash is global.)
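For reference, totals like the ones below can be produced with Devel::Size's total_size; here is a minimal sketch of that usage (the variable names are placeholders, not the actual script):

#!/usr/bin/perl
use strict;
use warnings;
use Devel::Size qw(total_size);

# Placeholder structures standing in for the real script's array and hash.
my @file_list;        # lexically scoped array, e.g. per-directory file data
our %global_index;    # global hash

# ... populate @file_list and %global_index while scanning a directory ...

print total_size(\@file_list),    " is total on array\n";
print total_size(\%global_index), " is total on hash\n";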
DV Z01 A4:[INTRO_DIR]$ perl scanner_SCANDIR.PL
Directory is Z3:[new_dir]
13399796 is total on array
3475702 is total on hash
Directory is Z3:[new_dir.subdir]
2506647 is total on array
4055817 is total on hash
Directory is Z3:[new_dir.subdir.OBJECT]
5704387 is total on array
6040449 is total on hash
Directory is Z3:[new_dir.subdir.XFET]
1585226 is total on array
6390119 is total on hash
Directory is Z3:[new_dir.subdir.1]
3527966 is total on array
7426150 is total on hash
Directory is Z3:[new_dir.subdir.2]
1698678 is total on array
7777489 is total on hash
(edited: mis-spelled PGFLQUOTA)
Where is that output coming from? To OpenVMS folks it suggests files in directories, which the code might be sucking in. There is typically considerable malloc/alignment overhead per element saved.
Anyway, the available ADDRESSABLE memory when strictly using 32-bit pointers on OpenVMS is 1 GB, not 2 GB: 0x0 .. 0x3fffffff ('P0' space) for the program and (malloc) data. There is also room in P1 (0x40000000 .. 0x7fffffff) for thread-local stack storage, but Perl does not use (much of) that.
From a second session you can look at that with DCL:
$ pid = "xxxxxxxx"
$ write sys$output f$getjpi(pid,"FREP0VA"), " ", f$getjpi(pid,"FREP1VA")
$ write sys$output f$getjpi(pid,"PPGCNT"), " ", f$getjpi(pid,"GPGCNT")
$ write sys$output f$getjpi(pid,"PGFLQUOTA")
However... those are just address ranges, NOT how much memory the process is allowed to use. That's governed by the process page-file quota. Check with $ SHOW PROC/QUOTA before running Perl. And usage can be reported from the outside, as per above, by adding the private pages (PPGCNT) and global pages (GPGCNT).
Another nice way to look at memory (and other quotas) is SHOW PROC/CONT ... and then hit "q".
So how many elements are stored in each large active array? How large is an average element, rounded up to 16 bytes? How many hash elements? How large are the key + value on average (rounded up generously)? A back-of-envelope sketch follows after these questions.
What is the exact message?
Does the program 'blow up' right away, or after a while (so that you can use SHOW PROC/CONT)?
Is there a source file data set (size) that does work?
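To turn answers to those size questions into a rough number, multiply the element count by the rounded-up element size plus a per-element overhead allowance. A back-of-envelope Perl sketch; every figure in it is a made-up placeholder, not data from the question:

#!/usr/bin/perl
use strict;
use warnings;

# Back-of-envelope only: counts and overheads below are guesses, not measurements.
my $n_array_elems = 200_000;   # elements in the large array (made up)
my $avg_elem      = 48;        # average element size, rounded up to 16-byte units (made up)
my $n_hash_elems  = 150_000;   # hash entries (made up)
my $avg_key_val   = 80;        # average key + value size, rounded up generously (made up)

my $sv_overhead   = 32;        # rough per-scalar bookkeeping allowance
my $he_overhead   = 64;        # rough per-hash-entry bookkeeping allowance

my $array_bytes = $n_array_elems * ($avg_elem + $sv_overhead);
my $hash_bytes  = $n_hash_elems  * ($avg_key_val + $he_overhead);
printf "array ~ %.1f MB, hash ~ %.1f MB\n", $array_bytes / 2**20, $hash_bytes / 2**20;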
Cheers,
Hein.
Is there a more efficient way than
int fileSize = size(readFileLines(fileLoc));
to get the total number of lines in a file? I presume this code has to read the entire file first, which could become costly for huge files.
I have looked into IO and Loc to see whether some of this info might be stored in conjunction with the file.
This is the way, unless you'd like to call wc -l via util::ShellExec 😁
Apart from streaming the file and saving some memory, counting lines is always linear in the size of the file, so you won't win much time.
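For what it's worth, the streaming approach itself is simple in any language; here is a minimal Perl sketch (not Rascal) that reads the file once and keeps only one line in memory at a time:

#!/usr/bin/perl
use strict;
use warnings;

# Count lines by streaming: one pass over the file, one line in memory at a time.
my $file = shift @ARGV or die "usage: $0 FILE\n";
open my $fh, '<', $file or die "cannot open $file: $!";
my $count = 0;
$count++ while <$fh>;
close $fh;
print "$count lines\n";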
I'm using fread in data.table (1.8.8, R 3.0.1) in an attempt to read very large files.
The file in question has 313 rows and ~6.6 million columns of numeric data, and is around 12 GB. This is a CentOS 6.4 machine with 512 GB of RAM.
When I attempt to read in the file:
g=fread('final.results',header=T,sep=' ')
'header' changed by user from 'auto' to TRUE
Error: protect(): protection stack overflow
I tried starting R with --max-ppsize 500000, which is the maximum, but I got the same error.
I also tried setting the stack size to unlimited via
ulimit -s unlimited
Virtual memory was already set to unlimited.
Am I being unrealistic with a file of this size? Did I miss something fairly obvious?
Now fixed in v1.8.9 on R-Forge.
An unintended 50,000 column limit has been removed in fread. Thanks to mpmorley for reporting. Test added.
The reason was that I got this part wrong in the fread.c source:
// *********************************************************************
// Allocate columns for known nrow
// *********************************************************************
ans = PROTECT(allocVector(VECSXP, ncol));
protecti++;
setAttrib(ans, R_NamesSymbol, names);
for (i = 0; i < ncol; i++) {
    thistype = TypeSxp[ type[i] ];
    thiscol  = PROTECT(allocVector(thistype, nrow));   // ** HERE **
    protecti++;
    if (type[i] == SXP_INT64)
        setAttrib(thiscol, R_ClassSymbol, ScalarString(mkChar("integer64")));
    SET_TRUELENGTH(thiscol, nrow);
    SET_VECTOR_ELT(ans, i, thiscol);
}
According to R-exts section 5.9.1, that PROTECT inside the loop isn't needed:
In some cases it is necessary to keep better track of whether protection is really needed. Be particularly aware of situations where a large number of objects are generated. The pointer protection stack has a fixed size (default 10,000) and can become full. It is not a good idea then to just PROTECT everything in sight and UNPROTECT several thousand objects at the end. It will almost invariably be possible to either assign the objects as part of another object (which automatically protects them) or unprotect them immediately after use.
So that PROTECT is now removed and all is well. (It seems that the pointer protection stack limit has been increased to 50,000 since that text was written; Defn.h contains #define R_PPSSIZE 50000L.) I've checked all other PROTECTs in the data.table C source for anything similar, and found and fixed one in assign.c too (when adding more than 50,000 columns by reference); no others.
Thanks for reporting!
I am trying to scan for possible SNPs and indels by aligning scaffolds to subsequences from a reference genome (the raw reads are not available). I am using R/Bioconductor and the pairwiseAlignment function from the Biostrings package.
This was working fine for smaller scaffolds, but it failed when I tried to align a 56 kbp scaffold, with the error message:
Error in QualityScaledXStringSet.pairwiseAlignment(pattern = pattern,
: cannot allocate memory block of size 17179869183.7 Gb
I am not sure if this is a bug or not; I was under the impression that the Needleman-Wunsch algorithm used by pairwiseAlignment is O(n*m), which I thought would imply a computational demand on the order of 3.1E9 operations (56k * 56k ~= 3.1E9). It seems the Needleman-Wunsch similarity matrix should likewise take up on the order of 3.1 billion cells' worth of memory. I'm not sure whether I'm misremembering big-O notation or whether that is actually the memory overhead that would be needed to build the alignment, given the overhead of the R scripting environment.
Does anybody have suggestions for a better alignment algorithm to use for aligning longer sequences? An initial alignment was already done using BLAST to find the region of the reference genome to align. I am not entirely confident of BLAST's reliability for correctly placing indels, and I have not yet been able to find an API as good as the one provided by Biostrings for parsing the raw BLAST alignments.
By the way, here is a code snippet that replicates the problem:
library("Biostrings")
scaffold_set = read.DNAStringSet(scaffold_file_name) #scaffold_set is a DNAStringSet instance
scafseq = scaffold_set[[scaffold_name]] #scaf_seq is a "DNAString" instance
genome = read.DNAStringSet(genome_file_name)[[1]] #genome is a "DNAString" instance
#qstart, qend, substart, subend are all from intial BLAST alignment step
scaf_sub = subseq(scafseq, start=qstart, end=qend) #56170-letter "DNAString" instance
genomic_sub = subseq(genome, start=substart, end=subend) #56168-letter "DNAString" instance
curalign = pairwiseAlignment(pattern = scaf_sub, subject = genomic_sub)
#that last line gives the error:
#Error in .Call2("XStringSet_align_pairwiseAlignment", pattern, subject, :
#cannot allocate memory block of size 17179869182.9 Gb
The error does not happen with shorter alignments (hundreds of bases).
I have not yet found the length cutoff where the error starts happening.
So I use Clustal as an alignment tool. I'm not sure about the specific performance, but it has never given me issues when doing large multiple sequence alignments. Here is a script that runs over a whole directory of .fasta files and aligns them. You can modify the flags on the system call to suit your input/output needs; just look at the Clustal documentation. This is in Perl; I don't use R much for alignments. You need to edit the executable path in the script to match where Clustal is on your computer.
#!/usr/bin/perl
use strict;
use warnings;

print "Please type the directory of protein fasta files to align (end the directory path with a / or this will fail!): ";
my $directory = <STDIN>;
chomp $directory;

opendir (DIR, $directory) or die $!;
my @file = readdir DIR;
closedir DIR;

my $add = "_align.fasta";
foreach my $file (@file) {
    next if $file =~ /^\./;    # skip . and .. and hidden files
    my $infile = "$directory$file";
    (my $fileprefix = $infile) =~ s/\.[^.]+$//;
    my $outfile = "$fileprefix$add";
    system "/Users/Wes/Desktop/eggNOG_files/clustalw-2.1-macosx/clustalw2 -INFILE=$infile -OUTFILE=$outfile -OUTPUT=FASTA -tree";
}
I have a very large file (~10 GB) that can be compressed to < 1 GB using gzip. I'm interested in using sort FILE | uniq -c | sort to see how often a single line is repeated; however, the 10 GB file is too large to sort and my computer runs out of memory.
Is there a way to compress the file while preserving newlines (or an entirely different method all together) that would reduce the file to a small enough size to sort, yet still leave the file in a condition that's sortable?
Or is there any other method of finding out / counting how many times each line is repeated inside a large file (a ~10 GB CSV-like file)?
Thanks for any help!
Are you sure you're running out of memory (RAM?) with your sort?
My experience debugging sort problems leads me to believe that you have probably run out of disk space for sort to create its temporary files. Also recall that the disk space used for sorting is usually in /tmp or /var/tmp.
So check out your available disk space with:
df -g
(some systems don't support -g; try -m (megabytes) or -k (kilobytes))
If you have an undersized /tmp partition, do you have another partition with 10-20 GB free? If so, tell your sort to use that directory with
sort -T /alt/dir
Note that for sort version
sort (GNU coreutils) 5.97
The help says
-T, --temporary-directory=DIR use DIR for temporaries, not $TMPDIR or /tmp;
multiple options specify multiple directories
I'm not sure if this means you can combine a bunch of -T /dir1 -T /dir2 options to get to your 10 GB * sort-factor of space or not. My experience was that it only used the last directory in the list, so try to use one directory that is big enough.
Also, note that you can go to whatever directory you are using for sort and watch the activity of the temporary files used for sorting.
I hope this helps.
As you appear to be a new user here on S.O., allow me to welcome you and remind you of four things we do:
1) Read the FAQs.
2) Please accept the answer that best solves your problem, if any, by pressing the checkmark sign. This gives the respondent with the best answer 15 points of reputation. It is not subtracted (as some people seem to think) from your reputation points ;-)
3) When you see good Q&A, vote them up using the gray triangles, as the credibility of the system is based on the reputation that users gain by sharing their knowledge.
4) As you receive help, try to give it too, by answering questions in your area of expertise.
There are some possible solutions:
1 - Use any text-processing language (Perl, awk) to read each line, save the line number and a hash for that line, and then compare the hashes (a Perl sketch of a hash-based count follows after this list).
2 - Do you want to remove the duplicate lines, leaving just one occurrence of each? You could use a script (command) like:
awk '!x[$0]++' oldfile > newfile
3 - Why not split the file according to some criterion? Supposing all your lines begin with letters:
- break your original_file into smaller files, one per starting letter: grep "^a" original_file > a_file
- sort each small file: a_file, b_file, and so on
- verify the duplicates, count them, do whatever you want.
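For option 1, here is a minimal Perl sketch of the hash-based counting idea; it streams the file once, and its memory use grows with the number of distinct lines rather than with the file size:

#!/usr/bin/perl
use strict;
use warnings;
# usage: perl thisscript.pl FILE   (or pipe data in on stdin)

# Count how many times each distinct line occurs, without sorting.
my %count;
while (my $line = <>) {
    $count{$line}++;
}

# Print in descending order of frequency, like `sort | uniq -c | sort -nr`.
for my $line (sort { $count{$b} <=> $count{$a} } keys %count) {
    print "$count{$line} $line";
}

If almost every line is unique the hash itself gets large, in which case the split-by-first-letter idea in option 3, or sort -T pointed at a big scratch directory as suggested above, is the safer route.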
We can use time in a Unix environment to see how long something took:
shell> time some_random_command
real 0m0.709s
user 0m0.008s
sys 0m0.012s
Is there an equivalent for recording the memory usage of the process(es)?
In particular, I'm interested in peak allocation.
Check the man page for time (that is, the standalone /usr/bin/time, not the shell built-in). You can specify a format string where it is possible to output memory information. For example:
> time -f "mem: %M" some_random_command
mem: NNNN
will output the maximum resident set size of the process during its lifetime, in kilobytes.
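If you want to capture that figure from a script instead of reading it off the terminal, you can ask the standalone GNU time binary to write its report to a file and parse it. A minimal Perl sketch, assuming GNU time is installed as /usr/bin/time; the measured command below is just a placeholder:

#!/usr/bin/perl
use strict;
use warnings;

# Measure the peak resident set size (%M, in kB) of a child command
# via the standalone GNU time binary. The command below is a placeholder.
my $report = "/tmp/time_report.$$";
my @cmd    = ('sleep', '1');
system('/usr/bin/time', '-f', '%M', '-o', $report, @cmd) == 0
    or warn "measured command exited non-zero\n";

open my $fh, '<', $report or die "cannot read $report: $!";
chomp(my $peak_kb = <$fh>);
close $fh;
unlink $report;

print "peak resident set size: $peak_kb kB\n";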
Can you not use ps? For example, ps v <pid> will return memory information.