Input blob size is not equal network input size
Taking the square root of the required network input size was my plan, but I have not tried it yet.
cv2.error: OpenCV(3.4.2-openvino_2018_R2.0.0) /home/jenkins/workspace/OpenCV/OpenVINO/build/opencv/modules/dnn/src/op_inf_engine.cpp:416: error: (-215:Assertion failed) Failed to initialize Inference Engine backend: Input blob size is not equal network input size (50176!=150528). in function 'initPlugin'
It should not give the above error.
I am working on RocksDB, but I am unable to find an option that tells me the maximum size of a file inside a level. And once a file reaches that maximum size, how do files get split in RocksDB?
The options you are looking for are target_file_size_base and target_file_size_multiplier.
target_file_size_base - configures the size of SST files in level-1.
target_file_size_multiplier - configures the size of SST files in each further level.
For example, if target_file_size_base is set to 2MB and target_file_size_multiplier is 10, then
level-1 SST files will be 2MB,
level-2 SST files will be 20MB,
level-3 SST files will be 200MB, and so on.
You can also configure the total size of each level (and hence the number of such files per level) using
max_bytes_for_level_base and max_bytes_for_level_multiplier.
For example, if max_bytes_for_level_base = 200MB and target_file_size_base = 2MB, then level-1 will contain 100 files of 2MB each.
You can check for these options in options.h and advanced_options.h files.
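As a rough sketch (not part of the original answer), this is how those options might be set from C++ when opening a database; the path /tmp/testdb and the exact sizes are just placeholders:

#include <rocksdb/db.h>
#include <rocksdb/options.h>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Level-1 SST files target 2MB; each deeper level multiplies the
  // file size by 10 (level-2 -> 20MB, level-3 -> 200MB, ...).
  options.target_file_size_base = 2 * 1024 * 1024;
  options.target_file_size_multiplier = 10;
  // Level-1 holds up to 200MB in total (100 files of 2MB each);
  // max_bytes_for_level_multiplier scales the capacity of deeper levels.
  options.max_bytes_for_level_base = 200 * 1024 * 1024;
  options.max_bytes_for_level_multiplier = 10;
  rocksdb::DB* db = nullptr;
  rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/testdb", &db);
  delete db;
  return status.ok() ? 0 : 1;
}

With this layout, compactions cut output files at roughly the configured per-level file size and start pushing data down once a level exceeds its byte budget.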
"once a file reaches that maximum size, how do files get split in RocksDB?"
During compaction/flush, the files are created with the configured size. If there are more files than the configured number, compaction is triggered and the files are pushed to higher levels.
Here's a useless example that shows my point:
library(future)
a = 1:200000000   # integer vector; object.size(a) reports 800000048 bytes (~762.94 MiB)
object.size(a)
test %<-% head(a) # %<-% evaluates head(a) in a future, which must export its globals
I get the following error:
Error in getGlobalsAndPackages(expr, envir = envir, persistent =
persistent, : The total size of all global objects that need to be
exported for the future expression (‘head(a)’) is 762.95 MiB. This
exceeds the maximum allowed size of 500.00 MiB (option
'future.global.maxSize'). There are two globals: ‘a’ (762.94 MiB of
class ‘numeric’) and ‘head’ (10.05 KiB of class ‘function’).
Can anyone help me understand how to adjust that future.global.maxSize option? I tried options(future.global.maxSize = 1500000) but that didn't work.
Got it figured out and learned how you can edit options for any package.
This is the line that I used (edit: the change was from 'global' to 'globals'):
options(future.globals.maxSize = 891289600)
If you want to customize your limit, I saw in the package source that the limit is given in bytes. This is how you would calculate the value for an 850MB limit:
850 * 1024^2 = 891289600
Thanks!
When I try to create a database in Oracle 11g manually, I'm facing the error below.
SQL> create database test
2 Datafile '/opt/oradata/test/system01.dbf' size 10M
3 Sysaux datafile '/opt/oradata/test/sysaux01.dbf' size 10M
4 Logfile '/opt/oradata/test/redo01.log' size 10M,
5 '/opt/oradata/test/redo02.log' size 10M
6 Undo tablespace undotbs1
7 Datafile '/opt/oradata/test/undo01.dbf' size 10M
8 Default temporary tablespace temp
9 Tempfile '/opt/oradata/test/temp01.dbf' size 10M;
Error:
SQL> /
create database test
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-01501: CREATE DATABASE failed
ORA-01519: error while processing file '?/rdbms/admin/doptim.bsq' near line 15
ORA-00604: error occurred at recursive SQL level 1
ORA-01658: unable to create INITIAL extent for segment in tablespace SYSTEM
Process ID: 4562
Session ID: 1 Serial number: 3
Kindly help.
"ORA-01658: unable to create INITIAL extent for segment in tablespace SYSTEM"
If you look at this error it clearly says it's not able to create the initial extent since the size of the extent required is more than the file size. So, it leaves you with the two options
1) create DMT tablespace.
2) Increase the file size (100M rather than 10M).
Thank You,
Sid
If I use a barrier (no matter whether CLK_LOCAL_MEM_FENCE or CLK_GLOBAL_MEM_FENCE) in my kernel, it causes a CL_INVALID_WORK_GROUP_SIZE error. The global work size is 512, the local work size is 128, 65536 items have to be computed, the max work group size of my device is 1024, and I am using only one dimension. For the Java bindings I use JOCL.
The kernel is very simple:
kernel void sum(global float *input, global float *output, const int numElements, local float *localCopy)
{
    // copy one element per work-item into local memory
    localCopy[get_local_id(0)] = input[get_global_id(0)];
    barrier(CLK_LOCAL_MEM_FENCE); // or barrier(CLK_GLOBAL_MEM_FENCE)
}
I run the kernel on the Intel(R) Xeon(R) CPU X5570 @ 2.93GHz and can use OpenCL 1.2. The calling method looks like
kernel.putArg(aCLBuffer).putArg(bCLBuffer).putArg(elementCount).putNullArg(localWorkSize);
queue.put1DRangeKernel(kernel, 0, globalWorkSize, localWorkSize);
But the error is always the same:
[...]can not enqueue 1DRange CLKernel [...] with gwo: null gws: {512} lws: {128}
cond.: null events: null [error: CL_INVALID_WORK_GROUP_SIZE]
What am I doing wrong?
This is expected behaviour on some OpenCL platforms. For example, on my Apple system, the CPU device has a maximum work-group size of 1024. However, if a kernel has a barrier inside, then the maximum work-group size for that specific kernel is reduced to 1.
You can query the maximum work-group size for a specific kernel by using the clGetKernelWorkGroupInfo function with the CL_KERNEL_WORK_GROUP_SIZE parameter. The value returned will be no more than the value returned by clGetDeviceInfo with CL_DEVICE_MAX_WORK_GROUP_SIZE, but it is allowed to be less (as it is in this case).
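For reference, here is a minimal sketch of that query against the OpenCL C API (a similar query should be available through the Java bindings); kernel and device are assumed to have been created already:

#include <CL/cl.h>

// The per-kernel limit can be smaller than the device-wide
// CL_DEVICE_MAX_WORK_GROUP_SIZE, e.g. when the kernel contains barriers.
size_t maxWorkGroupSizeForKernel(cl_kernel kernel, cl_device_id device) {
    size_t size = 0;
    clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_WORK_GROUP_SIZE,
                             sizeof(size), &size, NULL);
    return size;
}

If this returns a value smaller than 128 for your kernel on the CPU device, the local work size of 128 is what triggers CL_INVALID_WORK_GROUP_SIZE.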
I am trying to scan for possible SNPs and indels by aligning scaffolds to subsequences from a reference genome (the raw reads are not available). I am using R/Bioconductor and the pairwiseAlignment function from the Biostrings package.
This was working fine for smaller scaffolds, but it failed when I tried to align a 56kbp scaffold, with the error message:
Error in QualityScaledXStringSet.pairwiseAlignment(pattern = pattern,
: cannot allocate memory block of size 17179869183.7 Gb
I am not sure if this is a bug or not; I was under the impression that the Needleman-Wunsch algorithm used by pairwiseAlignment is O(n*m), which I thought would imply a computational demand on the order of 3.1E9 operations (56K * 56K ~= 3.1E9). It seems the Needleman-Wunsch similarity matrix should likewise take up on the order of 3.1 GB of memory. I am not sure whether I am misremembering big-O notation or whether that is actually the memory overhead needed to build the alignment, given the overhead of the R scripting environment.
Does anybody have suggestions for a better alignment algorithm to use for aligning longer sequences? An initial alignment was already done using BLAST to find the region of the reference genome to align against. I am not entirely confident in BLAST's reliability for correctly placing indels, and I have not yet been able to find an API as good as the one provided by Biostrings for parsing raw BLAST alignments.
By the way, here is a code snippet that replicates the problem:
library("Biostrings")
scaffold_set = read.DNAStringSet(scaffold_file_name) #scaffold_set is a DNAStringSet instance
scafseq = scaffold_set[[scaffold_name]] #scafseq is a "DNAString" instance
genome = read.DNAStringSet(genome_file_name)[[1]] #genome is a "DNAString" instance
#qstart, qend, substart, subend are all from the initial BLAST alignment step
scaf_sub = subseq(scafseq, start=qstart, end=qend) #56170-letter "DNAString" instance
genomic_sub = subseq(genome, start=substart, end=subend) #56168-letter "DNAString" instance
curalign = pairwiseAlignment(pattern = scaf_sub, subject = genomic_sub)
#that last line gives the error:
#Error in .Call2("XStringSet_align_pairwiseAlignment", pattern, subject, :
#cannot allocate memory block of size 17179869182.9 Gb
The error does not happen with shorter alignments (hundreds of bases).
I have not yet found the length cutoff at which the error starts happening.
I use Clustal as an alignment tool. I am not sure about its specific performance, but it has never given me issues when doing multiple sequence alignments in large quantity. Here is a script that runs over a whole directory of .fasta files and aligns each one. You can modify the flags on the system call to suit your input/output needs; just look at the Clustal documentation. This is in Perl; I don't use R much for alignments. You will need to edit the executable path in the script to match where Clustal is installed on your computer.
#!/usr/bin/perl
use strict;
use warnings;

print "Please type the directory of protein fasta files to align (end the directory path with a / or this will fail!): ";
my $directory = <STDIN>;
chomp $directory;

opendir (DIR, $directory) or die $!;
my @files = readdir DIR;
closedir DIR;

my $add = "_align.fasta";
foreach my $file (@files) {
    next if $file =~ /^\./;    # skip . and .. and any hidden files
    my $infile = "$directory$file";
    (my $fileprefix = $infile) =~ s/\.[^.]+$//;
    my $outfile = "$fileprefix$add";
    system "/Users/Wes/Desktop/eggNOG_files/clustalw-2.1-macosx/clustalw2 -INFILE=$infile -OUTFILE=$outfile -OUTPUT=FASTA -tree";
}