Issue with Join results in more than 2^31 rows - r

I have two huge data frames:
> dim(res)
[1] 111478253 8
> dim(asign)
[1] 107371528 5
I want to merge them by "chr" and "pos"
> head(res)
chr pos a1 a2 a3 variant_id pval_nominal gene_id
1: chr1 54490 G A b38 chr1_54490_G_A_b38 0.608495 ENSG00000227232.5
2: chr1 58814 G A b38 chr1_58814_G_A_b38 0.295211 ENSG00000227232.5
3: chr1 60351 A G b38 chr1_60351_A_G_b38 0.439788 ENSG00000227232.5
4: chr1 61920 G A b38 chr1_61920_G_A_b38 0.319528 ENSG00000227232.5
5: chr1 63671 G A b38 chr1_63671_G_A_b38 0.237739 ENSG00000227232.5
6: chr1 64931 G A b38 chr1_64931_G_A_b38 0.276679 ENSG00000227232.5
> head(asign)
gene chr chr_pos pos p.val.Retina
1: ENSG00000227232 chr1 1:10177:A:AC 10177 0.381708
2: ENSG00000227232 chr1 rs145072688:10352:T:TA 10352 0.959523
3: ENSG00000227232 chr1 1:11008:C:G 11008 0.218132
4: ENSG00000227232 chr1 1:11012:C:G 11012 0.218132
5: ENSG00000227232 chr1 1:13110:G:A 13110 0.998262
6: ENSG00000227232 chr1 rs201725126:13116:T:G 13116 0.438572
> m=merge(res, asign, by = c("chr", "pos"))
Error in vecseq(f__, len__, if (allow.cartesian || notjoin || !anyDuplicated(f__, :
Join results in more than 2^31 rows (internal vecseq reached physical limit). Very likely misspecified join. Check for duplicate key values in i each of which join to the same group in x over and over again. If that's ok, try by=.EACHI to run j for each group to avoid the large allocation. Otherwise, please search for this error message in the FAQ, Wiki, Stack Overflow and data.table issue tracker for advice.
I tried using by = .EACHI but got the same error.
In the final merged file I only need to keep the columns "chr", "pos", "pval_nominal" and "p.val.Retina", and only the rows common to both "res" and "asign".
After removing the columns I don't need from both data frames, I get this:
> head(asignx)
chr pos p.val.Retina
1: chr1 10177 0.381708
2: chr1 10352 0.959523
3: chr1 11008 0.218132
4: chr1 11012 0.218132
5: chr1 13110 0.998262
6: chr1 13116 0.438572
> head(l4x)
chr pos pval_nominal
1: chr1 13550 0.375614
2: chr1 14671 0.474708
3: chr1 14677 0.699887
4: chr1 16841 0.127895
5: chr1 16856 0.627822
6: chr1 17005 0.802803
But again when I try to merge these:
> m=merge(l4x,asignx, by = c("chr", "pos"),all.x=FALSE,all.y=FALSE)
Error in vecseq(f__, len__, if (allow.cartesian || notjoin || !anyDuplicated(f__, :
Join results in more than 2^31 rows (internal vecseq reached physical limit)
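Before reaching for a database, it can help to check whether the join really is that large. A minimal data.table sketch (assuming res and asign are data.tables, as the printed output suggests) that counts how many rows share each (chr, pos) key in both tables and estimates the join size:

```r
library(data.table)
# Count rows per (chr, pos) key in each table
kr <- res[, .N, by = .(chr, pos)]
ka <- asign[, .N, by = .(chr, pos)]
# Estimated join size: for each shared key, (count in res) * (count in asign)
est <- merge(kr, ka, by = c("chr", "pos"))[, sum(as.numeric(N.x) * as.numeric(N.y))]
est  # if this exceeds 2^31, the join genuinely is that large
```

If the estimate is huge, one or both tables contain many duplicate (chr, pos) keys, and deduplicating (e.g. with unique()) before merging may be the real fix.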

Assuming that both of your data frames are loaded into a database (you will have to set up a database such as Postgres or SQL Server), the SQL equivalent of
m=merge(res, asign, by = c("chr", "pos"))
is:
select *
into m_table
from res
join asign
on res.chr = asign.chr
and res.pos = asign.pos
Then you will have a new table:
select *
from m_table;


Error in using GADEM function from rGADEM package

I have a big peak list in BED format, which I converted to a GRanges object to use as input for the rGADEM package to find de novo motifs. But whenever I run the GADEM function I get the error below.
Could anybody who knows this error please help me with it?
This is a small example of my real file, with only 20 rows.
1 chr6 29723590 29723790
2 chr14 103334312 103334512
3 chr1 150579030 150579230
4 chr7 76358527 76358727
5 chr6 11537891 11538091
6 chr14 49893256 49893456
7 chr5 179623200 179623400
8 chr1 228082831 228083031
9 chr12 93441644 93441844
10 chr10 3784776 3784976
11 chr3 183635833 183636033
12 chr7 975301 975501
13 chr12 123364510 123364710
14 chr1 1615578 1615778
15 chr1 36156320 36156520
16 chr14 55051781 55051981
17 chr8 11867697 11867897
18 chr22 38706135 38706335
19 chr6 44265256 44265456
20 chr1 185316658 185316858
and the code that I use is :
library(GenomicRanges)
library(rGADEM)
data = makeGRangesFromDataFrame(data, keep.extra.columns = TRUE)
data = reduce(data)
data = resize(data, width = 50, fix='center')
gadem<-GADEM(data,verbose=1,genome=Hsapiens)
plot(gadem)
and the error is:
[ Retrieving sequences... Error in .Call2("C_solve_user_SEW", refwidths, start, end, width, translate.negative.coord:
solving row 136: 'allow.nonnarrowing' is FALSE and the supplied start (55134751) is > refwidth + 1 ]
Worth mentioning: when I try an example input file with fewer than 136 rows, it works and I get motifs.
Thanks in advance.
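The error means that, by row 136, at least one range extends past the end of its chromosome (the supplied start is beyond refwidth). A diagnostic sketch (assumptions: data is the GRanges object above, and the genome is hg19 via BSgenome.Hsapiens.UCSC.hg19; substitute your actual build):

```r
library(GenomicRanges)
library(BSgenome.Hsapiens.UCSC.hg19)  # assumed genome build

# Attach chromosome lengths so out-of-bounds ranges become detectable
seqlengths(data) <- seqlengths(Hsapiens)[seqlevels(data)]

# Which rows run past their chromosome end (or before position 1)?
which(end(data) > seqlengths(data)[as.character(seqnames(data))] |
      start(data) < 1)

# trim() clips offending ranges back into [1, seqlength]
data <- trim(data)
```

If the flagged rows look wildly out of bounds (as 55134751 would on a short chromosome), also check that the seqnames in your BED file match the genome's chromosome naming.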

awk: change a field's value conditionally based on the value of another column

I have a table snp150Common.txt where the second and third fields, $2 and $3, may or may not be equal.
If they are equal, I want $2 to become $2 - 1, so that:
chr1 10177 10177 rs367896724 - - -/C insertion near-gene-5
chr1 10352 10352 rs555500075 - - -/A insertion near-gene-5
chr1 11007 11008 rs575272151 C C C/G single near-gene-5
chr1 11011 11012 rs544419019 C C C/G single near-gene-5
chr1 13109 13110 rs540538026 G G A/G single intron
chr1 13115 13116 rs62635286 T T G/T single intron
chr1 13117 13118 rs62028691 A A C/T single intron
chr1 13272 13273 rs531730856 G G C/G single ncRNA
chr1 14463 14464 rs546169444 A A A/T single near-gene-3,ncRNA
becomes:
chr1 10176 10177 rs367896724 - - -/C insertion near-gene-5
chr1 10351 10352 rs555500075 - - -/A insertion near-gene-5
chr1 11007 11008 rs575272151 C C C/G single near-gene-5
chr1 11011 11012 rs544419019 C C C/G single near-gene-5
chr1 13109 13110 rs540538026 G G A/G single intron
chr1 13115 13116 rs62635286 T T G/T single intron
chr1 13117 13118 rs62028691 A A C/T single intron
chr1 13272 13273 rs531730856 G G C/G single ncRNA
chr1 14463 14464 rs546169444 A A A/T single near-gene-3,ncRNA
My current command, adapted from https://askubuntu.com/a/312843:
zcat < snp150/snp150Common.txt.gz | head | awk '{ if ($2 == $3) $2=$2-1; print $0 }' | cut -f 2,3,4,5,8,9,10,12,16
gives the same output:
chr1 10177 10177 rs367896724 - - -/C insertion near-gene-5
chr1 10352 10352 rs555500075 - - -/A insertion near-gene-5
chr1 11007 11008 rs575272151 C C C/G single near-gene-5
chr1 11011 11012 rs544419019 C C C/G single near-gene-5
chr1 13109 13110 rs540538026 G G A/G single intron
chr1 13115 13116 rs62635286 T T G/T single intron
chr1 13117 13118 rs62028691 A A C/T single intron
chr1 13272 13273 rs531730856 G G C/G single ncRNA
chr1 14463 14464 rs546169444 A A A/T single near-gene-3,ncRNA
Any help is greatly appreciated.
This answer is based on pure speculation about the source file format:
$ zcat snp150/snp150Common.txt.gz |
  awk '
    BEGIN { OFS="\t" }      # field separators are most likely tabs
    {
      if ($3 == $4)         # judging by the cut field numbers, these are the fields to compare
        $3 = $3 - 1
      print $2,$3,$4,$5,$8,$9,$10,$12,$16   # ... and these are the fields to print
    }
    NR==10 { exit }'        # this replaces head
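To sanity-check the field numbering, here is a hypothetical reproduction on an inline two-row sample. It assumes a UCSC-style dump with a leading numeric "bin" field (here 585, an assumption), so the displayed coordinate columns are really $3 and $4:

```shell
printf '585\tchr1\t10177\t10177\trs367896724\n585\tchr1\t11007\t11008\trs575272151\n' |
awk 'BEGIN { FS=OFS="\t" } { if ($3 == $4) $3 = $3 - 1; print $2, $3, $4, $5 }'
# chr1  10176  10177  rs367896724   <- equal pair: start decremented
# chr1  11007  11008  rs575272151   <- unequal pair: unchanged
```

Setting FS="\t" explicitly also protects against any field containing spaces.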
And remember: Practising (anything but sucking) makes you suck less.

R: "label" a row based on conditions from another data.table

I have a data.table (A) that is over 100,000 rows long. There are 3 columns.
chrom start end
1: chr1 6484847 6484896
2: chr1 6484896 6484945
3: chr1 6484945 6484994
4: chr1 6484994 6485043
5: chr1 6485043 6485092
---
183569: chrX 106893605 106893654
183570: chrX 106893654 106893703
183571: chrX 106893703 106893752
183572: chrX 106893752 106893801
183573: chrX 106893801 106894256
I'd like to generate a new column named "gene" that labels each row based on annotations from another data.table (B), which has ~90 rows. Seen below:
chrom start end gene
1: chr1 6484847 6521004 ESPN
2: chr1 41249683 41306124 KCNQ4
3: chr1 55464616 55474465 BSND
42: chrX 82763268 82764775 POU3F4
43: chrX 100600643 100603957 TIMM8A
44: chrX 106871653 106894256 PRPS1
If the row start value in data.table A is within the row start and end values of data.table B, I need the row in A to be labeled with the corresponding gene.
For example, the resulting complete data.table A would be:
chrom start end gene
1: chr1 6484847 6484896 ESPN
2: chr1 6484896 6484945 ESPN
3: chr1 6484945 6484994 ESPN
4: chr1 6484994 6485043 ESPN
5: chr1 6485043 6485092 ESPN
---
183569: chrX 106893605 106893654 PRPS1
183570: chrX 106893654 106893703 PRPS1
183571: chrX 106893703 106893752 PRPS1
183572: chrX 106893752 106893801 PRPS1
183573: chrX 106893801 106894256 PRPS1
I've attempted some nested loops to do this but that seems like it would take WAY too long. I think there must be a way to do this with the data.table package but I can't seem to figure it out.
Any and all suggestions would be greatly appreciated.
While it's certainly possible to do this in base R (or potentially using data.table), I would highly recommend GenomicRanges; it's a very powerful and flexible R/Bioconductor library designed for these kinds of tasks.
Here is an example using GenomicRanges::findOverlaps:
library(GenomicRanges)
# Sample data
df1 <- read.table(text =
"chrom start end
chr1 6484847 6484896
chr1 6484896 6484945
chr1 6484945 6484994
chr1 6484994 6485043
chr1 6485043 6485092", header = TRUE, stringsAsFactors = FALSE)
df2 <- read.table(text =
"chrom start end gene
chr1 6484847 6521004 ESPN
chr1 41249683 41306124 KCNQ4
chr1 55464616 55474465 BSND
chrX 82763268 82764775 POU3F4
chrX 100600643 100603957 TIMM8A
chrX 106871653 106894256 PRPS1", header = TRUE, stringsAsFactors = FALSE)
# Convert to GRanges objects
gr1 <- with(df1, GRanges(chrom, IRanges(start = start, end = end)))
gr2 <- with(df2, GRanges(chrom, IRanges(start = start, end = end), gene = gene))
# Find features from gr1 that overlap with gr2
m <- findOverlaps(gr1, gr2)
# Add gene annotation as metadata to gr1
mcols(gr1)$gene[queryHits(m)] <- mcols(gr2)$gene[subjectHits(m)]
gr1
#GRanges object with 5 ranges and 1 metadata column:
# seqnames ranges strand | gene
# <Rle> <IRanges> <Rle> | <character>
# [1] chr1 [6484847, 6484896] * | ESPN
# [2] chr1 [6484896, 6484945] * | ESPN
# [3] chr1 [6484945, 6484994] * | ESPN
# [4] chr1 [6484994, 6485043] * | ESPN
# [5] chr1 [6485043, 6485092] * | ESPN
# -------
# seqinfo: 1 sequence from an unspecified genome; no seqlengths
Besides the GRanges/IRanges solution by Maurits Evers, there is an alternative data.table approach using non-equi join and update on join.
A[B, on = .(chrom, start >= start, start <= end), gene := i.gene][]
chrom start end gene
1: chr1 6484847 6484896 ESPN
2: chr1 6484896 6484945 ESPN
3: chr1 6484945 6484994 ESPN
4: chr1 6484994 6485043 ESPN
5: chr1 6485043 6485092 ESPN
6: chrX 106893605 106893654 PRPS1
7: chrX 106893654 106893703 PRPS1
8: chrX 106893703 106893752 PRPS1
9: chrX 106893752 106893801 PRPS1
10: chrX 106893801 106894256 PRPS1
According to the OP, A and B are already data.table objects. So, this approach avoids the coercion to GRanges objects.
Reproducible Data
library(data.table)
A <- fread("rn chrom start end
1: chr1 6484847 6484896
2: chr1 6484896 6484945
3: chr1 6484945 6484994
4: chr1 6484994 6485043
5: chr1 6485043 6485092
183569: chrX 106893605 106893654
183570: chrX 106893654 106893703
183571: chrX 106893703 106893752
183572: chrX 106893752 106893801
183573: chrX 106893801 106894256", drop = 1L)
B <- fread("rn chrom start end gene
1: chr1 6484847 6521004 ESPN
2: chr1 41249683 41306124 KCNQ4
3: chr1 55464616 55474465 BSND
42: chrX 82763268 82764775 POU3F4
43: chrX 100600643 100603957 TIMM8A
44: chrX 106871653 106894256 PRPS1", drop = 1L)
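data.table also ships foverlaps(), which is purpose-built for interval overlap joins; a sketch under the same A and B as above (not verified against the full 183k-row table):

```r
library(data.table)
# foverlaps requires the lookup table to be keyed on (group, start, end)
setkey(B, chrom, start, end)
# type = "within" keeps rows of A whose interval lies inside a B interval
ov <- foverlaps(A, B, type = "within", nomatch = 0L)
# Keep A's own coordinates (prefixed i. in the result) plus the matched gene
ov[, .(chrom, start = i.start, end = i.end, gene)]
```

Unlike the non-equi join above, this matches on both endpoints of A's intervals, which matters if a window could straddle a gene boundary.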

How to replace values in dataframe in R with translation table with minimal computational time?

I have the following biological data file.
#acgh_file
chromosome startPosition
chr1 37196
chr1 52308
chr1 357503
chr1 443361
chr1 530358
and I need to convert the positions by means of a translation table.
#convert
chr1 37196 chr1 47333
chr1 52308 chr1 62445
chr1 357503 chr1 367640
chr1 443361 chr1 453498
chr1 530358 chr1 540495
What needs to happen is that I replace the startPosition in acgh_file with the value in the fourth column of the convert table.
I made a script, but since the files are quite large it takes ages to finish (probably because a row-by-row for-loop like this is slow in R).
for (n in 1:nrow(convert)){
acgh_file[acgh_file$chromosome==convert[n,1] & acgh_file$startPosition==convert[n,2],3] <- convert[n,4]
}
I'm looking for a quicker solution. Does anybody have ideas? I thought about using the apply functions, but I don't know how to combine that with the convert look-up table I have here.
No need for a for-loop here (by the way, for-loops in R are slow mainly when used badly). What you want is a merge between the two data sets. Since you have a big data.frame, I suggest using the data.table package to do the merge.
library(data.table)
setkey(acgh_file,chromosome,startPosition)
setkey(convert_file,V1,V2)
acgh_file[convert_file]
# chromosome startPosition V4
# 1: chr1 37196 47333
# 2: chr1 52308 62445
# 3: chr1 357503 367640
# 4: chr1 443361 453498
# 5: chr1 530358 540495
where the data sets are data.tables:
acgh_file <- fread("
chromosome startPosition
chr1 37196
chr1 52308
chr1 357503
chr1 443361
chr1 530358")
convert_file <- fread("
chr1 37196 chr1 47333
chr1 52308 chr1 62445
chr1 357503 chr1 367640
chr1 443361 chr1 453498
chr1 530358 chr1 540495")[,V3:=NULL]

apply function to return a data.table, or convert the list directly to a data.table

I would like to apply a function that returns a matrix to each row of a large data.table object (the original file is around 30 GB; I have 80 GB of RAM) and get back a data.table object, and I'd like to do it efficiently. My current approach is the following:
my.function <- function(x){
alnRanges<-cigarToIRanges(x[6]);
alnStarts<-start(alnRanges)+as.numeric(x[4])-1;
alnEnds<-end(alnRanges)+as.numeric(x[4])-1;
y<-x[-4];
ys<-matrix(rep(y,length(alnRanges)),nrow=length(alnRanges),ncol=length(y),byrow=TRUE);
ys<-cbind(ys,alnStarts,alnEnds);
return(ys); # ys is a matrix
}
my.dt<-fread(my.file.name);
my.list.of.matrices<-apply(my.dt,1,my.function);
new.df<-do.call(rbind.data.frame,my.list.of.matrices);
colnames(new.df)[1:14]<-colnames(my.dt)[-4];
new.dt<-as.data.table(new.df);
Note1: I show my.function just to demonstrate that it returns a matrix, so my apply line therefore produces a list of matrices.
Note2: I am not sure how slow the operations I am doing are, but it seems I could reduce the number of lines. For example, is it slow to convert a data frame to a data table for large objects?
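On Note2: as.data.table() on a large data.frame makes a full copy, whereas data.table::setDT() converts the object by reference, with no copy. A small illustration (the data here is made up):

```r
library(data.table)
df <- data.frame(x = 1:3)
setDT(df)   # df is now a data.table; no copy was made
class(df)   # "data.table" "data.frame"
```

For a ~30 GB object, preferring setDT() over as.data.table() avoids doubling memory use during the conversion.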
Reproducible example:
Note that Arun and Roland made me think harder about the problem, so I am still working on it... it may be that I do not need these lines...
I want to take a SAM file and create a new coordinates file where each read is split according to its CIGAR field.
My sam file:
qname rname pos cigar
2218 chr1 24613476 42M2S
2067 chr1 87221030 44M
2129 chr1 79702717 44M
2165 chr1 43113438 44M
2086 chr1 52155089 4M921N40M
code:
library("data.table");
library("GenomicRanges");
sam2bed <- function(x){
alnRanges<-cigarToIRanges(x[4]);
alnStarts<-start(alnRanges)+as.numeric(x[3])-1;
alnEnds<-end(alnRanges)+as.numeric(x[3])-1;
#y<-as.data.frame(x[,pos:=NULL]);
#ys<-y[rep(seq_len(nrow(y)),length(alnRanges)),];
y<-x[-3];
ys<-matrix(rep(y,length(alnRanges)),nrow=length(alnRanges),ncol=length(y),byrow=TRUE);
ys<-cbind(ys,alnStarts,alnEnds);
return(ys);
}
sam.chr.dt<-fread(sam.parent.chr.file);
setnames(sam.chr.dt,old=c("V1","V2","V3","V4"),new=c("qname","rname","pos","cigar"));
bed.chr.lom<-apply(sam.chr.dt,1,sam2bed);
> bed.chr.lom
[[1]]
alnStarts alnEnds
[1,] "2218" "chr1" "42M2S" "24613476" "24613517"
[[2]]
alnStarts alnEnds
[1,] "2067" "chr1" "44M" "87221030" "87221073"
[[3]]
alnStarts alnEnds
[1,] "2129" "chr1" "44M" "79702717" "79702760"
[[4]]
alnStarts alnEnds
[1,] "2165" "chr1" "44M" "43113438" "43113481"
[[5]]
alnStarts alnEnds
[1,] "2086" "chr1" "4M921N40M" "52155089" "52155092"
[2,] "2086" "chr1" "4M921N40M" "52156014" "52156053"
bed.chr.df<-do.call(rbind.data.frame,bed.chr.lom);
> bed.chr.df
V1 V2 V3 alnStarts alnEnds
1 2218 chr1 42M2S 24613476 24613517
2 2067 chr1 44M 87221030 87221073
3 2129 chr1 44M 79702717 79702760
4 2165 chr1 44M 43113438 43113481
5 2086 chr1 4M921N40M 52155089 52155092
6 2086 chr1 4M921N40M 52156014 52156053
bed.chr.dt<-as.data.table(bed.chr.df);
> bed.chr.dt
V1 V2 V3 alnStarts alnEnds
1: 2218 chr1 42M2S 24613476 24613517
2: 2067 chr1 44M 87221030 87221073
3: 2129 chr1 44M 79702717 79702760
4: 2165 chr1 44M 43113438 43113481
5: 2086 chr1 4M921N40M 52155089 52155092
6: 2086 chr1 4M921N40M 52156014 52156053
Assuming ff is your data.table, how about this?
splits <- cigarToIRangesListByAlignment(ff$cigar, ff$pos, reduce.ranges = TRUE)
widths <- width(attr(splits, 'partitioning'))
cbind(data.table(qname=rep.int(ff$qname, widths),
rname=rep.int(ff$rname, widths)), as.data.frame(splits))
qname rname space start end width
1: 2218 chr1 1 24613476 24613517 42
2: 2067 chr1 2 87221030 87221073 44
3: 2129 chr1 3 79702717 79702760 44
4: 2165 chr1 4 43113438 43113481 44
5: 2086 chr1 5 52155089 52155092 4
6: 2086 chr1 5 52156014 52156053 40
