When trying to use the cSPADE algorithm from the arulesSequences package in R, I always receive the following error:
Error in makebin(data, file) : 'sid' invalid (order)
I think the problem is somehow related to the ordering of the data, but I really don't know what the cause may be.
A similar problem was presented here, but the solution I tried to incorporate by using
arrange(sessions@itemsetInfo, sequenceID, eventID)
did not change anything. Thanks for your help!
#This is how I pre-processed the data
library(dplyr)
library(stringr)
library(arulesSequences)

load("example.Rda")
example.Rda %>% head(5) %>% knitr::kable()
str(example.Rda)
example.Rda = as.data.frame(example.Rda)
sessions <- as(example.Rda %>% transmute(items = question_type), "transactions")
transactionInfo(sessions)$sequenceID <- example.Rda$user_id
transactionInfo(sessions)$eventID <- example.Rda$item_id
itemLabels(sessions) <- str_replace_all(itemLabels(sessions), "items=", "")
inspect(head(sessions, 10))
arrange(sessions@itemsetInfo, sequenceID, eventID)
str(sessions)
View(sessions)
#This is the definition of the itemsets after which the error occurs
itemsets <- cspade(sessions,
                   parameter = list(support = 0.001),
                   control = list(verbose = FALSE))
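For what it's worth, cspade() expects the transactions to be ordered by sequenceID and then eventID, and arrange() on its own only returns a sorted copy without modifying sessions. A minimal sketch (not a verified fix) of one way to enforce that ordering, assuming example.Rda has the user_id, item_id and question_type columns used above, would be:
library(dplyr)
library(arulesSequences)

# sort the raw data first, so the transactions are built in the required order
example.sorted <- example.Rda %>% arrange(user_id, item_id)

sessions <- as(example.sorted %>% transmute(items = question_type), "transactions")
transactionInfo(sessions)$sequenceID <- example.sorted$user_id
transactionInfo(sessions)$eventID <- example.sorted$item_id

itemsets <- cspade(sessions,
                   parameter = list(support = 0.001),
                   control = list(verbose = FALSE))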
I want to parse the AAChange.refGene column and then use the biomaRt R package to extract information. My code raises Error in is.single.string(object) : argument "object" is missing, with no default even though the getSequence function is meant to accept multiple arguments.
library(tidyr)
variant_calls = read.delim("variant_calls.txt")
info = tidyr::separate(variant_calls["AAChange.refGene"], AAChange.refGene, c("Refseq ID", "cDNA level change", "Protein level change"), ":")
df = cbind(variant_calls["Gene.refGene"],info)
library(biomaRt)
ensembl <- useMart(biomart="ENSEMBL_MART_ENSEMBL", dataset="hsapiens_gene_ensembl", host="https://grch37.ensembl.org", path="/biomart/martservice")
pep <- vector()
for(i in 1:length(df$`Refseq ID`)){
  temp <- getSequence(id=df$`Refseq ID`[i], type='refseq_mrna', seqType='peptide', mart=ensembl)
  temp <- sapply(temp$peptide, nchar)
  temp <- sort(temp, decreasing = TRUE)
  temp <- names(temp[1])
  pep[i] <- temp
}
df$Sequence <- pep
Traceback:
Error in is.single.string(object) :
argument "object" is missing, with no default
I got the same error and found out (using ?getSequence) that it was a conflict between packages (classic R), specifically between biomaRt and seqinr, which is used to handle the FASTA format and is therefore probably often loaded together with biomaRt.
My solution was to call the function with an explicit namespace, like this:
biomaRt::getSequence()
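Inside the loop from the question, that would look something like this (a sketch using the same df and ensembl objects as above):
# qualify the call so the biomaRt version is used even when seqinr
# (which also exports a getSequence function) is attached
temp <- biomaRt::getSequence(id = df$`Refseq ID`[i],
                             type = 'refseq_mrna',
                             seqType = 'peptide',
                             mart = ensembl)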
I have some code that loops over a list of study IDs (ids) and turns them into separate polygons/spatial points. On the first execution of the loop it produces the following error:
Error in (function (x) : attempt to apply non-function
This is from the raster::rasterToPoints function. I've looked at the examples in the help section for this function, and passing fun=NULL seems to be an acceptable method (it filters out all NA values). All the values are equal to 1 anyway, so I tried passing a simple function as it suggests, such as function(x){x==1}. When this didn't work, I also tried to suppress the error message using try() or tryCatch(), but without any luck.
Main questions:
1. Why does this produce an error at all?
2. Why does it only display the error on the first run through the loop?
Reproducible example:
library(ggplot2)
library(raster)
library(sf)
library(dplyr)
pacific <- map_data("world2")
pac_mod <- pacific
coordinates(pac_mod) <- ~long+lat
proj4string(pac_mod) <- CRS("+init=epsg:4326")
pac_mod2 <- spTransform(pac_mod, CRS("+init=epsg:4326"))
pac_rast <- raster(pac_mod2, resolution=0.5)
values(pac_rast) <- 1
all_diet_density_samples <- data.frame(
  lat_min = c(35, 35),
  lat_max = c(65, 65),
  lon_min = c(140, 180),
  lon_max = c(180, 235),
  sample_replicates = c(38, 278),
  id = c(1, 2)
)
ids <- all_diet_density_samples$id
for (idnum in ids){
  poly1 = all_diet_density_samples[idnum,]
  pol = st_sfc(st_polygon(list(cbind(c(poly1$lon_min, poly1$lon_min, poly1$lon_max, poly1$lon_max, poly1$lon_min),
                                     c(poly1$lat_min, poly1$lat_max, poly1$lat_max, poly1$lat_min, poly1$lat_min)))))
  pol_sf = st_as_sf(pol)
  x <- rasterize(pol_sf, pac_rast)
  df1 <- raster::rasterToPoints(x, fun=NULL, spatial=FALSE) #ERROR HERE
  df2 <- as.data.frame(df1)
  density_poly <- all_diet_density_samples %>% filter(id == idnum) %>% pull(sample_replicates)
  df2$density <- density_poly
  write.csv(df2, paste0("pol_", idnum, ".csv"))
}
Any help would be greatly appreciated!
These are error messages, but not errors in the strict sense, as the script continues to run and the results are not affected. They are related to garbage collection (removal from memory of objects that are no longer in use), which makes it tricky to pinpoint what triggers them and why they do not always appear at the same spot.
Edit (Oct 2022)
These annoying messages
Error in x$.self$finalize() : attempt to apply non-function
Error in (function (x) : attempt to apply non-function
will disappear with the next release of Rcpp, which is planned for January 2023. You can also install the development version of Rcpp like this:
install.packages("Rcpp", repos="https://rcppcore.github.io/drat")
I'm writing because I would like to run a PCA in R with the ade4 package.
I have a data set called "PAYSAGE".
All the variables are numeric, PAYSAGE is a data frame, and there are no NAs or blanks.
But when I do:
require(ade4)
ACP<-dudi.pca(PAYSAGE)
2
I get the following error message:
You can reproduce this result non-interactively with:
dudi.pca(df = PAYSAGE, scannf = FALSE, nf = NA)
Error in if (nf <= 0) nf <- 2 : missing value where TRUE/FALSE needed
In addition: Warning message:
In as.dudi(df, col.w, row.w, scannf = scannf, nf = nf, call = match.call(), :
NAs introduced by coercion
I don't understand what that means. Do you have any idea?
Thank you so much!
I'd suggest sharing a data set/example others could access, if possible. This seems data-specific; given the "NAs introduced by coercion" warning, you may want to check the type of your input with typeof(PAYSAGE). The manual for dudi.pca states that it takes a data frame of numeric values as input.
Yes, for example :
ag_div <- c(75362,68795,78384,79087,79120,73155,58558,58444,68795,76223,50696,0,17161,0,0)
canne <- c(rep(0,10),5214,6030,0,0,0)
prairie_el<- c(60, rep(0,13),76985)
sol_nu <- c(18820,25948,13150,9903,12097,21032,35032,35504,25948,20438,12153,33096,15748,33260,44786)
urb_peu_d <- c(448,459,5575,5902,5562,458,6271,6136,459,1850,40,13871,40,13920,28669)
urb_den <- c(rep(0,12),14579,0,0)
veg_arbo <- c(2366,3327,3110,3006,3049,2632,7546,7620,3327,37100,3710,0,181,0,181)
veg_arbu <- c(18704,18526,15768,15527,15675,18886,12971,12790,18526,15975,22216,24257,30962,24001,14523)
eau <- c(rep(0,10),34747,31621,36966,32165,28054)
PAYSAGE<-data.frame(ag_div,canne,prairie_el,sol_nu,urb_peu_d,urb_den,veg_arbo,veg_arbu,eau)
require(ade4)
ACP<-dudi.pca(PAYSAGE)
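As a quick diagnostic (a sketch only, not a confirmed fix), it may help to verify that every column really is numeric and NA-free, and to run the PCA non-interactively so the axis prompt is skipped:
sapply(PAYSAGE, is.numeric)   # every column should be TRUE
anyNA(PAYSAGE)                # should be FALSE
ACP <- dudi.pca(PAYSAGE, scannf = FALSE, nf = 2)  # skip the interactive axis selection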
In R, the limma package can give you a list of differentially expressed genes.
How can I simply get all the probesets with the highest signal intensity with respect to a threshold?
Can I get only the most expressed genes in a healthy experiment, for example from one .CEL file? Or the most expressed genes from a set of .CEL files of the same group (all of the control group, or all of the sample group)?
If you run the following script, everything is OK: you have many .CEL files and they all work.
source("http://www.bioconductor.org/biocLite.R")
biocLite(c("GEOquery","affy","limma","gcrma"))
gse_number <- "GSE13887"
getGEOSuppFiles( gse_number )
COMPRESSED_CELS_DIRECTORY <- gse_number
untar( paste( gse_number , paste( gse_number , "RAW.tar" , sep="_") , sep="/" ), exdir=COMPRESSED_CELS_DIRECTORY)
cels <- list.files( COMPRESSED_CELS_DIRECTORY , pattern = "[gz]")
sapply( paste( COMPRESSED_CELS_DIRECTORY , cels, sep="/") , gunzip )
celData <- ReadAffy( celfile.path = gse_number )
gcrma.ExpressionSet <- gcrma(celData)
But if you manually delete all the .CEL files except one and execute the script from scratch, so that there is only 1 sample in the celData object:
> celData
AffyBatch object
size of arrays=1164x1164 features (17 kb)
cdf=HG-U133_Plus_2 (54675 affyids)
number of samples=1
number of genes=54675
annotation=hgu133plus2
notes=
Then you'll get the error:
Error in model.frame.default(formula = y ~ x, drop.unused.levels = TRUE) :
variable lengths differ (found for 'x')
How can I get the most expressed genes from 1 .CEL sample file?
I've found a library that could be useful for my purpose: the panp package.
But, if you run the following script:
if(!require(panp)) { biocLite("panp") }
library(panp)
myGDS <- getGEO("GDS2697")
eset <- GDS2eSet(myGDS,do.log2=TRUE)
my_pa <- pa.calls(eset)
you'll get an error:
> my_pa <- pa.calls(eset)
Error in if (chip == "hgu133b") { : the argument has length zero
even though the platform of the GDS is the one expected by the library.
If you instead run pa.calls() with gcrma.ExpressionSet as the parameter, then everything works:
my_pa <- pa.calls(gcrma.ExpressionSet)
Processing 28 chips: ############################
Processing complete.
In summary, if you run the script you'll get an error while executing:
my_pa <- pa.calls(eset)
and not while executing
my_pa <- pa.calls(gcrma.ExpressionSet)
Why, if they are both ExpressionSet objects?
> is(gcrma.ExpressionSet)
[1] "ExpressionSet" "eSet" "VersionedBiobase" "Versioned"
> is(eset)
[1] "ExpressionSet" "eSet" "VersionedBiobase" "Versioned"
Your gcrma.ExpressionSet is an object of class "ExpressionSet"; working with ExpressionSet objects is described in the Biobase vignette
vignette("ExpressionSetIntroduction")
also available on the Biobase landing page. In particular, the matrix of summarized expression values can be extracted with exprs(gcrma.ExpressionSet). So
> eset = gcrma.ExpressionSet ## easier to display
> which(exprs(eset) == max(exprs(eset)), arr.ind=TRUE)
row col
213477_x_at 22779 24
> sampleNames(eset)[24]
[1] "GSM349767.CEL"
Use justGCRMA() rather than ReadAffy() as a faster and more memory-efficient way to get to an ExpressionSet.
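A sketch of what that might look like (assuming the same directory of unpacked .CEL files as above; see ?justGCRMA for the exact arguments):
library(gcrma)
# read the .CEL files and compute GCRMA expression values in one step
gcrma.ExpressionSet <- justGCRMA(celfile.path = gse_number)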
Consider asking questions about Bioconductor packages on the Bioconductor support site, where you'll get fast responses from knowledgeable members.