Running R 3.5.1 in RStudio.
I’ve edited pam.res$clustering to manually change the clusters.
Used silhouette() to try to observe the silhouette info for the edited clustering:
mahal <- D2.dist(data, cov.wt(data)$cov)
newsil <- silhouette(pam.res$clustering, mahal)
And all I get from summary(newsil) is:
   Mode    NA's
logical       1
I can't index into newsil either, because it's just an atomic vector (a single NA), which it shouldn't be.
Can’t figure out what’s gone wrong. Any ideas? Thanks.
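In case it helps, here is a minimal sketch of the objects involved (assuming data and pam.res as above; D2.dist() comes from the biotools package). If I read the cluster documentation correctly, silhouette() is only defined when the clustering has between 2 and n-1 non-empty clusters and simply returns NA otherwise, so a manually edited clustering that no longer satisfies that is one thing worth ruling out:

library(cluster)    # silhouette()
library(biotools)   # D2.dist(), as used above

cl <- as.integer(pam.res$clustering)        # silhouette() expects integer cluster codes
table(cl)                                   # should show at least 2 (and at most n-1) non-empty clusters
mahal <- D2.dist(data, cov.wt(data)$cov)    # squared Mahalanobis distances, class "dist"
attr(mahal, "Size") == length(cl)           # dist object and clustering must cover the same observations
newsil <- silhouette(cl, dist = mahal)
summary(newsil)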
I'm using the latest version of the MDMR package in R to analyze whether there is a significant relationship between a set of independent and dependent variables. However, when I execute something like the following:
library(MDMR)
results <- mdmr(X = predictor_variable, D = distance_matrix)
I get 0.00199799899899874826986 as the output p-value.
However, when I try to display it later with summary(results), it shows <0.002, and if I try to assign the value to a variable, 0 is assigned instead. I presume this is a floating-point issue in a calculation within R, but I'm clueless about how to fix it.
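For what it's worth, part of this is just R's default printing: R displays options("digits") significant digits (7 by default), so a stored value can look rounded in one place and not another. A small sketch of the display side (how the mdmr result object itself stores the p-value depends on the MDMR package, so str(results) is the way to see what is actually in there):

p <- 0.00199799899899874826986
print(p)               # 0.001997999 with the default 7 significant digits
print(p, digits = 22)  # shows the full double-precision value
str(results)           # inspect where/how the p-value is stored in the mdmr object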
I was running a spatstat envelope to generate simulation samples; however, it got stuck and did not finish. When I attempted to close the application, that failed too.
RStudio diagnostic log
Additional error message:
This application has requested the Runtime to terminate it in an
unusual way. Please contact the application's support team for more
information
There are several typing errors in the command shown in the question. The argument rank should be nrank and the argument glocal should be global. I will assume that these were typed correctly when you ran the command.
Since global=TRUE this command will generate 2 * nsim = 198 realisations of a completely random pattern and calculate the L function for each one of them. In my experience it should take only a minute or two to compute this, unless the geometry of the window is very complicated. One hour is really extraordinary.
So I'm guessing either you have a very complicated window (so that the edge correction calculation is taking a long time) or that RStudio is hanging somehow.
Try setting correction="border" or correction="none" and see if that makes it run faster. (These are the fastest choices.) If that works, then read the help for Lest or Kest about edge corrections, and choose an edge correction that you like. If not, then try running the same command in R instead of RStudio.
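For concreteness, a sketch of that suggestion (your original envelope() call isn't shown here, so X stands for your point pattern and the other arguments are guesses consistent with the 198 simulations mentioned above):

library(spatstat)
env <- envelope(X, Lest, nsim = 99, nrank = 1, global = TRUE,
                correction = "border")   # or correction = "none"
plot(env)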
I'm trying to create a wordcloud using R in Eclipse. I've been working with R for some weeks without any problem and have created lots of different plots, but when I create a wordcloud, whatever kind and whatever configuration I use, all the words come out overlapped.
I've followed different examples and I always get the words overlapped. For example, if I execute this code:
library(wordcloud)
library(tm)
wordcloud("May our children and our children's children to a
thousand generations, continue to enjoy the benefits conferred
upon us by a united country, and have cause yet to rejoice under
those glorious institutions bequeathed us by Washington and his
compeers.",colors=brewer.pal(6,"Dark2"),random.order=FALSE)
I get this result:
As you can see, all the words are overlapped and I don't know what to do. I've searched a lot on the Internet and didn't find any clue.
The arguments of the wordcloud() function include:
"use.r.layout - if false, then c++ code is used for collision detection, otherwise R is used"
- Documentation for the wordcloud package.
There may be some difficulty with Eclipse and the use of R vs. C++ for the layout. As I'm unsure what wordcloud's default is, try toggling that argument between TRUE and FALSE, e.g.
wordcloud(your_text, use.r.layout = TRUE, colors = brewer.pal(6, "Dark2"), random.order = FALSE)
where your_text is the text (or corpus) you are plotting.
I got this problem after adding the command
Sys.setlocale('LC_ALL','C')
Removing this line made wordclouds work fine again.
I am using Jupyter Notebook with an R kernel.
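If the locale has already been changed in the running session, it can be switched back without restarting the kernel; a small sketch (the exact locale name returned depends on the operating system):

Sys.setlocale("LC_ALL", "C")   # this is the setting that broke the wordclouds for me
Sys.setlocale("LC_ALL", "")    # "" resets every category to the system's native locale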
I am trying to find a way to use the distance weighted discrimination (DWD) method to remove biases from multiple microarray datasets.
My starting point is this. The problem is that the Matlab version runs only under Windows and needs Excel 5 format as input, where the data appears to be truncated at line 65535; the Matlab error is:
Error reading record for cells starting at column 65535. Try saving as Excel 98.
The Java version runs only with caBIG support, which, if I understood correctly, has been shut down recently.
So I searched a lot and found the R DWD package, but from the example I could not work out how to pass the two datasets I want to merge to the kdwd function.
Does anybody know how to use it?
Thanks
Try this package, which has a DWD implementation:
http://www.bioconductor.org/packages/release/bioc/html/inSilicoMerging.html
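From what I remember of its vignette, the package merges a list of ExpressionSet objects through a single merging function, with DWD among the available methods; the function and method names below are from memory, so treat them as assumptions and check the package documentation before relying on them:

library(inSilicoMerging)
# esets: a list of Biobase ExpressionSet objects, one per microarray study
# (assumed API: a merge() wrapper taking the list and a method name)
merged <- merge(esets, method = "DWD")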
I recently started experimenting with the biganalytics package for R, but I ran into a problem...
I am trying to run bigkmeans with about 2000 cluster centers, e.g. clust <- bigkmeans(mymatrix, centers = 2000)
However, I get the following error:
Error in 1:(10 + 2^k) : result would be too long a vector
Can someone maybe give me a hint what I am doing wrong here?
Vectors are limited by the type used for the index -- there is/was some talk about replacing this index type with a double, but it hasn't happened yet and is unlikely to, as it might break so much existing code.
If your k is really large, you may not be able to do this the way you had planned.
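A minimal way to reproduce the error outside of bigkmeans, assuming the k in the error message is the number of centers you requested: with k = 2000, 2^k overflows to Inf in double precision, and R cannot build the index vector.

2^2000            # Inf: larger than the largest representable double (~1.8e308)
1:(10 + 2^2000)   # Error: result would be too long a vector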