I am currently trying to run a Spearman's rank correlation in Tableau using the new R capability. I was able to get the correct code in R, but I am having trouble putting it in a form Tableau understands.
My data is grouped by a Code field, so it is a grouped correlation. My code in R:
library(plyr)
ddply(mydata, "Code", summarise, corr = cor(Survey.1, Survey.2, method = "spearman"))
How do I use SCRIPT_REAL in Tableau to get that correlation?
For those wanting to understand the coding: I have figured it out!
SCRIPT_REAL("cor(.arg1, .arg2, method='spearman')", SUM([x]), SUM([y]))
Note: you need an ID field (1, 2, 3, ...) in your data and in the view so that the table calculation runs for each row. Then click the pill and set Compute Using to that ID field.
One step you need is to configure the connection to R.
See Help > Settings and Performance > Manage R Connection.
If you are using Tableau Server, it will also need the connection path to Rserve; see the online help.
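If Rserve is not already running, you can start it from R on the machine Tableau connects to. A minimal sketch, assuming the default port 6311 (the host and port must match what you enter in the Manage R Connection dialog):
install.packages("Rserve")
library(Rserve)
Rserve()   # starts Rserve on the default port 6311; Tableau connects to this service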
I want to create a coincidence matrix using RStudio for a decision tree that I have generated. I have done the same in SPSS but am not able to figure out how to do it in R. I am attaching an image of how it looks in SPSS. If you could point me to the right resource or link that explains the R equivalent of this, that would be very helpful. Thank you!
First, please try and ask targeted questions. What have you tried? What packages have you explored? Where are you getting stuck?
Nonetheless, I would start by reading A Short Introduction to the caret Package. Then do this:
install.packages("caret")
library(caret)
?confusionMatrix
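To see what it produces, here is a minimal sketch with hypothetical factor vectors predicted and actual; in practice these would be your decision tree's predictions and the true class labels:
library(caret)
actual    <- factor(c("yes", "no", "yes", "yes", "no"))
predicted <- factor(c("yes", "no", "no",  "yes", "no"), levels = levels(actual))
confusionMatrix(data = predicted, reference = actual)
# prints the cross-tabulation plus accuracy, kappa, sensitivity, specificity, etc.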
This is my first time using R for phylogenetics work, and I was wondering if I could do that. It seems a rather trivial job, and I think there must be a very short piece of code for it, but I am unable to get it done. Any help appreciated!
I am currently testing various community detection algorithms in the igraph package to compare against my implementation.
I am able to run the algorithms on different graphs, but I was wondering if there is a way to write the clustering to a file, where all nodes in one community are written to one line, and so on. I am able to obtain the membership of each node using membership(communities_object) and write that to a file using dput(), but I don't know how to write it the way I want.
This is the first time I am working with R as well. I apologize if this has been asked before.
This does not have much to do with igraph; the clustering is given as a simple numeric vector. See ?write.
write(membership(communities_object), file="myfile", ncolumns=1)
write(communities_object$membership, file="myfile", ncolumns=1) also works.
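If you specifically want one community per line (all of its nodes on the same line), here is a minimal sketch, assuming comm is your communities object; beyond igraph it uses only base R:
memb   <- membership(comm)
ids    <- if (is.null(names(memb))) seq_along(memb) else names(memb)  # vertex names if present, else indices
groups <- split(ids, memb)                                            # list: community id -> member nodes
writeLines(vapply(groups, paste, character(1), collapse = " "), "communities.txt")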
I am trying to get comfortable with the 'rattle' package in R, but I am having issues building a neural network with it.
I have a training data set of 140 columns and 200,000 rows and a target variable that takes values from 0-4 depending on the class it belongs to. It is a classic pattern classification problem.
When I load my data into rattle, the 'Neural Net' option under the 'Model' tab is greyed out. Is there a prerequisite that my data doesn't fulfil?
I know I can use neural network specific packages to implement one, but the situation requires me to use rattle.
Any clues/suggestions are very much appreciated.
Thanks in advance!
Make sure that your dataset contains only numeric values. Otherwise, go to the Data tab and set the offending feature to 'Ignore' so it is excluded from the model.
Check this link if you want to do the same without using the GUI
http://www.r-bloggers.com/visualizing-neural-networks-in-r-update/
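For reference, here is a minimal sketch of fitting such a network directly with the nnet package (which rattle itself uses); train_df and its factor target column target are hypothetical stand-ins for your data:
library(nnet)
train_df$target <- as.factor(train_df$target)   # 5 classes: 0-4
fit <- nnet(target ~ ., data = train_df,
            size    = 10,      # hidden units
            decay   = 5e-4,    # weight decay (regularisation)
            maxit   = 200,
            MaxNWts = 5000)    # raise the weight limit to cover ~140 inputs
pred <- predict(fit, train_df, type = "class")
table(predicted = pred, actual = train_df$target)   # confusion matrix on the training data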
I am trying to find a way to use the distance weighted discrimination (DWD) method to remove biases from multiple microarray datasets.
My starting point is this. The problem is that the Matlab version runs only under Windows and needs Excel 5 format as input, where the data appears to be truncated at row 65535; the Matlab error is:
Error reading record for cells starting at column 65535. Try saving as Excel 98.
The Java version runs only with caBIG support, which, if I understood correctly, has been shut down recently.
So I searched a lot and found the R/DWD package, but from the examples I could not work out how to pass the two datasets I want to merge to the kdwd function.
Does anybody know how to use it?
Thanks
Try this; it has a DWD implementation:
http://www.bioconductor.org/packages/release/bioc/html/inSilicoMerging.html
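A rough sketch of how the merge might look with that package, assuming (based on its vignette, so please double-check the current documentation) that it exposes a merge() function taking a list of ExpressionSet objects and a method argument; eset1 and eset2 stand for your two datasets:
# install via Bioconductor (see the link above), then:
library(inSilicoMerging)
library(Biobase)
merged <- merge(list(eset1, eset2), method = "DWD")   # method name assumed from the package documentation
exprs(merged)   # the bias-adjusted, merged expression matrix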