I want to compare the ID columns of two data tables using an R script (TERR) in Spotfire. Due to some limitations I am not able to install the packages "compare" and "sqldf", but I can use the base function "duplicated". Can someone help me create a sample script without using the above packages?
Please see the images below for the detailed requirements.
Two Data Tables
Result Table
Thanks,
-Vidya
Let's say you have two vectors setA and setB. You can get the result with:
# in A but not in B
setdiff(setA,setB)
# in B but not in A
setdiff(setB,setA)
# both in A and B
intersect(setA,setB)
If you just want to know the count, wrap the call in the length function. This may not be the exact answer you were looking for, but using the above functions you can create any set you want. If you need help with a specific piece of logic, please update your question.
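For example, a minimal self-contained sketch (the ID values here are made up for illustration):
# hypothetical ID columns taken from the two data tables
setA <- c(101, 102, 103, 104)
setB <- c(103, 104, 105)
setdiff(setA, setB)           # 101 102 -> in A but not in B
setdiff(setB, setA)           # 105     -> in B but not in A
intersect(setA, setB)         # 103 104 -> in both
length(setdiff(setA, setB))   # 2, the count rather than the values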
I have a set of SMILES codes of different molecules and I would like to know how to determine similarity among them. I have decided to use the ChemmineR package based on this tutorial. The issue is that I cannot understand how to connect my dataframe and use it like a ChemmineR object in order to run the analysis on SMILES.
DrugName<-c("alclofenac","alosetron")
DrugID_CID<-c("30951","2099")
DrugID<-c("CHEMBL94081","DB00969")
DrugBank<-c("DB13167","DB00969")
SMILES<-c("OC(=O)Cc1ccc(OCC=C)c(Cl)c1","Cc1[nH]cnc1CN1CCc2c(C1=O)c1ccccc1n2C")
Target<-c("PTGS1","HTR3A")
test <- data.frame(DrugName, DrugID_CID, DrugID, DrugBank, SMILES, Target, stringsAsFactors = FALSE)  # keep SMILES as character, not factor
I have used the read.SMIset function, which imports one or many molecules from a SMILES file and stores them in a SMIset container, but I cannot understand how to proceed further.
library("ChemmineR")
test; smiset <- smisample
write.SMI(smiset, file="sub.smi")
smiset <- read.SMIset("sub.smi")
data(smisample) # Loads the same SMIset provided by the library
smiset <- smisample
smiset
view(smiset)
cid(smiset)
smi <- as.character(smiset)
as(smi, "SMIset")
It's not entirely clear what you want to compare with what. However, here is one way to proceed with the SMILES in your example data frame.
First you need to convert the SMILES to a SDFset. This is the first step in most ChemmineR operations.
test_sdf <- smiles2sdf(test$SMILES)
For pairwise comparison using atom pairs, you need to convert again to an APset:
test_ap <- sdf2ap(test_sdf)
You could now compare, for example, the first compound in the APset with the second:
cmp.similarity(test_ap[1], test_ap[2])
[1] 0.1313131
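To compare every compound against every other one, you can loop over the APset. A minimal sketch (cmp.similarity and cid are the ChemmineR functions used above; the looping itself is my addition):
# all-against-all similarity matrix (only 2 compounds here, but it scales)
n <- length(test_ap)
sim <- sapply(seq_len(n), function(i)
  sapply(seq_len(n), function(j) cmp.similarity(test_ap[i], test_ap[j])))
dimnames(sim) <- list(cid(test_ap), cid(test_ap))
sim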
I would spend some time reading and working through the ChemmineR vignette linked in your question. It's a lot of information, but it is well presented, very clear, and covers most things that you'll want to do.
I'm a beginner in R and I'm working on an automation. I have a list of variables in a separate file, based on which values need to be aggregated in the master dataset. The master data structure is attached (Master Dataset)
and the referral dataset contains the variables to be aggregated (Referral dataset).
Of the 6 variables, I need to aggregate C with sum(), grouped by the variables D, E and F (as per the referral dataset).
The code below meets my requirement manually:
X <- aggregate(C, by = list(D, E, F), FUN = sum)
But I need code that does the same thing automatically. I tried writing loops, but the problem I face is that the two datasets don't have the same data.frame size. Can someone help me with this?
So, it seems like you want to do a few things:
1) read in the master/referent datasets
2) subset the master according to the values in the referent
3) compute column sums on the master?
Also, is there a specific reason you want to use aggregate()? There are probably lots of ways to do this. In any case, here is what I would do:
# assuming master is a data frame or matrix and referent is a vector;
# just simulating them here because it's not clear how you are reading them in
master <- matrix(rnorm(36), 6)
colnames(master) <- c('A','B','C','D','E','F')
referent <- c('D','E','F')
colSums(master[, referent])
So, is that doing what you want? I like colSums() because it's a handy built-in. I am not an R superstar, though, so it is possible that other approaches are better for some reason.
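If you do want to drive aggregate() from the referral file, here is a hedged sketch (the file name "referral.csv" and its one-column layout are assumptions about your setup):
# read the grouping variable names from the referral file, e.g. c("D","E","F")
referent <- read.csv("referral.csv", stringsAsFactors = FALSE)[[1]]
master_df <- as.data.frame(master)
# aggregate C over whatever grouping columns the referral file names
X <- aggregate(master_df$C, by = master_df[referent], FUN = sum)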
I have an .xdf file on an HDFS cluster which is around 10 GB and has nearly 70 columns. I want to read it into an R object so that I can perform some transformations and manipulations. I tried Googling it and came across two functions:
rxReadXdf
rxXdfToDataFrame
Could anyone tell me the preferred function for this, given that I want to read the data and perform the transformations in parallel on each node of the cluster?
Also, if I read and transform the data in chunks, do I have to merge the output of each chunk?
Thanks for your help in advance.
Cheers,
Amit
Note that rxReadXdf and rxXdfToDataFrame have different arguments and do slightly different things:
rxReadXdf has a numRows argument, so use this if you want to read the top 1000 (say) rows of the dataset
rxXdfToDataFrame supports rxTransforms, so use this if you want to manipulate your data in addition to reading it
rxXdfToDataFrame also has the maxRowsByCols argument, which is another way of capping the size of the input
So in your case, you want to use rxXdfToDataFrame since you're transforming the data in addition to reading it. rxReadXdf is a bit faster in the local compute context if you just want to read the data (no transforms). This is probably also true for HDFS, but I haven’t checked this.
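For example, something along these lines (a sketch only: the file name, column name, and transform are placeholders, and it assumes the RevoScaleR package is loaded):
# read the xdf into a data frame, applying a transform on the way in
df <- rxXdfToDataFrame("mydata.xdf",
                       transforms = list(logVal = log(val)),  # 'val' is a made-up column
                       maxRowsByCols = 3e6)                   # cap the size of the input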
However, are you sure that you want to read the data into a data frame? You can use rxDataStep to run (almost) arbitrary R code on an xdf file, while still leaving your data in that format. See the linked documentation page for how to use the transforms arguments.
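A hedged sketch of the rxDataStep route (again, all names are placeholders). rxDataStep processes the file block by block, so the chunk merging you ask about is handled for you and the result stays in xdf format:
# transform without pulling the whole dataset into memory
rxDataStep(inData = "mydata.xdf",
           outFile = "mydata_transformed.xdf",
           transforms = list(logVal = log(val)),  # 'val' is a made-up column
           overwrite = TRUE)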
So I'm trying to manipulate a simple Qualtrics CSV, and I want to use colSums on certain columns of data, given a certain filter.
For example: within the .csv file called data, I want to get the sum of a few columns, and print them with certain labels (say choice1, choice2 etc). That is easy enough by itself:
firstqn<-data.frame(choice1=data$Q7_2,choice2=data$Q7_3,choice3=data$Q7_4);
secondqn<-data.frame(choice1=data$Q8_6,choice2=data$Q8_7,choice3=data$Q8_8)
print(colSums(firstqn)); print(colSums(secondqn))
The problem comes when I want to repeat the above steps with different filters; say, only the rows where gender == 2.
The only way I know how is to create a new dataset data2 and replace data$ with data2$ in every line of the above code, such as:
data2<-(data[data$Q2==2,])
firstqn<-data.frame(choice1=data2$Q7_2,choice2=data2$Q7_3,choice3=data2$Q7_4);
However, I have 6 choices for each of 5 questions and am planning to apply about 5-10 different filters, and I don't relish the thought of copy/pasting data2, data3, etc. hundreds of times.
So my question is: Is there any way of getting R to reference data by default without using data$ in front of every variable name?
I can probably use attach() to achieve this, but I really don't want to:
data2<-(data[data$Q2==2,])
attach(data2)
firstqn<-data.frame(choice1=Q7_2,choice2=Q7_3,choice3=Q7_4);
detach(data2)
is there a command like attach() that would allow me to avoid using data$ in front of every variable, for a specified amount of code? Then whenever I wanted to create a new filter, I could just copy/paste the same code and change the first command (defining a new dataset).
I guess I'm looking for some command like with(data2, *insert multiple commands here*)
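Something like this, if with() does what I think it does (based on my example above):
# with() evaluates the expression using data2's columns, so no data2$ prefixes
data2 <- data[data$Q2 == 2, ]
firstqn <- with(data2, data.frame(choice1 = Q7_2, choice2 = Q7_3, choice3 = Q7_4))
print(colSums(firstqn))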
Alternatively, if anyone has an entirely different and better way to do the above, please enlighten me - I'm not very proficient at R (yet).
I have a dataset in SPSS that has 100K+ rows and over 100 columns. I want to filter both the rows and columns at the same time into a new SPSS dataset.
I can accomplish this very easily using the subset command in R. For example:
new_data <- subset(old_data, subset = ColumnA > 10, select = c(ColumnA, ColumnC, ColumnZZ))
Even easier would be:
new_data <- old_data[old_data$ColumnA > 10, c(1, 4, 89)]
where I am passing the column indices instead.
What is the equivalent in SPSS?
I love R, but the read/write and data management speed of SPSS is significantly better.
I am not sure exactly what you are referring to when you write that "the read/write and data management speed of SPSS" is significantly better than R's. Your question itself demonstrates how flexible R is at data management! And a dataset of 100K+ rows and 100 columns is by no means a large one.
But, to answer your question, perhaps you are looking for something like this. I'm providing a "programmatic" solution, rather than the GUI one, because you're asking the question on Stack Overflow, where the focus is more on the programming side of things. I'm using a sample data file that can be found here: http://www.ats.ucla.edu/stat/spss/examples/chp/p004.sav
Save that file to your SPSS working directory, open up your SPSS syntax editor, and type the following:
GET FILE='p004.sav'.
SELECT IF (lactatio <= 3).
SAVE OUTFILE= 'mynewdatafile.sav'
/KEEP currentm previous lactatio.
GET FILE='mynewdatafile.sav'.
More likely, though, you'll have to go through something like this:
FILE HANDLE directoryPath /NAME='C:\path\to\working\directory\' .
FILE HANDLE myFile /NAME='directoryPath/p004.sav' .
GET FILE='myFile'.
SELECT IF (lactatio <= 3).
SAVE OUTFILE= 'directoryPath/mynewdatafile.sav'
/KEEP currentm previous lactatio.
FILE HANDLE myFile /NAME='directoryPath/mynewdatafile.sav'.
GET FILE='myFile'.
You should now have a new file created that has just three columns, and where no value in the "lactatio" column is greater than 3.
So, the basic steps are:
Load the data you want to work with.
Subset for all columns from all the cases you're interested in.
Save a new file with only the variables you're interested in.
Load that new file before you proceed.
With R, the basic steps are:
Load the data you want to work with.
Create an object with your subset of rows and columns (which you know how to do).
Hmm.... I don't know about you, but I know which method I prefer ;)
If you're using the right tools with R, you can also directly read in the specific subset you are interested in without first loading the whole dataset if speed really is an issue.
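For instance, a sketch using the haven package (my choice of package, not something the answer above relied on; the variable names follow the example file):
# read only the three variables of interest straight from the .sav file
library(haven)
sub <- read_sav("p004.sav", col_select = c(currentm, previous, lactatio))
sub <- sub[sub$lactatio <= 3, ]   # then filter the rows in R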
In SPSS you can't combine the two actions in one command, but it's easy enough to do it in two:
dataset copy old_data. /* delete this if you don't need to keep both old and new data.
select if ColumnA>10.
add files /file=* /keep=ColumnA ColumnC ColumnZZ.