Translate Stata "gen" into R

I am reproducing some Stata code in R and struggling with the following command:
gen new_var=((var1_a==1 & var1_b==0) | (var2_a==1 & var2_b==0))
I am generally familiar with the gen syntax, but in this case I do not understand how values are assigned based on the boolean condition.
What would the above be in R?

In Stata, the above gen command works because your in-memory dataset (similar to a single R data frame) contains variables named var1_a, var1_b, var2_a, and var2_b; the logical expression evaluates to 1 (true) or 0 (false) for each observation, and that is the value gen assigns. If these variables exist as vectors in your R environment, then our colleague Nick Cox is exactly correct: all that is needed is the statement without the leading gen, which produces TRUE/FALSE values that behave as 1/0 in arithmetic (although typically in R we would write it like this):
new_var <- (var1_a==1 & var1_b==0) | (var2_a==1 & var2_b==0)
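If you want literal 0/1 values, as Stata stores them, one option (a sketch, not a requirement) is to wrap the logical expression in as.integer():
new_var <- as.integer((var1_a==1 & var1_b==0) | (var2_a==1 & var2_b==0))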
However, if you have a data frame object, say df, that contains columns with these names, and the objective is to add another column to df that reflects your logical condition (like adding a new variable ("column") to the dataset in Stata with generate / gen), then the above approach will not work, because the columns var1_a, var1_b, etc. will not be found in the global environment.
Instead, to add a new column called new_var to the dataframe called df, we would write something like this:
df["new_var"] <- (df$var1_a==1 & df$var1_b==0) | (df$var2_a==1 & var2_b==0)

Related

SPSS create variable function in R

I am trying to switch from SPSS to R, but I'm still stuck on some aspects of the R logic at the moment. Right now, I am trying to rebuild something like the "create new variable" function from SPSS, specifically the following SPSS syntax:
COMPUTE ABC_WC = 1- ((ABS ( WC_B - WC_A )) / ( WC_B + WC_A + 0.0001))
In R, my data frame contains all of the variables involved (WC_A and WC_B) as columns, each with X observations. I need to repeat this calculation for a fixed, repeating set of variables every time I run my analyses, so a more or less automated version of this calculation across all 80 variables would be great.
Thank you very much!
If you have a data frame called df then the following should do the job in R:
df$ABC_WC <- 1 - ((abs(df$WC_B - df$WC_A)) / (df$WC_B + df$WC_A + 0.0001))
To generalize this, write a function that takes the data frame and the relevant column names as arguments, then call it in a for loop or with one of the apply functions across all the variable pairs.
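A minimal sketch of that idea, assuming the 80 variables come in pairs named with the suffixes _A and _B; the similarity() helper, the prefixes vector and the prefix "AB" are illustrative, only WC comes from the question:
# hypothetical helper: 1 - |B - A| / (B + A + 0.0001) for one variable pair
similarity <- function(df, prefix) {
  a <- df[[paste0(prefix, "_A")]]
  b <- df[[paste0(prefix, "_B")]]
  1 - abs(b - a) / (b + a + 0.0001)
}
prefixes <- c("WC", "AB")  # extend to all of your variable pairs
for (p in prefixes) {
  df[[paste0("ABC_", p)]] <- similarity(df, p)
}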

How to create variables using R through SPSS and pass back to SPSS?

I've been experimenting with calling R through SPSS.
I have figured out how to pull SPSS data into an R dataframe, create a variable, and pass the dataframe with the new variable back to a SPSS data set.
What I cannot figure out how to do is pass back variables that are additional transformations of the first variable created using R.
Specifically, I first create the variable
index <- c("INDX","label",0,"F8.2","scale")
by scaling the variable B from 0 to 1 and create the dataframe casedata using the code below:
casedata <- data.frame(casedata, ave(casedata$B, casedata$Patient_Type,
FUN = function(x) (x- min(x))/(max(x)- min(x))))
I can successfully pass the new dataframe back to SPSS and everything's fine. But in the same call to R, I would like to create a new variable
indexave <- c("INDX_Ave","label",0,"F8.2","scale")
which indexes INDX to the average of itself using the code below:
casedata <- data.frame(casedata, casedata$INDX/mean(casedata$INDX))
I cannot figure out how to pass INDX_Ave back to SPSS.
I suspect that it has to do with the way SPSS assigns names to new variables. You'll notice that
ave(casedata$B, casedata$Patient_Type, FUN = function(x) (x - min(x))/(max(x) - min(x)))
doesn't have casedata$INDX= in front of it. SPSS apparently knows from this line of code
index <- c("INDX","label",0,"F8.2","scale")
to pass the name INDX to the first variable created. I believe this disjointedness of the variable name from the variable itself is preventing the additional variable INDX_Ave from being created.
Below is my entire program block:
BEGIN PROGRAM R.
dict <- spssdictionary.GetDictionaryFromSPSS()
casedata <- spssdata.GetDataFromSPSS(factorMode="labels")
catdict <- spssdictionary.GetCategoricalDictionaryFromSPSS()
index <- c("INDX","Level Importance Index",0,"F8.2","scale")
indexave <- c("INDX_Ave","Level importance indexed to average importance",0,"F8.2","scale")
dict<-data.frame(dict,index,indexave)
casedata <- data.frame(casedata, ave(casedata$B, casedata$Patient_Type,
FUN = function(x) (x- min(x))/(max(x)- min(x))))
casedata <- data.frame(casedata, casedata$INDX/mean(casedata$INDX)) # doesn't work
spssdictionary.SetDictionaryToSPSS("BWOverallBetas2",dict,categoryDictionary=catdict)
spssdata.SetDataToSPSS("BWOverallBetas2",casedata,categoryDictionary=catdict)
spssdictionary.EndDataStep()
END PROGRAM.
See the section "Writing Results to a New IBM SPSS Statistics Dataset" in the R Programmability doc. The names in the dictionary you pass govern the names on the SPSS side, but note that the rules for legal variable names in SPSS and R are different, although that isn't an issue here. Also, you can't create a dataset if SPSS is in procedure state (also not an issue with this code).
Your code adds INDX to the SPSS dictionary and computes it via ave but does not assign the name INDX in the casedata data frame. Then it adds another variable but does not add that to the dictionary to be sent to SPSS, so the sizes of the dictionary and the data frames don't match.
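A minimal sketch of the fix implied above, reusing the objects from the question: assign each new column by name so that the column names in casedata line up with the INDX and INDX_Ave entries already added to dict. These two lines would replace the two data.frame() calls in the program block:
# give the scaled variable the name declared in the dictionary
casedata$INDX <- ave(casedata$B, casedata$Patient_Type,
                     FUN = function(x) (x - min(x)) / (max(x) - min(x)))
# the second transformation can now refer to INDX by name
casedata$INDX_Ave <- casedata$INDX / mean(casedata$INDX)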
Note also that you can omit the factorMode argument in GetDataFromSPSS and then not bother with the categorical dictionary, because the values will be unchanged.
HTH

Loop and clear the basic function in R

I've got this dataset
install.packages("combinat")
install.packages("quantmod")
library(quantmod)
library(combinat)
library(utils)
getSymbols("AAPL",from="2012-01-01")
data<-AAPL
p1<-4
dO<-data[,1]
dC<-data[,4]
emaO<-EMA(dO,n=p1)
emaC<-EMA(dC,n=p1)
Pos_emaO_dO_UP<-emaO>dO
Pos_emaO_dO_D<-emaO<dO
Pos_emaC_dC_UP<-emaC>dC
Pos_emaC_dC_D<-emaC<dC
Pos_emaC_dO_D<-emaC<dO
Pos_emaC_dO_UP<-emaC>dO
Pos_emaO_dC_UP<-emaO>dC
Pos_emaO_dC_D<-emaO<dC
Profit_L_1<-((lag(dC,-1)-lag(dO,-1))/(lag(dO,-1)))*100
Profit_L_2<-(((lag(dC,-2)-lag(dO,-1))/(lag(dO,-1)))*100)/2
Profit_L_3<-(((lag(dC,-3)-lag(dO,-1))/(lag(dO,-1)))*100)/3
Profit_L_4<-(((lag(dC,-4)-lag(dO,-1))/(lag(dO,-1)))*100)/4
Profit_L_5<-(((lag(dC,-5)-lag(dO,-1))/(lag(dO,-1)))*100)/5
Profit_L_6<-(((lag(dC,-6)-lag(dO,-1))/(lag(dO,-1)))*100)/6
Profit_L_7<-(((lag(dC,-7)-lag(dO,-1))/(lag(dO,-1)))*100)/7
Profit_L_8<-(((lag(dC,-8)-lag(dO,-1))/(lag(dO,-1)))*100)/8
Profit_L_9<-(((lag(dC,-9)-lag(dO,-1))/(lag(dO,-1)))*100)/9
Profit_L_10<-(((lag(dC,-10)-lag(dO,-1))/(lag(dO,-1)))*100)/10
which go into this data frame
frame<-data.frame(Pos_emaO_dO_UP,Pos_emaO_dO_D,Pos_emaC_dC_UP,Pos_emaC_dC_D,Pos_emaC_dO_D,Pos_emaC_dO_UP,Pos_emaO_dC_UP,Pos_emaO_dC_D,Profit_L_1,Profit_L_2,Profit_L_3,Profit_L_4,Profit_L_5,Profit_L_6,Profit_L_7,Profit_L_8,Profit_L_9,Profit_L_10)
colnames(frame)<-c("Pos_emaO_dO_UP","Pos_emaO_dO_D","Pos_emaC_dC_UP","Pos_emaC_dC_D","Pos_emaC_dO_D","Pos_emaC_dO_UP","Pos_emaO_dC_UP","Pos_emaO_dC_D","Profit_L_1","Profit_L_2","Profit_L_3","Profit_L_4","Profit_L_5","Profit_L_6","Profit_L_7","Profit_L_8","Profit_L_9","Profit_L_10")
There is a vector with the variable names for later use
vector<-c("Pos_emaO_dO_UP","Pos_emaO_dO_D","Pos_emaC_dC_UP","Pos_emaC_dC_D","Pos_emaC_dO_D","Pos_emaC_dO_UP","Pos_emaO_dC_UP","Pos_emaO_dC_D")
I made all possible combinations of 4 variables from the vector (none of these are dependent variables)
comb<-as.data.frame(combn(vector,4))
comb
and removed the "nonsense" combinations (those where both directions of the same variable, _UP and _D, appear)
rc<-comb[!sapply(comb, function(x) any(duplicated(sub('_D|_UP', '', x))))]
rc
Then I prepare the first combination for later subsetting
var<-paste(rc[,1],collapse=" & ")
var
and subset the frame (with all DVs)
kr<-eval(parse(text=paste0('subset(frame,' , var,')' )))
kr
Now I have the data frame subset by the first combination of 4 variables.
Then I apply this evaluation function to it:
evaluation<-function(x){
s_1<-nrow(x[x$Profit_L_1>0,])/nrow(x)
s_2<-nrow(x[x$Profit_L_2>0,])/nrow(x)
s_3<-nrow(x[x$Profit_L_3>0,])/nrow(x)
s_4<-nrow(x[x$Profit_L_4>0,])/nrow(x)
s_5<-nrow(x[x$Profit_L_5>0,])/nrow(x)
s_6<-nrow(x[x$Profit_L_6>0,])/nrow(x)
s_7<-nrow(x[x$Profit_L_7>0,])/nrow(x)
s_8<-nrow(x[x$Profit_L_8>0,])/nrow(x)
s_9<-nrow(x[x$Profit_L_9>0,])/nrow(x)
s_10<-nrow(x[x$Profit_L_10>0,])/nrow(x)
n_1<-nrow(x[x$Profit_L_1>0,])/nrow(frame)
n_2<-nrow(x[x$Profit_L_2>0,])/nrow(frame)
n_3<-nrow(x[x$Profit_L_3>0,])/nrow(frame)
n_4<-nrow(x[x$Profit_L_4>0,])/nrow(frame)
n_5<-nrow(x[x$Profit_L_5>0,])/nrow(frame)
n_6<-nrow(x[x$Profit_L_6>0,])/nrow(frame)
n_7<-nrow(x[x$Profit_L_7>0,])/nrow(frame)
n_8<-nrow(x[x$Profit_L_8>0,])/nrow(frame)
n_9<-nrow(x[x$Profit_L_9>0,])/nrow(frame)
n_10<-nrow(x[x$Profit_L_10>0,])/nrow(frame)
pr_1<-sum(kr[,"Profit_L_1"])/nrow(kr[,kr=="Profit_L_1"])
pr_2<-sum(kr[,"Profit_L_2"])/nrow(kr[,kr=="Profit_L_2"])
pr_3<-sum(kr[,"Profit_L_3"])/nrow(kr[,kr=="Profit_L_3"])
pr_4<-sum(kr[,"Profit_L_4"])/nrow(kr[,kr=="Profit_L_4"])
pr_5<-sum(kr[,"Profit_L_5"])/nrow(kr[,kr=="Profit_L_5"])
pr_6<-sum(kr[,"Profit_L_6"])/nrow(kr[,kr=="Profit_L_6"])
pr_7<-sum(kr[,"Profit_L_7"])/nrow(kr[,kr=="Profit_L_7"])
pr_8<-sum(kr[,"Profit_L_8"])/nrow(kr[,kr=="Profit_L_8"])
pr_9<-sum(kr[,"Profit_L_9"])/nrow(kr[,kr=="Profit_L_9"])
pr_10<-sum(kr[,"Profit_L_10"])/nrow(kr[,kr=="Profit_L_10"])
mat<-matrix(c(s_1,n_1,pr_1,s_2,n_2,pr_2,s_3,n_3,pr_3,s_4,n_4,pr_4,s_5,n_5,pr_5,s_6,n_6,pr_6,s_7,n_7,pr_7,s_8,n_8,pr_8,s_9,n_9,pr_9,s_10,n_10,pr_10),ncol=3,nrow=10,dimnames=list(c(1:10),c("s","n","pr")))
df<-as.data.frame(mat)
return(df)
}
result<-evaluation(kr)
result
And I need help with several things.
1. In the evaluation function, the way the matrix is built is wrong (s_1, n_1, pr_1 go down the first column, but I need the values filled in by rows).
2. I need some loop/lapply construct to go through all possible combinations (not only the first one, as in var<-paste(rc[,1],collapse=" & ")), with understandable output in which the evaluation function is applied to every combination and I can see which combination of variables each evaluation belongs to, so I can compare the evaluation results across combinations.
3. This is not the main point, but in general I want to evaluate all possible combinations (i.e. for 2:n variables, and all combinations of each size) and then pick the best combination according to a specific DV (Profit_L_1 or Profit_L_2 and so on). I am weak at looping right now, so if possible keep in mind what I am going to do with this later.
Thanks, and feel free to update, repair or improve the question (if there is something that could be done more easily or effectively, do it). I am open to any sensible advice.
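A minimal sketch of one way to approach points 1 and 2, reusing frame, rc and evaluation() from the question (hedged: for the loop to be correct, the pr_ lines inside evaluation() should refer to x rather than the global kr):
# point 1: build the matrix row by row with byrow = TRUE inside evaluation(), e.g.
# mat <- matrix(c(s_1, n_1, pr_1, ..., s_10, n_10, pr_10), ncol = 3, nrow = 10,
#               byrow = TRUE, dimnames = list(1:10, c("s", "n", "pr")))
# point 2: apply the evaluation to every remaining combination (one per column of rc)
conds <- sapply(rc, paste, collapse = " & ")
results <- lapply(conds, function(cond) {
  x <- eval(parse(text = paste0("subset(frame, ", cond, ")")))
  evaluation(x)  # see the hedge above about kr vs x inside evaluation()
})
names(results) <- conds  # each result is labelled with the combination that produced it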

name.cols command in Splus to R

I have code in Splus that I have to convert into R, which is not a big thing. However, I am very new to both programs. This is the code I am struggling with:
name.x<-name.cols(x)
x is a matrix of independent variables where the first length(keep1) columns correspond to variables that are always kept in BMA (Bayesian Model Averaging; this isn't important. Essentially, x is a matrix).
R does not recognize this command. What is name.cols doing, and how can I do the same thing in R? How do I modify this command?
The function colnames returns the column names of an object in R:
name.x <- colnames(x)
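A minimal usage sketch (the small matrix below is purely illustrative):
x <- matrix(1:6, nrow = 2, dimnames = list(NULL, c("keep1", "age", "income")))
name.x <- colnames(x)
name.x
# [1] "keep1"  "age"    "income"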

How to aggregate on IQR in SPSS?

I have to aggregate (of course with a categorical break variable) a quite big data table containing some continuous variables, returning the mean, median, standard deviation and interquartile range (IQR) of the required variables.
The first three are easy with the SPSS AGGREGATE command, but I have no idea how to compute the IQR when aggregating the data table.
I know I could compute the IQR using Descriptives (via quartiles), but as I need the calculations within the aggregation, this is not an option. Unfortunately, using R also fails due to some odd circumstances (I was not able to load the huge comma-separated file into R with base::read.table, nor with the sqldf, bigmemory or ff packages).
Any idea is welcome! And of course: thank you in advance.
P.S.: I thought about estimating the IQR by multiplying the standard deviation by 1.5, but that method would not work, as the distributions are skewed, so the normality assumption does not hold.
P.S.: do you think using R within SPSS would avoid the memory problems I ran into when opening the dataset in pure R?
This syntax should do the trick. There is no need to migrate back and forth between SPSS and R solely for this task.
*making fake data, 4 million records and 150 variables.
input program.
loop i = 1 to 4000000.
end case.
end loop.
end file.
end input program.
dataset name Temp.
execute.
vector X(150).
do repeat X = X1 to X150.
compute X = RV.NORMAL(0,1).
end repeat.
*This is the command you are interested in, puts the stats table into a new dataset.
Dataset declare IQR.
OMS
/SELECT TABLES
/IF SUBTYPES = 'Statistics'
/DESTINATION FORMAT = SAV outfile = 'IQR' VIEWER=NO.
freq var = X1
/format = notable
/ntiles = 4.
OMSEND.
This still takes a long time with such a large dataset, but that's to be expected. Just search the SPSS help files for "OMS" to find example syntax showing how OMS works.
Given the further constraint that you want to calculate the IQR for many groups, there are a few different ways I could see to proceed. One would be to just use the split file command and run the above frequency command again.
split file by group.
freq var = X1 X2
/format = notable
/ntiles = 4.
split file off.
You could also get specific percentiles within ctables (and can do whatever grouping/nesting you want for that). Potentially a more useful solution at this point, though, is to make a program that actually saves separate files (or reduces the full dataset to the specific group while it is still loaded), does the calculation on each separate file and dumps it into a dataset. Working with the dataset that has the 4 million records is a pain, and it does not appear to be necessary if you are just splitting the file up anyway. This could be accomplished via macro commands.
OMS can capture any pivot table as a dataset, so any statistical results displayed that way can be used as a dataset. Another approach, however, in this case would be to use the RANK command. RANK allows for grouping variables, so you could get rank within group, and it can compute the quartiles and percentiles within group. For example,
RANK VARIABLES=salary (A) BY jobcat minority
/RANK /NTILES(4) /PERCENT.
Then aggregating with FIRST and the group variables as breaks would give you a dataset of the quartiles by group from which to compute the IQR.
Many ways to skin a cat.
-Jon Peck
