Exporting the truth table from QCA package in R

This almost drove me crazy today, so I figured I'd share it:
If you are working with the QCA (Qualitative Comparative Analysis) package in R, you can create so-called truth tables. They are an essential part of the data analysis process, so if you want to report your findings, it is very useful to be able to export the truth table.
One export option is to just copy the output from R. This is not very convenient, however, because it means you are limited to a fixed-width font like Courier New.
You can export tables in R using the write.table() function, but the truthTable() function does not return a data frame, so you cannot export its output as a table directly.
Thus, the question is: how do you export the truth table as an actual table?

The answer is simple, but hard to find if you don't know where to look.
If you assign the truth table to a variable, you can access the object tt within that variable to get the corresponding data frame. The export should look like this:
myTable <- truthTable(parameters.....)  # your truthTable() call goes here
write.table(myTable$tt, file = "filename.txt", sep = "\t", quote = FALSE)
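As a slightly fuller sketch (the LC data set ships with the QCA package, but treat the exact truthTable() call as an assumption about your own analysis), the same tt trick works with write.csv(), which produces a file that opens directly in Excel:
library(QCA)
data(LC)                                      # Lipset's crisp-set example data
myTable <- truthTable(LC, outcome = "SURV")   # build the truth table
# myTable$tt is a plain data frame, so any table-export function works:
write.csv(myTable$tt, file = "truthtable.csv")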
I hope this saves someone the painful process I had to go through to find this out. For more information check out the reference below.
Thiem, A., & Dusa, A. (2013). Qualitative Comparative Analysis with R: A User's Guide. New York: Springer.

Related

Best practice: lookup tables in R

Background: I work with animals, and I have a way of scoring each pet based on a bunch of variables. This score indicates to me the health of the animal, say a dog.
Best practice:
Let's say I wanted to create a bunch of lookup tables to recreate this scoring model of mine, which I have only stored in my head, nowhere else. My goal is to import my scoring model into R. What is the best practice for doing this? I'm going to use it to make a function where I just type in the variables and get a result back.
I started writing it directly into Excel, with the idea of importing it into R, but I wonder if this is bad practice?
I have thought of JSON files, which I have no experience with, or just hardcoding a bunch of lists in R...
Writing the tables to an Excel file (or multiple Excel files) and reading them with R is not bad practice. I guess it comes down to the number of Excel files you have and your preferences.
R can read pretty much all of the most common file types (csv, txt, xlsx, etc.) and you will be fine reading them into R. Another option is .RData files, which are native to R. You can use them with save() and load():
df2 <- mtcars                         # any object you want to keep
save(df2, file = 'your_path_here')    # writes the object (and its name) to disk
load(file = 'your_path_here')         # restores df2 into the environment
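A closely related base-R option is saveRDS()/readRDS(), which store a single object and let you choose its name when reading it back:
saveRDS(df2, file = 'your_path_here')    # stores just the object, not its name
df2_again <- readRDS('your_path_here')   # assign it to whatever name you like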
Any of the above is fine as long as your data is not too big (e.g. if you start having 100 Excel files which you need to update frequently, your data is probably becoming too big to maintain in Excel). If that ever happens, you should consider creating a database (e.g. MySQL, SQLite, etc.) and storing your data there. R would then connect to the database to access the data.
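As a minimal sketch of the Excel route (the file name, sheet name, and columns are assumptions about your scoring model; readxl is one common package for reading .xlsx files):
library(readxl)

# one sheet per scoring variable, e.g. columns: level, points
weight_lookup <- read_excel("scores.xlsx", sheet = "weight")

score_weight <- function(level) {
  # look up the points for a given level; assumes levels are unique
  weight_lookup$points[match(level, weight_lookup$level)]
}

score_weight("underweight")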

How do I apply the same functions to multiple files in R

Hi, I am quite new to R programming. What I want to do is replicate a series of actions on multiple files. My first step is to create a function that reads a file and then performs subsequent actions.
For example
analyze <- function(filename) {
  data <- read.csv(filename, header = TRUE)
  average <- mean(data[, 2])
  print(average)
}
analyze("my first file")
However, I am having a problem with the code, because it does not give the right result: data is not updated when I change the filename. I don't know what went wrong. Can anyone give me a simpler alternative solution? Many thanks.
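For what it's worth, a minimal sketch of the usual fix: data is created inside the function's own environment, so it never appears in the Global Environment; returning the value (instead of printing it) and looping over the files with sapply() sidesteps the confusion.
analyze <- function(filename) {
  data <- read.csv(filename, header = TRUE)   # local to the function
  mean(data[, 2])                             # return the average
}

files <- list.files(pattern = "\\.csv$")      # all CSVs in the working directory
averages <- sapply(files, analyze)            # apply the same function to each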

RStudio - computing a crosstab and creating a table

I am new to programming and R. My R experience thus far has been Udemy's courses, specifically the Beginner and Intermediate R courses.
My data analysis background is heavily Excel and SPSS, so I am trying to carry over those skills and find applicable analysis strategies in R.
I am attempting to compute a crosstab, which will output the frequencies for the sets of 'character' data I am analyzing.
Below is the piece of code I used to create a crosstab:
crosstab(Survey, row.vars = c("testcode","outcome"), col.vars = "svy1", type = "j")
I am able to see the data output in the Console, but I am unable to move it into its own matrix-like table in the Environment; the purpose being to create matrix-like tables for reporting. I am sure there is an easy fix I am overlooking, but any help is appreciated.
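As a minimal base-R sketch (swapping in table() for the crosstab() call above, since table() returns an object you can keep in the Environment; the Survey columns are taken from that call):
tab <- with(Survey, table(testcode, outcome, svy1))   # counts as an array object
tab_df <- as.data.frame(tab)                          # flat data frame for reporting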

How to see which data is used in an example of a package

I am using library(eventstudies) (the Event Studies package). In the example they use:
data(StockPriceReturns)
data(SplitDates)
head(SplitDates)
However, I do not know how to set up my own dataset to use the package. My question is:
How can I look into the StockPriceReturns data?
I appreciate your answer!
I think you want to read a data set into a data frame or table.
I'm not familiar with that package, so I'm not sure about the required format. If the data set you read in matches the schema of StockPriceReturns, I'm sure R will process it just fine. This PDF appears to explain it well.
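To actually look into the bundled data set, a minimal sketch with the standard inspection helpers:
library(eventstudies)
data(StockPriceReturns)
str(StockPriceReturns)     # class, dimensions, and column types
head(StockPriceReturns)    # first few rows
?StockPriceReturns         # help page describing the expected format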

pass value from R (run in SPSS) to open SPSS dataset

Completely new to R here. I ran R in SPSS to solve some complex polynomials from SPSS datasets. I managed to get the result from R back into SPSS, but it was a very inelegant process:
begin program R.
# read the polynomial coefficients from the active SPSS dataset and solve
z <- polyroot(unlist(spssdata.GetDataFromSPSS(variables = c("qE","qD","qC","qB","qA"), cases = 1), use.names = FALSE))
# pull the remaining scalar parameters in one call, then unpack them
otherVals <- spssdata.GetDataFromSPSS(variables = c("b0","b1","Lc","tInv","sR","c0","c1","N2","xBar","DVxSq"), cases = 1)
b0 <- unlist(otherVals["b0"], use.names = FALSE)
b1 <- unlist(otherVals["b1"], use.names = FALSE)
Lc <- unlist(otherVals["Lc"], use.names = FALSE)
tInv <- unlist(otherVals["tInv"], use.names = FALSE)
sR <- unlist(otherVals["sR"], use.names = FALSE)
c0 <- unlist(otherVals["c0"], use.names = FALSE)
c1 <- unlist(otherVals["c1"], use.names = FALSE)
N2 <- unlist(otherVals["N2"], use.names = FALSE)
xBar <- unlist(otherVals["xBar"], use.names = FALSE)
DVxSq <- unlist(otherVals["DVxSq"], use.names = FALSE)
# keep the real root whose fitted value comes closest to Lc
crit <- abs(abs(b0 + b1*Re(z) - tInv*sR*sqrt(1/(c0 + c1*Re(z))^2 + 1/N2 + (Re(z) - xBar)^2/DVxSq)) - Lc)
z2 <- Re(z[crit == min(crit)])
# define a one-variable dictionary and write the result to a new SPSS dataset
varSpec1 <- c("Xd", "Xd", 0, "F8", "scale")
dict <- spssdictionary.CreateSPSSDictionary(varSpec1)
spssdictionary.SetDictionaryToSPSS("results", dict)
new <- data.frame(z2)
spssdata.SetDataToSPSS("results", new)
spssdictionary.EndDataStep()
end program.
Honestly, it was mostly pieced together from somewhat-related examples and seems more complicated than it should be. I had to take the new dataset created by R and run MATCH FILES with my original dataset. All I want to do is (a) pull numbers from SPSS into R, (b) manipulate them (in this case, finding a polynomial root that fits certain criteria), and (c) put the results right back into the SPSS dataset without messing up any of the previous data.
Am I missing something that would make this simpler? Keep in mind that I have zero R experience outside of this attempt, but I have decent experience programming in SPSS and MATLAB.
Thanks in advance for any help you give!
R in SPSS can create new SPSS datasets, but it can't modify an existing one. There are a lot of situations where the data from R would be dimensionally inconsistent with the active SPSS dataset, so you need to create a dictionary and data frame using the APIs above and then do whatever is appropriate on the SPSS side if you need to match back. You might want to submit an enhancement request for SPSS at suggest#us.ibm.com.
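For the match-back step mentioned above, a minimal SPSS-syntax sketch (the dataset names original and results are assumptions; without a BY key, MATCH FILES joins the files case by case):
* results is the dataset created by the R program above.
DATASET ACTIVATE original.
MATCH FILES /FILE=* /FILE=results.
EXECUTE.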
