R-Studio - computing crosstab and creating a table - r

I am new to programming and R. My R experience thus far has been Udemy's courses; specifically the Beginner and Intermediate R courses.
My data analysis background is heavily Excel and SPSS, as such I am trying to carry over those skills and find applicable analysis strategies in R.
I am attempting to compute a crosstab that outputs the frequencies for the sets of 'character' data I am analyzing.
Below is the piece of code I used to create the crosstab:
crosstab(Survey, row.vars = c("testcode","outcome"), col.vars = "svy1", type = "j")
I am able to see the output in the Console, but I am unable to save it as its own matrix-like table in the Environment; the purpose is to create matrix-like tables for reporting. I am sure there is an easy fix I am overlooking, but any help is appreciated.
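One way to get the counts into an object that shows up in the Environment is to build the table yourself in base R. This is only a sketch, and it assumes Survey is a data frame with columns testcode, outcome and svy1:
# build the frequency table as a named object (unlike output that is only printed)
tab <- xtabs(~ testcode + outcome + svy1, data = Survey)
# flatten it so testcode/outcome form the rows and svy1 the columns
flat <- ftable(tab, row.vars = c("testcode", "outcome"), col.vars = "svy1")
# write the flat table out for reporting
write.ftable(flat, file = "crosstab.txt", quote = FALSE)
Assigning the result of crosstab() itself to a name (e.g. ct <- crosstab(...)) also keeps whatever object that function returns, which you can then inspect with str(ct).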

Related

Sentiment Analysis in R - coding problem with switching raw data

I am trying to run code to complete sentiment analysis. My main goal is to produce a word cloud and possibly a sentiment score. I am using a script and swapping out the raw data, but I don't really understand what the outcomes are, and I also get stuck trying to create a TDM (term-document matrix). I am new to programming, so apologies if this is not clear.
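A minimal sketch of the usual TDM and word-cloud steps, assuming the raw data is a character vector called texts and that the script is built on the tm and wordcloud packages:
library(tm)
library(wordcloud)
# build a corpus from the raw text and clean it up
corpus <- Corpus(VectorSource(texts))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
# term-document matrix: one row per word, one column per document
tdm <- TermDocumentMatrix(corpus)
# word frequencies across all documents, highest first
freqs <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
# word cloud of the most frequent terms
wordcloud(names(freqs), freqs, max.words = 100)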

How to export multivariate forecast results from R to excel

I'm terribly new with R, so I apologize if there's a way to do this using a slight variation of an existing code/package.
I've created yearly forecasts of a variable (student enrollment) for 129 countries using the predict command, and then I bind the results together. I've done this because I'm forecasting using a multivariate regression.
Here's what I'm doing (if it helps):
fm1=lm(log(y+1)~Var.Ind)
XNew=data.frame(Var.Ind)
rse<-summary(fm1)$sigma
# back-transform the log-scale prediction (with the rse^2/2 lognormal correction)
yhat1=exp(predict(fm1,XNew)+rse*rse/2)-1
# pick the 2014 rows and build the out-of-sample design matrix for the 2015 forecast
pos2014=which(Var.Ind[,1]==c(2014))
Var.Ind.2015=model.matrix(~as.matrix(Imp.Data4[pos2014,-2])-1)
head(Var.Ind.2015)
Var.Ind.2015=data.frame(Var.Ind.2015)
Var.Ind.2015.Ord=as.data.frame(Var.Ind.2015[order(Var.Ind.2015[,3],Var.Ind.2015[,1]), ])
head(Var.Ind.2015.Ord)
X.New.New=data.frame(cbind(model.matrix(~as.matrix(Var.Ind.2015.Ord))))
head(X.New.New)
ColNames.N=ColNames[-2]
colnames(X.New.New)=c("Int",ColNames.N,"Lag1","Lag2")
head(X.New.New)
# coefficients as a column vector, for manual prediction on the new data
Beta.Coef=matrix(as.numeric(fm1$coefficients),ncol=1)
Beta.Coef
# manual 2015 forecast: X %*% beta on the log scale, then back-transform
Pred2015=as.data.frame(cbind(X.New.New[,3],exp(as.matrix(X.New.New)%*%Beta.Coef+rse*rse/2)-1))
dim(Pred2015)
colnames(Pred2015)=c("country","Yhat")
# ...and so on for subsequent years, up to 2030
cbind(Pred2015, Pred2016, Pred2017, Pred2018, Pred2019)
I need to figure out if there is a way to make sense of these results:
a) how to export the forecast results to excel
b) alternatively, if I could put these results into a table using R.
Also, these results do not appear in the Global Environment, only printed in the Console, which is why I am not asking how to export data in general, but rather how to export these specific results.
As previously mentioned, my coding knowledge is limited to one week of experience with R (I usually work with STATA).
Any help would be greatly appreciated!
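A minimal sketch of both options, assuming you first assign the combined forecasts to a name (which is also what makes them show up in the Global Environment); writexl is just one of several packages that can write .xlsx files:
# assign the combined forecasts to an object so it appears in the Environment
Forecasts <- cbind(Pred2015, Pred2016, Pred2017, Pred2018, Pred2019)
# (a) export to a file Excel can open
write.csv(Forecasts, "forecasts.csv", row.names = FALSE)
# or write a native .xlsx file (install.packages("writexl") first)
writexl::write_xlsx(Forecasts, "forecasts.xlsx")
# (b) view the results as a table inside RStudio
View(Forecasts)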

reaching max.print on R

I just found a bunch of weather data that I would like to play around with in glmnet in R. First I've been reading and organizing the data in R, and right now I am just trying to look at the raw data of each variable. Unfortunately, each variable has a lot of data and R isn't able to print it all. Is there a way I can view all the raw data in R, or just in the file itself? I've tried opening the file in Excel without success. Thanks!
Try using frequency tables; you can group by segments.
str(), summary(), table(), pairs(), plot(), etc. There are several packages (such as descr) which facilitate analyzing numeric and factor variables. Let me know if you need help with any of them.
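If the goal is simply to see all of the raw values, a few base-R options (a sketch; weather stands in for whatever the data frame is called):
# raise the console print limit (the default is 99999 entries)
options(max.print = 1e7)
# or inspect the data without printing everything
View(weather)          # spreadsheet-style viewer in RStudio
head(weather, 100)     # first 100 rows
str(weather)           # structure of each variable
# or write it back out to a file you can open elsewhere
write.csv(weather, "weather_raw.csv", row.names = FALSE)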

Exporting the truth table from QCA package in R

This almost drove me crazy today, so I figured I'd share it:
If you are working with the QCA (Qualitative Comparative Analysis) package in R, you can create so-called truth tables. They are an essential part of the data analysis process, so if you want to report your findings, it is very useful to be able to export the truth table.
One export option is to just copy the output from R. This is not very convenient, however, because it means you are limited to a fixed-width font like Courier New.
You can export tables in R using the write.table() function; however, the truthTable() function does not return a data frame, so you cannot export its output directly as a table.
Thus, the question is: how do you export the truth table as an actual table?
The answer is simple, but hard to find if you don't know where to look.
If you assign the truth table to a variable, you can access the object tt within that variable to get the corresponding data frame. The export should look like this:
myTable <- truthTable(parameters.....)
write.table(myTable$tt, file = "filename.txt", sep = "\t", quote = FALSE)
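If you would rather open the result in a spreadsheet than in a text editor, the same object can also be written as CSV with base R:
write.csv(myTable$tt, file = "truthtable.csv")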
I hope this saves someone the painful process I had to go through to find this out. For more information check out the reference below.
Thiem, A., & Dusa, A. (2013). Qualitative Comparative Analysis with R.

pass value from R (run in SPSS) to open SPSS dataset

Completely new to R here. I ran R in SPSS to solve some complex polynomials from SPSS datasets. I managed to get the result from R back into SPSS, but it was a very inelegant process:
begin program R.
# pull the polynomial coefficients from the active SPSS dataset and find the roots
z <- polyroot(unlist(spssdata.GetDataFromSPSS(variables=c("qE","qD","qC","qB","qA"),cases=1),use.names=FALSE))
# pull the remaining constants needed for the selection criterion below
otherVals <- spssdata.GetDataFromSPSS(variables=c("b0","b1","Lc","tInv","sR","c0","c1","N2","xBar","DVxSq"),cases=1)
b0<-unlist(otherVals["b0"],use.names=FALSE)
b1<-unlist(otherVals["b1"],use.names=FALSE)
Lc<-unlist(otherVals["Lc"],use.names=FALSE)
tInv<-unlist(otherVals["tInv"],use.names=FALSE)
sR<-unlist(otherVals["sR"],use.names=FALSE)
c0<-unlist(otherVals["c0"],use.names=FALSE)
c1<-unlist(otherVals["c1"],use.names=FALSE)
N2<-unlist(otherVals["N2"],use.names=FALSE)
xBar<-unlist(otherVals["xBar"],use.names=FALSE)
DVxSq<-unlist(otherVals["DVxSq"],use.names=FALSE)
# keep the real root whose criterion value is closest to Lc
crit <- abs(abs(b0+b1*Re(z)-tInv*sR*sqrt(1/(c0+c1*Re(z))^2+1/N2+(Re(z)-xBar)^2/DVxSq))-Lc)
z2 <- Re(z[crit==min(crit)])
# define a one-variable dictionary and push the result back to SPSS as a new dataset
varSpec1 <- c("Xd","Xd",0,"F8","scale")
dict <- spssdictionary.CreateSPSSDictionary(varSpec1)
spssdictionary.SetDictionaryToSPSS("results", dict)
new <- data.frame(z2)
spssdata.SetDataToSPSS("results", new)
spssdictionary.EndDataStep()
end program.
Honestly, it was mostly pieced together from somewhat-related examples and seems more complicated than it should be. I had to take the new dataset created by R and run MATCH FILES with my original dataset. All I want to do is a) pull numbers from SPSS into R, b) manipulate them (in this case, finding a polyroot that fits certain criteria), and c) put the results right back into the SPSS dataset without messing up any of the previous data.
Am I missing something that would make this simpler? Keep in mind that I have zero R experience outside of this attempt, but I have decent experience programming SPSS and MATLAB.
Thanks in advance for any help you give!
R in SPSS can create new SPSS datasets, but it cannot modify an existing one. There are a lot of situations where the data from R would be dimensionally inconsistent with the active SPSS dataset. So you need to create a dictionary and data frame using the APIs above and then do whatever is appropriate on the SPSS side (e.g. MATCH FILES) if you need to match back. You might want to submit an enhancement request for SPSS at suggest@us.ibm.com.
