From what I've seen, R cannot easily produce usable output for large correlation matrices (50-100 variables). For instance, the output of "corr.test" or "cor" is horrendously wrapped (each variable should have only one row and one column, but this is certainly not the case) and does not copy well into Excel for later examination. Is there a way to produce SPSS-like correlation output in R? Namely, correlation matrices that can be copied and pasted easily into something like Excel, where each row and each column pertains to one variable (no wrapping of text), and where, ideally, sample sizes and significance values are also available. corr.test provides this information, albeit in an inconvenient format, and when the variables exceed the width of the output viewer in R, the output is basically unreadable. Any thoughts would be greatly appreciated, as I frequently work with many variables at once.
Is there anything wrong with
z <- matrix(rnorm(10000),100)
write.csv(cor(z),file="cortmp.csv")
? View(cor(z)) works for me, although I don't know if it's copy-and-pasteable.
For psych::corr.test
dimnames(z) <- list(1:100,1:100)
z[1,2] <- NA ## unbalance to induce sample size matrix
ct <- psych::corr.test(z)
write.csv(ct$n,file="ntmp.csv") ## sample sizes
write.csv(ct$t,file="ttmp.csv") ## t statistics
write.csv(ct$p,file="ptmp.csv") ## p-values
et cetera. (See str(ct).)
R's paradigm is that if you want to transfer information to another program you're going to output it to a file rather than copying and pasting it from the console ...
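(That said, if you really do want copy-and-paste: on Windows the clipboard is itself a connection, so a hedged sketch like the following should let you paste the matrix straight into Excel; on macOS you could write to pipe("pbcopy") instead.)
## "clipboard-16384" asks for a larger clipboard buffer than the 32 KB default
write.table(round(cor(z), 3), "clipboard-16384", sep = "\t", col.names = NA)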
I've got this dataset
install.packages("combinat")
install.packages("quantmod")
library(quantmod)
library(combinat)
library(utils)
getSymbols("AAPL",from="2012-01-01")
data<-AAPL
p1<-4
dO<-data[,1]
dC<-data[,4]
emaO<-EMA(dO,n=p1)
emaC<-EMA(dC,n=p1)
Pos_emaO_dO_UP<-emaO>dO
Pos_emaO_dO_D<-emaO<dO
Pos_emaC_dC_UP<-emaC>dC
Pos_emaC_dC_D<-emaC<dC
Pos_emaC_dO_D<-emaC<dO
Pos_emaC_dO_UP<-emaC>dO
Pos_emaO_dC_UP<-emaO>dC
Pos_emaO_dC_D<-emaO<dC
Profit_L_1<-((lag(dC,-1)-lag(dO,-1))/(lag(dO,-1)))*100
Profit_L_2<-(((lag(dC,-2)-lag(dO,-1))/(lag(dO,-1)))*100)/2
Profit_L_3<-(((lag(dC,-3)-lag(dO,-1))/(lag(dO,-1)))*100)/3
Profit_L_4<-(((lag(dC,-4)-lag(dO,-1))/(lag(dO,-1)))*100)/4
Profit_L_5<-(((lag(dC,-5)-lag(dO,-1))/(lag(dO,-1)))*100)/5
Profit_L_6<-(((lag(dC,-6)-lag(dO,-1))/(lag(dO,-1)))*100)/6
Profit_L_7<-(((lag(dC,-7)-lag(dO,-1))/(lag(dO,-1)))*100)/7
Profit_L_8<-(((lag(dC,-8)-lag(dO,-1))/(lag(dO,-1)))*100)/8
Profit_L_9<-(((lag(dC,-9)-lag(dO,-1))/(lag(dO,-1)))*100)/9
Profit_L_10<-(((lag(dC,-10)-lag(dO,-1))/(lag(dO,-1)))*100)/10
which are collected into this data frame
frame<-data.frame(Pos_emaO_dO_UP,Pos_emaO_dO_D,Pos_emaC_dC_UP,Pos_emaC_dC_D,Pos_emaC_dO_D,Pos_emaC_dO_UP,Pos_emaO_dC_UP,Pos_emaO_dC_D,Profit_L_1,Profit_L_2,Profit_L_3,Profit_L_4,Profit_L_5,Profit_L_6,Profit_L_7,Profit_L_8,Profit_L_9,Profit_L_10)
colnames(frame)<-c("Pos_emaO_dO_UP","Pos_emaO_dO_D","Pos_emaC_dC_UP","Pos_emaC_dC_D","Pos_emaC_dO_D","Pos_emaC_dO_UP","Pos_emaO_dC_UP","Pos_emaO_dC_D","Profit_L_1","Profit_L_2","Profit_L_3","Profit_L_4","Profit_L_5","Profit_L_6","Profit_L_7","Profit_L_8","Profit_L_9","Profit_L_10")
There is a vector with the variable names for later use
vector<-c("Pos_emaO_dO_UP","Pos_emaO_dO_D","Pos_emaC_dC_UP","Pos_emaC_dC_D","Pos_emaC_dO_D","Pos_emaC_dO_UP","Pos_emaO_dC_UP","Pos_emaO_dC_D")
I made all possible combinations of 4 variables from the vector (these contain no dependent variables)
comb<-as.data.frame(combn(vector,4))
comb
and removed the "nonsense" combinations (those containing both possible values of the same variable)
rc<-comb[!sapply(comb, function(x) any(duplicated(sub('_D|_UP', '', x))))]
rc
Then I prepared the first combination for later subsetting
var<-paste(rc[,1],collapse=" & ")
var
and subset the frame (keeping all DVs)
kr<-eval(parse(text=paste0('subset(frame,' , var,')' )))
kr
Now I have the data frame subset by the first combination of 4 variables.
Then I used this evaluation function on it
evaluation<-function(x){
s_1<-nrow(x[x$Profit_L_1>0,])/nrow(x)
s_2<-nrow(x[x$Profit_L_2>0,])/nrow(x)
s_3<-nrow(x[x$Profit_L_3>0,])/nrow(x)
s_4<-nrow(x[x$Profit_L_4>0,])/nrow(x)
s_5<-nrow(x[x$Profit_L_5>0,])/nrow(x)
s_6<-nrow(x[x$Profit_L_6>0,])/nrow(x)
s_7<-nrow(x[x$Profit_L_7>0,])/nrow(x)
s_8<-nrow(x[x$Profit_L_8>0,])/nrow(x)
s_9<-nrow(x[x$Profit_L_9>0,])/nrow(x)
s_10<-nrow(x[x$Profit_L_10>0,])/nrow(x)
n_1<-nrow(x[x$Profit_L_1>0,])/nrow(frame)
n_2<-nrow(x[x$Profit_L_2>0,])/nrow(frame)
n_3<-nrow(x[x$Profit_L_3>0,])/nrow(frame)
n_4<-nrow(x[x$Profit_L_4>0,])/nrow(frame)
n_5<-nrow(x[x$Profit_L_5>0,])/nrow(frame)
n_6<-nrow(x[x$Profit_L_6>0,])/nrow(frame)
n_7<-nrow(x[x$Profit_L_7>0,])/nrow(frame)
n_8<-nrow(x[x$Profit_L_8>0,])/nrow(frame)
n_9<-nrow(x[x$Profit_L_9>0,])/nrow(frame)
n_10<-nrow(x[x$Profit_L_10>0,])/nrow(frame)
pr_1<-sum(kr[,"Profit_L_1"])/nrow(kr[,kr=="Profit_L_1"])
pr_2<-sum(kr[,"Profit_L_2"])/nrow(kr[,kr=="Profit_L_2"])
pr_3<-sum(kr[,"Profit_L_3"])/nrow(kr[,kr=="Profit_L_3"])
pr_4<-sum(kr[,"Profit_L_4"])/nrow(kr[,kr=="Profit_L_4"])
pr_5<-sum(kr[,"Profit_L_5"])/nrow(kr[,kr=="Profit_L_5"])
pr_6<-sum(kr[,"Profit_L_6"])/nrow(kr[,kr=="Profit_L_6"])
pr_7<-sum(kr[,"Profit_L_7"])/nrow(kr[,kr=="Profit_L_7"])
pr_8<-sum(kr[,"Profit_L_8"])/nrow(kr[,kr=="Profit_L_8"])
pr_9<-sum(kr[,"Profit_L_9"])/nrow(kr[,kr=="Profit_L_9"])
pr_10<-sum(kr[,"Profit_L_10"])/nrow(kr[,kr=="Profit_L_10"])
mat<-matrix(c(s_1,n_1,pr_1,s_2,n_2,pr_2,s_3,n_3,pr_3,s_4,n_4,pr_4,s_5,n_5,pr_5,s_6,n_6,pr_6,s_7,n_7,pr_7,s_8,n_8,pr_8,s_9,n_9,pr_9,s_10,n_10,pr_10),ncol=3,nrow=10,dimnames=list(c(1:10),c("s","n","pr")))
df<-as.data.frame(mat)
return(df)
}
result<-evaluation(kr)
result
And I need help with several things.
1. In the evaluation function, the way the matrix is made is wrong (s_1, n_1, pr_1 end up filling the first column, but I need the values filled in by rows).
2. I need some loop/lapply construct to go through all possible combinations (not only the first one, as in var<-paste(rc[,1],collapse=" & ")), with understandable output in which the evaluation function is applied to every combination, so that I can see which combination of variables each evaluation belongs to and compare the evaluation results across combinations (a rough sketch of what I mean is below).
3. This is not the main point, BUT I generally want to evaluate all possible combinations (meaning for 2:n variables, and all combinations of each size) and then get the best possible combination according to a specific DV (Profit_L_1 or Profit_L_2 and so on). I am very weak at looping right now, so, if possible, keep in mind what I am going to do with this later.
Thanks, and feel free to update, repair or improve the question (if there is something that could be done more easily or effectively, do it; I am open to any sensible advice).
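A rough sketch of the kind of loop I mean for point 2 (this is only a guess, and it assumes evaluation() is first changed to use only its argument x rather than the global kr):
combo_labels <- apply(rc, 2, paste, collapse = " & ")
results <- lapply(seq_along(rc), function(i) {
  cond <- paste(as.character(rc[, i]), collapse = " & ")
  kr_i <- eval(parse(text = paste0("subset(frame, ", cond, ")")))
  evaluation(kr_i)
})
names(results) <- combo_labels   # label each result with its combination
results[[1]]                     # evaluation for the first combination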
I have a dataset, which is broken into 20 groups. The matrices storing the data for each group (2 columns of data), are stored in a list, so that I can perform functions on each set within a loop. I would like to store the output of any function that I might run in another matrix.
For example, if I run a fitdistr() on all 20 groups, I would like the output of the function stored in a matrix, so that I can call distribution[1] to call the results from group 1. I have tried the following:
distribution<-ls()
for(i in (1:20))
{ distribution[[i]]<-fitdistr(as.numeric(data[[i]]$Column2,"normal") }
This successfully stores the outputs, and I can call:
distribution[1]
The issue is that fitdistr() results in two values - a mean and a standard deviation. I checked that I cannot call the mean for a given group:
names(distribution)
NULL
So I obviously cannot get the means, say by:
distribution[1]$mean
I will be looking for trends in the means and standard deviations (and other parameters for other distributions), so I would like to have the results of fitdistr() stored in a matrix somehow, if at all possible. Even if I could somehow call only, say, the mean when running the function, then I could just create an empty vector and populate it in a loop, then repeat for the standard deviation.
I have considered creating an empty matrix large enough to store the data (so it would be 20 rows, 1 for each group, and 2 columns, 1 for each calculated value). I'm still not sure how I would dictate that I want the calculated mean stored in column 1 and the calculated standard deviation stored in column 2. Again, it is an issue of asking the function for only one of its multiple outputs at a time.
I've also looked into one of the apply functions, but these do not seem to be appropriate for what I am doing.
ls() is a function that lists the objects in a given environment. It returns a character vector.
You (probably) mean to have list().
But then you would be growing your list within a loop. Which is the second circle of R hell.
Instead use lapply with the appropriate function (hard to tell where you want the as.numeric to go, but it is not correct in your example).
Something like:
distribution <- lapply(data, function(x) fitdistr(as.numeric(x[['Column2']]),"normal"))
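From there, since fitdistr() returns an object with an $estimate component (named mean and sd for a normal fit), the parameters can be collected into a matrix, something like:
param <- t(sapply(distribution, function(fit) fit$estimate))
param            # one row per group, columns "mean" and "sd"
param[1, "mean"] # the fitted mean for group 1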
I'm trying to analyze data from the University of Minnesota IPUMS dataset for the 1990 US census in R. I'm using the survey package because the data is weighted. Just taking the household data (and ignoring the person variables to keep things simple), I am attempting to calculate the mean of hhincome (household income). To do this I created a survey design object using the svydesign() function with the following code:
> require(foreign)
> ipums.household <- read.dta("/path/to/stata_export.dta")
> ipums.household[ipums.household$hhincome==9999999, "hhincome"] <- NA # Fix missing
> ipums.hh.design <- svydesign(id=~1, weights=~hhwt, data=ipums.household)
> svymean(ipums.household$hhincome, ipums.hh.design, na.rm=TRUE)
mean SE
[1,] 37029 17.365
So far so good. However, I get a different standard error if I attempt the same calculation in Stata (using code meant for a different portion of the same dataset):
use "C:\I\Hate\Backslashes\stata_export.dta"
replace hhincome = . if hhincome == 9999999
(933734 real changes made, 933734 to missing)
mean hhincome [fweight = hhwt] # The code from the link above.
Mean estimation Number of obs = 91746420
--------------------------------------------------------------
| Mean Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
hhincome | 37028.99 3.542749 37022.05 37035.94
--------------------------------------------------------------
And, looking at another way to skin this cat, the author of survey has this suggestion for frequency weighting:
expanded.data<-as.data.frame(lapply(compressed.data,
function(x) rep(x,compressed.data$weights)))
However, I can't seem to get this code to work:
> hh.dataframe <- data.frame(ipums.household$hhincome, ipums.household$hhwt)
> expanded.hh.dataframe <- as.data.frame(lapply(hh.dataframe, function(x) rep(x, hh.dataframe$hhwt)))
Error in rep(x, hh.dataframe$hhwt) : invalid 'times' argument
Which I can't seem to fix. This may be related to this issue.
So in sum:
Why don't I get the same answers in Stata and R?
Which one is right (or am I doing something wrong in both cases)?
Assuming I got the rep() solution working, would that replicate Stata's results?
What's the right way to do it? Kudos if the answer allows me to use the plyr package for doing arbitrary calculations, rather than being limited to the functions implemented in survey (svymean(), svyglm() etc.)
Update
So after the excellent help I've received here and from IPUMS via email, I'm using the following code to properly handle survey weighting. I describe it here in case someone else has this problem in the future.
Initial Stata Preparation
Since IPUMS doesn't currently publish scripts for importing their data into R, you'll need to start from Stata, SAS, or SPSS. I'll stick with Stata for now. Begin by running the import script from IPUMS. Then, before continuing, add the following variable:
generate strata = statefip*100000 + puma
This creates a unique integer for each PUMA, of the form 240001, with the first two digits being the state FIP code (24 in the case of Maryland) and the last four a PUMA id that is unique on a per-state basis. If you're going to use R, you might also find it helpful to run this as well:
generate statefip_num = statefip * 1
This will create an additional variable without labels, since importing .dta files into R applies the labels and loses the underlying integers.
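(If you'd rather build the stratum identifier in R once the .dta is read in (see below), a rough equivalent, assuming the variable names above, is:)
ipums$strata <- ipums$statefip_num * 100000 + ipums$puma  # same arithmetic as the Stata line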
Stata and svyset
As Keith explained, survey sampling is handled in Stata by invoking svyset.
For an individual level analysis I now use:
svyset serial [pweight=perwt], strata(strata)
This sets the weighting to perwt, the stratification to the variable we created above, and uses the household serial number to account for clustering. If we were using multiple years, we might want to try
generate double yearserial = year*100000000 + serial
to account for longitudinal clustering as well.
For household level analysis (without years):
svyset serial [pweight=hhwt], strata(strata)
Should be self-explanatory (though I think in this case serial is actually superfluous). Replacing serial with yearserial will take into account a time series.
Doing it in R
Assuming you're importing a .dta file with the additional strata variable explained above and analysing at the individual level:
require(foreign)
ipums <- read.dta('/path/to/data.dta')
require(survey)
ipums.design <- svydesign(id=~serial, strata=~strata, data=ipums, weights=~perwt)
Or at the household level:
ipums.hh.design <- svydesign(id=~serial, strata=~strata, data=ipums, weights=~hhwt)
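With either design in hand, the weighted mean from the question is then, for example:
svymean(~hhincome, ipums.hh.design, na.rm = TRUE)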
Hope someone finds this helpful, and thanks so much to Dwin, Keith and Brandon from IPUMS.
1 & 2) The comment you cited from Lumley was written in 2001 and predates any of his published work with the survey package, which has only been out a few years. You are probably using "weights" in two different senses (Lumley describes three possible senses early in his book). The survey function svydesign is using probability weights rather than frequency weights. Given the massive size of that dataset, it seems likely that these are not really frequency weights but rather probability weights, and that would mean that the survey package result is correct and the Stata result incorrect. If you are not convinced, the survey package offers the function as.svrepdesign(), with which Lumley's book describes how to create a replicate weight vector from a svydesign object.
3) I think so, but as RMN said, "It would be wrong."
4) Since it's wrong (IMO) it's not necessary.
You shouldn't be using frequency weights in Stata. That is pretty clear. If IPUMS doesn't have a "complex" survey design, you can just use:
mean hhincome [pw = hhwt]
Or, for convenience:
svyset [pw = hhwt]
svy: mean hhincome
svy: regress hhincome `x'
What's nice about the second option is that you can use it for more complex survey designs (via options on svyset). Then you can run lots of commands without having to type [pw=...] all the time.
A slight addition for people who don't have access to Stata or SAS (I would put this in the comments, but...):
The SAScii package can use the SAS code file to read in the downloaded IPUMS data. The code to read in the data is from the documentation:
library(SAScii)
IPUMS.file.location <- "..\\usa_00007dat\\usa_00007.dat"
IPUMS.SAS.read.in.instructions <- "..\\usa_00007dat\\usa_00007.sas"
# store the IPUMS extract as an R data frame
IPUMS.df <- read.SAScii(IPUMS.file.location,
                        IPUMS.SAS.read.in.instructions,
                        zipped = F)
How can I plot a very large data set in R?
I'd like to use a boxplot, or violin plot, or similar. Not all of the data can fit in memory. Can I incrementally read the data in and calculate the summaries needed to make these plots? If so, how?
As a supplement to my comment on Dmitri's answer, here is a function to calculate quantiles using the ff big-data handling package:
ffquantile <- function(ffv, qs = c(0, 0.25, 0.5, 0.75, 1), ...) {
  stopifnot(all(qs <= 1 & qs >= 0))
  ffvs <- ffsort(ffv, ...)             # out-of-memory sort of the ff vector
  j <- (qs * (length(ffv) - 1)) + 1    # (possibly fractional) positions of the quantiles
  jf <- floor(j); jc <- ceiling(j)
  # average the values at the floor and ceiling positions
  rowSums(matrix(ffvs[c(jf, jc)], length(qs), 2)) / 2
}
This is an exact algorithm, so it uses sorting -- and thus may take a lot of time.
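A small usage sketch (assuming the ff package is installed; the toy vector is generated in memory purely for illustration):
library(ff)
v <- ff(rnorm(1e6))   # ff vector backed by a file on disk
ffquantile(v)         # min, quartiles, max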
The problem is that you can't load all the data into memory. So you could sample the data, as indicated earlier by @Marek. On such huge datasets, you get essentially the same results even if you take only 1% of the data. For the violin plot, this will give you a decent estimate of the density. Progressive calculation of quantiles is impossible, but this should give a very decent approximation. It is essentially the same as the "randomized method" described in the link @aix gave.
If you can't subset the data outside of R, it can be done using connections in combination with sample(). The following function is what I use to sample data from a data frame in text format when it's getting too big. If you play a bit with the connection, you can easily convert this to a socketConnection or similar to read it from a server, a database, or whatever. Just make sure you open the connection in the correct mode.
Given a simple .csv file, the following function samples a fraction p of the data:
sample.df <- function(f, n = 10000, split = ",", p = 0.1) {
  con <- file(f, open = "rt")
  on.exit(close(con))
  y <- data.frame()
  # read the header
  x <- character(0)
  while (length(x) == 0) {
    x <- strsplit(readLines(con, n = 1), split)[[1]]
  }
  Names <- x
  # read the data in chunks of n lines, keeping a fraction p of each chunk
  repeat {
    x <- tryCatch(read.table(con, nrows = n, sep = split), error = function(e) NULL)
    if (is.null(x)) break
    names(x) <- Names
    nn <- nrow(x)
    id <- sample(1:nn, round(nn * p))
    y <- rbind(y, x[id, ])
  }
  rownames(y) <- NULL
  return(y)
}
An example of the usage:
#Make a file
Df <- data.frame(
X1=1:10000,
X2=1:10000,
X3=rep(letters[1:10],1000)
)
write.csv(Df,file="test.txt",row.names=F,quote=F)
# n is number of lines to be read at once, p is the fraction to sample
DF2 <- sample.df("test.txt",n=1000,p=0.2)
str(DF2)
#clean up
unlink("test.txt")
All you need for a boxplot are the quantiles, the "whisker" extremes, and the outliers (if shown), which is all easily precomputed. Take a look at the boxplot.stats function.
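For instance, a hedged sketch on an ordinary numeric vector x (the same list of precomputed pieces could just as well be filled from chunked or database-side calculations):
x <- rnorm(1000)         # stand-in for values summarised elsewhere
bs <- boxplot.stats(x)   # five-number summary, n, and outliers
bxp(list(stats = matrix(bs$stats, ncol = 1),
         n = bs$n,
         out = bs$out,
         group = rep(1, length(bs$out)),
         names = "x"))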
You should also look at the RSQLite, SQLiteDF, RODBC, and biglm packages. For large datasets it can be useful to store the data in a database and pull only pieces into R. The databases can also do sorting for you, and computing quantiles on sorted data is much simpler (then just use the quantiles to do the plots).
There is also the hexbin package (Bioconductor) for doing scatterplot equivalents with very large datasets (you probably still want to use a sample of the data, but it works with a large sample).
You could put the data into a database and calculate the quantiles using SQL. See: http://forge.mysql.com/tools/tool.php?id=149
This is an interesting problem.
Boxplots require quantiles. Computing quantiles on very large datasets is tricky.
The simplest solution that may or may not work in your case is to downsample the data first, and produce plots of the sample. In other words, read a bunch of records at a time, and retain a subset of them in memory (choosing either deterministically or randomly.) At the end, produce plots based on the data that's been retained in memory. Again, whether or not this is viable very much depends on the properties of your data.
Alternatively, there exist algorithms that can economically and approximately compute quantiles in an "online" fashion, meaning that they are presented with one observation at a time, and each observation is shown exactly once. While I have some limited experience with such algorithms, I have not seen any readily-available R implementations.
The following paper presents a brief overview of some relevant algorithms: Quantiles on Streams.
You could make plots from a manageable sample of your data. E.g. if you use only 10% of randomly chosen rows, then the boxplot on this sample shouldn't differ from the all-data boxplot.
If your data are in some database, you should be able to create a random flag there (as far as I know almost every database engine has some kind of random number generator).
The second thing is: how large is your dataset? For a boxplot you need two columns: a value variable and a group variable. This example:
N <- 1e6
x <- rnorm(N)
b <- sapply(1:100, function(i) paste(sample(letters,40,TRUE),collapse=""))
g <- factor(sample(b,N,TRUE))
boxplot(x~g)
needs about 100 MB of RAM. If N=1e7 then it uses <1GB of RAM (which is still manageable on a modern machine).
Perhaps you could use disk.frame to summarise the data down first before plotting?
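A rough sketch of what I mean (assuming the disk.frame and dplyr packages, a CSV on disk, and illustrative column names; the group_by/summarise/collect pattern is the one from the disk.frame documentation):
library(disk.frame)
library(dplyr)
setup_disk.frame()                         # start the background workers
big <- csv_to_disk.frame("big_data.csv")   # convert the CSV chunk by chunk
summ <- big %>%
  group_by(group) %>%
  summarise(avg = mean(value), n = n()) %>%
  collect()                                # only the small summary comes into memory
summ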
The problem with R (and other languages like Python and Julia) is that you have to load all your data into memory to plot it. As of 2022, the best solution is to use DuckDB (there is an R connector); it allows you to query very large datasets (CSV, Parquet, among others), and it comes with many functions to compute summary statistics. The idea is to use DuckDB to compute those statistics, load the statistics into R/Python/Julia, and plot.
Computing a boxplot with SQL + R
You need a bunch of statistics to plot a boxplot. If you want a complete reference, you can look at matplotlib's code. The code is in Python, but it is pretty straightforward, so you'll get it even if you don't know Python.
The most critical pieces are the percentiles; you can compute those in DuckDB like this (just change the placeholders):
SELECT
percentile_disc(0.25) WITHIN GROUP (ORDER BY "{{column}}") AS q1,
percentile_disc(0.50) WITHIN GROUP (ORDER BY "{{column}}") AS med,
percentile_disc(0.75) WITHIN GROUP (ORDER BY "{{column}}") AS q3,
AVG("{{column}}") AS mean,
COUNT(*) AS N
FROM "{{path/to/data.parquet}}"
You need some other statistics to create the boxplot with all its details. For a full implementation, check this (note: it's written in Python). I had to implement this for a package I wrote called JupySQL, which allows plotting very large datasets in Jupyter by leveraging SQL engines such as DuckDB.
Once you compute the statistics, you can use R to generate the boxplot.
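A hedged sketch of that last step with the duckdb R package (DBI-based; the file name and value column are placeholders, and the whiskers here are simply min/max rather than the 1.5*IQR fences a full boxplot uses):
library(DBI)
library(duckdb)
con <- dbConnect(duckdb::duckdb())
stats <- dbGetQuery(con, "
  SELECT
    min(value) AS lo,
    percentile_disc(0.25) WITHIN GROUP (ORDER BY value) AS q1,
    percentile_disc(0.50) WITHIN GROUP (ORDER BY value) AS med,
    percentile_disc(0.75) WITHIN GROUP (ORDER BY value) AS q3,
    max(value) AS hi,
    count(*) AS n
  FROM 'data.parquet'")
dbDisconnect(con, shutdown = TRUE)
# hand the five precomputed numbers to bxp(); the raw data never enter R
bxp(list(stats = matrix(unlist(stats[1, 1:5]), ncol = 1),
         n = stats$n, names = "value"))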