Datasets for Running Statistical Analysis on [closed] - r
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
What datasets exist out on the internet that I can run statistical analysis on?
The datasets package is included with base R. Run this command to see a full list:
library(help="datasets")
Beyond that, there are many packages that can pull data, and many others that contain important data. Of these, you may want to start by looking at the HistData package, which "provides a collection of small data sets that are interesting and important in the history of statistics and data visualization".
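For instance, loading and inspecting one of the bundled data sets takes just a couple of lines (shown here with base R's mtcars; the HistData sets work the same way once that package is installed):

```r
# Load a bundled dataset into the workspace and take a first look
data(mtcars)          # ships with the base "datasets" package
head(mtcars, 3)       # first three rows
summary(mtcars$mpg)   # quick numeric summary of one variable
```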
For financial data, the quantmod package provides a common interface for pulling time series data from Google, Yahoo, FRED, and others:
library(quantmod)
getSymbols("YHOO",src="google") # from google finance
getSymbols("GOOG",src="yahoo") # from yahoo finance
getSymbols("DEXUSJP",src="FRED") # FX rates from FRED
FRED (from the Federal Reserve Bank of St. Louis) is really a goldmine of free economic data.
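Once a series is fetched, quantmod's helpers take over. A minimal sketch (network access required, and it assumes the source still serves the ticker):

```r
library(quantmod)

# auto.assign = FALSE returns the xts object directly instead of
# creating a GOOG variable in the workspace (network required)
goog <- getSymbols("GOOG", src = "yahoo", auto.assign = FALSE)

rets <- monthlyReturn(goog)          # month-over-month simple returns
chartSeries(goog, theme = "white")   # quick OHLC chart
```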
Many R packages come bundled with data that is specific to their goal. So if you're interested in genetics, multilevel models, etc., the relevant packages will frequently have the canonical example for that analysis. Also, the book packages typically ship with the data needed to reproduce all the examples.
Here are some examples of relevant packages:
alr3: includes data to accompany Applied Linear Regression (http://www.stat.umn.edu/alr)
arm: includes some of the data from Gelman's "Data Analysis Using Regression and Multilevel/Hierarchical Models" (the rest of the data and code is on the book's website)
BaM: includes data from "Bayesian Methods: A Social and Behavioral Sciences Approach"
BayesDA: includes data from Gelman's "Bayesian Data Analysis"
cat: includes data for analysis of categorical-variable datasets
cimis: for retrieving data from CIMIS, the California Irrigation Management Information System
cshapes: includes GIS country boundaries and associated data
ecdat: data sets for econometrics
ElemStatLearn: includes data from "The Elements of Statistical Learning, Data Mining, Inference, and Prediction"
emdbook: data from "Ecological Models and Data"
Fahrmeir: data from the book "Multivariate Statistical Modelling Based on Generalized Linear Models"
fEcoFin: "Economic and Financial Data Sets" for Rmetrics
fds: functional data sets
fma: data sets from "Forecasting: methods and applications"
gamair: data for "Generalized Additive Models: An Introduction with R"
geomapdata: data for topographic and geologic mapping
nutshell: contains all the data from the "R in a Nutshell" book
nytR: provides access to congressional vote data through the NY Times API
openintro: data sets from the OpenIntro Statistics book
primer: includes data for "A Primer of Ecology with R"
qtlbook: includes data for the R/qtl book
RGraphics: includes data from the "R Graphics" book
Read.isi: access to old World Fertility Survey data
There's a broad selection on the Web. For instance, here's a massive directory of sports databases (all providing the data free of charge, at least in my experience). In that directory is databaseBaseball.com, which contains, among other things, complete datasets for every player who has played professional baseball since about 1915.
StatLib is another excellent resource--beautifully convenient. This single web page lists 4-5 line summaries of over a hundred databases, all of which are available in flat-file form just by clicking the 'Table' link at the beginning of each dataset summary.
The base distribution of R comes pre-packaged with a large and varied collection of datasets (122 in R 2.10). To get a list of them (as well as a one-line description):
data(package="datasets")
Likewise, most packages come with several data sets (sometimes many more). You can see them the same way:
data(package="latticeExtra")
data(package="vcd")
These data sets are the ones mentioned in a package's manuals and vignettes, and they are used to illustrate the package's features.
A few R packages with a lot of datasets (which again are easy to scan so you can choose what's interesting to you): AER, DAAG, and vcd.
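To scan those bundled data sets without attaching each package, data() also accepts a vector of package names -- or every installed package at once:

```r
# Datasets bundled with a few specific packages
# (the packages must be installed, but need not be loaded)
data(package = c("AER", "DAAG", "vcd"))

# Datasets from every package installed on this machine
data(package = .packages(all.available = TRUE))
```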
Another thing I find so impressive about R is its I/O. Suppose you want to get some very specific financial data via the Yahoo Finance API -- say, the open and closing price of the S&P 500 for every month from 2001 to 2009. Just do this:
tick_data <- read.csv(paste("http://ichart.finance.yahoo.com/table.csv?",
                            "s=%5EGSPC&a=03&b=1&c=2001&d=03&e=1&f=2009&g=m&ignore=.csv",
                            sep = ""))
In this one line of code, R has fetched the price data, shaped it into a data frame, and bound it to 'tick_data'. (Here's a handy cheat sheet with the Yahoo Finance API symbols used to build URLs like the one above.)
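From there the frame is ordinary R data; for example, monthly returns from the Close column are one more line. Since that Yahoo endpoint has since been retired, the sketch below runs on a hand-built stand-in with the same shape (the prices are illustrative, not real quotes):

```r
# Stand-in for the frame read.csv() returned (Yahoo listed newest rows first)
tick_data <- data.frame(
  Date  = as.Date(c("2009-03-02", "2009-02-02", "2009-01-02")),
  Close = c(797.87, 735.09, 825.88)   # illustrative values only
)

close   <- rev(tick_data$Close)            # back to chronological order
returns <- diff(close) / head(close, -1)   # simple month-over-month returns
round(returns, 4)
```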
http://www.data.gov.uk/data
Recently set up by Tim Berners-Lee.
Obviously UK-based data, but that shouldn't matter. Covers everything from abandoned cars to school absenteeism to agricultural price indices.
Have you considered Stack Overflow Data Dumps?
You are already familiar with what the data represents, i.e., the business logic it tracks.
A good start to look for economic data are always the following three addresses:
World Bank - Research Datasets
IMF - Data and Statistics
National Bureau of Economic Research
A nice summary of dataset links for development economists can be found at:
Devecondata
Edit:
The World Bank decided last week to open up many of its previously non-free datasets and published them online on its revised homepage. The new site looks pretty nice as well.
The World Bank - Open Data
Another good site is UN Data.
The United Nations Statistics Division (UNSD) of the Department of Economic and Social Affairs (DESA) launched a new internet-based data service for the global user community. It brings UN statistical databases within easy reach of users through a single entry point (http://data.un.org/). Users can now search and download a variety of statistical resources of the UN system.
http://www.data.gov/ probably has something you can use.
In their catalog of raw data you can set your criteria for the data and find what you're looking for http://www.data.gov/catalog/raw
A bundle of 268 small text files (the worked examples of "The R Book") can be found on The R Book's companion website.
You could look at this post on FlowingData.
A collection of over 800 datasets in ARFF format, understood by Weka and other data analysis packages, gathered in the TunedIT.org repository.
See the data competition set up by Hadley Wickham for the Data Expo of the ASA Statistical Computing and Statistical Graphics section. The competition is over, but the data is still there.
The UC Irvine Machine Learning Repository currently has 190 data sets.
The UCI Machine Learning Repository is a collection of databases, domain theories, and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms.
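Most UCI sets are plain flat files, so read.csv can pull them straight off the site. A sketch with the classic iris file (the URL reflects the repository layout at the time of writing, and the file ships without a header row):

```r
# Fetch the iris data directly from UCI (network required)
uci <- "http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
iris_uci <- read.csv(uci, header = FALSE,
                     col.names = c("sepal.length", "sepal.width",
                                   "petal.length", "petal.width", "class"))
str(iris_uci)
```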
I've seen from your other questions that you are apparently interested in data visualization. Then have a look at the Many Eyes project (from IBM) and its sample data sets.
Similar to data.gov, but European-centered, is Eurostat:
http://epp.eurostat.ec.europa.eu/portal/page/portal/statistics/search_database
and there is a Chinese statistics department, too, as mentioned by Wildebeests:
http://www.stats.gov.cn/english/statisticaldata/monthlydata/index.htm
Then there are some "social data services" which offer the download of datasets, such as
Swivel, Many Eyes, Timetric, CKAN, Infochimps, and so on.
The FAO offers the AQUASTAT database, with various water-related indicators broken down by country.
The Naval Oceanography Portal offers, for instance, Fraction of the Moon Illuminated.
The blog "curving normality" has a list of interesting data sources.
Another collection of datasets.
Here's an R package with several agricultural datasets from books and papers. Example analyses included: agridat
Related
Cluster analysis or Principal Component Analysis
Currently I am trying to analyse a data set with multiple variables for a marine science research project at university. Although I have used R before, I am struggling to work out how to carry out and present the analysis. My aim is to complete a cluster analysis or Principal Component Analysis to investigate the links between the measured variables. Any help on the base code or background for the analysis would be greatly welcomed. I have been directed towards the package 'vegan', so preferably information on using that package would be best. I have read about 5 books on introductory stats in R in the hope of finding the answer, but to no avail. Thank you for your time.
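For what it's worth, the core of a PCA needs no more than base R. A minimal sketch on standardized variables (the column names and values below are made up for illustration -- substitute the measured variables):

```r
# Hypothetical environmental measurements standing in for the real data
set.seed(1)
env <- data.frame(temp     = rnorm(30, 15, 2),
                  salinity = rnorm(30, 35, 1),
                  depth    = rnorm(30, 50, 10))

pca <- prcomp(env, scale. = TRUE)   # standardize each variable, then rotate
summary(pca)                        # variance explained per component
biplot(pca)                         # samples and variables in one plot
```

In vegan's own idiom, an unconstrained rda() call performs the same analysis.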
Visualizing network graph data in R (need library/data recommendations)
This is my first foray into visualizing network graph data, so bear with me. I have data (currently in a .csv of 100k lines, but I can load it into a SQL DB of some flavor) with the following fields: location_name, person_name, time_of_day (discretely broken up as: morning, noon, afternoon, night). Some example rows: park, David, noon; park, Tina, morning; cafe, John, night; cafe, Shirley, night. I'm thinking of analyzing each time slice as a separate graph (so I'll have a morning graph, a noon graph, etc.) -- and for all I care I can generate a chart of all times together and treat combinations of location-time as unique connectors, since my primary interest is the relations between the people in this data. So given my data format, what functions and libraries would people recommend looking into? Also, if there are golden standards for doing R graph work, what data format should I transform my current data into? Thanks in advance.
I think there's a CRAN Task View for network/graph models. Try the igraph package as well. But this question will likely be closed unless you make it considerably more specific...
You don't say much about what you intend to do with the data. The CRAN packages network and igraph provide plotting for static networks; networkDynamic provides data structures for dynamic networks and can be used by ndtv, along with the animation library, to generate network movies. Perhaps these suit your needs?
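A minimal igraph sketch with rows shaped like the ones in the question (in recent igraph versions the constructor is graph_from_data_frame; older versions called it graph.data.frame):

```r
library(igraph)

# Two-column edge list: person linked to location, as in the CSV
edges <- data.frame(
  person   = c("David", "Tina", "John", "Shirley"),
  location = c("park",  "park", "cafe", "cafe")
)

g <- graph_from_data_frame(edges, directed = FALSE)
plot(g, vertex.label.cex = 0.9)   # quick force-directed drawing
```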
Can I perform Generalized Iterative Scaling in R?
I'm looking to port our home-grown platform of various machine learning algorithms from C# to a more robust data mining platform such as R. While it's obvious R is great at many types of data mining tasks, it is not clear to me whether it can be used for text classification. Specifically, we extract a list of bigrams from the text and then classify it into one of 15 different categories, e.g.: Bigram list: jewelry, books, watches, shoes, department store -> Category: Shopping. We'd want both to train the models in R and to hook up to a database to perform this on a larger scale. Can it be done in R?
Hmm, I am only just starting to look into machine learning, but I might have a suggestion: have you considered Weka? There's a bunch of various algorithms around, and there IS some documentation. Plus, there is an R package, RWeka, that makes use of the Weka jars. EDIT: There is also a nice, comprehensive read by Witten et al.: "Data Mining", which contains an extensive description of Weka, among other interesting things. Look into the API opportunities.
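A tiny sketch of the RWeka route (it needs Java plus the RWeka package; J48 is Weka's C4.5 decision tree, used here on a built-in dataset as a stand-in for the bigram features in the question):

```r
library(RWeka)

# Train Weka's C4.5 tree (J48) via RWeka's formula interface
fit <- J48(Species ~ ., data = iris)
summary(fit)                # in-sample evaluation and confusion matrix
predict(fit, iris[1:3, ])   # predicted classes for a few rows
```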
Comparing R to Matlab for Data Mining
Instead of starting to code in Matlab, I recently started learning R, mainly because it is open-source. I am currently working in data mining and machine learning field. I found many machine learning algorithms implemented in R, and I am still exploring different packages implemented in R. I have quick question: how do you compare R to Matlab for data mining application, its popularity, pros and cons, industry and academic acceptance etc.? Which one would you choose and why? I went through various comparisons for Matlab vs R against various metrics but I am specifically interested to get answer for its applicability in Data Mining and ML. Since both language are pretty new for me I was just wondering if R would be a good choice or not. I appreciate any kind of suggestions.
For the past three years or so, I have used R daily, and the largest portion of that daily use is spent on Machine Learning/Data Mining problems. I was an exclusive Matlab user while in university; at the time I thought it was an excellent set of tools/platform. I am sure it is today as well. The Neural Network Toolbox, the Optimization Toolbox, Statistics Toolbox, and Curve Fitting Toolbox are each highly desirable (if not essential) for someone using MATLAB for ML/Data Mining work, yet they are all separate from the base MATLAB environment--in other words, they have to be purchased separately.

My Top 5 list for Learning ML/Data Mining in R:

Mining Association Rules in R. This refers to a couple of things: first, a group of R Packages whose names all begin with arules (available from CRAN); you can find the complete list (arules, arulesViz, etc.) on the Project Homepage. Second, all of these packages are based on a data-mining technique known as Market-Basket Analysis, and alternatively as Association Rules. In many respects, this family of algorithms is the essence of data mining--exhaustively traverse large transaction databases and find above-average associations or correlations among the fields (variables or features) in those databases. In practice, you connect them to a data source and let them run overnight. The central R Package in the set mentioned above is called arules; on the CRAN Package page for arules, you will find links to a couple of excellent secondary sources (vignettes in R's lexicon) on the arules package and on the Association Rules technique in general.

The standard reference, The Elements of Statistical Learning by Hastie et al. The most current edition of this book is available in digital form for free. Likewise, at the book's website (linked to just above) are all data sets used in ESL, available for free download.
(As an aside, I have the free digital version; I also purchased the hardback version from BN.com; all of the color plots in the digital version are reproduced in the hardbound version.) ESL contains thorough introductions to at least one exemplar from most of the major ML rubrics--e.g., neural networks, SVM, KNN; unsupervised techniques (LDA, PCA, MDS, SOM, clustering); numerous flavors of regression; CART; Bayesian techniques; as well as model aggregation techniques (Boosting, Bagging) and model tuning (regularization). Finally, get the R Package that accompanies the book from CRAN (which will save you the trouble of having to download and enter the datasets).

CRAN Task View: Machine Learning. The 3,500+ Packages available for R are divided up by domain into about 30 package families or 'Task Views'. Machine Learning is one of these families. The Machine Learning Task View contains about 50 or so Packages. Some of these Packages are part of the core distribution, including e1071 (a sprawling ML package that includes working code for quite a few of the usual ML categories).

Revolution Analytics Blog, with particular focus on the posts tagged with Predictive Analytics.

An ML-in-R tutorial comprised of a slide deck and R code by Josh Reich. A thorough study of the code would, by itself, be an excellent introduction to ML in R.

And one final resource that I think is excellent, but that didn't make it into the top 5: A Guide to Getting Started in Machine Learning [in R], posted at the blog A Beautiful WWW.
Please look at the CRAN Task Views and in particular at the CRAN Task View on Machine Learning and Statistical Learning which summarises this nicely.
Both Matlab and R are good if you are doing matrix-heavy operations, because they can use highly optimized low-level code (BLAS libraries and such) for this. However, there is more to data mining than just crunching matrices. A lot of people totally neglect the whole data-organization aspect of data mining (as opposed to, say, plain machine learning).

And once you get to data organization, R and Matlab are a pain. Try implementing an R*-tree in R or Matlab to take an O(n^2) algorithm down to O(n log n) runtime. First of all, it totally goes against the way R and Matlab are designed (use bulk math operations wherever possible); secondly, it will kill your performance. Interpreted R code, for example, seems to run at around 50% of the speed of C code (try R's built-in k-means vs. flexclust's k-means); and the BLAS libraries are optimized to an insane level, exploiting cache sizes, data alignment, and advanced CPU features. If you are adventurous, try implementing a manual matrix multiplication in R or Matlab, and benchmark it against the native one.

Don't get me wrong: there is a lot of stuff where R and Matlab are just elegant and excellent for prototyping. You can solve a lot of things in just 10 lines of code and get decent performance out of it. Writing the same thing by hand would be hundreds of lines, and probably 10x slower. But sometimes you can optimize by a level of complexity, which for large data sets does beat the optimized matrix operations of R and Matlab.

If you want to scale up to "Hadoop size" in the long run, you will have to think about data layout and organization, too, unless all you need is a linear scan over the data. But then, you could just be sampling, too!
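The manual-vs-native matrix multiplication experiment suggested above is easy to run, and even at modest sizes the interpreted triple loop loses badly (timings vary by machine):

```r
n <- 200
a <- matrix(rnorm(n * n), n)
b <- matrix(rnorm(n * n), n)

# Naive triple-loop multiplication, the way you'd write it in C
slow_mm <- function(a, b) {
  out <- matrix(0, nrow(a), ncol(b))
  for (i in seq_len(nrow(a)))
    for (j in seq_len(ncol(b)))
      for (k in seq_len(ncol(a)))
        out[i, j] <- out[i, j] + a[i, k] * b[k, j]
  out
}

system.time(slow <- slow_mm(a, b))   # interpreted loops
system.time(fast <- a %*% b)         # BLAS-backed native operator
all.equal(slow, fast)                # same result, very different speed
```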
Yesterday I found two new books about data mining. This series of books, entitled 'Data Mining', addresses the need for a comprehensive text by presenting in-depth descriptions of novel mining algorithms and many useful applications. In addition to explaining each technique in depth, the two books present useful hints and strategies for solving problems. The progress of data mining technology and its broad public popularity establish the need for a comprehensive text on the subject. The books are: "New Fundamental Technologies in Data Mining" (http://www.intechopen.com/books/show/title/new-fundamental-technologies-in-data-mining) and "Knowledge-Oriented Applications in Data Mining" (http://www.intechopen.com/books/show/title/knowledge-oriented-applications-in-data-mining). These are open-access books, so you can download them for free or just read them on an online reading platform like I do. Cheers!
We should not forget the origins of these two pieces of software: scientific computation and signal processing led to Matlab, while statistics led to R. I used Matlab a lot at university, since we had it installed on Unix and open to all students. However, the price of Matlab is too high, especially compared to free R. If your major focus is not on matrix computation and signal processing, R should work well for your needs.
I think it also depends on which field of study you are in. I know of people in coastal research who use a lot of Matlab. Using R in such a group would make your life more difficult: if a colleague has solved a problem in Matlab, you can't reuse the solution.
I would also look at the capabilities of each when dealing with large amounts of data. I know that R can have problems with this, which might be restrictive if you are used to an iterative data mining process -- for example, looking at multiple models concurrently. I don't know if MATLAB has a similar data limitation.
I admit to favoring MATLAB for data mining problems, and I give some of my reasoning here: Why MATLAB for Data Mining? I will admit to only a passing familiarity with R/S-Plus, but I'll make the following observations: R definitely has more of a statistical focus than MATLAB. I prefer building my own tools in MATLAB, so that I know exactly what they're doing and can customize them, but this is more of a necessity in MATLAB than it would be in R. Code for new statistical techniques (spatial statistics, robust statistics, etc.) often appears early in S-Plus (and I assume at least some of this carries over to R). Some years ago, I found S-Plus (the commercial sibling of R) to have an extremely limited capacity for data. I cannot say what the state of R/S-Plus is today, but you may want to check whether your data will fit into such tools comfortably.
Where can I find useful R tutorials with various implementations?
I'm using R language and the manuals on the R site are really informative. However, I'd like to see some more examples and implementations with R which can help me develop my knowledge faster. Any suggestions?
Just to add some more:

Programming in R
Introduction to Statistical Modelling in R
Linear Algebra in R
The R Inferno
R by Example
The R Clinic
Survey Analysis in R
R & Bioconductor Manual
Rtips
Resources to Help You Learn and Use R
General R Links
I'll mention a few that I think are excellent resources but that I haven't seen mentioned on SO. They are all free and freely available on the Web (links supplied).

Data Analysis Examples: a collection of individual examples from the UCLA Statistics Dept., which you can browse by major category (e.g., "Count Models", "Multivariate Analysis", "Power Analysis"), then download examples with complete R code under any of these rubrics (e.g., under "Count Models" are "Poisson Regression", "Negative Binomial Regression", and so on).

Verzani, simpleR: Using R for Introductory Statistics: a little over 100 pages, and just outstanding. It's easy to follow but very dense. It is a few years old; still, I've only found one deprecated function in this text. This is a resource for a brand-new R user; it also happens to be an excellent statistics refresher. This text probably contains 20+ examples (with R code and explanation) directed at fundamental statistics (e.g., hypothesis testing, linear regression, simple simulation, and descriptive statistics).

Statistics with R (Vincent Zoonekynd): you can read it online or print it as a PDF. Printed, it's well over 1000 pages. The author obviously got a lot of the information by reading the source code for the various functions he discusses--a lot of the information here I haven't found in any other source. This resource contains large sections on Graphics, Basic Statistics, Regression, and Time Series, all with small examples (R code + explanation). The final three sections contain the most exemplary code--very thorough application sections on Finance (which seems to be the author's professional field), Genetics, and Image Analysis.
All the packages on CRAN are open source, so you can download all the source code from there. I recommend starting there by looking at the packages you use regularly to see how they're implemented. Beyond that, Rosetta Code has many R examples. And you may want to follow R-Bloggers.
Book-like tutorials: these are usually spread in the form of PDFs. Many of them are available on the R Project homepage: http://cran.r-project.org/other-docs.html#english (this link includes many of the texts others have mentioned).

Article-like tutorials: these usually live inside blogs. The biggest list of R bloggers I know of is here: http://www.r-bloggers.com/, and many of these bloggers' posts (many of which are tutorials) are listed here: http://www.r-bloggers.com/archive/ (although inside each blog there are usually more tutorials).