I'm trying to learn R (via this video) and immediately ran into problems. As directed, I created a dataset in Excel with column A containing the numbers 1 through 10 and column B containing random integers. I saved it as both .xlsx and .csv.
Next I tried to read the data in R with
> data1 <- read.table(file.choose(), header=TRUE, sep="\t")
and that's as far as I got. There's no Workspace pane like in the video, nor any option to view one. The video shows many windows, but I only have the "R Console".
So, how do I get the workspace?
You may be looking for "RStudio." It's a user-friendly shell that sits on top of R and shows you your current workspace, among other things.
http://www.rstudio.com/
Also, you want to use sep="," rather than sep="\t" if you have a CSV; "\t" means tab-delimited.
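A minimal sketch of both options, reusing file.choose() from the question:
# for a comma-separated file, pass sep = ","
data1 <- read.table(file.choose(), header = TRUE, sep = ",")
# or use read.csv(), which sets sep = "," and header = TRUE by default
data1 <- read.csv(file.choose())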
I think you are using the basic R program (not basic in core functionality, just basic in terms of user-interface features) that you probably downloaded from http://www.r-project.org/
The video you are watching is running a more productive user interface called RStudio. You can download it for free from http://www.rstudio.com/ and, for your purposes, it works the same way.
I have a 174,603-row by 178-column data frame that I'm exporting to Excel with openxlsx::saveWorkbook (I use this package to get the cell formatting I mentioned: colors, header styles, and so on). But the process is extremely slow; depending on the machine's memory it can take from 7 to 17 minutes, and I need a way to reduce this significantly. (It doesn't need to be seconds, but anything below 5 minutes would be OK.)
I've already searched other questions, but they all seem to focus either on importing data into R (I have no problem with that) or on writing unformatted files from R (using write.csv and similar options).
Apparently I can't use the xlsx package because of my computer's settings (it's an industrial computer; see the comments on this question).
Any suggestions regarding other packages, or other functionality inside this package, that would make this run faster would be highly appreciated.
This question has been around for some time, but I had the same problem as you and came up with a solution worth mentioning.
There is a package called writexl that exports a data frame to Excel using the C library libxlsxwriter. You can export to Excel with the following code:
library(writexl)
writexl::write_xlsx(df, "Excel.xlsx", format_headers = TRUE)
The format_headers parameter only applies centered, bold titles, but I edited the C code of its source in the writexl GitHub repository (maintained by rOpenSci).
You can download or clone it. Inside the src folder you can edit the write_xlsx.c file.
For example, in the part where the header format is created:
//how to format headers (bold + center)
lxw_format * title = workbook_add_format(workbook);
format_set_bold(title);
format_set_align(title, LXW_ALIGN_CENTER);
you can add these lines to give the header a background color:
format_set_pattern (title, LXW_PATTERN_SOLID);
format_set_bg_color(title, 0x8DC4E4);
There is a lot more formatting you can apply; search the libxlsxwriter documentation.
When you have finished editing that file, and assuming the source code sits in a folder called writexl, you can build and install the edited package with:
shell("R CMD build writexl")
install.packages("writexl_1.2.tar.gz", repos = NULL)
Exporting again with the first chunk of code will generate the formatted Excel file, and faster than any other library I know of.
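As a rough point of comparison, here is a minimal timing sketch (the data frame is synthetic and narrower than the question's, so your numbers will differ with data size and hardware):
library(writexl)
library(openxlsx)
# synthetic data frame with the question's row count but fewer columns
df <- as.data.frame(matrix(runif(174603 * 20), ncol = 20))
system.time(writexl::write_xlsx(df, "via_writexl.xlsx"))
system.time(openxlsx::write.xlsx(df, "via_openxlsx.xlsx"))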
Hope this helps.
Have you tried:
write.table(GroupsAlldata, file = 'Groupsalldata.txt')
to obtain it in .txt format?
Then, in Excel, you can use 'Text to Columns' to put your data into a table.
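Alternatively, if you write the file tab-separated, Excel can usually open it straight into columns without the Text to Columns step (a sketch using the same object name as above):
# tab-separated output that Excel opens directly into columns
write.table(GroupsAlldata, file = 'Groupsalldata.txt', sep = '\t', row.names = FALSE)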
Good luck!
I have been practicing with the tabulizer package in R and have the following problem. Unfortunately I can't offer a reproducible example, as the PDF is my firm's property, but I will describe the problem in detail.
I'm trying to read a PDF that has start and end dates in the upper-right corner. When I open the PDF they look normal:
Start: 01-Mar-2018
End: 31-Mar-2018
Now the fun part. When I highlight them, copy with Ctrl+C, and paste into R, here is the result:
:tttt: 11-rrr-8118
tt:: 11-rrr-8118
This is exactly the same kind of nonsense that extract_text(path, pages = 1) gives: a lot of t::ttttt:ttt... My question: is there some security feature in this PDF, do I just need to figure out the correct encoding, or, since this PDF is generated automatically by a system, does everything use some strange notation?
I figured it out. This PDF is mainly built from metadata (which I didn't know), and a great R tool for accessing metadata in PDFs is pdftools.
library(pdftools)
pdf_info("path.pdf")
and you can wrangle out all the important metadata bits.
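For example, a minimal sketch of pulling out individual fields (the file name is hypothetical; the field names follow pdftools' documentation):
library(pdftools)
info <- pdf_info("report.pdf")
info$keys     # named list of metadata key/value pairs
info$created  # creation timestamp
info$pages    # page count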
My question:
Can I change a setting so that RStudio's source editor will also open data sets larger than 5 MB?
If not, what is your advice?
Background:
I recently stopped looking at data in Excel and switched to R entirely. As I did in Excel, and still prefer to do in R, I like to look at the entire data frame and then decide on filters.
Problem: Working with the World Development Indicators (WDI) data set, which is over 100 MB, opening it in the source editor does not work; View(df) opens an empty tab in RStudio.
RStudio threw another error when I selected the data set from the Files tab in the right-hand column, which read:
The selected file 'wdi.csv' is too large to open in the source editor (the file is 104.5 MB and the maximum file size is 5MB).
Solutions?
My alter ego would tell me to increase the source editor's file-size threshold so I could investigate the data set there; in brief, change 5 MB to 200 MB. My alter ego would also tell me that I would probably run into performance issues (since I am using a MacBook Air).
How I resolved the issue:
I used head() and dplyr's glimpse() to get a better idea of the data, but ended up looking at the WDI table in Excel and then filtering it in R. The newly created, smaller data frames opened in the source editor without any problems.
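A minimal sketch of that inspection workflow (the file name follows the question; the rest is standard utils/dplyr):
library(dplyr)
wdi <- read.csv("wdi.csv")
head(wdi, 20)          # first rows in the console
glimpse(wdi)           # every column's type plus a preview
View(head(wdi, 1000))  # a manageable slice in the data viewer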
Thanks in advance!
I'm trying to download the following URL into an R data frame:
http://www.fantasypros.com/nfl/rankings/qb.php/?export=xls
(It's the 'Export' link on the public page: http://www.fantasypros.com/nfl/rankings/qb.php/)
However, I'm not sure how to parse the data. I'm also looking to automate this and run it weekly, so any thoughts on how to build this into a weekly workflow would be greatly appreciated! I have been Google searching and scouring Stack Overflow for a couple of hours now to no avail... :-)
Thank you,
Justin
Attempted Code:
getURL("http://www.fantasypros.com/nfl/rankings/qb.php?export=xls")
This just gives me a string that starts like:
[1] "FantasyPros.com \t \nWeek 8 - QB Rankings \t \nExpert Consensus Rankings (ECR) \t \n\n Rank \t Player Name \tTeam \t Matchup \tBest Rank \t Worst Rank \t Ave Rank \t Std Dev \t\n1\tPeyton Manning\tDEN\t vs. WAS\t1\t5\t1.2105263157895\t0.58877509625419\t\t\n2\tDrew Brees\tNO\t vs. BUF\t1\t7\t2.6287878787879\t1.0899353819483\t\t\n3\tA...
Welcome to R. It sounds like you love to do your analysis in Excel. That's completely fine, but given that you are asking how to crawl data from the web AND are asking about R, I think it's safe to assume you will find that programming your analyses is the way to go.
That said, what you really want to do is crawl the web. There are tons of examples of how to do this with R, right here on SO. Look for things like "web scraping", "crawling", and "screen scraping".
OK, dialogue aside. Don't worry about grabbing the data in Excel format; you can parse the data directly with R. Most websites use a consistent naming convention, so building the URLs for your datasets in a for loop will be easy.
Below is an example of parsing your page, directly with R, into a data.frame, which behaves much like tabular data in Excel.
## load the packages you will need
# install.packages("XML")
library(XML)
## define the URL -- you could build this dynamically
URL <- "http://www.fantasypros.com/nfl/rankings/qb.php"
## read the tables from the page into R
tables <- readHTMLTable(URL)
## how many do we have?
length(tables)
## look at the first one
tables[1]
## that's not it
## let's look at the 2nd table
tables[2]
## bring it into a data.frame
df <- as.data.frame(tables[2])
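To automate the weekly pull, you can build the URLs in a loop; here is a sketch (the position slugs are assumptions based on the site's URL pattern, and the table index copies the QB page above, so verify it for each position):
positions <- c("qb", "rb", "wr", "te")
rankings <- lapply(positions, function(p) {
  url <- sprintf("http://www.fantasypros.com/nfl/rankings/%s.php", p)
  as.data.frame(readHTMLTable(url)[2])
})
names(rankings) <- positions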
If you are using R for the first time, you can install external packages pretty easily with the command install.packages("PackageNameHere"). However, if you are serious about learning R, I would look into using the RStudio IDE. It really flattened the learning curve for me on a ton of levels.
You can probably just use download.file and read.xls from the gdata package. I don't think you can skip lines when reading .xls files, but you can supply a pattern argument so that everything before the first row containing that pattern is skipped.
library(gdata)
download.file("http://www.fantasypros.com/nfl/rankings/qb.php?export=xls", destfile="file.xls")
ffdata <- read.xls("file.xls", header = TRUE, pattern = "Rank")
Is it possible to read an MSWord 2010 file into R? I have Windows 7 and a Dell PC.
I am using the line:
my.data <- readLines('c:/users/mark w miller/simple R programs/test_for_r.docx')
to try to read an MSWord file containing the following text:
A 20 1000 AA
B 30 1001 BB
C 10 1500 CC
I get a warning message that says:
Warning message:
In readLines("c:/users/mark w miller/simple R programs/test_for_r.docx") :
incomplete final line found on 'c:/users/mark w miller/simple R programs/test_for_r.docx'
and my.data appears to be gibberish:
# [1] "PK\003\004\024" "¤l" "ÈFÃË‹Átí"
I know that with this simple example I could easily convert the MSWord file to a different format. However, my actual data files consist of complex tables that were typed decades ago and scanned into PDF documents later. The age of the original paper documents, and perhaps imperfections in the paper, typing, and/or scanning process, have left some letters and numbers unclear. So far, converting the PDF files to MSWord has been the most successful way to translate the tables correctly; converting them to Excel, rich text, etc. has not worked well. Even after conversion to MSWord the resulting files are very complex and contain numerous errors. I thought that if I could read the MSWord files into R, that might be the most efficient way to edit and correct them.
I am aware of the tm package, which I gather can read MSWord files into R, but I am a little hesitant to use it because it seems to require installing third-party software.
Thank you for any suggestions.
First, readLines() is not the correct tool, since a Word file is not a text file (that is, plain ASCII text).
The Word-related function in the tm package is called readDOC(), but both it and the required third-party tool (Antiword) are for older Word files (up to Word 2003) and will not work with newer .docx files.
The best I can suggest is that you try readPDF(), also found in the tm package. Note: it requires that the tool pdftotext is installed on your system; easy on Linux, no idea about Windows. Alternatively, find a Windows tool which converts PDF to plain ASCII text files (not Word files) - they should open and display correctly in Notepad - then try readLines() again. However, given that your PDF files are old and come from a scanner, conversion to text might be difficult.
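A minimal sketch of the readPDF() interface, based on tm's documentation (the file name is hypothetical, and pdftotext must be on your PATH):
library(tm)
# readPDF() returns a reader function; "-layout" asks pdftotext to
# preserve the physical layout, which helps with tables
reader <- readPDF(engine = "xpdf", control = list(text = "-layout"))
doc <- reader(elem = list(uri = "tables.pdf"), language = "en", id = "doc1")
head(content(doc))  # one character string per line of text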
Finally: I realise that you did not make the original decision in this instance, but for anybody else - Word and PDF are not appropriate formats for storing data that you want to parse.
In case it helps anyone else: it appears there's a newer package, readtext, dedicated specifically to reading text data into R, including Word files (also the newer .docx format); see https://cran.r-project.org/web/packages/readtext/vignettes/readtext_vignette.html
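A minimal sketch, reusing the question's file path (readtext returns a data.frame with a text column):
library(readtext)
rt <- readtext("c:/users/mark w miller/simple R programs/test_for_r.docx")
rt$text  # the document's text as a character string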
I have not figured out how to read the MSWord file into R, but I have gotten the contents into a format that R can read.
1. I converted a PDF to MSWord with Acrobat X Pro.
2. The original tables had solid vertical lines separating columns. It turns out these vertical lines were disrupting the format of the data when I converted an MSWord file to a text file, but I was able to delete the lines from the MSWord file before creating the text file.
3. I converted the MSWord file to a text file after deleting the vertical lines in Step 2.
The resulting text files still require extensive editing, but at least the data are largely present in a format R can read, and I will not have to re-enter all the data in the PDFs by hand, saving many hours of work.
You can do this very easily with RDCOMClient. That said, some characters will not read in correctly.
require(RDCOMClient)
# Create the connection
wordApp <- COMCreate("Word.Application")
# Let's set visible to true so you can see it run
wordApp[["Visible"]] <- TRUE
# Define the file we want to open
wordFileName <- "c:/path/to/word/doc.docx"
# Open the file
doc <- wordApp[["Documents"]]$Open(wordFileName)
# Print the text
print(doc$range()$text())
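If you want to tidy up afterwards, the COM handles expose Word's usual methods (a sketch; Close and Quit are standard Word COM calls):
# Close the document without saving changes, then quit Word
doc$Close(FALSE)
wordApp$Quit()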