Read a text file in R line by line

I would like to read a text file in R line by line, using a for loop over the length (number of lines) of the file. The problem is that it only prints character(0). This is the code:
fileName = "up_down.txt"
con = file(fileName, open = "r")
line = readLines(con)
long = length(line)
for (i in 1:long) {
  linn = readLines(con, 1)
  print(linn)
}
close(con)

You should take care with readLines(...) and big files. Reading all lines into memory at once can be risky. Below is an example of how to read a file and process just one line at a time:
processFile = function(filepath) {
  con = file(filepath, "r")
  while (TRUE) {
    line = readLines(con, n = 1)
    if (length(line) == 0) {
      break
    }
    print(line)
  }
  close(con)
}
Understand the risk of reading even a single line into memory: a big file with no line breaks can fill your memory too.
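If reading one line at a time turns out to be slow, a common middle ground is to read a fixed number of lines per iteration. Here is a minimal sketch in the same style as processFile above (the chunk size of 1000 is an arbitrary choice, not something from the original answer):
processFileChunked = function(filepath, chunk_size = 1000) {
  con = file(filepath, "r")
  on.exit(close(con))              # close the connection even if an error occurs
  repeat {
    lines = readLines(con, n = chunk_size)
    if (length(lines) == 0) break  # end of file reached
    print(lines)                   # process the whole chunk here
  }
}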

Just use readLines on your file:
R> res <- readLines(system.file("DESCRIPTION", package="MASS"))
R> length(res)
[1] 27
R> res
[1] "Package: MASS"
[2] "Priority: recommended"
[3] "Version: 7.3-18"
[4] "Date: 2012-05-28"
[5] "Revision: $Rev: 3167 $"
[6] "Depends: R (>= 2.14.0), grDevices, graphics, stats, utils"
[7] "Suggests: lattice, nlme, nnet, survival"
[8] "Authors#R: c(person(\"Brian\", \"Ripley\", role = c(\"aut\", \"cre\", \"cph\"),"
[9] " email = \"ripley#stats.ox.ac.uk\"), person(\"Kurt\", \"Hornik\", role"
[10] " = \"trl\", comment = \"partial port ca 1998\"), person(\"Albrecht\","
[11] " \"Gebhardt\", role = \"trl\", comment = \"partial port ca 1998\"),"
[12] " person(\"David\", \"Firth\", role = \"ctb\"))"
[13] "Description: Functions and datasets to support Venables and Ripley,"
[14] " 'Modern Applied Statistics with S' (4th edition, 2002)."
[15] "Title: Support Functions and Datasets for Venables and Ripley's MASS"
[16] "License: GPL-2 | GPL-3"
[17] "URL: http://www.stats.ox.ac.uk/pub/MASS4/"
[18] "LazyData: yes"
[19] "Packaged: 2012-05-28 08:47:38 UTC; ripley"
[20] "Author: Brian Ripley [aut, cre, cph], Kurt Hornik [trl] (partial port"
[21] " ca 1998), Albrecht Gebhardt [trl] (partial port ca 1998), David"
[22] " Firth [ctb]"
[23] "Maintainer: Brian Ripley <ripley#stats.ox.ac.uk>"
[24] "Repository: CRAN"
[25] "Date/Publication: 2012-05-28 08:53:03"
[26] "Built: R 2.15.1; x86_64-pc-mingw32; 2012-06-22 14:16:09 UTC; windows"
[27] "Archs: i386, x64"
R>
There is an entire manual, R Data Import/Export, devoted to this.

Here is the solution with a for loop. Importantly, it takes the single call to readLines out of the for loop so that the connection is not improperly read again and again:
fileName <- "up_down.txt"
conn <- file(fileName, open = "r")
linn <- readLines(conn)
for (i in 1:length(linn)) {
  print(linn[i])
}
close(conn)

I wrote some code to read a file line by line to meet my requirement, in which different lines have different data types, following the articles read-line-by-line-of-a-file-in-r and determining-number-of-linesrecords. It should also be a better solution for big files, I think. My R version is 3.3.2.
con = file("pathtotargetfile", "r")
readsizeof<-2 # read size for one step to caculate number of lines in file
nooflines<-0 # number of lines
while((linesread<-length(readLines(con,readsizeof)))>0) # calculate number of lines. Also a better solution for big file
nooflines<-nooflines+linesread
con = file("pathtotargetfile", "r") # open file again to variable con, since the cursor have went to the end of the file after caculating number of lines
typelist = list(0,'c',0,'c',0,0,'c',0) # a list to specific the lines data type, which means the first line has same type with 0 (e.g. numeric)and second line has same type with 'c' (e.g. character). This meet my demand.
for(i in 1:nooflines) {
tmp <- scan(file=con, nlines=1, what=typelist[[i]], quiet=TRUE)
print(is.vector(tmp))
print(tmp)
}
close(con)

I suggest you check out chunked and disk.frame. They both have functions for reading in CSVs chunk-by-chunk.
In particular, disk.frame::csv_to_disk.frame may be the function you are after?
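A minimal sketch of the disk.frame route, assuming a CSV file named up_down.csv (the file name is hypothetical, and the argument names follow disk.frame's documented interface, so double-check them against the package):
library(disk.frame)
setup_disk.frame()  # configure workers for parallel chunk processing
# convert a large CSV to an on-disk data frame, chunk by chunk,
# without ever loading the whole file into memory
df <- csv_to_disk.frame("up_down.csv", outdir = "up_down.df")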

fileName = "up_down.txt"
### code to get the line count of the file
length_connection = pipe(paste("cat ", fileName, " | wc -l", sep = "")) # "cat fileName | wc -l" because that returns just the line count, and NOT the name of the file with it
long = as.numeric(trimws(readLines(con = length_connection, n = 1)))
close(length_connection) # make sure to close the connection
###
for (i in 1:long){
### code to extract a single line at row i from the file
linn_connection_cmd = paste("head -n", format(x = i, scientific = FALSE, big.mark = ""), fileName, "| tail -n 1", sep = " ") # extracts one line from fileName at the desired line number (i)
linn_connection = pipe(linn_connection_cmd)
linn = readLines(con = linn_connection, n = 1)
close(linn_connection) # make sure to close the conection
###
# the line is now loaded into R and anything can be done with it
print(linn)
}
close(con)
By using R's pipe() command with shell commands to extract what we want, the full file is never loaded into R; it is read line by line.
paste("head -n", format(x = i, scientific = FALSE, big.mark = ""), fileName, "| tail -n 1", sep = " ")
It is this command that does all the work; it extracts one line from the desired file.
Edit: by default, R renders i as a plain number when it is less than 100,000, but switches to scientific notation when it is greater than or equal to 100,000 (1e+05). Thus, format(x = i, scientific = FALSE, big.mark = "") is used in our pipe command to make sure that the pipe() command always receives a number in plain form, which is all that the shell command can understand. If the pipe() command is given any number like 1e+05, it will not be able to comprehend it and will produce the following error:
head: 1e+05: invalid number of lines
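The conversion quirk is easy to reproduce at the console:
i <- 100000
as.character(i)                # "1e+05" -- paste() does the same, and head cannot parse it
format(i, scientific = FALSE)  # "100000" -- safe to embed in the shell command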

Related

How to create a For-Loop to edit all text files located in a given folder?

I need to add additional line gaps between paragraphs in about 5,000 text files. With help from someone here (thanks @IRTFM), I can now pull up a txt file and add the spaces I need between paragraphs with this code:
txt <- readLines("~/Box/Atlas.ti_originals/WP/WP_decade1/WP_1979.9.16.txt")
spread <- ifelse( nchar(txt) < 50,
paste0(txt, "\n") , # these lines are left alone
paste0( txt, "\n\n\n\n") ) # longer lines are padded
cat(spread, file="~/Box/Atlas.ti_originals/WP/WP_decade1/spread.txt" )
This code works perfectly on an individual file, but I would now like to scale it up with a for loop (or something else) so that the code runs through all the files in a given folder. I've looked at several questions on this site already and some give me the general idea, but they still aren't working. This is the code I am using:
# list of file names needed to conduct for loop
filenames = list.files(path = "~/Box/Atlas.ti_originals/WP/WP_decade1/", pattern = "*.txt")
setwd("~/Box/Atlas.ti_originals/WP/WP_decade1")
# FOR LOOP
for (i in 1:length(filenames)) {
  # code to loop
  txt <- readLines(con = (paste0("~/Box/Atlas.ti_originals/WP/WP_decade1/", i)))
  spread <- ifelse(nchar(txt) < 50,
                   paste0(txt, "\n"),        # these lines are left alone
                   paste0(txt, "\n\n\n\n"))  # longer lines are padded
  cat(spread, file = paste0("~/Box/Atlas.ti_originals/WP/WP_decade1/", "c_", i))
}
But I keep getting a connection error saying it cannot open the connection, or that there is "No such file or directory". I did notice that the warning also seems to indicate that the character being appended to the end is the number 1. The problem seems to be located around the readLines command.
Error in file(con, "r") : cannot open the connection
In addition: Warning message:
In file(con, "r") :
cannot open file '/Users/lessig7/Box/Atlas.ti_originals/WP/WP_decade1/1': No such file or directory
But I have checked the values of the filenames list multiple times, and they aren't appearing as numbers (see below).
> filenames
[1] "spread.txt" "WP_1977.6.29.txt" "WP_1979.9.16.txt" "WP_1980.9.7.txt" "WP_1981.7.24.txt" "WP_1982.1.12.txt"
[7] "WP_1982.1.7.txt" "WP_1982.2.4.txt" "WP_1982.8.6.txt" "WP_1983.11.8.txt" "WP_1983.3.6.txt" "WP_1983.6.30.txt"
[13] "WP_1983.7.12.txt" "WP_1984.1.13.txt" "WP_1984.1.28.txt" "WP_1984.3.11.txt" "WP_1984.3.13_2.txt" "WP_1984.3.13.txt"
[19] "WP_1984.3.20.txt" "WP_1984.3.25.txt" "WP_1984.3.7.txt" "WP_1984.8.29.txt" "WP_1984.9.11.txt" "WP_1984.9.15.txt"
[25] "WP_1984.9.21.txt" "WP_1984.9.4.txt" "WP_1985.1.19.txt" "WP_1985.11.2.txt" "WP_1985.11.28.txt" "WP_1985.6.16.txt"
[31] "WP_1985.6.6.txt" "WP_1985.7.16.txt" "WP_1985.7.2.txt" "WP_1985.8.12.txt" "WP_1985.8.16.txt" "WP_1985.8.2.txt"
[37] "WP_1986.10.11.txt" "WP_1986.11.1.txt" "WP_1986.11.8.txt" "WP_1986.12.19.txt" "WP_1986.12.21.txt" "WP_1986.12.27.txt"
[43] "WP_1986.8.18.txt" "WP_1986.8.25.txt" "WP_1987.12.12.txt" "WP_1987.12.25.txt" "WP_1987.3.8.txt" "WP_1987.4.2.txt"
[49] "WP_1988.1.17.txt" "WP_1988.1.5.txt" "WP_1988.1.7.txt" "WP_1988.10.22.txt" "WP_1988.10.29.txt" "WP_1988.10.8.txt"
[55] "WP_1988.10.9.txt" "WP_1988.11.27.txt" "WP_1988.12.11.txt" "WP_1988.12.6.txt" "WP_1988.3.22.txt" "WP_1988.4.21.txt"
[61] "WP_1988.4.8.txt" "WP_1988.7.20.txt" "WP_1988.8.29.txt" "WP_1988.8.30.txt" "WP_1988.9.17.txt" "WP_1988.9.28.txt"
[67] "WP_1988.9.6.txt" "WP_1989.10.12.txt" "WP_1989.10.21.txt" "WP_1989.10.27.txt" "WP_1989.12.1.txt" "WP_1989.12.5.txt"
[73] "WP_1989.2.13.txt" "WP_1989.2.9.txt" "WP_1989.3.17.txt" "WP_1989.3.3_2.txt" "WP_1989.3.3.txt" "WP_1989.5.11.txt"
[79] "WP_1989.5.31.txt" "WP_1989.7.10.txt" "WP_1989.7.24.txt" "WP_1989.7.29.txt" "WP_1989.7.8.txt" "WP_1989.8.21.txt"
[85] "WP_1989.9.17.txt" "WP_1989.9.9.txt"
Any advice that might help me create a loop through the code I provide in the first code box would be greatly appreciated.
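A minimal sketch of the likely fix, assuming the diagnosis above is right: the loop index i is a number, so paste0(..., i) builds a path ending in 1 rather than a file name. Looping over the names themselves (or indexing filenames[i]) resolves it:
for (f in filenames) {
  txt <- readLines(file.path("~/Box/Atlas.ti_originals/WP/WP_decade1", f))
  spread <- ifelse(nchar(txt) < 50,
                   paste0(txt, "\n"),        # these lines are left alone
                   paste0(txt, "\n\n\n\n"))  # longer lines are padded
  cat(spread, file = file.path("~/Box/Atlas.ti_originals/WP/WP_decade1", paste0("c_", f)))
}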

Can I import variables into R from a global file?

I am integrating an R script to produce some graphics into a larger project that is pulled together with a Makefile. In this larger project, I have a file called globals.mk that contains global variables used by many other scripts in the project. For example, the number of simulations I want to run is a global that I want to use in this R script. Can I "import" this as a variable, or is it necessary to manually define every variable within the R script?
Edit: here is a sample of the globals that I would need to read in.
num = 100
path = ./here/is/a/path
file = $(path)/file.csv
And I would like the R script to set the variables num as 100 (or "100"), path as "./here/is/a/path" and file as "./here/is/a/path/file.csv".
If it is OK to replace the parentheses with brace brackets, then readRenviron will read in such files, perform the substitutions, and return the contents as environment variables.
# write out test file globals2.mk which uses brace brackets
Lines <- "num = 100
path = ./here/is/a/path
file = ${path}/file.csv"
cat(Lines, file = "globals2.mk")
readRenviron("globals2.mk")
Sys.getenv("num")
## [1] "100"
Sys.getenv("path")
## [1] "./here/is/a/path"
Sys.getenv("file")
## [1] "./here/is/a/path/file.csv"
If it is important to use parentheses rather than brace brackets, read in globals.mk, replace the parentheses with brace brackets and then write the file out again.
# write out test file - this one uses parentheses as in question
Lines <- "num = 100
path = ./here/is/a/path
file = $(path)/file.csv"
cat(Lines, file = "globals.mk")
# read globals.mk, perform () to {} substitutions, write out and then re-read
tmp <- tempfile()
L <- readLines("globals.mk")
cat(paste(chartr("()", "{}", L), collapse = "\n"), file = tmp)
readRenviron(tmp)
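After the re-read, the same lookups as in the first example should work; a quick check (expected values shown as comments, under the same assumptions as above):
Sys.getenv("num")
## [1] "100"
Sys.getenv("file")
## [1] "./here/is/a/path/file.csv"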
If the .mk file has anything other than direct variable expansion (such as more complex make-rules/tricks/functions), it might be better to trust make to do the expansion for you, and then read it in. There's a post here that I found that dumps all variable contents (after processing).
TL;DR
expand_mkvars <- function(path, aslist = FALSE) {
  stopifnot(file.exists(mk <- Sys.which("make")))
  tf <- tempfile(fileext = ".mk")
  # needed on my windows system
  tf <- normalizePath(tf, winslash = "/", mustWork = FALSE) # tempfile should suffice
  on.exit(suppressWarnings(file.remove(tf)), add = TRUE)
  writeLines(c(".PHONY: printvars",
               "printvars:",
               "\t@$(foreach V,$(sort $(.VARIABLES)), \\",
               "\t  $(if $(filter-out environment% default automatic, \\",
               "\t  $(origin $V)),$(warning $V=$($V))))"), con = tf)
  out <- system2(mk, c("-f", shQuote(path), "-f", shQuote(tf), "-n", "printvars"),
                 stdout = TRUE, stderr = TRUE)
  out <- out[grepl(paste0("^", tf), out)]
  out <- gsub(paste0("^", tf, ":[0-9]+:\\s*"), "", out)
  known_noneed <- c(".DEFAULT_GOAL", "CURDIR", "GNUMAKEFLAGS", "MAKEFILE_LIST", "MAKEFLAGS")
  out <- out[!grepl(paste0("^(", paste(known_noneed, collapse = "|"), ")="), out)]
  if (aslist) {
    spl <- strsplit(out, "=")
    nms <- sapply(spl, `[[`, 1)
    rest <- lapply(spl, function(a) paste(a[-1], collapse = "="))
    setNames(rest, nms)
  } else out
}
In action:
expand_mkvars("~/StackOverflow/karthikt.mk")
# [1] "file=./here/is/a/path/file.csv" "num=100"
# [3] "path=./here/is/a/path"
expand_mkvars("~/StackOverflow/karthikt.mk", aslist = TRUE)
# $file
# [1] "./here/is/a/path/file.csv"
# $num
# [1] "100"
# $path
# [1] "./here/is/a/path"
I have not tested on other systems, so you might need to adjust known_noneed to add extra variables that pop up. Depending on your needs, you might be able to filter more intelligently (e.g., if none of your variables lead with a capital letter), but for this example I kept it to the known-not-wanted variables that make is giving us.
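For instance, that extra filter could be as simple as this sketch (assuming the lowercase-only naming convention holds):
vals <- expand_mkvars("~/StackOverflow/karthikt.mk")
vals[grepl("^[a-z]", vals)]  # keep only entries whose names start with a lowercase letter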
The blog post suggests using a phony target of
.PHONY: printvars
printvars:
	@$(foreach V,$(sort $(.VARIABLES)), \
	  $(if $(filter-out environment% default automatic, \
	  $(origin $V)),$(warning $V=$($V))))
(the recipe lines must begin with tabs, not spaces; this is very important for make)
Unfortunately, it produces more output than you technically need:
$ /c/Rtools/bin/make.exe -f ~/StackOverflow/karthikt.mk printvars
C:/Users/r2/StackOverflow/karthikt.mk:10: .DEFAULT_GOAL=all
C:/Users/r2/StackOverflow/karthikt.mk:10: CURDIR=/Users/r2/Projects/Ford/shiny/shinyobjects/inst
C:/Users/r2/StackOverflow/karthikt.mk:10: GNUMAKEFLAGS=
C:/Users/r2/StackOverflow/karthikt.mk:10: MAKEFILE_LIST= C:/Users/r2/StackOverflow/karthikt.mk
C:/Users/r2/StackOverflow/karthikt.mk:10: MAKEFLAGS=
C:/Users/r2/StackOverflow/karthikt.mk:10: SHELL=sh
C:/Users/r2/StackOverflow/karthikt.mk:10: file=./here/is/a/path/file.csv
C:/Users/r2/StackOverflow/karthikt.mk:10: num=100
C:/Users/r2/StackOverflow/karthikt.mk:10: path=./here/is/a/path
make: Nothing to be done for 'printvars'.
so we need a little filtering, ergo the majority of code in the function.
Edit: if the readRenviron-to-envvar route is the best way for you, it would not be difficult to redirect the output of this make call to another file, parse out the relevant lines, and then call readRenviron on that new file. It seems more indirect due to the use of two temp files, but they're cleaned up, so that should be nothing to worry about.
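A minimal sketch of that redirection, reusing expand_mkvars from above (the temp-file handling is illustrative):
vals <- expand_mkvars("~/StackOverflow/karthikt.mk")  # e.g. c("file=...", "num=100", "path=...")
tf2 <- tempfile(fileext = ".Renviron")
writeLines(vals, tf2)  # the name=value lines are already in the form readRenviron expects
readRenviron(tf2)
Sys.getenv("num")      # "100"
file.remove(tf2)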

Source a script file that contains unicode (Farsi) characters

Write the text below in a buffer and save it as a .R script:
letters_fa <- c('الف','ب','پ','ت','ث','ج','چ','ح','خ','ر','ز','د')
then try these lines to source() it:
script <- "path/to/script.R"
file(script,
encoding = "UTF-8") %>%
readLines() # works fine
file(script,
encoding = "UTF-8") %>%
source() # works fine
source(script) # the Farsi letters in the environment are misrepresented
source(script,
encoding = "UTF-8") # gives error
The last line throws an error. I tried to debug it and I believe there is a bug in the source function, in the following lines:
...
loc <- utils::localeToCharset()[1L]
...
The error occurs at .Internal(parse( line.
...
exprs <- if (!from_file) {
    if (length(lines))
        .Internal(parse(stdin(), n = -1, lines, "?",
            srcfile, encoding))
    else expression()
}
else .Internal(parse(file, n = -1, NULL, "?", srcfile,
    encoding))
...
The exact error is:
Error in source(script, encoding = "UTF-8") :
script.R:2:17: unexpected INCOMPLETE_STRING
1: #' @export
2: letters_fa <- c('
^
The solution to this problem is either to change the OS locale to a native locale (e.g. Persian in this case) or to use the built-in function Sys.setlocale(locale = "Persian") to change the R session's native locale.
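For example, the locale route might look like this (a sketch; locale names are OS-specific, and "Persian" is a Windows-style name, so treat it as an assumption):
Sys.setlocale(locale = "Persian")   # switch the session's native locale
source(script, encoding = "UTF-8")  # should now parse the Farsi strings without error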
Use source without specifying the encoding, and then modify the vector's encoding with Encoding:
source(script)
letters_fa
# [1] "الÙ\u0081" "ب" "Ù¾" "ت" "Ø«"
# [6] "ج" "چ" "ح" "خ" "ر"
# [11] "ز" "د"
Encoding(letters_fa) <- "UTF-8"
letters_fa
# [1] "الف" "ب" "پ" "ت" "ث" "ج" "چ" "ح" "خ" "ر" "ز" "د"

Best way to count unique elements in a string in R

I'm still a beginner in R and I have a question!
I have a data frame of 222,000 observations and I'm interested in a specific column whose name is id. The problem is that there can be several ids separated by ',' in the same string, and I want to count the unique elements in each string (I mean in each string of the first data frame).
For example:
id                       results
0000001,0000003          2
0000002,0000002          1
0010001,0001006,0010001  2
I have used the function str_split_fixed to separate all the ids in each string and put the result in a new data frame (so now I have only one id, or nothing, per string). The problem is that there can be as many as 68 ',' so the new data frame is huge, with 68 columns and 220,000 observations, and it takes a while (maybe 15 seconds). Afterwards I used an apply function to find the unique elements.
Does someone know a more efficient way or have an idea?
Finally, I used the following code:
sapply(id, function(x)
  length(        # count items
    unique(      # that are unique
      scan(      # when arguments are presented to scan as text
        text = x, what = "", sep = ",",  # when separated by ","
        quiet = TRUE))))
But there is an error message:
Error in textConnection(text, encoding = "UTF-8") :
argument 'text' incorrect
6 textConnection(text, encoding = "UTF-8")
5 scan(text = x, what = "", sep = ",", quiet = TRUE)
4 unique(scan(text = x, what = "", sep = ",", quiet = TRUE))
3 FUN(X[[i]], ...)
2 lapply(X = X, FUN = FUN, ...)
1 sapply(id, function(x) length(unique(scan(text = x,
what = "", sep = ",", quiet = TRUE))))
My R version is:
R version 3.2.2 (2015-08-14)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X 10.10.5 (Yosemite)
locale:
[1] fr_FR.UTF-8/fr_FR.UTF-8/fr_FR.UTF-8/C/fr_FR.UTF-8/fr_FR.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] stringr_1.0.0 plyr_1.8.3
loaded via a namespace (and not attached):
[1] magrittr_1.5 tools_3.2.2 Rcpp_0.12.2 stringi_1.0-1
>
I've tried this: Encoding(id) <- "UTF-8"
But the result is:
Error in `Encoding<-`(`*tmp*`, value = "UTF-8")
and the output of dput(id) looks like this:
[9987,] "2320212,2320230"
[9988,] "4530090,4530917"
[9989,] "8532412"
[9990,] "4560292"
[9991,] "4540375"
[9992,] "3311324"
[9993,] "4540030"
[9994,] "9010000"
[9995,] "2811810"
[9996,] "3311000"
[9997,] "4540030"
[9998,] "4540215"
[9999,] "1541201"
[10000,] "2423810"
[ getOption("max.print") est atteint -- 90000 lignes omises ]
The output is huge, so I post just the end and the first line:
[9002,] "9460000"
and for dput( head(data$id) ):
"9460000,9433000", "9460000,9436000", "9460000,9437000",
"9510000", "9510010", "9510030", "9510090", "9910000", "9910020",
"9910040", "9910090", "D", "FIELD_NOT_FOUND", "I"), class = "factor")
Thanks in advance, Jef
sapply(id, function(x)
  length(        # count items
    unique(      # that are unique
      scan(      # when arguments are presented to scan as text
        text = x, what = "", sep = ",",  # when separated by ","
        quiet = TRUE))))
# --- result: the first printed line is the 'names' of the items, not the results
1  2,3,4  1,1
1      3    1
The argument text = x should allow scan to accept a character element of length 1 and break it into components at each occurrence of the separator. These get passed element by element to the anonymous function from the id vector (or row by row if they were coming from a data frame).
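Given that the dput output above shows id is a factor, a likely fix (a sketch, assuming that diagnosis) is to convert it to character first, since scan(text = ...) requires a character vector:
id <- as.character(id)  # scan(text = ...) rejects factors, hence "argument 'text' incorrect"
counts <- sapply(id, function(x)
  length(unique(scan(text = x, what = "", sep = ",", quiet = TRUE))))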

Getting example code of R functions into knitr using the helpExtract function

I want to get the example code of R functions to use in knitr. There might be an easy way, but I tried the following code using the helpExtract function, which can be obtained from here (written by @AnandaMahto). With my approach I have to check whether a function has Examples and include only those functions that do.
This is a very inefficient and naive approach. Now I'm trying to include only those functions which have Examples. I tried the following code, but it is not working as desired. How can I extract the Examples code from an R package?
\documentclass{book}
\usepackage[T1]{fontenc}
\begin{document}

<<label=packages, echo=FALSE>>=
library(ggplot2)
library(devtools)
source_gist("https://gist.github.com/mrdwab/7586769")
library(noamtools) # install_github("noamtools", "noamross")
@

\chapter{Linear Model}

<<label = NewTest1, results="asis">>=
tryCatch(
  {helpExtract(lm, section="Examples", type = "s_text");
   cat(
     "\\Sexpr{
        knit_child(
          textConnection(helpExtract(lm, section=\"Examples\", type = \"s_text\"))
          , options = list(tidy = FALSE, eval = TRUE)
        )
      }", "\n"
   )
  }
  , error=function(e) FALSE
)
@

\chapter{Modify properties of an element in a theme object}

<<label = NewTest2, results="asis">>=
tryCatch(
  {helpExtract(add_theme, section="Examples", type = "s_text");
   cat(
     "\\Sexpr{
        knit_child(
          textConnection(helpExtract(add_theme, section=\"Examples\", type = \"s_text\"))
          , options = list(tidy = FALSE, eval = TRUE)
        )
      }", "\n"
   )
  }
  , error=function(e) FALSE
)
@

\end{document}
I've done some quick work modifying the function (which I've included at this Gist). The Gist also includes a sample Rnw file (I haven't had a chance to check an Rmd file yet).
The function now looks like this:
helpExtract <- function(Function, section = "Usage", type = "m_code", sectionHead = NULL) {
  A <- deparse(substitute(Function))
  x <- capture.output(tools:::Rd2txt(utils:::.getHelpFile(utils::help(A)),
                                     options = list(sectionIndent = 0)))
  B <- grep("^_", x)                     ## section start lines
  x <- gsub("_\b", "", x, fixed = TRUE)  ## remove "_\b"
  X <- rep(FALSE, length(x))             ## create a FALSE vector
  X[B] <- 1                              ## initialize
  out <- split(x, cumsum(X))             ## create a list of sections
  sectionID <- vapply(out, function(x)   ## identify where the section starts
    grepl(section, x[1], fixed = TRUE), logical(1L))

  if (!any(sectionID)) {                 ## if the section is missing...
    ""                                   ## ... just return an empty character
  } else {                               ## else, get that list item
    out <- out[[which(sectionID)]][-c(1, 2)]

    while (TRUE) {                       ## remove the extra empty lines
      out <- out[-length(out)]           ## from the end of the file
      if (out[length(out)] != "") { break }
    }

    switch(                              ## determine the output type
      type,
      m_code = {
        before <- "```r"
        after  <- "```"
        c(sectionHead, before, out, after)
      },
      s_code = {
        before <- "<<eval = FALSE>>="
        after  <- "@"
        c(sectionHead, before, out, after)
      },
      m_text = {
        c(sectionHead, paste(" ", out, collapse = "\n"))
      },
      s_text = {
        before <- "\\begin{verbatim}"
        after  <- "\\end{verbatim}"
        c(sectionHead, before, out, after)
      },
      stop("`type` must be either `m_code`, `s_code`, `m_text`, or `s_text`")
    )
  }
}
What has changed?
A new argument sectionHead has been added. This is used to be able to specify the section title in the call to the helpExtract function.
The function checks to see whether the relevant section is available in the parsed document. If it is not, it simply returns a "" (which doesn't get printed).
Example use would be:
<<echo = FALSE>>=
mySectionHeading <- "\\section{Some cool section title}"
@

\Sexpr{knit_child(textConnection(
  helpExtract(cor, section = "Examples", type = "s_code",
              sectionHead = mySectionHeading)),
  options = list(tidy = FALSE, eval = FALSE))}
Note: Since Sexpr doesn't allow curly brackets to be used ({), we need to specify the title outside of the Sexpr step, which I have done in a hidden code chunk.
This is not a complete answer so I'm marking it as community wiki. Here are two simple lines to get the examples out of the Rd file for a named function (in this case lm). The code is much simpler than Ananda's gist in my opinion:
x <- utils:::.getHelpFile(utils::help(lm))
sapply(x[sapply(x, function(z) attr(z, "Rd_tag") == "\\examples")][[1]], `[[`, 1)
The result is a simple vector of all of the text in the Rd "examples" section, which should be easy to parse, evaluate, or include in a knitr doc.
[1] "\n"
[2] "require(graphics)\n"
[3] "\n"
[4] "## Annette Dobson (1990) \"An Introduction to Generalized Linear Models\".\n"
[5] "## Page 9: Plant Weight Data.\n"
[6] "ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)\n"
[7] "trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)\n"
[8] "group <- gl(2, 10, 20, labels = c(\"Ctl\",\"Trt\"))\n"
[9] "weight <- c(ctl, trt)\n"
[10] "lm.D9 <- lm(weight ~ group)\n"
[11] "lm.D90 <- lm(weight ~ group - 1) # omitting intercept\n"
[12] "\n"
[13] "\n"
[14] "opar <- par(mfrow = c(2,2), oma = c(0, 0, 1.1, 0))\n"
[15] "plot(lm.D9, las = 1) # Residuals, Fitted, ...\n"
[16] "par(opar)\n"
[17] "\n"
[18] "\n"
[19] "### less simple examples in \"See Also\" above\n"
Perhaps the following might be useful.
get.examples <- function(pkg = NULL) {
  suppressWarnings(f <- unique(utils:::index.search(TRUE, find.package(pkg))))
  out <- setNames(sapply(f, function(x) {
    tf <- tempfile("Rex")
    tools::Rd2ex(utils:::.getHelpFile(x), tf)
    if (!file.exists(tf)) return(invisible())
    readLines(tf)
  }), basename(f))
  out[!sapply(out, is.null)]
}
ex.base <- get.examples('base')
This returns the examples for all functions (that have documentation containing examples) within the specified vector of packages. If pkg=NULL, it returns the examples for all functions within loaded packages.
For example:
ex.base['scan']
# $scan
# [1] "### Name: scan"
# [2] "### Title: Read Data Values"
# [3] "### Aliases: scan"
# [4] "### Keywords: file connection"
# [5] ""
# [6] "### ** Examples"
# [7] ""
# [8] "cat(\"TITLE extra line\", \"2 3 5 7\", \"11 13 17\", file = \"ex.data\", sep = \"\\n\")"
# [9] "pp <- scan(\"ex.data\", skip = 1, quiet = TRUE)"
# [10] "scan(\"ex.data\", skip = 1)"
# [11] "scan(\"ex.data\", skip = 1, nlines = 1) # only 1 line after the skipped one"
# [12] "scan(\"ex.data\", what = list(\"\",\"\",\"\")) # flush is F -> read \"7\""
# [13] "scan(\"ex.data\", what = list(\"\",\"\",\"\"), flush = TRUE)"
# [14] "unlink(\"ex.data\") # tidy up"
# [15] ""
# [16] "## \"inline\" usage"
# [17] "scan(text = \"1 2 3\")"
# [18] ""
# [19] ""
# [20] ""
# [21] ""
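To execute one extracted set (a sketch), write it to a temporary file and source it:
tf <- tempfile(fileext = ".R")
writeLines(ex.base[["scan"]], tf)
source(tf, echo = TRUE)  # runs the scan() examples shown above
unlink(tf)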
