What I'd like to do:
I would like to use r-exams in the following procedure:
Provide electronic exams in PDF format to students (using exams2pdf(...))
Let the students upload an Excel file with their answers
Grade the answers (using eval_nops(...))
My Question:
Is calling the function eval_nops() the preferred way to manually grade questions in r-exams?
If not, which way is preferred?
What I have tried:
I'm aware of the exams2nops() function, and I know that it writes an .rds file where the correct answers are stored. Hence, I basically have what I need. However, I found that procedure not very straightforward, as the correct answers are buried rather deeply inside the .rds file.
Overview
You are right that there is no readily available system for administering/grading exams outside of a standard learning management system (LMS) like Moodle, Canvas, etc. R/exams does provide some building blocks for the grading, though, especially exams_eval(). This can be complemented with tools like Google Forms, etc. Below I start with the "hard facts" regarding exams_eval(), even though this is a bit technical, and then I also provide some comments regarding such approaches.
Using exams_eval()
Let us consider a concrete example
eval <- exams_eval(partial = TRUE, negative = FALSE, rule = "false2")
indicating that you want partial credits for multiple-choice exercises but that the overall points per item must not become negative. A correctly ticked box yields 1/#correct points and an incorrectly ticked box cancels 1/#false points. The only exception: if there is only one false item (which would then cancel all points), 1/2 is used instead.
The resulting object eval is a list with the input parameters (partial, negative, rule) and three functions: checkanswer(), pointvec(), and pointsum(). Imagine that you have the correct answer pattern
cor <- "10100"
The associated points for correctly and incorrectly ticked boxes would be:
eval$pointvec(cor)
## pos neg
## 0.5000000 -0.3333333
Thus, for the following answer pattern you get:
ans <- "11100"
eval$checkanswer(cor, ans)
## [1] 1 -1 1 0 0
eval$pointsum(cor, ans)
## [1] 0.6666667
The latter would still need to be multiplied by the overall points assigned to that exercise. For numeric answers you can only get 100% or 0%:
eval$pointsum(1.23, 1.25, tolerance = 0.05)
## [1] 1
eval$pointsum(1.23, 1.25, tolerance = 0.01)
## [1] 0
Similarly, string answers are either fully correct or wrong:
eval$pointsum("foo", "foo")
## [1] 1
eval$pointsum("foo", "bar")
## [1] 0
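As an aside, the rule = "false2" exception described above can be checked directly (a quick sketch, not part of the original answer): with only one false option, the denominator is capped at 2:
eval$pointvec("11110")
##   pos   neg
##  0.25 -0.50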
Exercise metainformation
To obtain the relevant pieces of information for a given exercise, you can access the metainformation from the nested list that all exams2xyz() interfaces return:
x <- exams2xyz(...)
For example, you can then extract the metainfo for the i-th random replication of the j-th exercise as:
x[[i]][[j]]$metainfo
This contains the correct $solution, the $type, and also the $tolerance, etc. Sure, this is somewhat long and inconvenient to type interactively, but it is easy enough to cycle through programmatically. This is what nops_eval(), for example, does based on the .rds file containing exactly the information in x.
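For illustration, here is a minimal sketch of such a programmatic loop (my addition, assuming x was created by one of the exams2xyz() calls above):
## collect the type and correct solution of every exercise in every replication
info <- lapply(x, function(repl) lapply(repl, function(ex) {
  list(type = ex$metainfo$type, solution = ex$metainfo$solution)
}))
## e.g., the "10100"-style answer pattern of the first multiple-choice
## exercise, as used by exams_eval() above
paste(as.integer(x[[1]][[1]]$metainfo$solution), collapse = "")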
Administering exams without a full LMS
My usual advice here is to try to leverage your university's services (if available, of course). Yes, there can be problems with bandwidth/stability, etc., but you can run into all of the same issues when running your own system (been there, done that). Specifically, a discussion of Moodle vs. PDF exams mailed around is available here:
Create fillable PDF form with exams2nops
http://www.R-exams.org/general/distancelearning/#replacing-written-exams
If I were to provide my exams outside of an LMS, though, I would use HTML, not PDF. In HTML it is much easier to embed additional information (data, links, etc.) than in PDF. Also, HTML can be viewed on mobile devices much more easily.
For collecting the answers, some R/exams users employ Google forms, see e.g.:
https://R-Forge.R-project.org/forum/forum.php?thread_id=34076&forum_id=4377&group_id=1337. Others have been interested in using learnr or webex for that:
http://www.R-exams.org/general/distancelearning/#going-forward.
Regarding privacy, though, I would be very surprised if any of these are better than using the university's LMS.
Related
I am working on simple code to find the square root of the following elements:
dat <- c(4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225)
num <- sample(dat, 1, replace = FALSE)
Parts of the seed file are configured like this:
examen01<-c("SinRad.Rmd")
semilla<-sample(100:1000, 1)
set.seed(semilla)
exams2moodle(examen01,n=14,svg=TRUE,name="SinRadConRad",
encoding="UTF-8",dir="salida",edir="ejercicios",
mchoice = list(shuffle = TRUE,answernumbering = "ABCD",
solution = FALSE,
eval = list(partial = TRUE,rule = "none")))
My intention is that, with n = 14, every value in the dat vector appears among the generated answers, but I see that some answers are repeated.
How can I get 14 versions covering the 14 possibilities, without repeating or missing any?
Thank you very much
The exams2xyz() functions have been written to draw large numbers of random variations from sets of exercises. There is no dedicated functionality that draws a small number of deterministic variations. So in your case I would just draw, say, a hundred variations from the exercise template, even though it can only yield 14 distinct versions. Sure, this wastes a bit of memory, but not so much that I would worry about it.
Having said that, it is possible to set up a temporary file with a specific version of an exercise by using the expar() function. For example, expar("SinRad.Rmd", num = 4) would yield an exercise where the num parameter has been fixed to 4. In the same way you can cycle through the other 13 numbers you want. In the following post we also provide an expargrid() function that does this for all possible combinations of parameters: Making deterministic versions of a parametrized question
Then you can run exams2moodle() on the resulting 14 deterministic exercise files.
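For concreteness, here is a minimal sketch of that workflow (untested; it assumes, as in the question, that the parameter is called num and the exercise file is "SinRad.Rmd"):
library("exams")
dat <- c(4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225)
## expar() returns the path of a temporary exercise file with num fixed
files <- sapply(dat, function(k) expar("SinRad.Rmd", num = k))
## one replication per deterministic exercise file
exams2moodle(files, n = 1, name = "SinRadConRad")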
I'm new to R and package development, so bear with me. I am writing test cases to keep the package in line with standard practices. But I'm confused: if I do the checks in testthat, should I not perform if/else checks in the package function?
my_function <- function(dt_genetic, dt_gene, dt_snpBP) {
  if (!(is.data.table(dt_genetic) && is.data.table(dt_gene) && is.data.table(dt_snpBP))) {
    stop("data format unacceptable")
  }
  ## similarly more checks on column names and such
} ## function ends
In my test-data_integrity.R
## create sample data.tables (requires data.table to be loaded)
library(data.table)
test_gene_coord <- data.table(GENE = c("ABC", "XYG", "alpha"), START = c(10, 200, 320), END = c(101, 250, 350))
test_snp_pos <- data.table(SNP = c("SNP1", "SNP2", "SNP3"), BP = c(101, 250, 350))
test_snp_gene <- data.table(SNP = c("SNP1", "SNP2", "SNP3"), GENE = c("ABC", "BRCA1", "gamma"))
## check data type
test_that("data types correct works", {
expect_is(test_data_table,'data.table')
expect_is(test_gene_coord,'data.table')
expect_is(test_snp_pos,'data.table')
expect_is(test_snp_gene,'data.table')
expect_is(test_gene_coord$START, 'numeric')
expect_is(test_gene_coord$END, 'numeric')
expect_is(test_snp_pos$BP, 'numeric')
})
## check column names
test_that("column names works", {
expect_named(test_gene_coord, c("GENE","START","END"))
expect_named(test_snp_pos, c("SNP","BP"))
expect_named(test_snp_gene, c("SNP","GENE"))
})
When I run devtools::test(), all tests pass, but does that mean I should not test within my function?
Pardon me if this seems naive but this is confusing as this is completely alien to me.
Edited: data.table if check.
(This is an expansion on my comments on the question. My comments are from a quasi-professional programmer; some of what I say here may be good "in general" but not perfectly complete from a theoretical standpoint.)
There are many "types" of tests, but I'll focus on distinguishing between "unit-tests" and "assertions". For me, the main difference is that unit-tests are typically run by the developer(s) only, and assertions are run at run-time.
Assertions
When you mention adding tests to your function, that sounds to me like assertions: programmatic statements that an object meets specific property assumptions. These are often necessary when the data is provided by the user or comes from an external source (e.g., a database), where the size or quality of the data is not known in advance.
There are "formal" packages for assertions, including assertthat, assertr, and assertive; while I have little experience with any of them, there is also sufficient support in base R that these aren't strictly required. The most basic method is
if (!inherits(mtcars, "data.table")) {
stop("'obj' is not 'data.table'")
}
# Error: 'obj' is not 'data.table'
which gives you absolute control at the expense of several lines of code. There's another function which shortens this a little:
stopifnot(inherits(mtcars, "data.table"))
# Error: inherits(mtcars, "data.table") is not TRUE
Multiple conditions can be provided, and all must be TRUE to pass. (Unlike many R conditionals such as if, each statement must resolve to exactly TRUE: stopifnot(3) does not pass.) In R < 4.0 the error messages could not be customized, but starting with R 4.0 one can name the conditions:
stopifnot(
"mtcars not data.frame" = inherits(mtcars, "data.frame"),
"mtcars data.table error" = inherits(mtcars, "data.table")
)
# Error: mtcars data.table error
In some programming languages, these assertions are more declarative/deliberate so that compilation can optimize them out of a production executable. In this sense, they are useful during development, but in production it is assumed that steps which worked before no longer need validation. I believe there is no automatic way to do this in R (especially since R is generally not "compiled into an executable"), but one could fashion a function to mimic this behavior:
myfunc <- function(x, ..., asserts = getOption("run_my_assertions", FALSE)) {
# this one only runs when the user explicitly says "asserts=TRUE"
if (asserts) stopifnot("'x' not a data.frame" = inherits(x, "data.frame"))
# this assertion runs all the time
stopifnot("'x' not a data.table" = inherits(x, "data.table"))
}
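For example (a usage sketch of the hypothetical myfunc() above):
myfunc(mtcars)
# Error: 'x' not a data.table
myfunc(data.table::as.data.table(mtcars), asserts = TRUE)
# (both assertions pass silently)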
I have not seen that logic or flow often in R packages.
Regardless, my assumption about assertions is that those not optimized out (due to compilation or user arguments) execute every time the function runs. This tends to ensure a "safer" flow, and it is a good idea especially for less experienced developers who do not have the experience ("have not been burned enough") to know how many ways certain calls can go wrong.
Unit Tests
These are a bit different, both in their purpose and runtime effect.
First and foremost, unit-tests are not run every time a function is used. They are typically defined in a completely different file, not within the function at all[^1]. They are deliberate sets of calls to your functions, testing/confirming specific behaviors given certain inputs.
With the testthat package, R scripts (that match certain filename patterns) in the package's ./tests/testthat/ sub-directory will be run on command as unit-tests. (Other unit-test packages exist.) (Unit-tests do not require that they operate on a package; they can be located anywhere, and run on any set of files or directories of files. I'm using a "package" as an example.)
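For orientation, the conventional skeleton looks like this ("mypackage" is a placeholder); the individual test files then live in tests/testthat/ with names like test-data_integrity.R, as in the question:
## tests/testthat.R -- executed by R CMD check and devtools::test()
library(testthat)
library(mypackage)
test_check("mypackage")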
Side note: it is certainly feasible to include some of the testthat tools within your function for runtime validation as well. For instance, one might replace stopifnot(inherits(x, "data.frame")) with expect_is(x, "data.frame"); it will fail with non-frames and pass with all three types of frames tested below. I don't know that this is always the best way to go, and I haven't seen it used in packages I use. (That doesn't mean it isn't there: if you see testthat in a package's "Imports:", then it's possible.)
The premise here is not validation of runtime objects. The premise is validation of your function's performance given very specific inputs[^2]. For instance, one might define a unit-test to confirm that your function operates equally well on frames of class "data.frame", "tbl_df", and "data.table". (This is not a throw-away unit-test, btw.)
Consider a meek function that one would presume can work equally well on any data.frame-like object:
func <- function(x, nm) head(x[nm], n = 2)
To test that this accepts various types, one might simply call it on the console with:
func(mtcars, "cyl")
# cyl
# Mazda RX4 6
# Mazda RX4 Wag 6
When a colleague complains that this function isn't working, you might wonder whether they're using the tidyverse (and tibble) or data.table, so you can quickly test on the console:
func(tibble::as_tibble(mtcars), "cyl")
# # A tibble: 2 x 1
# cyl
# <dbl>
# 1 6
# 2 6
func(data.table::as.data.table(mtcars), "cyl")
# Error in `[.data.table`(x, nm) :
# When i is a data.table (or character vector), the columns to join by must be specified using 'on=' argument (see ?data.table), by keying x (i.e. sorted, and, marked as sorted, see ?setkey), or by sharing column names between x and i (i.e., a natural join). Keyed joins might have further speed benefits on very large data due to x being sorted in RAM.
So now you know where the problem lies (if not yet how to fix it). Testing this "as is" with data.table, one might think to try something like this (obviously wrong) fix:
func <- function(x, nm) head(x[,..nm], n = 2)
func(data.table::as.data.table(mtcars), "cyl")
# cyl
# 1: 6
# 2: 6
While this works, unfortunately it now fails for the other two frame-like objects.
The answer to this dilemma is to write tests so that when you make a change to your function, you will know immediately if previously satisfied property assumptions now fail. Had all three of those checks been incorporated into a unit-test, one might have done something such as
library(testthat)
test_that("func works with all frame-like objects", {
expect_silent(func(mtcars, "cyl"))
expect_silent(func(tibble::as_tibble(mtcars), "cyl"))
expect_silent(func(data.table::as.data.table(mtcars), "cyl"))
})
# Error: Test failed: 'func works with all frame-like objects'
Given some research, you find one method that you think will satisfy all three frame-like objects:
func <- function(x, nm) head(subset(x, select = nm), n = 2)
And then run your unit-tests again:
test_that("func works with all frame-like objects", {
expect_silent(func(mtcars, "cyl"))
expect_silent(func(tibble::as_tibble(mtcars), "cyl"))
expect_silent(func(data.table::as.data.table(mtcars), "cyl"))
})
(No output ... silence is golden.)
Similar to many things in programming, there are many opinions on how to organize, fashion, or even when to create these unit-tests. Many of these opinions are right for somebody. One strategy that I tend to start with is this:
since I know that my functions can be used on all three frame-like objects, I often preemptively set up a test given one object of each type (you'd be surprised at some of the lurking differences between them);
when I find or receive a bug report, one of the first things I do after confirming the bug is write a test that triggers that bug, given the minimum inputs required to do so; then I fix the bug, and run my unit-tests to ensure that this new test now passes (and no other test now fails)
Experience will dictate types of tests to write preemptively before the bugs even come.
Tests don't always have to be about "no errors", by the way. They can test for a lot of things (a combined sketch follows this list):
silence (no errors)
expected messages, warnings, or stop errors (whether internally generated or passed from another function)
output class (matrix or numeric), dimensions, attributes
expected values (returning 3 instead of 3.14 might be a problem)
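For instance, here is a sketch combining several of these checks for the final func() above (my addition, using the same testthat expectations as elsewhere in this answer):
library(testthat)
test_that("func returns the expected structure and values", {
  res <- func(mtcars, "cyl")
  expect_is(res, "data.frame")                # output class
  expect_equal(dim(res), c(2L, 1L))           # dimensions
  expect_equal(res$cyl, c(6, 6))              # expected values
  expect_error(func(mtcars, "not_a_column"))  # expected stop error
})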
Some will say that unit-tests are no fun to write, and abhor the effort spent on them. While I don't disagree that unit-tests are not fun, I have burned myself countless times when a simple fix to a function inadvertently broke several other things ... and since I deployed the "simple fix" without applicable unit-tests, I just shifted the bug reports from "this title has 'NA' in it" to "the app crashes and everybody is angry" (true story).
For some packages, unit-testing can be done in moments; for others, it may take minutes or hours. Due to complexity in functions, some of my unit-tests deal with "large" data structures, so a single test takes several minutes to reveal its success. Most of my unit-tests are relatively instantaneous with inputs of vectors of length 1 to 3, or frames/matrices with 2-4 rows and/or columns.
This is far from a complete document on testing. There are books, tutorials, and countless blogs about different techniques. One good reference is the Testing chapter of Hadley's book on R Packages: http://r-pkgs.had.co.nz/tests.html. I like it, but it is far from the only one.
[^1] Tangentially, I believe that one power the roxygen2 package affords is the convenience of storing a function's documentation in the same file as the function itself. Its proximity "reminds" me to update the docs when I'm working on the code. It would be nice if we could determine a sane way to similarly add formal testthat (or similar) unit-tests to the function file itself. I've seen (and at times used) informal unit-tests in the form of specific code in the roxygen2 @examples section: when the file is rendered to an .Rd file, any errors in the example code will alert me on the console. I know that this technique is sloppy and hasty, and in general I only suggest it when more formal unit-testing will not be done. It does tend to make the help documentation more verbose than it needs to be.
[^2] I said above "given very specific inputs": an alternative is something called "fuzzing", a technique where functions are called with random or invalid input. I believe this is very useful for searching for stack overflow, memory-access, or similar problems that cause a program to crash and/or execute the wrong code. I've not seen this used in R (ymmv).
Consider creating exams using the exams package in R.
When using exams2nops() there is a parameter showpoints that, when set to TRUE, will show the points of each exercise. However, for exams2pdf() this parameter is not available.
How to display the points per exercise when using exams2pdf?
(The answer below is adapted from the R/exams forum at https://R-Forge.R-project.org/forum/forum.php?thread_id=33884&forum_id=4377&group_id=1337.)
There is currently no built-in solution to automatically display the number of points in exams2pdf(). The points= argument only stores the number of points in the R object that exams2pdf() creates (as in other exams2xyz() interfaces) but not in the individual PDF files.
Thus, if you want the points to be displayed, you need to do it yourself in some way. A simple solution would be to include them in the individual exercises already, possibly depending on the kind of interface used, e.g., something like this for an .Rmd exercise:
pts <- 17
pts_string <- if(match_exams_call() == "exams2pdf") {
sprintf("_(%s points)_", pts)
} else {
""
}
And then at the beginning of the "Question":
Question
========
`r pts_string` And here starts the question text...
Finally, in the meta-information:
expoints: `r pts`
This always includes the desired points in the meta-information but only displays them in the question when using exams2pdf(...). This is very flexible and can be easily customized further. The only downside is that it doesn't react to the exams2pdf(..., points = ...) argument.
In .Rnw exercises one would have to use \Sexpr{...} instead of `r ...`. Also, the pts_string should then be something like sprintf("\\emph{(%s points)}", pts).
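Pieced together, the .Rnw variant might look like this (a sketch, untested, mirroring the .Rmd version above):
<<echo=FALSE, results=hide>>=
pts <- 17
pts_string <- if(match_exams_call() == "exams2pdf") {
  sprintf("\\emph{(%s points)}", pts)
} else {
  ""
}
@
\begin{question}
\Sexpr{pts_string} And here starts the question text...
\end{question}
%% \expoints{\Sexpr{pts}}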
Finally, a more elaborate solution would be to create a suitable \newcommand in the .tex template you use. If all exercises have the same number of points, this is not hard to do. But if all the different exercises could have different numbers of points, it would need to be more involved.
The main reason for supporting this in exams2nops() but not exams2pdf() is that the former has a rather restrictive format and vocabulary, while the point of the latter is to give users all freedom regarding layout, language, etc. Hence, I didn't see a solution that would be simple enough yet flexible enough to cover all use cases of exams2pdf().
Is there a way to include open-ended/free-form questions that are ungraded or skipped by r-exams?
Use case: we want to have an exam with mostly multiple choice questions using the package and its grading capability, but also have 5-10 open ended questions that are printed in the same exam. Ideally, r-exams would provide the grade for the first MCQ section, and we could manually add the grade of the open-ended questions.
I forked the package and made some small changes that allow one to control how many questions are printed on the first page and to remove the string-question pages.
The new parameters are number_of_closed_questions and include_string_pages. It is far from ideal, but it works for me.
As an example, let us have 6 multiple-choice/single-choice questions and one essay question (essayreg):
# install devtools if you do not have it!
# install the fork
devtools::install_github("johannes-titz/exams")
library("exams")
myexam <- list(
"tstat2.Rnw",
"ttest.Rnw",
"relfreq.Rnw",
"anova.Rnw",
c("boxplots.Rnw", "scatterplot.Rnw"),
"cholesky.Rnw",
"essayreg.Rnw"
)
set.seed(403)
ex1 <- exams2nops(myexam, n = 2,
dir = "nops_pdf", name = "demo", date = "2015-07-29",
number_of_closed_questions = 6, include_string_pages = FALSE)
This will produce only 6 questions on the front page (instead of 7) and will also exclude the string-question pages.
If you want the normal behavior, just omit the new parameters. Obviously, one has to set the number of closed questions manually, so one should be really careful.
I guess one could automatically detect how many string questions are loaded and from this determine the number of open-ended/closed-ended questions, but I currently do not have the time to write this and the presented solution is usable for my case.
I am not 100% sure that the scans will work this way, but I assume there should not be any big problems, as I did not really change much. Maybe Achim Zeileis could comment on that? See my commit: https://github.com/johannes-titz/exams/commit/def044e7e171ea032df3553acec0ea0590ae7f5e
There is built-in support for up to three open-ended "string" questions that are printed on a separate sheet that has to be marked by hand. The resulting sheet can then be scanned and evaluated along with the main sheet using nops_scan() and nops_eval(). It's on the wish list for the package to extend that number but it hasn't been implemented yet.
Another "trick" you could do is to use the pages= argument of exams2nops() to include a separate PDF sheet with the extra questions. But this would have to be handled completely separately "by hand" afterwards.
I'm reading the R FAQ source in texinfo, and I think it would be easier to manage and extend if it were parsed as an R structure. There are several existing examples related to this:
the fortunes package
bibtex entries
Rd files
each with some desirable features.
In my opinion, FAQs are underused in the R community because they lack i) easy access from the R command line (i.e., through an R package); ii) powerful search functions; iii) cross-references; iv) extensions for contributed packages. Drawing ideas from the bibtex and fortunes packages, we could conceive a new system where:
FAQs can be searched from R. Typical calls would resemble the fortune() interface: faq("lattice print"), or faq() #surprise me!, faq(51), faq(package="ggplot2").
Packages can provide their own FAQ.rda, the format of which is not clear yet (see below).
Sweave/knitr drivers are provided to output nicely formatted Markdown/LaTeX, etc.
QUESTION
I'm not sure what the best input format is, however, either for converting the existing FAQ or for adding new entries.
It is rather cumbersome to use R syntax with a tree of nested lists (or an ad hoc S3/S4/reference class or structure), e.g.:
\list(title = "Something to be \\escaped", entry = "long text with quotes, links and broken characters", category = c("windows", "mac", "test"))
Rd documentation, even though not an R structure per se (it is more a subset of LaTeX with its own parser), can perhaps provide a more appealing example of an input format. It also has a set of tools to parse the structure in R. However, its current purpose is rather specific and different, being oriented towards general documentation of R functions, not FAQ entries. Its syntax is not ideal either; I think a more modern markup, something like Markdown, would be more readable.
Is there something else out there, maybe examples of parsing markdown files into R structures? An example of deviating Rd files away from their intended purpose?
To summarise
I would like to come up with:
1- a good design for an R structure (class, perhaps) that would extend the fortunes package to more general entries such as FAQ items
2- a more convenient format to enter new FAQs (rather than the current texinfo format)
3- a parser, either written in R or some other language (bison?) to convert the existing FAQ into the new structure (1), and/or the new input format (2) into the R structure.
Update 2: in the last two days of the bounty period I got two answers, both interesting but completely different. Because the question is quite broad (arguably ill-posed), neither answer provides a complete solution, so I will not (for now, anyway) accept an answer. As for the bounty, I'll award it to the most up-voted answer before the bounty expires, wishing there were a way to split it more equally.
(This addresses point 3.)
You can convert the texinfo file to XML
wget http://cran.r-project.org/doc/FAQ/R-FAQ.texi
makeinfo --xml R-FAQ.texi
and then read it with the XML package.
library(XML)
doc <- xmlParse("R-FAQ.xml")
## extract each <node> element (one per FAQ entry) as a title/contents pair
r <- xpathSApply(doc, "//node", function(u) {
  list(list(
    title = xpathSApply(u, "nodename", xmlValue),
    contents = as(u, "character")
  ))
})
free(doc)
free(doc)
But it is much easier to convert it to plain text
makeinfo --plaintext R-FAQ.texi > R-FAQ.txt
and parse the result manually.
doc <- readLines("R-FAQ.txt")
# Split the document into questions
# i.e., around lines like ****** or ======.
i <- grep("[*=]{5}", doc) - 1
i <- c(1,i)
j <- rep(seq_along(i)[-length(i)], diff(i))
stopifnot(length(j) == length(doc))
faq <- split(doc, j)
# Clean the result: since the questions
# are in the subsections, we can discard the sections.
faq <- faq[ sapply(faq, function(u) length(grep("[*]", u[2])) == 0) ]
# Use the result
cat(faq[[ sample(seq_along(faq),1) ]], sep="\n")
I'm a little unclear on your goals. You seem to want all the R-related documentation converted into some format that R can manipulate, presumably so that one can write R routines to extract information from the documentation more effectively.
There seem to be three assumptions here.
1) That it will be easy to convert these different document formats (texinfo, Rd files, etc.) to some standard form with (I emphasize) some implicit uniform structure and semantics.
Because if you cannot map them all to a single structure, you'll have to write separate R tools for each type, and perhaps for each individual document, and then the post-conversion tool work will overwhelm the benefit.
2) That R is the right language in which to write such document-processing tools. I suspect you're a little biased towards R because you work in R and don't want to contemplate "leaving" the development environment to get better information about working with R. I'm not an R expert, but I think R is mainly a numerical language, and it does not offer any special help for string handling, pattern recognition, natural language parsing, or inference, all of which I'd expect to play an important part in extracting information from the converted documents, which largely contain natural language. I'm not suggesting a specific alternative language (Prolog??), but if you succeed with the conversion to normal form (task 1), you might be better off carefully choosing the target language for processing.
3) That you can actually extract useful information from those structures. Library science was what the 20th century tried to push; now we're all into "information retrieval" and "data fusion" methods. But in fact, reasoning about informal documents has defeated most attempts to do it. There are no obvious systems that organize raw text and extract deep value from it (IBM's Jeopardy-winning Watson system being the apparent exception, but even there it isn't clear what Watson "knows"; would you want Watson to answer the question "Should the surgeon open you with a knife?" no matter how much raw text you gave it?). The point is that you might succeed in converting the data, but it isn't clear what you can successfully do with it.
All that said, most markup systems on text have markup structure and raw text. One can "parse" those into tree-like structures (or graph-like structures, if you assume certain things are reliable cross-references; texinfo certainly has these). XML is widely pushed as a carrier for such parsed structures, and, being able to represent arbitrary trees or graphs, it is ... OK ... for capturing them. [People then push RDF or OWL or some other knowledge-encoding system that uses XML, but this doesn't change the problem; you pick a canonical target independent of R.] So what you really want is something that will read the various marked-up structures (texinfo, Rd files) and spit out XML or equivalent trees/graphs. Here I think you are doomed to building separate O(N) parsers to cover all N markup styles; how else would a tool know what the markup (and therefore the parse) should be? (You can imagine a system that could read marked-up documents when given a description of the markup, but even this is O(N): somebody still has to describe the markup.) Once the documents are parsed into this uniform notation, you can then use an easily built R parser to read the XML (assuming one doesn't already exist), or, if R isn't the right answer, parse it with whatever the right answer is.
There are tools that help you build parsers and parse trees for arbitrary languages (and even translators from the parse trees to other forms). ANTLR is one; it is used by enough people that you might even accidentally find a texinfo parser somebody already built. Our DMS Software Reengineering Toolkit is another; after parsing, DMS will export an XML document with the parse tree directly (but it won't necessarily be in that uniform representation you ideally want). These tools will likely make it relatively easy to read the markup and represent it in XML.
But I think your real problem will be deciding what you want to extract and do, and then finding a way to do that. Unless you have a clear idea of how to do the latter, building all the up-front parsers just seems like a lot of work with unclear payoff. Maybe you have a simpler goal ("manage and extend", but those words can hide a lot) that's more doable.