Questions exported to BBLearn are imported into individual pools, not one pool - r-exams

I'm using the latest version of R/Exams. When I export questions to Blackboard (exams2blackboard) in a ZIP file, then import the ZIP file into Blackboard in the "Test" page (to build a test) or the "Pool" page (to build a pool), each question is imported into its own individual pool, rather than all in a single pool. I must then create a new pool and manually copy each question from its own pool to the new pool, then delete the single-question pools one by one. It's very inconvenient and time-consuming. I don't know if this is a problem of Blackboard or an issue of R/Exams. Is there any way to export questions to a single pool in Blackboard?

The default behavior of R/exams is designed for dynamic exercises, where you want to draw a (potentially large) number of random variations of each exercise. Hence, the random variations of one exercise form one "section", and each section is imported into a separate "pool" in Blackboard.
From your description I suspect that you have static questions and want to include each question exactly once within the same section/pool. This is possible by supplying the exercise files not as a vector or a list but as a matrix, e.g.:
library("exams")
exm <- cbind(c("capitals.Rmd", "swisscapital.Rmd", "switzerland.Rmd"))
exm
## [,1]
## [1,] "capitals.Rmd"
## [2,] "swisscapital.Rmd"
## [3,] "switzerland.Rmd"
exams2blackboard(exm)
This creates a single section/pool with the three exercises in the order provided in the column of the matrix. It would also be possible to make a matrix with several columns, each of which would then lead to a separate section.
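For example, a hypothetical two-column matrix (reusing the file names from above purely for illustration, and untested just like the example below) would follow the same logic, with each column becoming its own section/pool:
exm2 <- cbind(
  c("capitals.Rmd", "swisscapital.Rmd"),
  c("switzerland.Rmd", "capitals.Rmd")
)
exams2blackboard(exm2)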
(Disclaimer: Lacking access to Blackboard, I couldn't test this myself; I have only inspected the XML code in the resulting ZIP, which looked OK.)


How to grade exams/questions manually?

What I'd like to do:
I would like to use r-exams in the following procedure:
Provide electronic exams in PDF format to students (using exams2pdf(...))
Let the students upload an Excel file with their answers
Grade the answers (using eval_nops(...))
My Question:
Is calling the function eval_nops() the preferred way to manually grade questions in r-exams?
If not, which way is preferred?
What I have tried:
I'm aware of the exams2nops() function, and I know that it gives back an .rds file where the correct answers are stored. Hence, I basically have what I need. However, I found that procedure not very straightforward, as the correct answers are buried rather deeply inside the .rds file.
Overview
You are right that there is no readily available system for administering/grading exams outside of a standard learning management system (LMS) like Moodle or Canvas, etc. R/exams does provide some building blocks for the grading, though, especially exams_eval(). This can be complemented with tools like Google forms etc. Below I start with the "hard facts" regarding exams_eval() even though this is a bit technical. But then I also provide some comments regarding such approaches.
Using exams_eval()
Let us consider a concrete example
eval <- exams_eval(partial = TRUE, negative = FALSE, rule = "false2")
indicating that you want partial credits for multiple-choice exercises but that the overall points per item must not become negative. A correctly ticked box yields 1/#correct points and an incorrectly ticked box -1/#false points. The only exception is when there is just one false item (a single wrong tick would then cancel all points); in that case -1/2 is used instead.
The resulting object eval is a list with the input parameters (partial, negative, rule) and three functions: checkanswer(), pointvec(), and pointsum(). Imagine that you have the correct answer pattern
cor <- "10100"
The associated points for correctly and incorrectly ticked boxes would be:
eval$pointvec(cor)
## pos neg
## 0.5000000 -0.3333333
Thus, for the following answer pattern you get:
ans <- "11100"
eval$checkanswer(cor, ans)
## [1] 1 -1 1 0 0
eval$pointsum(cor, ans)
## [1] 0.6666667
The latter would still need to be multiplied by the overall points assigned to that exercise. For numeric answers you can only get 100% or 0%:
eval$pointsum(1.23, 1.25, tolerance = 0.05)
## [1] 1
eval$pointsum(1.23, 1.25, tolerance = 0.01)
## [1] 0
Similarly, string answers are either correct or false:
eval$pointsum("foo", "foo")
## [1] 1
eval$pointsum("foo", "bar")
## [1] 0
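As noted above, the fraction from pointsum() still needs to be scaled by the points assigned to the exercise. A minimal sketch (the 3 points are just an illustrative value):
pts <- 3
pts * eval$pointsum("10100", "11100")
## [1] 2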
Exercise metainformation
To obtain the relevant pieces of information for a given exercise, you can access the metainformation from the nested list that all exams2xyz() interfaces return:
x <- exams2xyz(...)
For example, you can then extract the metainfo for the i-th random replication of the j-th exercise as:
x[[i]][[j]]$metainfo
This contains the correct $solution, the $type, and also the $tolerance etc. Sure, this is somewhat long and inconvenient to type interactively, but it should be easy enough to cycle through programmatically. This is, for example, what nops_eval() does, based on the .rds file containing exactly the information in x.
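A minimal sketch of such a programmatic loop (assuming x is the nested list returned by one of the exams2xyz() interfaces, as above):
## collect solution, type, and tolerance for every exercise in every replication
meta <- lapply(x, function(exam) {
  lapply(exam, function(ex) ex$metainfo[c("solution", "type", "tolerance")])
})
meta[[1]][[1]]$solution  ## correct solution of the first exercise in the first exam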
Administering exams without a full LMS
My usual advice here is to try to leverage your university's services (if available, of course). Yes, there can be problems with bandwidth/stability etc., but you can run into all of the same problems if you're running your own system (been there, done that). Specifically, a discussion of Moodle vs. PDF exams mailed around is available here:
Create fillable PDF form with exams2nops
http://www.R-exams.org/general/distancelearning/#replacing-written-exams
If I were to provide my exams outside of an LMS I would use HTML, though, and not PDF. In HTML it is much easier to embed additional information (data, links, etc.) than in PDF. Also, HTML can be viewed on mobile devices much more easily.
For collecting the answers, some R/exams users employ Google forms, see e.g.:
https://R-Forge.R-project.org/forum/forum.php?thread_id=34076&forum_id=4377&group_id=1337. Others have been interested in using learnr or webex for that:
http://www.R-exams.org/general/distancelearning/#going-forward.
Regarding privacy, though, I would be very surprised if any of these are better than using the university's LMS.

How do I insert text before a group of exercises in an exam?

I am very new to R and to R/exams. I've finally figured out basic things like compiling a simple exam with exams2pdf and exams2canvas, and I've figured out how to arrange exercises such that one group of exercises gets randomized in the exam and others don't.
In my normal written exams, sometimes I have a group of exercises that require some introductory text (e.g., a brief case study on which the next few questions are based, or a specific set of instructions for the questions that follow).
How do I create this chunk of text using R/exams and Rmd files?
I can't figure out if it's a matter of creating a particular Rmd file and then simply adding that to the list when creating the exam (like a dummy file of sorts that only shows text, but isn't numbered), or if I have to do something with the particular tex template I'm using.
There's a post on R-forge about embedding a plain LaTeX file between exercises that seems to get at what I'm asking, but I'm using Rmd files to create exercises, not Rnw files, and so, frankly, I just don't understand it.
Thank you for any help.
There are two strategies for this:
1. Separate exercise files in the same sequence
Always use the same sequence of exercises, say, ex1.Rmd, ex2.Rmd, ex3.Rmd, where ex1.Rmd creates and describes the setting and ex2.Rmd and ex3.Rmd simply re-use the variables created internally by ex1.Rmd. In the exams2xyz() interface you have to ensure that all exercises are processed in the same environment, e.g., the global environment:
exams2pdf(c("ex1.Rmd", "ex2.Rmd", "ex3.Rmd"), envir = .GlobalEnv)
For .Rnw exercises this is not necessary because they are always processed in the global environment anyway.
2. Cloze exercises
Instead of separate exercise files, combine everything into a single "cloze" exercise, say ex123.Rmd, with three sub-items. For a simple exercise with two sub-items, see: http://www.R-exams.org/templates/lm/
Which strategy to use?
For exams2pdf() both strategies work, and it is more a matter of taste whether one prefers all exercises together in a single file or split across separate files. However, for other exams2xyz() interfaces only one or none of these strategies works:
exams2pdf(): 1 + 2
exams2html(): 1 + 2
exams2nops(): 1
exams2moodle(): 2
exams2openolat(): 2
exams2blackboard(): -
exams2canvas(): -
Basically, strategy 1 is only guaranteed to work for interfaces that generate separate files for separate exams, like exams2pdf(), exams2nops(), etc. However, for interfaces that create pools of exercises for learning management systems, like exams2moodle(), exams2canvas(), etc., it typically cannot be assured that the same random replication is drawn for all three exercises. (Thus, if there are two random replications per exercise, A and B, participants might not get A/A/A or B/B/B but A/B/A.)
Hence, if ex1/2/3 are multiple-choice exercises that you want to print and scan automatically, then you could use exams2nops() in combination with strategy 1. However, strategy 2 would not work because cloze exercises cannot be scanned automatically in exams2nops().
In contrast, if you want to use Moodle, then exams2moodle() can be combined with strategy 2, while strategy 1 would not work (see above).
As you are interested in Canvas export: in Canvas neither of the two strategies works. It doesn't support cloze exercises, and to the best of my knowledge it is not straightforward to ensure that exercises are sampled "in sync".

arcmap network analyst iteration over multiple files using model builder

I have 10+ files that I want to add to ArcMap then do some spatial analysis in an automated fashion. The files are in csv format which are located in one folder and named in order as "TTS11_path_points_1" to "TTS11_path_points_13". The steps are as follows:
Make XY event layer
Export the XY table to a point shapefile using the feature class to feature class tool
Project the shapefiles
Snap the points to another line shapefile
Make a Route layer - network analyst
Add locations to stops using the output of step 4
Solve to get routes between points based on a RouteName field
I tried to attach a snapshot of the model builder to show the steps visually but I don't have enough points to do so.
I have two problems:
How do I iterate this procedure over the number of files that I have?
How do I make sure that the output has a different name each time so it doesn't overwrite the one from the previous iteration?
Your help is much appreciated.
Once you're satisfied with the way the model works on a single input CSV, you can batch the operation 10+ times, manually adjusting the input/output files. This easily addresses your second problem, since you're controlling the output name.
You can use an iterator in your ModelBuilder model -- specifically, Iterate Files. The iterator would be the first input to the model, and has two outputs: File (which you link to other tools), and Name. The latter is a variable which you can use in other tools to control their output -- for example, you can set the final output to C:\temp\out%Name% instead of just C:\temp\output. This can be a little trickier, but once it's in place it tends to work well.
For future reference, gis.stackexchange.com is likely to get you a faster response.

How can I check that new data extracts have the same structure?

I work for a research consortium with a web-based data management system that's managed by another agency. I can download the underlying data from that system as a collection of CSV files. Using R and knitr, I've built a moderately complex reporting system on top of those files. But every so often, the other agency changes the format of the data extracts and blows up my reports (or worse, changes it in a subtle yet nefarious way that I don't notice for weeks).
They'll probably never notify me when these things happen, so I suppose I should be testing more. I'd like to start by testing that those CSV files have the same structure each time (but allowing different numbers of rows as we collect more data). What's the best way to do that? R is my preferred tool but I'm interested in hearing about others that are free and on Windows.
If your files are just CSVs, here is an example (assuming you keep a reference file around):
reference.file <- read.csv("ref.csv")
new.file <- read.csv("new.csv")
## extract the "structure" of a data frame: column classes plus all
## attributes except row.names (which changes as rows are added)
struct.extract <- function(df) {
  list(
    vapply(df, class, character(1L)),
    attributes(df)[names(attributes(df)) != "row.names"]
  )
}
identical(struct.extract(reference.file), struct.extract(new.file))
This compares the attributes of the data frame as well as the classes of the columns. If you need to check column formats in more detail, you can extend this easily. This assumes the reports are not changing the number of rows (or columns), but if that's the case, it should be easy to modify as well.
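If a plain TRUE/FALSE is not informative enough, all.equal() on the same extracted structures reports what changed; a small sketch using the objects defined above:
all.equal(struct.extract(reference.file), struct.extract(new.file))
## TRUE if the structure matches, otherwise a character vector describing the differences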

Strategies for repeating large chunk of analysis

I find myself in the position of having completed a large chunk of analysis and now need to repeat the analysis with slightly different input assumptions.
The analysis, in this case, involves cluster analysis, plotting several graphs, and exporting cluster ids and other variables of interest. The key point is that it is an extensive analysis, and needs to be repeated and compared only twice.
I considered:
Creating a function. This isn't ideal, because then I have to modify my code to know whether I am evaluating in the function or parent environments. This additional effort seems excessive, makes it harder to debug, and may introduce side-effects.
Wrapping it in a for-loop. Again, not ideal, because then I have to create indexing variables, which can also introduce side-effects.
Creating some preamble code, wrapping the analysis in a separate file, and sourcing it. This works, but seems very ugly and sub-optimal.
The objective of the analysis is to finish with a set of objects (in a list, or in separate output files) that I can analyse further for differences.
What is a good strategy for dealing with this type of problem?
Making code reusable takes some time and effort, and holds a few extra challenges, as you mention yourself.
The question of whether to invest is probably the key issue in informatics (if not in a lot of other fields): do I write a script to rename 50 files in a similar fashion, or do I rename them manually?
The answer, I believe, is highly personal, and even then, differs case by case. If you are comfortable with programming, you may decide sooner to go the reuse route, as the effort will be relatively low for you (and besides, programmers typically like to learn new tricks, so that's a hidden, often counterproductive motivation).
That said, in your particular case, I'd go with the sourcing option: since you plan to reuse the code only two more times, a greater effort would probably be wasted (you indicate the analysis is rather extensive). So what if it's not an elegant solution? Nobody is ever going to see you do it, and everybody will be happy with the swift results.
If it turns out in a year or so that the reuse is higher than expected, you can then still invest. And by that time, you will also have (at least) three cases for which you can compare the results from the rewritten and funky reusable version of your code with your current results.
If/when I do know up front that I'm going to reuse code, I try to keep that in mind while developing it. Either way I hardly ever write code that is not in a function (well, barring the two-liners for SO and other out-of-the-box analyses): I find this makes it easier for me to structure my thoughts.
If at all possible, set parameters that differ between sets/runs/experiments in an external parameter file. Then, you can source the code, call a function, even utilize a package, but the operations are determined by a small set of externally defined parameters.
For instance, JSON works very well for this, and the RJSONIO and rjson packages allow you to load such a file into a list. Suppose your parameters are stored in files named parametersNN.json. An example is as follows:
{
  "Version": "20110701a",
  "Initialization":
  {
    "indices": [1,2,3,4,5,6,7,8,9,10],
    "step_size": 0.05
  },
  "Stopping":
  {
    "tolerance": 0.01,
    "iterations": 100
  }
}
Save that as "parameters01.json" and load as:
library(RJSONIO)
Params <- fromJSON("parameters01.json")
and you're off and running. (NB: I like to use unique version #s within my parameters files, just so that I can identify the set later, if I'm looking at the "parameters" list within R.) Just call your script and point to the parameters file, e.g.:
Rscript --vanilla MyScript.R parameters01.json
then, within the program, identify the parameters file from the commandArgs() function.
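A minimal sketch of what the top of MyScript.R could look like under this setup (the file and field names are just the ones from the example above):
library("RJSONIO")
args <- commandArgs(trailingOnly = TRUE)  # e.g., "parameters01.json"
Params <- fromJSON(args[1])
Params$Stopping$tolerance
## [1] 0.01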
Later, you can break out code into functions and packages, but this is probably the easiest way to make a vanilla script generalizable in the short term, and it's a good practice for the long term, as code should be separated from the specification of run/dataset/experiment-dependent parameters.
Edit: to be more precise, I would even specify input and output directories or files (or naming patterns/prefixes) in the JSON. This makes it very clear how one set of parameters led to one particular output set. Everything in between is just code that runs with a given parametrization, but the code shouldn't really change much, should it?
Update:
Three months, and many thousands of runs, wiser than my previous answer, I'd say that the external storage of parameters in JSON is useful for 1-1000 different runs. When the parameters or configurations number in the thousands and up, it's better to switch to using a database for configuration management. Each configuration may originate in a JSON (or XML), but being able to grapple with different parameter layouts requires a larger scale solution, for which a database like SQLite (via RSQLite) is a fine solution.
I realize this answer is overkill for the original question - how to repeat work only a couple of times, with a few parameter changes, but when scaling up to hundreds or thousands of parameter changes in ongoing research, more extensive tools are necessary. :)
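A minimal sketch of the database route, assuming the DBI and RSQLite packages and a purely illustrative "runs" table:
library("DBI")
con <- dbConnect(RSQLite::SQLite(), "configs.sqlite")
## one configuration per row; real layouts would hold many more fields
dbWriteTable(con, "runs",
             data.frame(version = "20110701a", step_size = 0.05, tolerance = 0.01))
dbGetQuery(con, "SELECT * FROM runs WHERE version = '20110701a'")
dbDisconnect(con)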
I like to work with a combination of a little shell script, a PDF cropping program, and Sweave in those cases. That gives you back nice reports and encourages you to source. Typically I work with several files, almost like creating a package (at least I think it feels like that :) ). I have a separate file for the data juggling and separate files for different types of analysis, such as descriptiveStats.R or regressions.R, for example.
By the way, here's my little shell script:
#!/bin/sh
# run Sweave, crop all generated PDFs in pdfs/, then build and open the report
R CMD Sweave docSweave.Rnw
for file in `ls pdfs`; do
  pdfcrop pdfs/"$file" pdfs/"$file"
done
pdflatex docSweave.tex
open docSweave.pdf
The Sweave file typically sources the R files mentioned above when needed. I am not sure whether that's what you're looking for, but that's my strategy so far. At least I believe that creating transparent, reproducible reports is what helps you to follow at least A strategy.
Your third option is not so bad. I do this in many cases. You can build a bit more structure by putting the results of your preamble code in environments and attaching the one you want to use for further analysis.
An example:
setup1 <- local({
  x <- rnorm(50, mean = 2.0)
  y <- rnorm(50, mean = 1.0)
  # ... more data preparation ...
  environment()   # return this local environment
})
setup2 <- local({
  x <- rnorm(50, mean = 1.8)
  y <- rnorm(50, mean = 1.5)
  # ...
  environment()
})
Then attach(setup1) and run/source your analysis code:
plot(x, y)
t.test(x, y, paired = TRUE, var.equal = TRUE)
...
When finished, detach(setup1) and attach the second one.
Now, at least you can easily switch between setups. Helped me a few times.
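As a variation on the attach/detach cycle, here is a sketch that runs the same analysis against each setup via with() and collects the results in a list for the final comparison (using the x and y defined in the setups above):
results <- list(
  setup1 = with(setup1, t.test(x, y, paired = TRUE)),
  setup2 = with(setup2, t.test(x, y, paired = TRUE))
)
results$setup1$p.value  ## compare across setups afterwards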
I tend to push such results into a global list.
I use Common Lisp but then R isn't so different.
Too late for you here, but I use Sweave a lot, and most probably I'd have used a Sweave file from the beginning (e.g. if I know that the final product needs to be some kind of report).
For repeating parts of the analysis a second and third time, there are then two options:
If the results are rather "independent" (i.e., the analysis should produce three reports, and comparison means inspecting the reports side by side) and the changed input comes in the form of new data files, the new data go into their own directory together with a copy of the Sweave file, and I create separate reports (similar to source, but this feels more natural for Sweave than for plain source).
If I rather need to do exactly the same thing once or twice again inside one Sweave file, I'd consider reusing code chunks. This is similar to the ugly for-loop.
The reason is that the results are then together for the comparison, which would then be the last part of the report.
If it is clear from the beginning that there will be some parameter sets and a comparison, I write the code in a way that, as soon as I'm happy with each part of the analysis, it is wrapped into a function (i.e., I'm actually writing the function in the editor window, but evaluating the lines directly in the workspace while writing it).
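A minimal sketch of what such a wrapper might look like for the clustering part (the file names, the kmeans() call, and k are purely illustrative):
run_analysis <- function(input_file, k = 3) {
  dat <- read.csv(input_file)
  cl <- kmeans(dat, centers = k)       # the cluster analysis step
  list(data = dat, clusters = cl$cluster)
}
res_a <- run_analysis("assumptions_a.csv")
res_b <- run_analysis("assumptions_b.csv")
## res_a and res_b can now be compared object by object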
Given that you are in the described situation, I agree with Nick: there is nothing wrong with source, and everything else means much more effort now that you already have it as a script.
I can't comment on Iterator's answer, so I have to post this here. I really like his answer, so I made a short script for creating the parameters and exporting them to external JSON files. I hope someone finds it useful: https://github.com/kiribatu/Kiribatu-R-Toolkit/blob/master/docs/parameter_configuration.md
