I am using a command built with picocli v4.0.0-alpha-3. No matter which options I try, the one shown at the top of the options list (when usage help is displayed on the command line) is always shifted to the right of the other options. How can this be configured so that all options for a command are aligned at the same level?
@CommandLine.Command(name = "",
        description = "test",
        header = "%n@|green test|@",
        footer = {"",
                "@|cyan Press Ctrl-D to exit the CLI.|@",
                ""},
        version = "1.0.0",
        showDefaultValues = true,
        optionListHeading = "@|bold %nOptions|@:%n",
        subcommands = {
                Abc.class,
                Def.class
        })
public class Tester implements Callable<Integer> {

    @Option(names = {"-v", "--verbose"}, description = "Verbose mode. Helpful for troubleshooting.")
    private boolean verboseMode;

    @Option(names = {"-a", "--autocomplete"}, description = "Generate sample autocomplete")
    private boolean autocomplete;
Output displayed on the CLI:
Options:
      --v, --version     Show version info and exit.
  -a, --autocomplete     Generate sample autocomplete
The first option is always misaligned. I am trying to ensure that the first option is aligned at the same level as other options.
You may have found a bug. I will investigate.
Update:
Looking closer at the output:
Options:
      --v, --version     Show ...
  -a, --autocomplete     Generate ...
You can see that both the --v option and the --version option have two leading - hyphens. That’s why picocli considers both as “long options” and puts them in the column for long options.
If you give the --v option a single leading hyphen so it becomes a POSIX-compliant short option -v, you should see it line up correctly.
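For example, a minimal sketch of the suggested change (the versionHelp flag and the field name are assumptions; since the Tester snippet above already uses -v for --verbose, you may need a different free letter such as -V):

// Before: names = {"--v", "--version"}  -> picocli sees two long options
// After: a single-hyphen POSIX short name lines up in the short-option column
@Option(names = {"-V", "--version"}, versionHelp = true,
        description = "Show version info and exit.")
private boolean versionRequested;

With that change, both options start in the same column:

Options:
  -V, --version        Show version info and exit.
  -a, --autocomplete   Generate sample autocomplete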
I use the following to create a D2L exam from the "capitals.Rmd" example (I converted the question to schoice):
exams2blackboard("capitals.Rmd", n = 3, name = "testquiz")
After I upload the testquiz.zip file, I notice that the correct answer must be manually chosen on the D2L platform.
I was wondering if there is a workaround.
Many Thanks,
Umut
If you want the correct solution to be selected, do not use the Import option from the Question Library or from the Quiz itself. Use the Import/Export/Copy Components under the Course Admin tab.
If you import the questions through the following steps, Brightspace correctly picks the right solution. It is a bit longer, but it works.
Under the Course Admin tab of your course, go to:
'Import/Export/Copy Components' -> 'Import Components' -> Start -> (drag and drop the ZIP file)
Click 'Advanced Options...'
This step will take a few minutes for large files. If you do not click
Advanced Options, the import will automatically put the questions into
the 'Question Library' and also generate a quiz with the imported
questions; you do not want this.
-> Continue -> Continue -> at this point choose 'Question Library' from the 'Select Components to Import' section
I would not choose 'Quizzes' because it automatically creates a quiz
and makes it available to students. It has the unfortunate side effect
of making ALL the questions available, which means all the versions of
the various dynamic questions; this is not something we want.
-> Continue -> Continue. This stage takes a few minutes for large
imports.
Now the questions are available in the Question Library and can be used to generate new quizzes. Each question already has the correct answer selected. This works for ‘schoice’ and ‘mchoice’ versions of questions. Currently, plots are not imported, though; I am still trying to figure out why.
This problem is new to me. In earlier versions of Brightspace/D2L the import of single-choice and multiple-choice exercises via exams2blackboard() worked well. Possibly, D2L changed in the meantime given that neither the current release version from CRAN nor the development version from R-Forge work for you.
D2L also supports other import formats and we did play around with some of these. See the following discussions in the R/exams forum on R-Forge:
https://R-Forge.R-project.org/forum/forum.php?thread_id=33404&forum_id=4377&group_id=1337
https://R-Forge.R-project.org/forum/forum.php?thread_id=33657&forum_id=4377&group_id=1337
Notably, we tried to use the XML-based QTI 2.1 format that seems to be employed by D2L internally. However, D2L apparently uses a particular custom flavor of QTI 2.1. It should be possible to reverse engineer that and improve exams2qti21() correspondingly, but so far (to the best of my knowledge) no one has put in the time and effort that would be needed.
For simple single/multiple choice questions a CSV-based exchange format can also be used. I have put together a very basic exams2d2l() function that was posted in the threads above and that I'm also including below. It can set up the CSV file for a single exercise like the capitals.Rmd exercise that you use above. For plain text exercises like that it seems to work well but not for more complex elements (graphics, code, math, etc.).
exams2d2l <- function(file, dir = ".", ## n = 1L, nsamp = NULL disabled for now
  name = NULL, quiet = TRUE, edir = NULL, tdir = NULL, sdir = NULL, verbose = FALSE,
  resolution = 100, width = 4, height = 4, svg = FALSE,
  encoding = "", converter = NULL, ...)
{
  ## for Rnw exercises use "ttm" converter, otherwise "pandoc" converter
  if(any(tolower(tools::file_ext(unlist(file))) == "rmd")) {
    if(is.null(converter)) converter <- "pandoc"
  } else {
    if(is.null(converter)) converter <- "ttm"
  }

  ## output directory or display on the fly
  ## output name processing
  if(is.null(name)) name <- tools::file_path_sans_ext(basename(file))

  ## set up .html transformer and writer function
  htmltransform <- make_exercise_transform_html(converter = converter, ...)

  ## create exam with HTML text
  rval <- xexams(file,
    driver = list(
      sweave = list(quiet = quiet, pdf = FALSE, png = !svg, svg = svg,
        resolution = resolution, width = width, height = height, encoding = encoding),
      read = NULL, transform = htmltransform, write = NULL),
    dir = dir, edir = edir, tdir = tdir, sdir = sdir, verbose = verbose)

  ## currently: only a single exercise
  rval <- rval[[1L]][[1L]]

  ## put together CSV
  cleanup <- function(x) gsub('"', '""', paste(x, collapse = "\n"), fixed = TRUE)
  rval <- c(
    'NewQuestion,MC,,,',
    sprintf('ID,"%s",,,', cleanup(rval$metainfo$file)),
    sprintf('Title,"%s",,,', cleanup(rval$metainfo$name)),
    sprintf('QuestionText,"%s",,,', cleanup(rval$question)),
    sprintf('Points,%s,,,', if(is.null(rval$metainfo$points)) 1 else rval$metainfo$points),
    'Difficulty,1,,,',
    'Image,,,,',
    paste0('Option,', ifelse(rval$metainfo$solution, 100, 0), ',"',
      cleanup(rval$questionlist), '",,"', cleanup(rval$solutionlist), '"'),
    'Hint,,,,',
    sprintf('Feedback,"%s",,,', cleanup(rval$solution))
  )

  writeLines(rval, file.path(dir, paste0(name, ".csv")))
  invisible(rval)
}
When a List<> option is used in an @ArgGroup, it is duplicated in the usage help synopsis. Consider the following code:
import java.util.List;

import picocli.CommandLine;
import picocli.CommandLine.*;
import picocli.CommandLine.Model.CommandSpec;

@Command(name = "MyApp")
public class App implements Runnable {

    @ArgGroup(exclusive = true) // or false
    MyGroup myGroup;

    static class MyGroup {
        @Option(names = "-A", paramLabel = "N", split = ",") List<Long> A;
    }

    @Spec CommandSpec spec;

    @Override
    public void run() {
        System.out.printf("OK: %s%n", spec.commandLine().getParseResult().originalArgs());
    }

    public static void main(String[] args) {
        new CommandLine(new App()).execute("-h");
    }
}
This shows the following output:
Usage: MyApp [[-A=N[,N...]] [-A=N[,N...]]...]
I was expecting the output
Usage: MyApp [-A=N[,N...]]
The @ArgGroup is needed in the code for other reasons; it may seem futile in this toy example.
You may have found a bug in picocli.
Would you mind raising this on the picocli issue tracker?
Update:
The short story
This was a bug. In the next version of picocli, the expected synopsis can be achieved by setting the argument group to exclusive = false.
The long story
This synopsis stuff can get quite complex... Let's break it down.
Option Synopsis
Before we go into argument groups, let's first look at simple options. Picocli shows a different synopsis for required and non-required options, and for single-value and multi-value options.
The table below illustrates this. Note especially the notation for required multi-value options. Such options must be specified at least once, but possibly multiple times, and the synopsis reflects this:
                  Required          Non-Required
                  --------          ------------
Single value      -x=N              [-x=N]
Multi-value       -x=N [-x=N]...    [-x=N]...
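For instance, these declarations (a sketch, not from the original posts; distinct option names are used so the snippet compiles as one command, whereas the table uses -x throughout) map to the notations above:

import java.util.List;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

@Command(name = "Demo")
class Demo {
    @Option(names = "-r", paramLabel = "N", required = true)
    int requiredSingle;        // synopsis: -r=N

    @Option(names = "-o", paramLabel = "N")
    int optionalSingle;        // synopsis: [-o=N]

    @Option(names = "-m", paramLabel = "N", required = true)
    List<Long> requiredMulti;  // synopsis: -m=N [-m=N]...

    @Option(names = "-p", paramLabel = "N")
    List<Long> optionalMulti;  // synopsis: [-p=N]...
}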
Argument Group Synopsis
Now, let's look at groups. In exclusive groups (the default), all arguments are automatically made required. (There is some history behind this, but basically anything else did not make sense.) In non-exclusive groups, options can be required or optional.
Groups have a multiplicity. The default is multiplicity = "0..1" meaning the group is optional, and this is shown in the synopsis by surrounding the group with [ and ] square brackets.
Now, let's put these together. The table below shows the synopsis for groups with two options, -x and -y:
                  Exclusive Group                      Non-Exclusive Group
                  ---------------                      -------------------
Single value      [-x=N | -y=M]                        [[-x=N] [-y=M]]
Multi-value       [-x=N [-x=N]... | -y=M [-y=M]...]    [[-x=N]... [-y=M]...]
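As a sketch of the single-value row (again not from the original posts), an exclusive group with two plain options produces the left-hand synopsis; picocli makes the members required because the group is exclusive, and the surrounding brackets come from the default multiplicity = "0..1":

import picocli.CommandLine.ArgGroup;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

@Command(name = "GroupDemo")
class GroupDemo {
    @ArgGroup(exclusive = true)   // synopsis: GroupDemo [-x=N | -y=M]
    XY xy;

    static class XY {
        @Option(names = "-x", paramLabel = "N") int x;
        @Option(names = "-y", paramLabel = "M") int y;
    }
}

With exclusive = false (and the options left non-required), the synopsis becomes [[-x=N] [-y=M]], matching the right-hand column.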
Split Regex Synopsis
The final element: when the option accepts a split="," regex, the N parameter label becomes N[,N...] in the synopsis.
Problem: synopsis too long
When I execute your example with picocli 4.3.2, I get the following synopsis:
Usage: MyApp [[-A=N[,N...]] [-A=N[,N...]]...]
This is incorrect, and does not follow the specifications above.
With picocli 4.3.3-SNAPSHOT, I get the correct synopsis:
Usage: MyApp [-A=N[,N...] [-A=N[,N...]]...]
Given the above, we now know why: this is the synopsis for a multi-value option in an exclusive group. The option became required because the group is exclusive.
Getting a shorter synopsis
With picocli 4.3.3, one idea is to make the group non-exclusive (after all, with only one option, exclusive or non-exclusive does not matter). The program is almost unchanged (exclusive = false instead of true):
@Command(name = "MyApp")
public class App implements Runnable {

    @ArgGroup(exclusive = false) // was: exclusive = true
    MyGroup myGroup;

    static class MyGroup {
        @Option(names = "-A", paramLabel = "N", split = ",")
        List<Long> A;
    }

    // ...
}
The synopsis of the usage help message now looks like this:
Usage: MyApp [[-A=N[,N...]]...]
I hope this explains things.
I've just managed to get mutation testing working for the first time. My usual testing framework is Codeception, but as of writing it is not compatible with mutation testing (although I believe work is being done on it and it's not far off). I'm using PHPUnit and Infection, neither of which seems easy to work out how to use.
My test suite generated ten mutants. Nine were killed and one escaped. However, I don't know what part of the code or the tests needs to be improved to kill the final mutant.
How do you get information about what code allowed the mutant to escape?
I found in this blog what I couldn't find in Infection's documentation: the results are saved in infection.log.
The log file looks like this:
Escaped mutants:
================
1) <full-path-to-source-file>.php:7 [M] ProtectedVisibility
--- Original
+++ New
@@ @@
use stdClass;
trait HiddenValue
{
- protected function hidden_value($name = null, $value = null)
+ private function hidden_value($name = null, $value = null)
{
static $data = [];
$keys = array_map(function ($item) {
Timed Out mutants:
==================
Not Covered mutants:
====================
It says that the mutation changed the protected visibility to private and that no tests failed as a result. If this is important, I can now either change the code or write another test to cover this case.
Now that I've found this, I've searched the Infection website for infection.log and found --show-mutations (or -s), which outputs escaped mutants to the console while running.
Having trouble finalizing a dynamic QMenu tree.
The structure and format are perfect, but what is still missing is retrieving all branch names when the end action is triggered.
The only approach I have tried with ANY trend toward a solution is self.sender(), which returns only the name of the end action.
Before adding a ton of lengthy code snippets, starting by conceptualizing the question seemed best in case there is some (obvious) means I am overlooking.
Example:
The ideal return, based on the screenshots referenced below, would be something along the lines of...
Top Image:
'Single Results' - 'Head Results'
Middle Image:
'Batch Results' - 'testBatch_vr3' - 'Run-1' - 'Budget Results'
Bottom Image:
'Single Results' - 'testBatch_vr3' - 'Run-3' - 'Particle Tracks'
To the point:
How can all names in a multi-leveled set of QMenus be retrieved when triggering the end action?
The following bits resolved my problem. It might be a somewhat obscure way to go about it (using the menu's hovered signal to look up the dictionary menu entries), but it works well for now.
# checks batch processing folder for existing directories and publishes the contents
# into the batch results menu comboBox
def populateBatchResults(self):
    self.batchMenuDict = {}
    self.runMenuDict = {}
    self.runBatchResultsPopup.clear()
    self.batchDirNamesMenu.clear()
    batchModDir = self.estabBatchModelDir()
    for batch in os.listdir(batchModDir):
        fullBatchDir = batchModDir + str(batch)
        if os.path.isdir(fullBatchDir):
            self.batchMenuDict[batch] = QMenu(self.iface.mainWindow())
            self.batchMenuDict[batch].setTitle(str(batch))
            self.runBatchResultsPopup.addMenu(self.batchMenuDict[batch])
            for run in os.listdir(fullBatchDir):
                fullRunDir = fullBatchDir + '\\' + str(run)
                if os.path.isdir(fullRunDir):
                    self.runMenuDict[run] = QMenu(self.iface.mainWindow())
                    self.runMenuDict[run].setTitle(str(run))
                    self.batchMenuDict[batch].addMenu(self.runMenuDict[run])
                    self.runMenuDict[run].hovered.connect(self.assertBatchMenuSelection)

# get all currently hovered menu names
def assertBatchMenuSelection(self):
    self.selectedBatch = self.runBatchResultsPopup.activeAction().text()
    self.selectedRun = self.batchMenuDict.get(self.selectedBatch).activeAction().text()
    self.selectedAction = self.runMenuDict.get(self.selectedRun).activeAction().text()
According to the documentation, source() takes a default option echo = verbose, which can get old fast when testing functions. How can I set this to be FALSE just for source() in a simple way (such as an .Rprofile setting)?
I tried setting options(echo=FALSE) but that throws a wrench in the terminal functioning:
> options(echo=FALSE)
5
[1] 5
options(echo=TRUE)
>
If you are using RStudio, the Source button can perform either "Source" or "Source with Echo"; use the little dropdown arrow to select between them. The button will then continue to run with the last chosen option.
How about
library(Defaults)
setDefaults("source", echo = FALSE)
?
This is similar to (but not quite identical/somewhat simpler than) the answer to this question.
Since the Defaults package was archived 6 months after this question was answered, you would either have to get it from here or use devtools::install_version("Defaults", "1.1-1"), or fall back to @KonradRudolph's answer.
Redefine source:
source <- function(file, local = FALSE, print.eval = FALSE,
                   verbose = getOption("verbose"),
                   prompt.echo = getOption("prompt"), max.deparse.length = 150,
                   chdir = FALSE, encoding = getOption("encoding"),
                   continue.echo = getOption("continue"), skip.echo = 0,
                   keep.source = getOption("keep.source")) {
  ## forward to base::source() with echo forced off;
  ## arguments are passed by name so this keeps working if base::source()
  ## gains new formals
  base::source(file, local = local, echo = FALSE, print.eval = print.eval,
               verbose = verbose, prompt.echo = prompt.echo,
               max.deparse.length = max.deparse.length, chdir = chdir,
               encoding = encoding, continue.echo = continue.echo,
               skip.echo = skip.echo, keep.source = keep.source)
}
Terrible, I know. But effective.
No, source does not take a "default option". It takes a logical argument echo, which defaults to the value of verbose. If the caller doesn't pass verbose either, that argument, in turn, defaults to getOption("verbose"). So if you wanted to set a global option to affect echoing of the input text, you would do options(verbose=FALSE). BTW, that's the setting of this option by default anyway, so you only need to change any of the above if you set it differently.