exams2openolat: shufflesections and navigation do not work - r-exams

I run the following command from the exams2openolat() video tutorial for summative online exams using R/exams
exams2openolat(exm, n = 50, name = "R-exams-OpenOLAT",
points = 1, maxattempts = 0, cutvalue = 2, solutionswitch = FALSE,
duration = 60, shufflesections = TRUE, navigation = "linear",
stitle = names(exm), ititle = "Question", adescription = "", sdescription = "")
and get the error
## Error in rmarkdown::pandoc_convert(input = infile, output = outfile, from = from, :
## unused arguments (shufflesections = TRUE, navigation = "linear")
When I leave the two arguments out, it works fine. In the YouTube tutorial the command also works with the two arguments.

The two arguments were introduced in version 2.4-0 of the package, which was still the development version when the question was asked.
This point, along with a few other details, is explained in a blog post that accompanies the YouTube tutorial: http://www.R-exams.org/tutorials/openolat_exam/
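To check whether the installed version already supports these arguments, and to upgrade if not, something along these lines can be used (a sketch; the assumption that the development version is distributed via R-Forge comes from the usual R/exams installation instructions, not from this post):
packageVersion("exams")  # shufflesections and navigation need version 2.4-0 or later
# install the development version if the installed one is older
install.packages("exams", repos = "http://R-Forge.R-project.org")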


R knitr: Why is the cache invalidated by copying?

Question
It seems the knitr cache is invalidated by copying the relevant files (the .Rmd script and the cache directory) to another computer.
Why is that so, and
how can I work around it?
Details
I do various lengthy calculations on two computers. I thought the following procedure could work:
Knit a first version of a report on machine A. (includes some lengthy calculations)
Copy the files created, i.e. the script and the cache directory, to machine B.
Continue editing the report on machine B (without recalculations because everything is cached).
This does not work: after copying the files to B, "knit" performs a full recalculation. This happens even before any editing of the script, i.e. just the act of copying from A to B seems to be enough to invalidate the cache.
Why is a full recalculation performed on B? As I understand it, the caching mechanism boils down to creating and comparing a hash. I had hoped that after copying, the hash would remain unchanged.
Is there something else I should copy in addition? Or is there any other way I can make the procedure above work?
Example
Any trivial script works as an example such as the one below:
```{r setup, include=FALSE}
knitr::opts_chunk$set(cache = TRUE)
```
Bla Bla
```{r test}
tmp = sort(runif(1e7))
```
I don't know the details of why that happens, but the workaround is easy: save values to files explicitly, and read them back in. You can use
saveRDS(x, "x.rds")
to save the variable x to a file named x.rds, and then
x <- readRDS("x.rds")
to read it back in. If you want to get fancy, you can check for the existence of x.rds using file.exists("x.rds") and do the full calculation followed by saveRDS if that returns FALSE, otherwise just read the data.
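A minimal sketch of that pattern (the object name x, the file name x.rds, and the calculation are placeholders):
if (file.exists("x.rds")) {
  # reuse the previously saved result
  x <- readRDS("x.rds")
} else {
  # do the lengthy calculation once, then cache it explicitly
  x <- sort(runif(1e7))
  saveRDS(x, "x.rds")
}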
EDITED TO ADD: If you really want to know the answer to your first question, one possible approach would be to copy the folder back from the 2nd computer to the 1st and see if it works back there. If not, do a binary compare of the original and twice-copied directories and see what has changed.
If it does work, it might simply be different RNGkind() settings on the two computers: it's pretty common to have the buggy sample.kind = "Rounding" saved, although I'm not sure that caching would use this. Or perhaps it's different package versions or R versions: when I updated knitr, the cache was invalidated.
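A few quick things worth comparing between the two machines (just a sketch; any mismatch here is a plausible cache invalidator):
RNGkind()                 # includes the sample.kind setting in R >= 3.6.0
R.version.string
packageVersion("knitr")
packageVersion("digest")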
MORE additions:
If you want to see what has changed, then turn on debugging of the digest::digest function and call knitr::knit("src.Rmd"). digest() is called for each cached chunk and is passed a large list in its object argument. It should return the same hash value if the list is the same, so you'll want to save those objects and compare them between the two computers (see the sketch after the list below). For example, with your toy example above, I get this passed as object:
list(eval = TRUE, echo = TRUE, results = "markup", tidy = FALSE,
tidy.opts = NULL, collapse = FALSE, prompt = FALSE, comment = "##",
highlight = TRUE, size = "normalsize", background = "#F7F7F7",
strip.white = TRUE, cache = 3, cache.path = "cache/", cache.vars = NULL,
cache.lazy = TRUE, dependson = NULL, autodep = FALSE, fig.keep = "high",
fig.show = "asis", fig.align = "default", fig.path = "figure/",
dev = "png", dev.args = NULL, dpi = 72, fig.ext = NULL, fig.width = 7,
fig.height = 7, fig.env = "figure", fig.cap = NULL, fig.scap = NULL,
fig.lp = "fig:", fig.subcap = NULL, fig.pos = "", out.width = NULL,
out.height = NULL, out.extra = NULL, fig.retina = 1, external = TRUE,
sanitize = FALSE, interval = 1, aniopts = "controls,loop",
warning = TRUE, error = TRUE, message = TRUE, render = NULL,
ref.label = NULL, child = NULL, engine = "R", split = FALSE,
purl = TRUE, label = "test", code = "tmp = sort(runif(1e7))",
75L)
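One way to capture those object values on each machine for comparison (a sketch; the file name is illustrative):
debug(digest::digest)   # enter the browser whenever knitr hashes a cached chunk
knitr::knit("src.Rmd")
# inside the browser, inspect or save the first argument, e.g.
#   saveRDS(object, "digest-object-machineA.rds")
undebug(digest::digest)
Repeating this on both machines and comparing the saved objects (e.g. with all.equal()) shows which chunk option or code string differs.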

Issue with TFX Trainer component not outputting model to filesystem

First of all, I am using TFX version 0.21.2 and TensorFlow version 2.1.
I have constructed a pipeline largely following the Chicago taxi example. When the Trainer component is executed, I can see the following in the logs:
INFO - Training complete. Model written to /root/airflow/tfx/pipelines/fish/Trainer/model/9/serving_model_dir
When checking the above directory it is empty. What am I missing?
This is my DAG definition file (import statements omitted):
_pipeline_name = 'fish'

_airflow_config = AirflowPipelineConfig(airflow_dag_config={
    'schedule_interval': None,
    'start_date': datetime.datetime(2019, 1, 1),
})

_project_root = os.path.join(os.environ['HOME'], 'airflow')
_data_root = os.path.join(_project_root, 'data', 'fish_data')
_module_file = os.path.join(_project_root, 'dags', 'fishUtils.py')
_serving_model_dir = os.path.join(_project_root, 'serving_model', _pipeline_name)
_tfx_root = os.path.join(_project_root, 'tfx')
_pipeline_root = os.path.join(_tfx_root, 'pipelines', _pipeline_name)
_metadata_path = os.path.join(_tfx_root, 'metadata', _pipeline_name,
                              'metadata.db')


def _create_pipeline(pipeline_name: Text, pipeline_root: Text, data_root: Text,
                     module_file: Text, serving_model_dir: Text,
                     metadata_path: Text,
                     direct_num_workers: int) -> pipeline.Pipeline:
    examples = external_input(data_root)
    example_gen = CsvExampleGen(input=examples)
    statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
    infer_schema = SchemaGen(
        statistics=statistics_gen.outputs['statistics'],
        infer_feature_shape=False)
    validate_stats = ExampleValidator(
        statistics=statistics_gen.outputs['statistics'],
        schema=infer_schema.outputs['schema'])
    trainer = Trainer(
        examples=example_gen.outputs['examples'],
        schema=infer_schema.outputs['schema'],
        module_file=_module_file,
        train_args=trainer_pb2.TrainArgs(num_steps=10000),
        eval_args=trainer_pb2.EvalArgs(num_steps=5000))
    model_validator = ModelValidator(
        examples=example_gen.outputs['examples'],
        model=trainer.outputs['model'])
    pusher = Pusher(
        model=trainer.outputs['model'],
        model_blessing=model_validator.outputs['blessing'],
        push_destination=pusher_pb2.PushDestination(
            filesystem=pusher_pb2.PushDestination.Filesystem(
                base_directory=_serving_model_dir)))
    return pipeline.Pipeline(
        pipeline_name=_pipeline_name,
        pipeline_root=_pipeline_root,
        components=[
            example_gen,
            statistics_gen,
            infer_schema,
            validate_stats,
            trainer,
            model_validator,
            pusher],
        enable_cache=True,
        metadata_connection_config=metadata.sqlite_metadata_connection_config(
            metadata_path),
        beam_pipeline_args=['--direct_num_workers=%d' % direct_num_workers]
    )


runner = AirflowDagRunner(config=_airflow_config)

DAG = runner.run(
    _create_pipeline(
        pipeline_name=_pipeline_name,
        pipeline_root=_pipeline_root,
        data_root=_data_root,
        module_file=_module_file,
        serving_model_dir=_serving_model_dir,
        metadata_path=_metadata_path,
        # 0 means auto-detect based on the number of CPUs available during
        # execution time.
        direct_num_workers=0))
And this is my module file:
_DENSE_FLOAT_FEATURE_KEYS = ['length']

real_valued_columns = [tf.feature_column.numeric_column('length')]


def _eval_input_receiver_fn():
    serialized_tf_example = tf.compat.v1.placeholder(
        dtype=tf.string, shape=[None], name='input_example_tensor')
    features = tf.io.parse_example(
        serialized=serialized_tf_example,
        features={
            'length': tf.io.FixedLenFeature([], tf.float32),
            'label': tf.io.FixedLenFeature([], tf.int64),
        })
    receiver_tensors = {'examples': serialized_tf_example}
    return tfma.export.EvalInputReceiver(
        features={'length': features['length']},
        receiver_tensors=receiver_tensors,
        labels=features['label'],
    )


def parser(serialized_example):
    features = tf.io.parse_single_example(
        serialized_example,
        features={
            'length': tf.io.FixedLenFeature([], tf.float32),
            'label': tf.io.FixedLenFeature([], tf.int64),
        })
    return ({'length': features['length']}, features['label'])


def _input_fn(filenames):
    # TFRecordDataset doesn't directly accept paths with wildcards
    filenames = tf.data.Dataset.list_files(filenames)
    dataset = tf.data.TFRecordDataset(filenames, 'GZIP')
    dataset = dataset.map(parser)
    dataset = dataset.shuffle(2000)
    dataset = dataset.batch(40)
    dataset = dataset.repeat(10)
    return dataset


def trainer_fn(trainer_fn_args, schema):
    estimator = tf.estimator.LinearClassifier(feature_columns=real_valued_columns)

    train_input_fn = lambda: _input_fn(trainer_fn_args.train_files)
    train_spec = tf.estimator.TrainSpec(
        train_input_fn,
        max_steps=trainer_fn_args.train_steps)

    eval_input_fn = lambda: _input_fn(trainer_fn_args.eval_files)
    eval_spec = tf.estimator.EvalSpec(
        eval_input_fn,
        steps=trainer_fn_args.eval_steps,
        name='fish-eval')

    receiver_fn = lambda: _eval_input_receiver_fn()

    return {
        'estimator': estimator,
        'train_spec': train_spec,
        'eval_spec': eval_spec,
        'eval_input_receiver_fn': receiver_fn
    }
Thank you in advance for your help!
Posting the solution for anyone who is facing the same problem that I faced.
The reason the model was not written to the filesystem is that the estimator needs a config argument to know where to write the model.
The following modification to the trainer_fn function should solve the problem:
# Give the estimator a RunConfig so it knows where to write checkpoints and
# the exported model (TFX passes the target directory in trainer_fn_args):
run_config = tf.estimator.RunConfig(save_checkpoints_steps=999,
                                    keep_checkpoint_max=1)
run_config = run_config.replace(model_dir=trainer_fn_args.serving_model_dir)

estimator = tf.estimator.LinearClassifier(feature_columns=real_valued_columns,
                                          config=run_config)

unable to get sen2r function working, some arguments missing?

I am trying to use the sen2r() function (package sen2r 1.3.2) with default parameters but am getting the following error:
Error in paste(c(...), collapse = sep) : argument is missing, with no default.
I know the error wants me to fill in some parameters, but the source manual clearly says that the default should work, and the parameters can be set subsequently upon launching the GUI.
Using s2_gui() launches the Shiny app, but it keeps hanging when I try to "Save and Close".
R version 3.6.3 (2020-02-29)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.4 LTS
Also, can someone with a 'higher reputation' please create a sen2r tag, for easier subsequent communications?
Here is the traceback...
sen2r()
Error in paste(c(...), collapse = sep) :
argument is missing, with no default
> traceback()
7: paste(c(...), collapse = sep)
6: strsplit(paste(c(...), collapse = sep), "\n")
5: unlist(strsplit(paste(c(...), collapse = sep), "\n"))
4: strwrap(unlist(strsplit(paste(c(...), collapse = sep), "\n")),
width = width, indent = indent, exdent = exdent, prefix = prefix,
initial = initial)
3: print_message(type = "waiting", "It seems you are running this package for the first time. ",
"Do you want to verify/install the required dependencies using a GUI (otherwise, an\n automatic check will be performed)? (y/n) ",
)
2: .sen2r(param_list = param_list, pm_arg_passed = pm_arg_passed,
gui = gui, preprocess = preprocess, s2_levels = s2_levels,
sel_sensor = sel_sensor, online = online, order_lta = order_lta,
apihub = apihub, downloader = downloader, overwrite_safe = overwrite_safe,
rm_safe = rm_safe, step_atmcorr = step_atmcorr, sen2cor_use_dem = sen2cor_use_dem,
sen2cor_gipp = sen2cor_gipp, max_cloud_safe = max_cloud_safe,
timewindow = timewindow, timeperiod = timeperiod, extent = extent,
extent_name = extent_name, s2tiles_selected = s2tiles_selected,
s2orbits_selected = s2orbits_selected, list_prods = list_prods,
list_rgb = list_rgb, list_indices = list_indices, index_source = index_source,
rgb_ranges = rgb_ranges, mask_type = mask_type, max_mask = max_mask,
mask_smooth = mask_smooth, mask_buffer = mask_buffer, clip_on_extent = clip_on_extent,
extent_as_mask = extent_as_mask, reference_path = reference_path,
res = res, res_s2 = res_s2, unit = unit, proj = proj, resampling = resampling,
resampling_scl = resampling_scl, outformat = outformat, rgb_outformat = rgb_outformat,
index_datatype = index_datatype, compression = compression,
rgb_compression = rgb_compression, overwrite = overwrite,
path_l1c = path_l1c, path_l2a = path_l2a, path_tiles = path_tiles,
path_merged = path_merged, path_out = path_out, path_rgb = path_rgb,
path_indices = path_indices, path_subdirs = path_subdirs,
thumbnails = thumbnails, parallel = parallel, processing_order = processing_order,
use_python = use_python, tmpdir = tmpdir, rmtmp = rmtmp,
log = log, globenv = sen2r_env, .only_list_names = FALSE)
1: sen2r()
I ran s2_gui() as is, with no parameters specified. But I am running the dependency check now; I suspect that should clear things up, even for the GUI.
This error was due to a code bug, which was fixed (see GitHub issue 292).
Until the sen2r CRAN version is updated, the bug can be:
solved by installing the sen2r GitHub version (remotes::install_github("ranghetti/sen2r")), or
bypassed by launching check_gdal() before running sen2r().
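A minimal sketch of the two workarounds:
# Option 1: install the development version that contains the fix
# (requires the remotes package)
remotes::install_github("ranghetti/sen2r")
# Option 2: keep the CRAN version, but run the dependency check first
library(sen2r)
check_gdal()
sen2r()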
This is a bug in the original code.
In the traceback that you provided, it included:
3: print_message(type = "waiting", "It seems you are running this package for the first time. ",
"Do you want to verify/install the required dependencies using a GUI (otherwise, an\n automatic check will be performed)? (y/n) ",
)
To make the problem easier to see, I'll truncate most of the strings:
3: print_message(type = "waiting", "It seems ... time. ",
"Do you ... performed)? (y/n) ",
) # ^-- extra trailing comma
Notice how it ends with a comma and then a right-paren? That trailing comma produces an empty (missing) argument, which is exactly what triggers the "argument is missing, with no default" error. (This has been submitted as issue 292 on the original repo.)

Hardware conversion: written data is different from my read data

I am testing a program executed partially on an MPC603 and partially on an MPC555.
I have to verify that some data is correctly "moved" from one processor to the other via a DPRAM.
I am guessing that at some point "someone" makes a conversion, but I don't know how to find out what kind of conversion is done.
Here are some examples:
Pt_Dpram->acq1 at 0x8D00008 = 0x3EB2
acq1 = (0xA010538) = 1182451712 = 0x467AC800
Pt_Dpram->acq2 at 0x8D0000A = 0x5528
acq2 = (0xA010540) = 1185566720 = 0x46AA5000
Pt_Dpram->acq3 at 0x8D0000C = 0x416E
acq3 = (0xA010548) = 1107552036 = 0x4203E724
Pt_Dpram->acq4 at 0x8D0000E = 0x413C
acq4 = (0xA010550) = 1107526232 = 0x42038258
I got my answers from a colleague: the values in acqX are in Motorola binary format: http://en.wikipedia.org/wiki/SREC_(file_format)
Here is a small piece of software that does the conversion: http://www.hexworkshop.com/onlinehelp/500/html/idhelp_baseconv.htm
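As a side note, one quick way to test hypotheses about such conversions is to reinterpret the raw words directly. A small R sketch, using the first example pair above (0x3EB2 is 16050 in decimal, and 0x467AC800 read as an IEEE-754 single-precision float is 16050.0):
strtoi("3EB2", base = 16L)                          # 16050
readBin(as.raw(c(0x46, 0x7A, 0xC8, 0x00)),          # bytes of 0x467AC800
        what = "double", size = 4, endian = "big")  # 16050, read as a 4-byte float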

Setting a loop in R

I have already discussed a similar question in the following post:
How to set a for-loop in R
Each file's contents are as follows:
FILE_1.FASTA
>>TTBK2_Hsap ,(CK1/TTBK)
MSGGGEQLDILSVGILVKERWKVLRKIGGGGFGEIYDALDMLTRENVALKVESAQQPKQVLKMEVAVLKKLQGKDHVCRFIGCGRNDRFNYVVMQLQGRNLADLRRSQSRGTFT
FILE_2.FASTA
>>TTBK2_Hsap ,(CK1/TTBK)
MSGGGEQLDILSVGILVKERWKVLRKIGGGGFGEIYDALDMLTRENVALKVESAQQPKQVLKMEVAVLKKLQGKDHVCRFIGCGRNDRFNYVVMQLQGRNLADLRRSQSRGTFT
However, there is another package in R which works like this:
extractAPAAC(x, props = c("Hydrophobicity", "Hydrophilicity"), lambda = 30,
w = 0.05, customprops = NULL)
I tried creating a function to run it for a number of FASTA files, and the program looks like this:
read_and_extract <- function(fasta) {
  seq <- readFASTA(fasta)[[1]]
  return(extractAPAAC(seq, props = c("Hydrophobicity", "Hydrophilicity"),
                      lambda = 30, w = 0.05, customprops = NULL))
}
setwd("H:\\CC")
fasta_files <- dir(pattern = "[.]fasta$")
aa_comp <- vapply(fasta_files, read_and_extract, rep(pi, 80))
write.csv(aa_comp, file = "C:\\Users\\PAAC.csv")
This program shows an error:
Error: unexpected ',' in "w = 0.05,"
But I have given w = 0.05 as the default value; could anyone tell me what the actual problem is?
