I use the following to create a D2L exam from the "capitals.Rmd" example (I converted the question to schoice):
exams2blackboard("capitals.Rmd", n = 3, name = "testquiz")
After I upload the testquiz.zip file, I notice that the correct answer must be manually chosen on the D2L platform.
I was wondering if there is a workaround.
Many Thanks,
Umut
If you want the correct solution to be selected, do not use the Import option from the Question Library or from the Quiz itself. Instead, use 'Import/Export/Copy Components' under the Course Admin tab.
If you import the questions through the following steps, Brightspace picks the correct solution automatically. It is a bit longer, but it seems to choose the solution correctly.
Under the Course Admin tab of your course, go to
'Import/Export/Copy Components' -> 'Import Components' -> Start -> (drag and drop the ZIP file)
Click 'Advanced Options...'. This step takes a few minutes for large files. If you do not click Advanced Options, the import automatically puts the questions into the 'Question Library' and also generates a quiz from the imported questions; you do not want this.
-> Continue -> Continue -> at this point choose 'Question Library' from the section 'Select Components to Import'.
I would not choose 'Quizzes' because it automatically creates a quiz and makes it available to students. It has the unfortunate side effect of making ALL the questions available, i.e., all the random versions of the various dynamic exercises; this is not something we want.
-> Continue -> Continue. This stage also takes a few minutes for large imports.
Now the questions are available in the Question Library and can be used to generate new quizzes. Each question already has the correct answer selected. This works for 'schoice' and 'mchoice' versions of questions. Currently, plots are not imported, though; I am still trying to figure out why.
This problem is new to me. In earlier versions of Brightspace/D2L the import of single-choice and multiple-choice exercises via exams2blackboard() worked well. Possibly D2L has changed in the meantime, given that neither the current release version from CRAN nor the development version from R-Forge works for you.
D2L also supports other import formats and we did play around with some of these. See the following discussions in the R/exams forum on R-Forge:
https://R-Forge.R-project.org/forum/forum.php?thread_id=33404&forum_id=4377&group_id=1337
https://R-Forge.R-project.org/forum/forum.php?thread_id=33657&forum_id=4377&group_id=1337
Notably, we tried to use the XML-based QTI 2.1 format that seems to be employed by D2L internally. However, D2L apparently uses a particular custom flavor of QTI 2.1. It should be possible to reverse engineer that and improve exams2qti21() correspondingly, but so far (to the best of my knowledge) no one has put in the time and effort that would be needed.
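If someone wants to experiment with this, a minimal sketch for generating standard QTI 2.1 output that can be compared against an export from D2L itself (the exam name below is just an example):

library("exams")
## creates testquiz_qti21.zip with standard QTI 2.1 XML (not D2L's custom
## flavor); the interface is the same kind as exams2blackboard()
exams2qti21("capitals.Rmd", n = 3, name = "testquiz_qti21")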
For simple single-choice/multiple-choice questions a CSV-based exchange format can also be used. I have put together a very basic exams2d2l() function that was posted in the threads above and that I am also including below. It can set up the CSV file for a single exercise like the capitals.Rmd exercise that you use above. For plain-text exercises like that it seems to work well, but not for exercises with more complex elements (graphics, code, math, etc.).
exams2d2l <- function(file, dir = ".", ## n = 1L, nsamp = NULL disabled for now
  name = NULL, quiet = TRUE, edir = NULL, tdir = NULL, sdir = NULL, verbose = FALSE,
  resolution = 100, width = 4, height = 4, svg = FALSE,
  encoding = "", converter = NULL, ...)
{
  ## for Rnw exercises use "ttm" converter otherwise "pandoc" converter
  if(any(tolower(tools::file_ext(unlist(file))) == "rmd")) {
    if(is.null(converter)) converter <- "pandoc"
  } else {
    if(is.null(converter)) converter <- "ttm"
  }

  ## output directory or display on the fly
  ## output name processing
  if(is.null(name)) name <- tools::file_path_sans_ext(basename(file))

  ## set up .html transformer and writer function
  htmltransform <- make_exercise_transform_html(converter = converter, ...)

  ## create exam with HTML text
  rval <- xexams(file,
    driver = list(sweave = list(quiet = quiet, pdf = FALSE, png = !svg, svg = svg,
        resolution = resolution, width = width, height = height, encoding = encoding),
      read = NULL, transform = htmltransform, write = NULL),
    dir = dir, edir = edir, tdir = tdir, sdir = sdir, verbose = verbose)

  ## currently: only a single exercise
  rval <- rval[[1L]][[1L]]

  ## put together CSV
  cleanup <- function(x) gsub('"', '""', paste(x, collapse = "\n"), fixed = TRUE)
  rval <- c(
    'NewQuestion,MC,,,',
    sprintf('ID,"%s",,,', cleanup(rval$metainfo$file)),
    sprintf('Title,"%s",,,', cleanup(rval$metainfo$name)),
    sprintf('QuestionText,"%s",,,', cleanup(rval$question)),
    sprintf('Points,%s,,,', if(is.null(rval$metainfo$points)) 1 else rval$metainfo$points),
    'Difficulty,1,,,',
    'Image,,,,',
    paste0('Option,', ifelse(rval$metainfo$solution, 100, 0), ',"', cleanup(rval$questionlist), '",,"', cleanup(rval$solutionlist), '"'),
    'Hint,,,,',
    sprintf('Feedback,"%s",,,', cleanup(rval$solution))
  )
  writeLines(rval, file.path(dir, paste0(name, ".csv")))
  invisible(rval)
}
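For example, a minimal call (assuming capitals.Rmd is in the current working directory):

library("exams")  ## needed for xexams() and make_exercise_transform_html() used above
## writes capitals.csv into the current directory; the file contains one
## 'NewQuestion,MC' block with ID, Title, QuestionText, Points, Difficulty,
## Image, Option, Hint, and Feedback rows as assembled above, which D2L's
## CSV question import in the Question Library understands
exams2d2l("capitals.Rmd", dir = ".")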
function INV:DeleteInventoryItem(ply, pos, item)
    local rarity = INV.PLAYERS[ply:SteamID64()][pos].quality
    --print(rarity)

    local value = "credits"

    if rarity == "Common" then
        local amount = math.floor(math.random(1, 30))

        table.remove(INV.PLAYERS[ply:SteamID64()], pos)

        ply:ChatPrint("You got " .. amount .. " credits from deconstructing!")
        ply:INVAddCredits(amount)
        self.SAVE:SendInventory(ply)

        local updatevalue = INV.PLAYERS[ply:SteamID64()].inventorydata.credits
        UpdateDatabase(value, ply:SteamID(), updatevalue)
    end
end
The problem I am having is that the credited value does not match the amount variable: the message says you got a certain amount when it actually gave you a different amount.
I am really confused about how to make them the same... Any help would be appreciated!
Thanks for all the help, everyone. I have since fixed the issue; it was a problem in the code that adds the inventory credits.
-Thanks D12
I'm trying to do the following:
google_analytics(my_id,
date_range = c("2018-08-25", "2019-12-31"),
metrics = c("ga:pageviews"
,"ga:uniquePageviews"
)
,segments = seg_obj
,dimensions = c("date"
#,"ga:channelGrouping"
#,"ga:deviceCategory";
,"ga:pagePath"
#,"ga:segment"
)
, anti_sample = TRUE
,filters = "ga::pagePath=#x",max = 100000)
But it returns an error saying
API returned: Invalid value 'ga::pagePath-#x' for filters parameter.
I also tried using two equal signs, and filtersExpression as well, but it results in the same error.
I tried to follow another question asked here:
GoogleAnalyticsR api - FilterExpression
Any idea why this is happening and how I can resolve it?
Stupid error: I was using ga::pagePath when it should be ga:pagePath (a single colon after ga).
I am kind of new to Chainer and I have been struggling with a weird situation recently.
I have a Chain that computes a CNN, which I feed with a labeled dataset.
But no results appear when I use the extensions. When I display the observation value, it is empty. The loss is indeed calculated and the parameters are updated (at least they change), so I don't know where the connection problem is.
def convert(batch, device):
    return chainer.dataset.convert.concat_examples(batch, device, padding=0)

def print_obs(t):
    print("trainer.observation", trainer.observation)
    print("updater.loss", updater.loss_func)
    print("conv1", model.predictor.conv1.W[0][0])
    print("conv20", model.predictor.conv20.W[0][0])

model.predictor.train = True
model.predictor.finetune = False  #### or True ??

cuda.get_device(0).use()
model.to_gpu()

optimizer = optimizers.MomentumSGD(lr=learning_rate, momentum=momentum)
optimizer.use_cleargrads()
optimizer.setup(model)
optimizer.add_hook(chainer.optimizer.WeightDecay(weight_decay))

train, test = imageNet_data.train_val_test()
train_iter = iterators.SerialIterator(train, batch_size)
test_iter = iterators.SerialIterator(test, batch_size, repeat=False, shuffle=False)

with chainer.using_config('debug', True):
    # Set up a trainer
    updater = training.StandardUpdater(train_iter, optimizer, loss_func=model, converter=convert)
    trainer = training.Trainer(updater, (10, 'epoch'), out="./backup/result")

    trainer.extend(print_obs, trigger=(3, 'iteration'))
    trainer.extend(extensions.LogReport())
    trainer.extend(extensions.PrintReport(
        ['epoch', 'main/loss', 'validation/main/loss',
         'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))

    trainer.run()
Maybe this is something I am missing completely and it is quite obvious... Thank you for any remarks; they would help me a lot.
Chainer 4.1, Ubuntu 16
If you are using your own Link with the Trainer, you need to report metrics yourself using chainer.report.
See https://docs.chainer.org/en/stable/guides/report.html for instructions.
You can see some examples in Chainer repository:
https://github.com/chainer/chainer/blob/v4.1.0/chainer/links/model/classifier.py#L116
https://github.com/chainer/chainer/blob/v4.1.0/examples/imagenet/alex.py#L40
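For illustration, here is a minimal sketch of such a wrapper link (MyClassifier is just an illustrative name; chainer.links.Classifier from the first link above does essentially the same thing). The key part is the chainer.report() call; without it, LogReport/PrintReport have nothing to show under main/loss and main/accuracy:

import chainer
import chainer.functions as F


class MyClassifier(chainer.Chain):
    """Wraps a predictor, computes the loss, and reports metrics."""

    def __init__(self, predictor):
        super(MyClassifier, self).__init__()
        with self.init_scope():
            self.predictor = predictor

    def __call__(self, x, t):
        y = self.predictor(x)
        loss = F.softmax_cross_entropy(y, t)
        accuracy = F.accuracy(y, t)
        # This populates trainer.observation with 'main/loss' and
        # 'main/accuracy' when the link is used as the loss function.
        chainer.report({'loss': loss, 'accuracy': accuracy}, self)
        return loss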
I am trying to run scollector from Bosun.
After I run scollector, it does not show the memory information, but the CPU information is right.
This is the config:
Host = "http://localhost:8070"
DisableSelf = true
Freq = 60
Filter = ["snmp-generic", "snmp-ifaces"]
[[SNMP]]
Community = "test"
Host = "name"
MIBs = [ "devicename"]
[Tags]
product = "fw"
[MIBs]
[MIBs.fw]
BaseOid = ".1.3.6.1.4.1.2620"
[[MIBs.fw.Metrics]]
Metric = "os.cpu"
Oid = ".1.6.7.2.4.0"
Unit = "percent"
RateType = "gauge"
[[MIBs.fw.Metrics]]
Metric = "os.mem.used"
Oid = ".1.6.7.4.5.0"
Unit = "bytes"
RateType = "gauge"
This is the log:
2016/11/07 17:24:42 error: interval.go:64: snmp-generic-name-fw: asn1: structure error: tags don't match (2 vs {class:0 tag:4 length:11 isCompound:false}) {optional:false explicit:false application:false defaultValue:<nil> tag:<nil> stringType:0 timeType:0 set:false omitEmpty:false} #2
2016/11/07 17:24:43 info: queue.go:90: {"metric":"os.cpu","timestamp":1478539482,"value":2,"tags":{"host":"name","product":"fw"}}
It looks to me like this is an issue with converting data types. The error comes from deep in the bowels of the asn1 library we are using, but I think it boils down to this: the CPU value is represented as an integer, while the memory value is a string.
Our SNMP collector attempts to parse all values into a big.Int, but apparently the string value cannot be coerced into that by our asn1 library.
Unfortunately I don't see a good way to make this work, except perhaps to look for an OID that returns an integer type. Without knowing what device you are using, that is as good as I can offer, I'm afraid.
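To illustrate the type mismatch, here is a small standalone Go sketch using the standard encoding/asn1 package (not scollector's actual SNMP code, and the value is made up): an OCTET STRING varbind cannot be decoded as an INTEGER, which is roughly what the "tags don't match (2 vs ... tag:4 ...)" error above reports, even though its contents would parse into a big.Int just fine.

package main

import (
	"encoding/asn1"
	"fmt"
	"math/big"
)

func main() {
	// Pretend the device answered the memory OID with an OCTET STRING
	// (tag 4) holding a decimal number, rather than an ASN.1 INTEGER (tag 2).
	raw, _ := asn1.Marshal([]byte("12345678901"))

	// Decoding it as an integer fails with a "tags don't match" structure
	// error, similar to the one in the scollector log.
	var n *big.Int
	if _, err := asn1.Unmarshal(raw, &n); err != nil {
		fmt.Println("integer decode failed:", err)
	}

	// Decoding it as a byte string works, and the contents can then be
	// parsed into a big.Int by hand.
	var b []byte
	if _, err := asn1.Unmarshal(raw, &b); err == nil {
		v, ok := new(big.Int).SetString(string(b), 10)
		fmt.Println("parsed from string:", v, ok)
	}
}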