Can SCORM store multiple results?

I was wondering whether SCORM can store multiple scores for a SCO. I have read many articles; some state it can't, and that restarting a SCO erases the previous score, while others say the opposite. Which is it? Maybe 1.2 can't, but the 2004 version can?

Well...it's complicated. You can restart a SCO, but it wipes the tracking data and score and starts fresh.
From the SCORM 2004 4th ed docs:
4.2.8. Exit
The cmi.exit data model element indicates how or why the learner left the SCO [1].
This value is used to indicate the reason
that the SCO was last exited. The cmi.exit data model element is
involved with temporal aspects of the run-time execution of the SCO.
• If the cmi.exit is set to “suspend” then the SCO's current learner
attempt does not end. The SCO's Run-Time Environment data model
element values for the current learner session will be available to
the SCO if the SCO is relaunched in a subsequent learner session.
• If the cmi.exit is set to “normal”, “logout”, “time-out” or “” (empty
character string) then the SCO's learner attempt ends. The SCO's
Run-Time Environment data model element values of the current learner
session will NOT be available if the SCO is relaunched.
ADL Note: If an LMS invokes a Suspend All navigation request, then the value of
cmi.exit will be ignored. In these cases, the SCO's current learner
attempt does not end. The SCO's data model element values shall be
persisted and available to the SCO if the SCO is relaunched.
But if you want to let learners keep multiple scores for the same SCO, that isn't supported by the SCORM standard as far as I know. That doesn't mean an LMS couldn't offer such a feature, though, e.g. creating "pseudo-learners" that map to the same person so multiple attempts are retained. Or, if your LMS has an API or supports webhooks, you could maintain your own separate datastore of scores, so that if the SCO gets wiped and restarted you still keep the historical data. You would need to check with your LMS vendor to see whether either of those options is supported.
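If you do go the roll-your-own-datastore route, the record you mirror out of the SCO can be very simple. A minimal sketch; the endpoint URL and field names here are hypothetical, not anything SCORM defines:

```javascript
// Build a record of one attempt's score to send to your own datastore.
// None of this is SCORM; the field names and endpoint are assumptions.
function buildScoreRecord(learnerId, scoId, rawScore) {
  return {
    learnerId: learnerId,
    scoId: scoId,
    score: Number(rawScore),          // SCORM hands scores back as strings
    recordedAt: new Date().toISOString()
  };
}

// At SCO exit, alongside the normal Commit, you might POST it:
// fetch('https://example.com/api/attempts', {        // hypothetical endpoint
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildScoreRecord(learnerId, scoId, score))
// });
```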

You'll want to look into interactions. They give you a greater ability to journal or update the activities/questions/interactions of your content, and let you set the result, latency and other values per interaction.
In SCORM 1.2 interactions were defined but only optionally supported by LMSs; SCORM 2004 made support mandatory, so it's more likely you'll have it there.
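As a rough illustration, journaling one question through the SCORM 2004 run-time might look like the sketch below. Here "api" stands for the API_1484_11 object you locate in the window hierarchy, and the index should be the next free slot (read from cmi.interactions._count); the helper name is my own.

```javascript
// Sketch: record one interaction via the SCORM 2004 run-time API.
// "api" is assumed to be the API_1484_11 object found in the window
// hierarchy; "n" should come from cmi.interactions._count.
function setInteraction(api, n, id, type, response, result, latency) {
  var p = 'cmi.interactions.' + n + '.';
  api.SetValue(p + 'id', id);
  api.SetValue(p + 'type', type);                 // e.g. 'choice', 'true-false'
  api.SetValue(p + 'learner_response', response);
  api.SetValue(p + 'result', result);             // 'correct' or 'incorrect'
  api.SetValue(p + 'latency', latency);           // ISO 8601 duration, e.g. 'PT4.2S'
}
```

In SCORM 1.2 the element names differ (e.g. cmi.interactions.n.student_response), and 1.2 interactions are write-only, so you can't read them back.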
There is a lot of wiki based info on this located here: https://github.com/cybercussion/SCOBot/wiki/SCORM-SCOBot-Documentation#set-interaction
Just remember the SCOBot Content API is a JavaScript library that interfaces with the LMS Run-Time API. It includes the roll-up from the white paper, which makes it easier to work with all facets of the SCORM specification. It is obviously not a replacement for reading the specification.
I also recognize you may be asking about attempts here. Recording those is up to the LMS; SCORM places no direct requirement or restriction on whether the LMS does so.

Related

In SCORM 2004 (4th ed.) when are Available Children meant to be selected and randomized?

The pseudocode for the Select Children Process [SR.1] and Randomize Children Process [SR.2] heavily suggests these processes are meant to be run multiple times, although for SR.1 no behavior is defined when selection is meant to occur onEachNewAttempt.
Since both the Sequencing Request Process [SB.2.12] and the Navigation Request Process [NB.2.1] expect the Available Children to be selected/randomized and the Content Delivery Environment Process [DB.2] only initializes the new attempt after a traversal over the various Available Children has already happened, it seems like the LMS is meant to run both of these processes during initialization of the activity tree itself before attempting to deliver the first activity or handle any requests.
However, this doesn't explain when SR.2 is meant to be re-run. Since DB.2 creates the new attempt progress information by iterating over the activity path from the root to the specified activity, randomizing each activity's Available Children along the way would change the position of the specified activity within the activity tree after it has already been selected, which seems unintuitive. Furthermore, if one were to implement onEachNewAttempt for SR.1, this could also cause the selected activity to vanish from the available activities (though this would explain why its behavior is undefined in SCORM).
My understanding would be that the Available Children are meant to be initialized to the list of all children followed by SR.1 and SR.2 being applied to all activities starting from the root and that SR.2 is then re-applied in DB.2 for every activity in the path despite this changing the order of activities. Is this correct or am I missing something?
Upon re-reading section 4.7 in SN-4-48 it seems that the answer is that the selection and randomization should indeed happen once at the start of the sequencing session (i.e. on initialization) and then again in the End Attempt Process [UP.4] (although for onEachNewAttempt it actually states "prior to the first attempt", which could also be read as referring to the delivery process, DB.2).
What makes this a bit awkward is that UP.4 is applied in many places including immediately prior to delivery (in DB.2), which still means randomization could occur after an activity has already been selected and that randomization could happen multiple times in between a sequencing request and delivery.
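For what it's worth, whatever the timing answer is, the per-activity randomization step itself reduces to a shuffle of the Available Children. A sketch, using an illustrative object shape rather than the spec's pseudocode:

```javascript
// Sketch of the shuffle at the heart of the Randomize Children Process
// [SR.2]: a Fisher-Yates permutation of one activity's Available Children.
// The { availableChildren: [...] } shape is illustrative, not from the spec.
function randomizeChildren(activity) {
  var kids = activity.availableChildren.slice();
  for (var i = kids.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = kids[i];
    kids[i] = kids[j];
    kids[j] = tmp;
  }
  activity.availableChildren = kids;
  return activity;
}
```

When this is allowed to run (initialization, UP.4, or just before delivery in DB.2) is exactly the ambiguity discussed above.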

Markov Decision Process absolute clarification of what states have the Markov property

I seem to consistently encounter counter-examples in different texts as to what states constitute having the Markov property.
It seems some presentations assume an MDP to be one in which the current state/observation relays absolutely all necessary environmental information to make an optimal decision.
Other presentations state only that the current state/observation have all necessary details from prior observed states to make the optimal decision (eg see: http://www.incompleteideas.net/book/ebook/node32.html).
The difference between these two definitions is vast, since some people state that card games such as poker lack the Markov property: we cannot know the cards our opponent is holding, and this incomplete information thus invalidates it.
The other definition from my understanding seems to suggest that card games with hidden state (such as hidden cards) are in fact Markov, so long as the agent is basing its decisions as if it had access to all of its own prior observations.
So which one does the Markov property refer to? Does it require complete information about the environment to make the optimal decision, or does it accept incomplete information and simply require that the current state/observation summarize everything the agent has observed so far? I.e., in the poker example: as long as the current state gives us all the information we have observed before, even if many variables remain hidden, is the Markov property satisfied?

Need a unique user/course ID variable in SCORM 1.2 package

I'm working on a project that requires a unique "enrollment" id inside a file inside a SCORM package. Something that works like this:
<script src="...?enrollmentid=1234567890"></script>
I have figured out that I should be able to obtain a student_id, but this is too broad an identifier for this use. The id I use must describe a single student/course enrollment uniquely, as a student could enroll in multiple courses, and a course could have multiple students enrolled.
The id could be a composite of other fields, like student_id + course id + enrollment date, but I can't see any way to get those sorts of details from the LMS either.
Is what I'm trying to do possible?
Unfortunately, neither SCORM 1.2 nor 2004 includes things like enrollment date, course ID, or SCO title/structure unless they are pumped in via Launch Data, which comes by way of the imsmanifest.xml at author time. These are things you would need to provide.
cmi.core.student_id is the only unique value you'll get directly from SCORM. The LMS was never given a way to expose the tier IDs or internals it used when it imported the course, and unless it (unreliably) places them in the launch parameters, or you have some way of probing with JavaScript (also unreliable), you'll need to consider other options.
Launch Data (cmi.launch_data) is probably the easiest way to pass values through to the SCO, but it relies heavily on the authoring process of the SCO and its imsmanifest.xml. An LCMS setup, or an authoring tool with this capability, could enable it.
I add this below the <title/> tag in the imsmanifest.xml:
<!-- Launch Data Example uses Querystring format name=value&name=value -->
<adlcp:dataFromLMS><![CDATA[name=value]]></adlcp:dataFromLMS>
When I say unreliable, I mean that unless you can definitively state that you know where this content is running, and that the LMS will never change, you won't be able to obtain the info you want in any dependable way.
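For completeness, reading that launch data back on the SCO side is a small parsing job. A sketch, assuming the name=value&name=value convention from the manifest example above; "api" stands for the SCORM 1.2 adapter object you have already located:

```javascript
// Parse the name=value&name=value payload delivered via cmi.launch_data.
function parseLaunchData(raw) {
  var out = {};
  (raw || '').split('&').forEach(function (pair) {
    if (!pair) return;
    var kv = pair.split('=');
    out[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || '');
  });
  return out;
}

// var data = parseLaunchData(api.LMSGetValue('cmi.launch_data'));
// data.enrollmentid would then hold whatever the manifest carried.
```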

Is there any way to store a pre test score in SCORM 2004

Is there any way to store a pre-test score in SCORM 2004? I have developed a module in ActionScript 2. The pre-test and post-test use the same question set.
The client is now demanding that the pre-test scores be stored along with the post-test scores. Is there any way this can be done? Which value should/can I set for this?
I have spent the last two days trying to find a solution. Is there a way to set a custom variable, or a preset variable name that I missed?
It sounds like you're trying to put the pre-test and the post-test in the same SCO. The cleanest way to report separate pre- and post-test results would be to put each test in its own SCO by editing your package's imsmanifest.xml file (see more on content packaging here: http://scorm.com/scorm-explained/technical-scorm/content-packaging/). You can link back to the same content multiple times in the same manifest and include query string parameters which your content then reads to know what mode it's in (pre-test vs. post-test).
That said, a lot of people avoid using multiple SCOs so they don't have to think about how their LMS or SCORM manages them. Using only a single SCO gives your content a lot of control, but the trade-off is that it looks like one monolithic item to the LMS, so reporting on multiple tests can't be as clean. There is no specific pre-test variable because SCORM is designed on the assumption that pre-tests go in their own SCO, so no such variable is needed.
What you can do in a single SCO is create additional named objectives and interactions. If you just want the score for the pre-test, that's going to look better, but if you're tracking responses to each question you'll wind up with a list of items like "PreTest question 1, PreTest question 2" ... and continuing to "PostTest question 1, PostTest question 2"... the naming scheme is up to you of course, but the constraint is that you're dealing with one list of objectives and interactions and can only differentiate them by name.
An example course using objectives and interactions: http://scorm.com/scorm-explained/technical-scorm/golf-examples/#advancedruntime
Some tips on what tests should report (and how): http://scorm.com/blog/2010/11/4-things-every-scorm-test-should-do-when-reporting-interactions/
Sorry for not being more step-by-step, but as you can see you have a couple of options, each of which involve a little more detail than I can really put in one answer.
While you didn't describe the structure of your course, I'll respond based on the possibilities. I am assuming you have a pre-test, content, and a post-test:
You have one large SCO, which contains the pre-test, content, and post-test:
If you need the info only for your course to use and display within the course:
You can save the pre-test and post-test scores in cmi.suspend_data. Most people store this information as name/value pairs, up to 64K chars. With the SetValue command you can:
rtn = your_api.SetValue('cmi.suspend_data', 'pretest=69,' + oldSuspendData);
Again, you would only use this if your course needs to display this information within the course and act on the pre-test results. Obviously, you should code defensively to make sure you get clean data back and handle odd conditions, like no data. If, however, the client wants the data in the LMS and visible to LMS admins, you'll need the option below.
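To keep those pairs clean on a rewrite (rather than blindly prepending every time), a small read-modify-write helper is worth the few lines. A sketch; the comma-separated name=value convention is the informal one described above, not anything SCORM mandates:

```javascript
// Update one name=value pair inside suspend_data without clobbering the rest.
function upsertPair(suspendData, name, value) {
  var pairs = suspendData ? suspendData.split(',') : [];
  var kept = pairs.filter(function (p) {
    return p.split('=')[0] !== name;
  });
  kept.push(name + '=' + value);
  return kept.join(',');
}

// var sd = your_api.GetValue('cmi.suspend_data');
// your_api.SetValue('cmi.suspend_data', upsertPair(sd, 'pretest', '69'));
```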
If you need the LMS admins to have access to the pretest/post test scores:
you'll really need to separate each SCO (pre-test, content, post-test), but you won't be able to communicate those scores between SCOs through SCORM; i.e. the post-test won't know what the pre-test score is. You can look at nice examples of how to separate your content into SCOs at the URL below. You can easily share the pre- and post-test HTML/SWF and either pass a querystring to the HTML or use launch data (adlcp:dataFromLMS) in the manifest to tell your code which test it is. In my experience, some LMSs will not pass the querystring, so you should use both.
Simple MultiSCO: http://scorm.com/wp-content/assets/golf_examples/PIFS/ContentPackagingOneFilePerSCO_SCORM20043rdEdition.zip
If you need to know the pretest score AND have the info sent to the LMS like a SCO
SCORM 2004 offers no way to get around this issue. I would first explain the complications to the client. If they still need this hybrid solution, I would use Ajax to securely communicate the learner ID, course ID (if any), SCO ID (pre-test/content/post-test) and score to a server where they can be retrieved. Cookies are a no-no because they assume you'll be on the same machine between SCOs. Additionally, if xAPI is a possibility, you could do this much more easily.

StatsD/Graphite Naming Conventions for Metrics

I'm beginning the process of instrumenting a web application, and using StatsD to gather as many relevant metrics as possible. For instance, here are a few examples of the high-level metric names I'm currently using:
http.responseTime
http.status.4xx
http.status.5xx
view.renderTime
oauth.begin.facebook
oauth.complete.facebook
oauth.time.facebook
users.active
...and there are many, many more. What I'm grappling with right now is establishing a consistent hierarchy and set of naming conventions for the various metrics, so that the current ones make sense and that there are logical buckets within which to add future metrics.
My question is two fold:
What relevant metrics are you gathering that you have found indispensable?
What naming structure are you using to categorize metrics?
This is a question that has no definitive answer but here's how we do it at Datadog (we are a hosted monitoring service so we tend to obsess over these things).
1. Which metrics are indispensable? It depends on the beholder, but at a high level: for each team, any metric that is as close to their goals as possible (which may not be the easiest to gather).
System metrics (e.g. system load, memory, etc.) are trivial to gather but seldom actionable, because it is too hard to reliably connect them to a probable cause.
On the other hand number of completed product tours matter to anyone tasked with making sure new users are happy from the first minute they use the product. StatsD makes this kind of stuff trivially easy to collect.
We have also found that the core set of key metrics for any team changes as the product evolves, so there is a continuous editorial process.
Which in turn means that anyone in the company needs to be able to pick and choose which metrics matter to them. No permissions asked, no friction to get to the data.
2. Naming structure The highest level of hierarchy is the product line or the process. Our web frontend is internally called dogweb, so all the metrics from that component are prefixed with dogweb. The next level of hierarchy is the sub-component, e.g. dogweb.db, dogweb.http, etc.
The last level of hierarchy is the thing being measured (e.g. renderTime or responseTime).
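One practical trick is to build names through a single helper, so the product.subcomponent.measurement hierarchy can't drift across the codebase. A sketch; the validation rule is our own convention, not part of StatsD or Graphite:

```javascript
// Enforce the product.subcomponent.measurement naming hierarchy in one place.
function metricName(product, subcomponent, measurement) {
  var parts = [product, subcomponent, measurement];
  parts.forEach(function (p) {
    // Dots are the hierarchy separator in Graphite, so forbid them
    // (and anything else odd) inside a single segment.
    if (!/^[A-Za-z0-9_]+$/.test(p)) {
      throw new Error('invalid metric segment: ' + p);
    }
  });
  return parts.join('.');
}

// metricName('dogweb', 'http', 'responseTime') -> 'dogweb.http.responseTime'
```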
The unresolved issue in Graphite is the encoding of metric metadata in the metric name (and selection using *, e.g. dogweb.http.browser.*.renderTime). It's clever but can get in the way.
We ended up implementing explicit metadata in our data model, but this is not in statsd/graphite so I will leave the details out. If you want to know more, contact me directly.
