Suppose I have two SCORM 2004 modules - instruction.zip and test.zip. The first contains instructional web pages and the second contains an interactive quiz. Each package was authored separately. I want to combine them to create a single course of study in which students read the web pages and then test their knowledge. (I will leave to one side the significant issues of sequencing and navigation.)
What is the recommended way of combining the two? I have tried (i) merging the two (did not work due to differences in file structure and dependencies) and (ii) adding test.zip to instruction.zip as a complete package and adding links (issues with reporting of test results).
I realise that most people author their courses using Captivate or other software to produce a single integrated package. For reasons that need not be discussed here, that is not an option in my case: the test assets will be developed separately and need to be combined with the instructional assets.
Grant,
I've got a packager on my site https://cybercussion.com which may be able to help you. If there are any advanced features you're using, though, I haven't built out support for those yet. There is a 30-day trial for it.
You'd just need to expand the content into something like:
Multi-SCO/
SCO Title 1/ [all SCO 1 files]
SCO Title 2/ [all SCO 2 files]
You can also do this by hand by merging the imsmanifest organization markup together, which is an option if you're comfortable with XML. You'll just need to manage the organization and resource elements. You may also have DTDs/XSDs as part of both packages.
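As a rough sketch of that merge (identifiers, titles, and paths are placeholders, and the manifest header and SCORM 2004 namespaces are omitted for brevity), the combined organization and resource markup might look like:

<organizations default="ORG-COURSE">
  <organization identifier="ORG-COURSE">
    <title>Combined Course</title>
    <item identifier="ITEM-INSTRUCTION" identifierref="RES-INSTRUCTION">
      <title>Instruction</title>
    </item>
    <item identifier="ITEM-TEST" identifierref="RES-TEST">
      <title>Test</title>
    </item>
  </organization>
</organizations>
<resources>
  <!-- each resource's href points into its own SCO subfolder -->
  <resource identifier="RES-INSTRUCTION" type="webcontent" adlcp:scormType="sco" href="SCO Title 1/index.html">
    <file href="SCO Title 1/index.html"/>
  </resource>
  <resource identifier="RES-TEST" type="webcontent" adlcp:scormType="sco" href="SCO Title 2/index.html">
    <file href="SCO Title 2/index.html"/>
  </resource>
</resources>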
Manually zipping this yourself could result in an error when importing on the LMS. Some platforms expect the imsmanifest.xml to be in the root of the zip, and if it's inside a folder the import could fail, so watch out for that.
We have some great SCORM 2004 samples on our site that may serve as a guide for you as far as sequencing and navigation goes. Check out the golf samples here
If you have any questions, please let us know!
Joe Donnelly
support@scorm.com
I would like to use R objects (e.g., cleaned data) generated in one git-versioned R project in another git-versioned R project.
Specifically, I have multiple git-versioned R projects (that hold drake plans) that do various things for my thesis experiments (e.g., generate materials, import and clean data, generate reports/articles).
The experiment-specific projects should ideally be:
Connectable - so that I can get objects (mainly data and materials) that I generated in these projects into another git-versioned R project that generates my thesis report.
Self-contained - so that I can use them in other non-thesis projects (such as presentations, reports, and journal manuscripts). When sharing such projects, I'd ideally like not to need to share a monolithic thesis project.
Versioned - so that their use in different projects can be independent (e.g., if I make changes to the data cleaning for a manuscript after submitting the thesis, I still want the thesis to be reproducible as it was originally compiled).
At the moment I can see three ways of doing this:
Re-create the data cleaning process
But: this involves copy/paste, which I'd like to avoid, especially if things change upstream.
Access the relevant scripts/functions by changing the working directory
But: even if I used the here package, it seems that this would still hurt reproducibility.
Make the source projects into packages and make the objects I want to "export" into exported data (as per the data section of Hadley's R packages guide)
But: I'd like to avoid the unnecessary metadata, artefacts, and noise (e.g., see Miles McBain's "Project as an R package: An okay idea") if I can.
Is there any other way of doing this?
Edit: I tried @landau's suggestion of using a single drake plan, which worked well for a while, until (similar to @vrognas' case) I ended up with too many sub-projects (e.g., conference presentations and manuscripts) that relied on the same objects. Therefore, I added some clarifications above about my intentions with the question.
My first recommendation is to use a single drake plan to unite the stages of the overall project that need to share data. drake is designed to handle a lot of moving parts this way, and it will be more seamless when it comes to drake's decisions about what to rerun downstream. But if you really do need different plans in different sub-projects that share data, you can track each shared dataset as a file_out() file in one plan and track it with file_in() in another plan.
library(drake)
library(readr)

# Upstream plan: writes the shared dataset and registers it with file_out()
upstream_plan <- drake_plan(
  export_file = write_csv(dataset, file_out("exported_data/dataset.csv"))
)

# Downstream plan: file_in() lets drake detect changes and rerun dependents
downstream_plan <- drake_plan(
  dataset = read_csv(file_in("../upstream_project/exported_data/dataset.csv"))
)
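Assuming the upstream project is built first with make(upstream_plan), running make(downstream_plan) in the downstream project will notice changes to the shared CSV and rerun the affected targets.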
You fundamentally misunderstood Miles McBain’s critique. He isn’t saying that you shouldn’t write reusable code nor that you shouldn’t use packages. He’s saying that you shouldn’t use packages for everything. But reusable code (i.e. code that you want to reuse) absolutely belongs in packages (or, better, modules), which can then be used in multiple projects.
That being said, first off, pay attention to Will Landau’s advice.
Secondly, you can make your RStudio projects configurable such that they can load data based on paths given in a configuration. Once that’s accomplished, nothing speaks against hard-coding paths to data in different projects inside that config file.
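As an illustrative sketch (the config package and the upstream_data field name are assumptions for illustration, not part of the original setup):

# config.yml in the project root (hypothetical field name):
# default:
#   upstream_data: "../upstream_project/exported_data/dataset.csv"

library(readr)

# Resolve the data location from the config file instead of hard-coding it
data_path <- config::get("upstream_data")
dataset <- read_csv(data_path)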
I am in a similar situation. I have many projects that are spawned from one raw dataset. Previously, when the project was young and small, I had it all in one version-controlled project. This got out of hand as more sub-projects were spawned, and my git history got cluttered from working on projects in parallel. This could be due to my lack of skill with git. My folder structure looked something like this:
project/.git
project/main/
project/sub-project_1/
project/sub-project_2/
project/sub-project_n/
I contemplated having each project in its own git branch, but then I could not access them simultaneously. If I had to change something in the main dataset (e.g., I might not have cleaned some parts), then project 1 could become outdated and nonfunctional. Once I had finished project 1, I would have liked it to be isolated and contained for reproducibility. This is easier to achieve if the projects are separated. I don't think a drake/targets plan would solve this?
I also looked briefly into having the projects as git submodules but it seemed to add too much complexity. Again, my git ignorance might shine through here.
My current solution is to have the main data as an R package, and each sub-project as a separate git-versioned folder (they are actually packages as well, but this is not necessary). This way I can load in a specific version of the data (using renv for package versions).
My folder structure now looks something like this:
main/.git
sub-project_1/.git
sub-project_2/.git
sub-project_n/.git
And inside each sub-project, I call library(main) to load the cleaned data. Within each sub-project, a drake/targets plan could be used.
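A minimal sketch of that workflow, assuming a hypothetical main package that exports a cleaned_data object and is tagged in git (the URL, package name, and tag below are placeholders):

# Install a specific tagged version of the data package
remotes::install_git("https://example.com/me/main.git", ref = "v1.0.0")

library(main)
data(cleaned_data)  # load the exported dataset shipped with the package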
I have a couple of training courses that I sometimes provide to clients. They are both a few text pages + 20-25 videos + a link out to take an exam in my LMS.
My preference has always been to provide embed links to the videos, as it allows us to easily push out updates. The client then embeds those in their own LMS / training package (however they want). But two clients are requesting that the courses be delivered as a SCORM package for their LMS.
I'm familiar with authoring tools like Captivate and Articulate Storyline. I'm not a huge fan, as they feel like canned PowerPoints. I'm also not sure that's what the client wants.
Two questions:
(1) My understanding is that I can package a SCORM file manually. How does that content present itself when put into an LMS? Would it present slide-by-slide in a single panel (similar to how I see Storyline work), or is it distributed based on how the LMS is set up?
(2) Would doing it manually be advantageous in any way?
A couple of things to watch out for:
Simply zipping your content folder with an imsmanifest.xml may not put the files in the root of the zip as the LMS may expect. Instead you end up with folder/imsmanifest.xml instead of /imsmanifest.xml. Something to watch out for, though some LMSs do not care.
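To illustrate the difference (zipping the folder itself vs. zipping the folder's contents):

package.zip -> content/imsmanifest.xml   (manifest nested in a folder; some LMSs reject this)
package.zip -> imsmanifest.xml           (manifest at the root, as commonly expected)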
You can manually edit the imsmanifest.xml, but I'd take care to validate it. Performing this task in an editor that does not provide feedback can lead to validation issues.
Content packaging is part of it, but the LMS may have its own set of additional options, like how to launch the content (popup, new window/tab, or in an IFRAME). It may also have scoring/attempt settings and other options beyond what the SCORM CAM provides.
Aside from familiarizing yourself with the IMS manifest format and the above cautionary items, you can do this without too many hiccups.
There is a packager on https://cybercussion.com if you'd like to demo that after you have prepared your Shareable Content Object.
GL
I am working on a learning project for mobile devices that requires (or would at least benefit from) the ability to export to a SCORM-compatible format. I see that SCORM has a "Package Interchange Format" (PIF) based around a .zip file. I am new to SCORM and am trying to understand exactly what this file must contain. Specifically, is the PIF file just a format for generating interchangeable data between systems, or is it more complicated than that?
For some context, imagine the use case of a set of questions/sections that a user has to run through on a native mobile app, and at the end, we want to offer the ability for the user to "export" their data in a SCORM-compliant fashion. Is this simply a matter of exporting information about a) the questions and b) the answers into some .xml format, or is there more to it? I notice a lot of the documentation around SCORM seems to focus on JavaScript and HTML. Is SCORM HTML-specific, or are native apps reconcilable with SCORM, at least from the export perspective?
Apologies if any of this is basic stuff. Just trying to wrap my head around the standard and how it does or does not apply to what I'm doing.
The PIF is really a very small detail of SCORM's packaging. It only says that you can distribute your content in zip format, but not what that archive should contain.
What a SCORM (1.2) file should contain is described in much, much detail in the SCORM CAM book. To summarize very quickly, you need:
All the files necessary for the content to run (images, HTML files, JavaScript files, CSS, etc.)
A file called imsmanifest.xml that describes a few things about your content, the files it contains, and possibly how they interact with the LMS they run on. It can vary from very simple to very complicated.
Optionally, metadata in XML format
So, SCORM does not care if and where you include your questions and answers. It doesn't know about them. This is your content's responsibility, and it should be able to include them and present them to the user when run. What SCORM can do is make your content communicate with the LMS you're running it on, so that the results of these questions are persisted.
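As a rough sketch of that communication (SCORM 1.2 style; the frame walk is the conventional API lookup, and the score value is just a placeholder):

// Walk up the frame hierarchy looking for the LMS-provided API object
function findAPI(win) {
  while (!win.API && win.parent && win.parent !== win) {
    win = win.parent;
  }
  return win.API || (win.opener ? win.opener.API : null);
}

var api = findAPI(window);
if (api) {
  api.LMSInitialize("");
  api.LMSSetValue("cmi.core.score.raw", "85"); // report a quiz score
  api.LMSCommit("");
  api.LMSFinish("");
}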
For now, I'd suggest that you have a look at some existing SCORM files to get an idea of what the imsmanifest.xml file should look like, and then study the SCORM CAM book and things will get rolling.
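For a rough idea of the shape (identifiers, titles, and file names are placeholders), a heavily trimmed SCORM 1.2 manifest looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="com.example.course" version="1.0"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <organizations default="ORG-1">
    <organization identifier="ORG-1">
      <title>Example Course</title>
      <item identifier="ITEM-1" identifierref="RES-1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="RES-1" type="webcontent" adlcp:scormtype="sco" href="index.html">
      <file href="index.html"/>
    </resource>
  </resources>
</manifest>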
The trouble with SCORM is that it has to be launched from within an LMS. If you're building an external app that has to communicate with an LMS, take a look at either LTI (http://www.imsglobal.org/toolsinteroperability2.cfm) or the Tin Can API (http://tincanapi.com/).
SCORM 2004 sample https://github.com/cybercussion/SCOBot/
You zip the contents of the directory. Some LMSs expect the imsmanifest.xml to be located in the root of the zip.
Some people are using native apps in an LMS format and loading the SCOs into an HTML view, but as stated above, SCORM expects JavaScript-to-JavaScript communication.
Suppose you have a content type that you still use, unchanged, after the 5 or 6 websites you've built at your company.
I'm starting to think that copying this content type to every website you create is not the "optimal" solution... but at the same time, I think creating a full-blown package (egg) for only one content type is overkill. (It would contain a workflow, a content-type definition, and a custom view.) And with this approach, a Plone site would have a lot of packages to install (too many dependencies is bad, I think).
My question is: for those of you who create a lot of websites for a lot of companies and see recurring patterns across content types, do you create a package for a single content type, or a single product with a bunch of the ones you use most?
(I was thinking about creating company.archetypes.mycustomtypethatiusealot)
I create a package for any content I need. If a second site uses only some of that content, I'll refactor it into separate packages. If you make a superpackage that includes all the dependencies, then you don't need to install many separate packages.
So, for instance, for one customer I have three sites. Two share a theme - that's a package. The third has the theme, a couple of content types (they're another package), and a security package. The content-type package depends on the theme and security packages, and the security package depends on two or three 3rd-party packages. I simply install the content-type package and everything else gets pulled in automatically.
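As a sketch of how that dependency pulling can work (the package names here are hypothetical), the superpackage's setup.py simply declares the others in install_requires:

from setuptools import setup, find_packages

setup(
    name="customer.contenttypes",  # hypothetical superpackage
    version="1.0",
    packages=find_packages(),
    install_requires=[
        "customer.theme",     # shared theme package
        "customer.security",  # security/workflow package
    ],
)

Installing customer.contenttypes then pulls in the theme and security packages automatically.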
Not packaging your content types seems so 20th century, and also so much work.
At the company I work for, we have an intranet that provides employees with access to a wide variety of documents. These documents fall into several categories and subcategories, and each of these categories have their own web page. Below is one such page (each of the links shown will link to a similar view for that category):
http://img16.imageshack.us/img16/9800/dmss.jpg
We currently store each document as a file on the web server and hand-code links to these documents whenever we need to add a new document. This is tedious and error-prone, and it also means we lack any sort of security for accessing these documents. I began looking into document management systems (like KnowledgeTree and OpenKM); however, none of these systems seems to provide a categorized view like the one in the preview above.
My question is ... does anyone know of any document management system that allows for the type of flexibility we currently have with hand-coding links to our documents into various webpages (major and minor), while also providing security, ease of use, and (less importantly) version control? Or do you think I'd be better off developing such a system from scratch?
If you are trying to categorize the files or folders in the document management system, that's not a difficult task. You only need access to the admin panel to maintain or categorize the folders.
In Laserfiche, you can easily categorize your folders by department and also subcategorize them.
You should look into Alfresco. It's extremely extensible and provides a lot of ways of accessing the repository.
Note: click the "Developers" tab for the community edition.
"My question is ... does anyone know of any document management system that allows for the type of flexibility we currently have with hand-coding links to our documents into various webpages (major and minor), while also providing security, ease of use, and (less importantly) version control? Or do you think I'd be better off developing such a system from scratch?"
Well, there are companies that make a living selling document management software. Anything you can get off the shelf is going to be a huge time saver, and it's going to be better than anything you could reasonably develop by hand.
I've used a few systems:
SharePoint: although I hear some people don't like it, I didn't either ;)
HyperOffice worked really well for my company of around 150 employees and has all the features you describe.
My current company uses Confluence, and I like it :) But it's probably one of those tools whose price tag isn't worth it, especially if you're only using a subset of its features, like document management.
I haven't used it, but one guy I know raves about Alfresco, a free and open-source document management system. I looked at its website; it seems simple enough to use.
We also faced a similar problem. However, version control was higher among our priorities, and we looked into many solutions. We found Globodox extremely easy to install and use, and, more importantly, the support team was absolutely fantastic.
Try Mayan EDMS. It's Django-based and open source; use it as a base and build the custom features you want on top of it.
Code location: https://gitlab.com/mayan-edms/mayan-edms
Homepage at: http://www.mayan-edms.com
The project is also available via PyPI at: https://pypi.python.org/pypi/mayan-edms/