What are the main technical differences between CMIS and WebDAV?
If applicable, what exactly does CMIS improve over WebDAV?
I am not asking about adoption rates or number of implementations, just about the technical differences between each of those standards.
There's no simple answer.
There are things in WebDAV that aren't available in CMIS (locking, redirects, advanced versioning, namespace operations like MOVE). There are certainly other things only available in CMIS. Both could be extended to become a better match (and therefore it may have been a bad idea to start from scratch instead of just adding to/profiling WebDAV).
The main real difference is that the collection model in WebDAV is more specific (in that it exposes a real hierarchy with MOVE/COPY operations), while in CMIS (as per AtomPub) the client has less control. Depending on the requirements, that can be an advantage or a disadvantage.
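To make the hierarchy point concrete, here is a minimal sketch of a WebDAV rename/move (RFC 4918), assuming a hypothetical server at https://example.com/dav/ and that authentication is already handled; the URLs are illustrative only, and a browser would additionally be subject to CORS rules:

```javascript
// Move (rename) a resource on a WebDAV server via the MOVE namespace
// operation. Runs in Node 18+ or any environment with fetch.
async function webdavMove(sourceUrl, destinationUrl) {
  const response = await fetch(sourceUrl, {
    method: 'MOVE',                 // WebDAV namespace operation
    headers: {
      Destination: destinationUrl,  // required by the spec
      Overwrite: 'F',               // fail instead of replacing an existing target
    },
  });
  if (!response.ok) {
    throw new Error(`MOVE failed: ${response.status}`);
  }
}

// Example: rename report-draft.docx to report-final.docx
webdavMove(
  'https://example.com/dav/report-draft.docx',
  'https://example.com/dav/report-final.docx'
).catch(console.error);
```

CMIS (as per AtomPub) has no direct equivalent of this client-driven namespace operation, which is exactly the trade-off described above.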
WebDAV is an older standard and is built directly on the HTTP specification: HTTP was extended from a read-only protocol to a read-write one, so that clients could transfer files back and forth. WebDAV itself is fairly rudimentary and only lets authors manage content in a file-browse mode. The first WebDAV spec that came out did not include versioning capabilities; complete versioning was only spec'ed out later in the "DeltaV" extension (RFC 3253). While WebDAV is extraordinarily prevalent (Microsoft desktops, some Adobe products, etc.), most vendors have only implemented the earlier WebDAV spec, not DeltaV.
CMIS, on the other hand, is a much more complete and rich specification. CMIS is essentially a web-service-based common API. It includes support for extensible metadata, searching, advanced permissions, versioning capabilities, etc., and really advances the notion of a common plumbing layer for an organization's various repositories. It is a common-denominator API among the various ECM vendors such as Microsoft, IBM, OpenText, EMC, and so on.
Volumes could be written on CMIS at this point, but those are some big differences. One note: as of this writing, CMIS is still not a 1.0 spec (almost there), whereas WebDAV has been around for over a decade. There are likely to be considerable changes as CMIS evolves.
I want to integrate eLearning into an existing system that I already have. I have been reading a lot about two standards, SCORM and xAPI, but everything I have read covers theoretical differences, pros, and cons of each standard. What I want are the technical differences, from a developer's perspective, of implementing those standards in the system. I just want the headlines of the implementation process for each standard; any references or documentation about that would also be very helpful.
Also, would it be doable, or logical, to integrate one standard in the system and later on integrate the other one? For example, SCORM first, then later, if needed, xAPI?
Let me start with the integration part. Yes, you can do SCORM and later integrate xAPI, though that might require retooling the SCORM course or the LMS to do the xAPI part. This is done in practice; a lot of what I do is integrate existing SCORM ecosystems with xAPI and LRSs.
As for differences in SCORM and xAPI, here's some high-level info.
SCORM is a set of specifications that defines the way to package content, the way to have your content report data, and the way an LMS launches and manages SCORM content and data. xAPI is a specification that defines a REST style API and JSON data format to track interactions/activity that happened in content.
As Andrew said, SCORM content finds an embedded API object in the browser DOM and uses it to communicate very structured, specific data to an LMS. xAPI uses a REST HTTP API to communicate various data in a well-defined format.
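As a rough sketch of what "finding the embedded API object" looks like in practice (the traversal limit and error handling here are simplified for illustration):

```javascript
// Walk up the window hierarchy looking for the LMS-provided API object:
// window.API for SCORM 1.2, window.API_1484_11 for SCORM 2004.
function findSCORMAPI(win) {
  let attempts = 0;
  while (win && attempts < 10) {                  // guard against deep nesting
    if (win.API_1484_11) return win.API_1484_11;  // SCORM 2004
    if (win.API) return win.API;                  // SCORM 1.2
    if (win.parent === win) break;                // reached the top window
    win = win.parent;
    attempts++;
  }
  return null;                                    // no LMS API found
}

// Content launched in a popup also checks the opener chain.
const api = findSCORMAPI(window) || (window.opener && findSCORMAPI(window.opener));
```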
Without some creative programming, SCORM content is typically delivered via a browser from a Learning Management System to the client, usually in the form of HTML, images, and videos. xAPI content can be those things as well, but since the API is HTTP/REST-like, supporting unmanaged content - simulators, phone apps, games - is a little easier.
SCORM's data model is very clearly defined in the specification and not normally extended. xAPI's data format is defined, but the actual data is much looser and open to the needs of the developer.
SCORM has been around since about the year 2000 in various versions. It is well supported in LMSs and content development tools. And many in the eLearning space know it. xAPI is newer. Support is growing as well as the number of folks who understand it, but it is still less supported than SCORM.
One final thing of note: the SCORM specs never defined a way to get data out of the LMS once the SCO attempt has ended. This made reporting and metrics difficult to do without getting the LMS vendor to build those features in. xAPI defines a GET endpoint to retrieve data (this might not be performant when you have hundreds of thousands to millions of data points, but you can get the data back out, caveats about permissions aside). Some LRS vendors also add reporting and analytics platforms, as well as ways to get your data into BI or data-analysis tools.
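As an illustration of that retrieval endpoint, here is a minimal sketch of querying an LRS for statements about one learner; the endpoint URL and credentials are placeholders:

```javascript
// Query an LRS for statements about a specific learner (xAPI GET /statements).
async function getStatements(lrsEndpoint, mbox) {
  const params = new URLSearchParams({
    agent: JSON.stringify({ mbox }),  // e.g. 'mailto:learner@example.com'
    limit: '50',
  });
  const response = await fetch(`${lrsEndpoint}/statements?${params}`, {
    headers: {
      'X-Experience-API-Version': '1.0.3',           // required by the spec
      Authorization: 'Basic ' + btoa('user:secret'), // placeholder credentials
    },
  });
  const result = await response.json();
  return result.statements;  // follow result.more for the next page
}
```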
There's more you'll find as you get into the space, but those are some of the things off the top of my head.
I would recommend you read the xAPI spec first, mainly because it is more easily consumed. Then look at SCORM - it has different versions (1.2, and 2004 2nd through 4th editions).
As for implementing content:
SCORM: Figure out the version to build to; create the SCOs (content that reports data to the LMS); have that content find the API embedded in the HTML DOM; use the defined methods (Initialize, Terminate, SetValue, GetValue) to communicate with the LMS (see the runtime sketch after this list); then package it all up in a zip with an XML manifest and supporting XML schemas, and deploy to the LMS.
xAPI: Create your content; preferably support an xAPI launch mechanism like TinCan Launch; make REST calls to the LRS's xAPI endpoint using something like Fetch, Requests, etc. (see the statement sketch after this list); and host/package/deploy as you determine.
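To make the SCORM side concrete, here is a bare-bones sketch of that runtime conversation for SCORM 2004, assuming the API object has already been located as shown earlier (error checking via GetLastError is omitted for brevity):

```javascript
// Minimal SCORM 2004 runtime conversation with the LMS-provided API object.
const api = window.API_1484_11;  // located via the discovery walk shown earlier

api.Initialize('');                                  // start the attempt
const learner = api.GetValue('cmi.learner_name');    // read LMS-managed data
api.SetValue('cmi.completion_status', 'completed');  // report progress
api.SetValue('cmi.score.scaled', '0.9');             // scaled score in [-1, 1]
api.Commit('');                                      // ask the LMS to persist
api.Terminate('');                                   // end the attempt
```

And a corresponding sketch of the xAPI side: posting a single statement to an LRS with Fetch. The endpoint, credentials, and activity IRI are placeholders:

```javascript
// Send one xAPI statement to an LRS (POST /statements).
async function sendStatement(lrsEndpoint) {
  const statement = {
    actor: { mbox: 'mailto:learner@example.com', name: 'A. Learner' },
    verb: {
      id: 'http://adlnet.gov/expapi/verbs/completed',
      display: { 'en-US': 'completed' },
    },
    object: {
      id: 'https://example.com/activities/safety-101',  // placeholder activity IRI
      definition: { name: { 'en-US': 'Safety 101' } },
    },
  };
  const response = await fetch(`${lrsEndpoint}/statements`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Experience-API-Version': '1.0.3',            // required on every call
      Authorization: 'Basic ' + btoa('user:secret'),  // placeholder credentials
    },
    body: JSON.stringify(statement),
  });
  return response.json();  // the LRS responds with the statement id(s)
}
```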
Reading the specs is the driest but most authoritative way to learn about them. There are also some very good articles and videos out there by various vendors and implementers.
The main technical difference is that SCORM uses a JavaScript API to communicate between a course, in a window or frame wrapped by the LMS, and the LMS itself. xAPI uses a RESTful API to communicate with an LRS over HTTP.
Could you please point me to any useful online resources to learn and implement some scenarios to explore this further? Thanks.
DataPower appliances have historically been, in order:
XML transformation acceleration devices (that used to be a thing; XSLT was too slow to process on general-purpose servers)
SSL offloading devices (again, that used to be a thing, for the same reason)
Web site and application gateways: both web site and web service security, concentrated around HTTP and SOAP/XML application-layer mechanisms and standards (SSL/TLS, WS-Security, SAML, etc.), but also token management, security conversion... think "super SSO" plus an application security gateway
More specialized integration tools: transformation of XML (with XSLT), transformation to/from non-XML formats (like CSV), database connections, integration patterns (like routing, composing, and a LOT more). Some called the DataPower a lightweight ESB.
More specialized uses: B2B (EDI), JSON processing, REST/JSON support, API management (when used as a deployment point for API Connect)
Notice that each later feature set needs the former ones (the ESB features build on web service security, etc.).
As you may know, most DataPower development is done with transformations. The default, established language for them is XSLT (XQuery is also an option, historic and less popular).
XSLT is both one of the most powerful and one of the most horrible languages to work with. Kind of like the Perl+regex of the XML world...
...but there is another problem with XSLT: it was not designed to work with JSON, which had the DataPower of 10 years ago heading for a fast retirement.
At first, IBM designed pseudo-XML ways of dealing with JSON. You could convert inbound JSON to XML and work with the JSON as XML in XSLT. The inverse operation was to use XSLT to generate JSON... it worked perfectly but kind of looked like old-school HTML/PHP merging code.
So IBM came up with a good idea: GatewayScript.
(Mostly based on many other good ideas)
GatewayScript is basically ECMAScript 2015 (ES6) + CommonJS 1.0 + Many super popular JS crypto libraries.
ECMAScript is obviously more known as JavaScript.
Pertaining to your question, the main advantage of GatewayScript is to enable easier JSON web-services development of all the features in the list above, for modern REST/JSON APIs, instead of older (but still good) SOAP/XML web services.
GatewayScript has now been around for years and is no longer a "beta" option.
Here are some other neat GatewayScript features:
Access to a DOM model, representing the incoming and outgoing versions of the document, in simple JS notation.
Better errors in the logs when something does not work (you get the .js line number, unlike with the XSLT errors)
Better debugging options (you can enable a line-by-line debugger)
Some examples from the web written in Node.js and other JS frameworks can work... which is amazing
A very useful IBM site (DataPower Playground) where you can learn and test GatewayScript examples without your own DataPower, à la w3schools
And more.
I hope this helps.
GhislainCote's answer is very complete, but basically GatewayScript is Node.js with an added framework for handling the session object, which will contain your data/payload.
There are also some special objects, e.g. service-metadata and header-metadata that will contain DataPower variables and headers.
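A small sketch of what working with those objects looks like; a hedged illustration, not production code (the header name and payload field are made up):

```javascript
// GatewayScript action: read the inbound JSON payload, tag it,
// set a header, and write the result to the output context.
var headerMetadata = require('header-metadata');

session.input.readAsJSON(function (error, json) {
  if (error) {
    // Abort the transaction with an error visible in the DataPower logs
    session.reject('Could not parse input as JSON: ' + error);
    return;
  }
  json.processedBy = 'gatewayscript-demo';              // made-up field for illustration
  headerMetadata.current.set('X-Demo-Header', 'true');  // made-up header
  session.output.write(json);                           // becomes the output payload
});
```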
Sample scripts are available in the store:///gatewayscript/ directory; see store:///healthcheck.js, for example.
Also review the IBM Knowledge Center; it contains a lot of help and information about GatewayScript:
https://www.ibm.com/support/knowledgecenter/SS9H2Y_7.7.0/com.ibm.dp.doc/gatewayscript_model.html
GatewayScript is very powerful. I've coded support for AS2 enveloping/de-enveloping (for customers without the B2B module option) and RosettaNet handling in GatewayScript, so there is pretty much no limit to what you can achieve!
I am trying to figure out the exact difference between a document management system and an archives management system. For example, what is the difference between Alfresco and ArchivesSpace (http://www.archivesspace.org/)?
Can Alfresco function as an archives management tool? What is the difference between the two? I read there is a records management module in Alfresco; is this what is meant by archives management?
Can Alfresco be used as an Archives Management System? Yes, of course. One real world example of this is the New York Philharmonic. They digitized their musical scores and associated artifacts going back to 1842 and then made them available online for researchers. Here is a video about it.
At its heart, Alfresco is a repository that allows you to capture any type of file, secure those files, route those files through workflows, search across the files, and associate metadata with each file. What I've just described are what most people would consider the basic set of functionality present in any worthwhile document management system.
Now, what makes that specific to archival purposes? I'm not an archivist. That's a highly-specialized field. One thing that is missing from my list of functionality above is "capture" or how the artifacts you are archiving will get into the system. This depends on exactly what it is you are archiving. One might use document scanners or high-end photography equipment, for example. None of that is addressed by Alfresco. You'll have to use third-party hardware and software and then integrate it, although many integrations exist between Alfresco and third-party capture vendors.
So I would say, yes, Alfresco can be used for archives management. But perhaps more importantly than considering whether or not a piece of software can be given a label, you should be thinking about how your users will use the software and what it is they need to get done. Then focus on how each of the packages you are evaluating can be used to achieve those goals to try to figure out whether or not each package will be a fit.
The difference is that ArchivesSpace is an 'archives information management system', whereas Alfresco is a full 'content management system', which means that it can manage any type of content.
What ArchivesSpace is:
ArchivesSpace Version 1.0 was completed in August 2013. It includes basic functionality for accessioning, processing, description, digital object description, and authority control workflows for archival material, as well as for searching descriptions and exporting metadata objects such as EAD, MARCXML, MODS, Dublin Core, METS, and CSV.
http://www.archivesspace.org/developmentplan
As for Alfresco:
The Alfresco One platform allows organizations to fully manage any type of content from simple office documents to scanned images, photographs, engineering drawings and even large video files.
http://www.alfresco.com/products/one/aws
What the difference ultimately comes down to is not what each can store but what functionality you get in addition. ArchivesSpace seems to be a fairly simple document storage system that stores documents in collections with associated metadata. Alfresco also offers workflows, custom actions, previews, sites, wikis, etc.
If your specific use case is archiving documents and you want something that is already good at this, then go ahead and use ArchivesSpace. If not, or if you want to expand the system in the future, then Alfresco will likely be able to do more, but it will also likely take more effort to configure for your specific use case, as you will have to create a custom content model and such.
Alfresco Records Management is for managing documents that will likely have some legal significance, such as court papers, official government department responses, etc., and as such their creation and destruction need to be closely managed. As far as I can see, this is not something ArchivesSpace can do.
(Full disclosure: I work for an Alfresco partner)
Hope all are doing well,
I wanted to know if the scenario below can be achieved.
We have a SCORM package that we want to host on our own web server and link to from an LMS (Blackboard, Moodle).
When a user logs into the LMS, it should perform single sign-on (with LTI) and show the SCORM content from our web server.
Can the SCORM content on our web server access details of the logged-in user (user ID, score details, etc.)?
I have searched and found some details here:
http://scorm.com/scorm-solved/scorm-cloud-developers/how-to-get-started-with-the-scorm-cloud-api/
but this API is not free.
Having your own web server serve content into another system, while dealing with user credentials, assignments, and the launching of courseware, would be a tough one. These systems essentially have a runtime API that manages the student attempt and that SCORM interacts with.
There are a few parts of SCORM support that you'd get from Rustici's SCORM Engine that are actually worth paying for:
1. 100% SCORM compatibility (1.2, 2004)
2. I believe they have .NET and Java implementations (uncertain about PHP) that you can plug into your platform. If you don't use those languages, I'm sure they'd be glad to answer any questions you have on further support.
3. You're covered on importing PIF/ZIP Content Aggregation Model packages, even the robust ones.
4. SCORM Cloud hosting, which could negate the need for the SCORM Engine (#2).
The main reason you can't find much for free in this space (with the exception of Moodle) is that it is an epic amount of work, which is also why you find many platforms with only mixed support for SCORM. There is also the legacy space and aged, antiquated stuff that contributes to that end.
In the end you have the runtime, content package parsing, and, if you're using SCORM 2004, all the sequencing and navigation rules. Those three things don't sound like much, but they are an exhausting amount of work from scratch.
Hope that all made sense,
Mark
I'm selecting a framework for a RESTful service. Restlet looks promising. However, I'd like to pick something mainstream enough that it won't go out of support/development too soon. I know Restlet has been around for quite a few years, but I'd like to know if it's popular enough. My questions are:
Any big name companies using it?
Is the default HTTP server good enough for production?
thanks
The Restlet Framework has been available since 2005, when it was the first RESTful web framework for Java. It has support for the JAX-RS API, but its own Restlet API, which has been both client-side and server-side since day one, is much more comprehensive and extensible. We are free to innovate based on our community feedback, without having to go through lengthy JCP standardization processes.
Also, we just published the 'Restlet in Action' book last September, along with version 2.1 of the framework. Our internal connector is fully asynchronous and based on NIO, and we are constantly stabilizing it, even though it isn't ready for heavy production use yet (use the Jetty connector or a Java EE container instead, with no change to your Restlet application).
Its consistent support for Java SE/EE, OSGi, Android, GAE, and GWT with dedicated editions is unique. A port to JavaScript (Node.js + AJAX) is also under way. We have also started work on version 2.2, with the first milestone already released (with full Java 6 support, an OAuth 2.0 extension based on the final spec, etc.).
In terms of references, we have many large companies using it, including LinkedIn (see their GLU open-source project), IBM, NVidia, ForgeRock, NASA, Sonatype, Apache Camel, Mule ESB, etc. Google has been using it internally as well. See some quotes here:
http://restlet.com/discover/quotes
In January, we'll launch a new community web site as well as APISpark, an all-in-one platform (PaaS) to create, host, manage, and use web APIs, directly based on the Restlet Framework, so the project is active and has an exciting future!
Best regards,
Jerome Louvel
PS: I'm the Restlet Framework creator and lead developer.
The most mainstream option is Jersey. It is the reference implementation of JAX-RS, the official Java API for REST. Restlet came out before Jersey, but then Jersey surpassed it (in my humble opinion). I have used both Jersey and Restlet on serious projects; they are both good. However, you will find more support, more books, and more examples for Jersey.
Is this about Java? In that case, JAX-RS is the awesome new API for doing this. The best book for this is RESTful Java with JAX-RS. My favorite implementation of it is Jersey, but there are others, each with its own unique features. All JAX-RS implementations are compatible if you don't use their distinctive features (which are minor anyway). The book explains the core API, the REST philosophy, and also some of the features unique to the different implementations. It is an excellent book. I love the introduction, where the author relates how he was used to traditional remote procedure calls (like SOAP, WCF, and ordinary OO semantics) but then saw the light of the REST principles as being simpler and more elegant.
I use Tomcat as the HTTP server (servlet container). It is lightweight and is what AWS Elastic Beanstalk uses (you can just upload your application, the WAR file, to it and it just works). You can also use GlassFish, which supports many more Java EE features, or use Apache for static pages and other things and forward the REST requests to Tomcat/GlassFish.
The annoying thing about JAX-RS is that it's so powerful and easy that you're tempted to write ideologically pure REST services. Unfortunately, JavaScript clients can't always use many of the REST features (setting the Accept header, calling anything but GET/POST, etc.), but it's not a big deal.
Jersey also has a fantastic client-side Java API that mirrors JAX-RS and reuses the same annotated classes, if your clients will be Java.
From the article:
The Servlet API was released in 1998 and its core design hasn't significantly changed since that time. It is one of the most successful Java EE APIs, but it suffers from several design flaws and limitations. For example, the mapping between URI patterns and handlers is limited and centralized in one configuration file. Also, it gives control of the socket streams directly to the application developer, preventing some IO optimizations by the Servlet containers, like the full usage of NIO features. Finally, it does not support HTTP features like caching, content negotiation, and content compression very well, causing too much pain for developers and preventing them from focusing on their application-specific code.

Another major concern was the lack of a modern HTTP client API in the Java EE stack. The JDK's HttpURLConnection class is hard to use and leaves too many HTTP features unsupported, like expressing client preferences for content negotiation. Frequently, people were relying on third-party HTTP client APIs to work around those limitations. Again, NIO can't be supported with HttpURLConnection.

In 2005, I saw an opportunity to go beyond all those limitations and to design a fresh API in the light of the REST principles. For the first time, we have an API that unifies client-side and server-side web applications, an API that fully supports NIO, and an API that lets the developer programmatically control the container, connectors, and deployed applications without having to constantly rely on XML descriptors.