Which Tin Can verbs to use - SCORM

For data normalisation of standard Tin Can verbs, is it best to use verbs from the Tin Can registry at https://registry.tincanapi.com/#home/verbs, e.g.
completed http://activitystrea.ms/schema/1.0/complete
or to use the ADL verbs like those defined in the 1.0 spec at https://github.com/adlnet/xAPI-Spec/blob/master/xAPI.md, in this article http://tincanapi.com/2013/06/20/deep-dive-verb/, and listed at https://github.com/RusticiSoftware/tin-can-verbs/tree/master/verbs, e.g.
completed http://adlnet.gov/expapi/verbs/completed
I'm confused as to why those in the registry differ from every other example I can find. Is one of these out of date?

It really depends on which "profile" you want to target with your Statements. If you are trying to stick to e-learning practices that most closely resemble SCORM or some other standard, then the ADL verbs may be the most fitting. It is a very limited set, and really only the "voided" verb is provided for by the specification. The other verbs relate to those found in 0.9 and have become the de facto set, but they aren't any more "standard" than any other URI. If you are targeting statements to be used in an Activity Streams way, specifically with a social application, then you may want to stick with that set. Note that there are verbs in the Registry that are neither ADL-coined nor provided by the Activity Streams specification.
If you aren't targeting any specific (or existing) profile, then you should use the terms that best capture the experiences you are trying to record. And we ask that you either coin those terms at our Registry so that they are well formed and publicly available, or, if you coin them under a different domain, at least get them catalogued in our Registry so others may find them. Registering a particular term in one or more registries will hopefully help keep the list of terms from exploding as people search for reusable items. This will ultimately make reporting tools more interoperable with different content providers.
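To make the choice concrete, here is a minimal sketch of how the chosen verb URI ends up inside a Statement; the actor and object values are invented for illustration.

```python
# Minimal xAPI/Tin Can statement sketch using the ADL "completed" verb.
# The actor and object values are made-up examples, not from any real course.
statement = {
    "actor": {
        "mbox": "mailto:learner@example.com",
        "name": "Example Learner",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/intro-course",
        "definition": {"name": {"en-US": "Intro Course"}},
    },
}
```

Switching profiles means changing only the verb `id` (e.g. to http://activitystrea.ms/schema/1.0/complete); reporting tools match on that URI, which is why normalising on one registry or the other matters.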

Related

What is "Privacy by Design"? And how to achieve it?

I noticed that Tutanota and mega.io mention "privacy by design" on their homepages. So I became curious and found the Wikipedia page about privacy by design, but it seems to be an abstract concept (a collection of principles). However, I was looking for something like: do a and b, or implement y and z. For example, mega.io uses zero-knowledge encryption (user-controlled end-to-end encryption). What other features does a product need to have to be called a "privacy by design" service?
By their very nature, abstract principles do not concern themselves with implementation detail. There are many different ways to implement them, and mandating one approach over another is simply out of scope – what matters is the net effect. It's also applicable to non-tech environments, paper records, etc; it's not exclusive to web dev.
Privacy by design (PbD) is a term coined by Ann Cavoukian, a former information and privacy commissioner in Canada, and it comes with a collection of principles, as that Wikipedia page describes. PbD is also referenced by GDPR. I've given various talks on privacy and security at tech conferences around the world – you can see one of my slide decks on PbD.
So how do you use them in web development? Take the second principle: "Privacy as the default". This means that if a person using your web app does nothing special, their privacy must be preserved. This means, amongst other things, that you should not load any tracking scripts (perhaps even remote content), and not set any cookies that are not strictly necessary. If you do want to track them (and thus break the user's privacy to some extent), then you need to take actual laws into account, such as the EU ePrivacy Directive, which is what requires consent for cookies and trackers.
So although the principle itself did not require these measures, it influenced the technical decisions you needed to make in your implementation in order to comply with the spirit of the principle. If that happens, the principle has done its job.
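As a rough sketch (the function and script names here are hypothetical), "privacy as the default" can be reduced to a consent gate: nothing privacy-affecting runs unless the user has explicitly opted in.

```python
# Hypothetical consent gate: tracking snippets are emitted only after an
# explicit opt-in, so a user who does nothing gets the private default.
def tracking_snippets(consent):
    """Return the third-party snippets this user has consented to."""
    snippets = []
    if consent.get("analytics"):
        # Only loaded after an explicit opt-in was recorded.
        snippets.append('<script src="/analytics.js"></script>')
    # Strictly necessary functionality needs no consent gate and is
    # handled elsewhere.
    return snippets
```

With an empty consent record (the default), no trackers are loaded at all, which is exactly what the principle asks for.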
So what you have to do in order to claim privacy by design (though it's not like you get a badge!) is to introspect and consider how these principles apply to your own services, then act on those observations and make sure that the things you design and build conform to the principles. This is a difficult process (especially at first), but there are tools to help you perform "privacy impact assessments" (also part of GDPR) such as the excellent PIA tool by the French information commissioner (CNIL).
If you're thinking about PbD, it's worth looking at two other important lists: the data protection principles that have been the basis of pretty much all European legislation since the 1980s, including GDPR, and the 6 bases for processing in GDPR. If you get your head around these three sets of concerns, you'll have a pretty good background on how you might choose to implement something privacy-preserving, and also a good set of critical guidelines that will help you to spot privacy flaws in products and services. A great example of this is Google Tag Manager; it's a privacy train wreck, but I'll leave it to you to contemplate why!
Minor note: the GDPR links I have provided are not to the official text of GDPR, but a reformatted version that is much easier to use.

Explicit vs Implicit upgrades of Contracts & States in Corda

There seems to be plenty of information on explicit contract & state upgrades, but there seems to be a lack of info about implicit contract and state upgrades.
Assume that I use the signature policy for contracts. How do I migrate old states to new ones if I also want to keep using the old ones?
UPDATE:
I have found those samples, and as I understand it there is no state upgrade process at all! On the contrary, all flows/states and contracts are created in a backward-compatible way. But intuitively, if I have 50 releases, for example, does that mean the related piece of code will contain 50 if/else branches for all possible old versions of the flow? Won't the code become a mess? Is there any way of somehow normalising the states?
I think you are correct. As long as the old versions of data (i.e. Corda states) exist in the network, you will need to keep this conditional logic in your contract code, so that it's capable of handling states of the older format.
What you could do to mitigate this proliferation of conditional logic is:
identify all the states of the older format. If there are any, migrate them to the new format, by spending them in a transaction and re-creating them with the new format. If there aren't any, move to the next step.
perform another implicit upgrade of your contract code that does not have any functional changes besides removing the conditional logic that is not needed anymore.
Following these steps, you can gradually remove conditional logic that's not needed, simplifying the contract code. But you're essentially back to a form of explicit upgrade, which might not be very practical depending on the number of parties and states in your network.
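The shape of that conditional logic, and of the migrate-then-delete step, can be sketched as follows (plain Python with invented field names, not the Corda API):

```python
# Hypothetical contract verify() that must branch per state format version.
def verify(state):
    version = state.get("version", 1)
    if version == 1:
        # Old format: amount stored as an integer number of pence.
        return state["amount_pence"] >= 0
    if version == 2:
        # New format: amount stored as a decimal string.
        return float(state["amount"]) >= 0
    raise ValueError("unknown state version: %r" % version)

# "Spend and re-create": migrate an old state to the new format so the
# v1 branch above can be deleted in a later implicit upgrade.
def migrate_v1_to_v2(state):
    return {"version": 2, "amount": str(state["amount_pence"] / 100)}
```

Once no v1 states remain on the ledger, the v1 branch in verify() is dead code and can be dropped in the next upgrade.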

Amazon Alexa dynamic variables for intent

I am trying to build an Alexa Skills Kit, where a user can invoke an intent by saying something like
GetFriendLocation where is {Friend}
and for Alexa to recognize the variable Friend I have to define all the possible values in a LIST_OF_Friends file. But what if I do not know all the values for Friend, and still would like to make a best match against the ones present in some service that my app has access to?
Supposedly if you stick a small dictionary into a slot (you can put up to 50,000 samples), it becomes a "generic" slot and becomes very open to choosing anything, rather than what is given to it. In practice, I haven't had much luck with this.
It is a maxim in the field of speech recognition that the more restrictive the vocabulary, the greater the accuracy; conversely, the greater the vocabulary, the lower the accuracy.
A system like VoiceXML (used mostly for telephone prompt software) has a very strict vocabulary, and generally performs well for the domains it has been tailored for.
A system like Watson Speech to Text is completely open, but makes up for its lack of accuracy by returning a confidence level for several different interpretations of the sounds. In short, it offloads much of the NLP work to you.
Amazon has, very deliberately, chosen a middle road for Alexa. Its intent model allows for more flexibility than VoiceXML, but is not as liberal as a dictation system. The result gives you pretty good options and pretty good quality.
Because of their decisions, they have a voice model where you have to declare, in advance, everything it can recognize. If you do so, you get consistent and good quality recognition. There are ways, as others have said, to "trick" it into supporting a "generic slot". However, by doing so, you are going outside their design and consistency and quality suffer.
As far as I know, you can't dynamically add utterances for intents.
But for your specific question, there is a built-in slot called AMAZON.US_FIRST_NAME, which may be helpful.
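For illustration, an intent definition using that built-in slot type might look like this, expressed as a Python dict mirroring the interaction-model JSON you would upload (the intent and slot names come from the question):

```python
# Intent definition using the built-in AMAZON.US_FIRST_NAME slot type
# instead of a hand-maintained LIST_OF_Friends value list.
get_friend_location = {
    "name": "GetFriendLocation",
    "slots": [
        {"name": "Friend", "type": "AMAZON.US_FIRST_NAME"},
    ],
    "samples": [
        "where is {Friend}",
    ],
}
```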

Assigning URIs to RDF Resources

I'm writing a desktop app using Gnome technologies, and I've reached the stage of planning Semantic Desktop support.
After a lot of brainstorming, sketching ideas and models, writing notes and reading a lot about RDF and related topics, I finally came up with a draft plan.
The first thing I decided to do is define the way I give URIs to resources, and this is where I'd like to hear your advice.
My program consists of two parts:
1) On the lower level, an RDF schema is defined. It's a standard set of classes and properties, possibly extended by users who want more options (using a definition language translated to RDF).
2) On the higher level, the user defines resources using those classes and properties.
There's no problem with the lower level, because the data model is public: even if a user decides to add new content, she's very welcome to share it and make other people's apps have more features. The problem is with the second part. In the higher level, the user defines tasks, meetings, appointments, plans and schedules. These may be private, and the user may prefer not to have any info in the URI revealing the source of the information.
So here are the questions I have on my mind:
1) Which URI scheme should I use? I don't have a website or any web pages, so using http doesn't make sense. It also doesn't seem to make sense to use any other standard IANA-registered URI scheme. I've been considering two options: use some custom URI scheme name of my own for public resources, and use a bare URN for private ones, something like this:
urn:random_name_i_made_up:some_private_resource_uuid
But I was wondering whether a custom URI scheme is a good decision; I'm open to hearing ideas from you :)
2) How do I hide the private resources? On one hand, it may be very useful for the URI to tell where a task came from, especially when tasks are shared and delegated between people. On the other hand, that doesn't respect privacy. Then I was thinking: can I/should I use two different URI styles depending on user settings? This would create some inconsistency. I'm not sure what to do here, since I don't have any experience with URIs. Hopefully you have some advice for me.
1) Which URI scheme should I use?
I would advise the standard urn:uuid: followed by your resource UUID. Using standards is generally to be preferred over home-grown solutions!
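For example, Python's standard library can mint such identifiers directly:

```python
import uuid

# Mint a fresh urn:uuid: identifier for a new resource.
resource_uri = uuid.uuid4().urn
# resource_uri looks like "urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"
```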
2) How to hide the private resources?
Don't use different identifier schemes. Trying to bake authorization and access control into the identity scheme is mixing the layers in a way that's bound to cause you pain in the future. For example, what happens if a user makes some currently private content (e.g. a draft) public (it's now in its publishable form)?
Have a single, uniform identifier solution, then provide one or more services that may or may not resolve a given identifier to a document, depending on context (user identity, metadata about the content itself, etc etc). Yes this is much like an HTTP server would do, so you may want to reconsider whether to have an embedded HTTP service in your architecture. If not, the service you need will have many similarities to HTTP, you just need to be clear the circumstances in which an identifier may be resolved to a document, what happens when that is either not possible or not permitted, etc.
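A minimal sketch of such a resolution service (the function names and the ACL shape are hypothetical):

```python
# One uniform identifier scheme; authorization is applied at resolution
# time instead of being baked into the URI itself.
def resolve(uri, user, store, acl):
    """Return the document for `uri`, or None if unknown or not permitted."""
    doc = store.get(uri)
    if doc is None:
        return None  # unknown identifier
    if user not in acl.get(uri, set()):
        return None  # known identifier, but this user may not see it
    return doc
```

Making a draft public then becomes an ACL change; the identifier itself never has to change.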
You may also want to consider where you're going to add the most value. Re-inventing the basic service access protocols may be a fun exercise, but your users may get more value if you re-use standard components at the basic service level, and concentrate instead on innovating and adding features once the user actually has access to the content objects.

Guidelines for custom tools

While developing products, we often need to create proprietary tools to test some of their unique features or diagnose problems. In fact, the tools can be at least as interesting as the products themselves, and some of our internal groups have asked for copies of them.
So, aside from the obvious business-driven rules (e.g. don't retrieve sensitive data), what do you do differently when you build personal or internal tools, as opposed to for-sale products, and why?
What's more (or less) important to you in internal tools, and do you consider overall value to the company when you build them?
Thanks for your thoughts!
First, internal tools are always developed quick and dirty. Almost no testing - they just have to do the work.
UI is not as important as with a customer-facing app.
Internal tools can use internal/private/proprietary knowledge of the products and frameworks they test. For example, our last tool bypassed part of our published API and used an undocumented web service call to achieve better results.
This is an important point, but a losing battle: NEVER EVER leave internal tools with a customer.
As a consultant, I sometimes had to use and even develop those tools in the field. I try to hide them from my clients, but from time to time they demand I leave the tool with them (or worse, call the sales rep and ask for that "magic tool"). You don't want customers judging your entire company's production quality based on tools built according to points 1-3.
From an engineering perspective, I wouldn't do anything differently:
Both internal and for-sale tools need to be well-written and well-documented
Both need to be created given a set of requirements, deadlines, budgetary restraints, etc.
Both need to be tested or validated
The one big difference I see would apply to the for-sale products as opposed to the internal tools: for-sale products need marketing, support, etc that internal tools can do without.
Additionally, since internal tools will be used in a somewhat more controlled environment, they don't need to be tested against different computer systems, Internet browsers, etc.
The biggest difference:
It's with the personal and internal tools that you can be more free to try out a new technology, the latest fashion. You can take risks that you wouldn't take with the application that you are actually shipping to customers.
Since the diagnostics I build are usually very special-purpose, I tend to provide more options and built-in examples than I would for customer-facing products. In other words, I assume the user is more familiar with the technology than a customer would generally be, and I provide more ability to tweak the way the tool operates without worrying that it might overwhelm the user. But I also try to make it satisfy 80% of the use cases without much "help" from the user.
