How to connect an Adobe Captivate xAPI course to Yet Analytics or another LRS (Learning Record Store)?

I am trying to connect my Adobe Captivate xAPI course to an LRS (Yet Analytics). I have very little information about what I should add in this section of tc-config.js in the course files:
// Pre-configured LRSes that should receive data, added to what is included
// in the URL and/or passed to the constructor function.
//
// An array of objects where each object may have the following properties:
//
// endpoint: (including trailing slash '/')
// auth:
// allowFail: (boolean, default true)
// version: (string, defaults to high version supported by TinCanJS)
//
TC_RECORD_STORES = [
    {
        endpoint: "",
        auth: "",
        allowFail: true,
        version: ""
    }
];

Generally you should avoid using that functionality. That code is leveraged by an underlying library in Captivate (Rustici Driver) for packages with a tincan.xml file. That package will be launched with an LRS endpoint and authentication credential which is where it will send the statements that it generates. Generally it is a much better idea to send all statements to that configured LRS and then figure out a way to get those statements either forwarded from or pulled from that LRS into your additional LRS(s).
This is for two main reasons. First, using this functionality forces you to hard-code a credential into the package, which makes it insecure and indistinguishable across requests; this is generally just bad. Second, there is little to no error handling around calls that use this functionality: if you set allowFail to false, exceptions will go uncaptured and the content will likely behave in strange ways (or break completely); if you set allowFail to true, you will have no recourse when a call fails and you may not even know that you've lost data.
(Unfortunately, I know this because I implemented the functionality originally a very long time ago before fully understanding all of the ramifications.)
But so that I've answered your actual question: if you choose not to heed my advice, the values that go there are passed through to the constructor for a TinCan.LRS object, which is documented here: http://rusticisoftware.github.io/TinCanJS/doc/api/latest/classes/TinCan.LRS.html
The trickiest is auth: it should be a complete Authorization header value as needed to connect to the LRS, very often a Basic Auth header.
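For illustration only, here is a minimal sketch of a filled-in configuration; the endpoint and the key/secret pair are placeholders, and your LRS will issue its own credentials:
// Illustrative values only -- endpoint and key/secret are placeholders.
// auth is a full Authorization header; here it is HTTP Basic Auth built
// from a key/secret pair issued by the LRS (btoa base64-encodes "key:secret").
TC_RECORD_STORES = [
    {
        endpoint: "https://lrs.example.com/xapi/", // trailing slash required
        auth: "Basic " + btoa("myLrsKey:myLrsSecret"),
        allowFail: true,
        version: "1.0.1"
    }
];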

Related

Mock real gRPC server responses

We have a microservice that needs to be integration tested (real calls, but no network communication with anything outside of the test namespace in Kubernetes) in our pipeline. It also relies on an external gRPC server which we have no control over.
The setup we'd like is as follows (a diagram accompanied the original post). The white box on the left is code that provides the Microservice Boundary with 'external' data. It then keeps calling the Code via REST until it gets back the proper number of records or it times out. The Code pulls records from an internal database, as well as data associated with those records from a gRPC call. Since we do not own the gRPC service but are doing integration tests, we need a few pre-defined responses for the two gRPC services we call (blue box).
Since our integration tests are self-contained right now, and we don't want to write an entirely new actual gRPC server implementation just to mimic calls, is there a way to stand up a real gRPC server and configure it to return responses? The request is pretty much like a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret and have it expose those as endpoints; the proto files must be able to have different package names
configure the responses to each call using files we can store in source control
run in a Linux Docker container (no Windows)
I did find GripMock, which seemed like almost exactly what we need, but it only serves one proto file per container. It supposedly can serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen in our scenarios. In the meantime we are using it, but if we have 10 gRPC call dependencies, we now have to run 10 GripMock servers.
Wikipedia contains a list of API mocking tools. Looking at that list today, there is a commercial tool that supports gRPC called Traffic Parrot, which allows you to create gRPC mocks based on your proto files. You can give it multiple proto files, store the mocks in Git, and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate stubs based on proto files; you have to create them manually. Also, as of today the project has not been keeping up with proto and gRPC developments, e.g. the package name issue you discovered yourself above (it works only if the package names in different proto files are the same). There are a few other open-source tools like grpc-wiremock, grpc-mock, or bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
Keep in mind that the generated mock will only be a test double; it will not replicate the full behaviour of the system the proto file corresponds to. If you also want to partially replicate the semantics of the messages, consider recording the gRPC messages to create the mocks; that way you can see sample data as well.
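If none of the listed tools fit, a minimal hand-rolled canned-response server is not much code either. Below is a sketch in Node using @grpc/grpc-js and @grpc/proto-loader (not one of the tools above); the proto path, package, service, and method names are all placeholders:
// A minimal "real" gRPC server that returns canned responses.
// Assumes example.proto defines com.example.ExampleService with an ex1 rpc.
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const definition = protoLoader.loadSync(__dirname + '/example.proto');
const pkg = grpc.loadPackageDefinition(definition).com.example;

const server = new grpc.Server();
server.addService(pkg.ExampleService.service, {
    // canned response; this could just as well be read from a JSON file
    // kept in source control
    ex1: function (call, callback) {
        callback(null, { msg: 'canned response' });
    }
});
// Repeat loadSync/addService per proto file -- different package names are
// fine, so several dependencies can live in one container.
server.bindAsync('0.0.0.0:50051', grpc.ServerCredentials.createInsecure(),
    function () { server.start(); }); // start() is a no-op on newer versions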
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
private static readonly PROTO_PATH: string = __dirname + "/example.proto";
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
const implementations = {
    ex1: (call: any, callback: any) => {
        const response: any =
            new this.proto.ExampleResponse.constructor({msg: "the response message"});
        callback(null, response);
    },
};
this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();

MarkLogic: I don't know how to get all the results

Hello, I am trying to read a module with this code:
(: Entry point - must be a read-only query. :)
xdmp:invoke(
    '/path/mydocument.xqy',
    (xs:QName('var1'), 'test',
     xs:QName('var2'), "response"))
I am new to MarkLogic. I am using Groovy and the API to connect to it, but I also saw I can invoke the module this way, and indeed I did, but it returns:
your query returned an empty sequence
I want to know if I can query xs:QName('var1'), 'test', changing 'test' to a wildcard, or how I can get all the information from the file called /path/mydocument.xqy.
I tried to use this:
xdmp:document-get("/path/mydocument.xqy")
but it says the file is not found. However, if I use invoke I can query it, but I don't know what values I have to pass. I was wondering if there is something like SQL's % wildcard or similar to give me all the data.
To answer the first question: "I am trying to read a module "
IF the module is in the database, then you must query the Modules database in which the module resides.
If the module is in the filesystem then you cannot directly access its source as a document, but you can read it by executing xdmp:filesystem-file().
Simplification:
With the default configuration of the server and REST client, user-placed modules are in the "Modules" database and user-placed documents are in the "Documents" database. This means that if you do a GET (read a "Document") with no additional parameters, it will return documents from the "Documents" database. Assuming you are using the default configuration for client and server, this explains the behavior you are seeing: your module code is in the Modules database, so doing a GET for it by name searches the Documents database and correctly does not find it.
You don't mention, and I don't know, the Groovy library being used, but the REST API itself, and all general-purpose ML REST client libraries I am familiar with, have options for overriding the default database with another. If the Groovy library supports that, then specify the "Modules" database for your query and it should return the module document. Note: the content type will be application/text, not text/xml.
You can simplify things for testing by bypassing the libraries and simply use a browser and try a URL like this http://yourserver.com:8000/v1/documents?uri=/your/module.xqy&database=Modules
Ref: https://docs.marklogic.com/REST/GET/v1/documents
Making the appropriate changes to the path and server for your use.
If you are still confused, then you should start with the basic MarkLogic tutorials and work through them one by one. You will most likely succeed faster by doing this than by jumping straight into coding you don't understand yet.
DETAIL:
Note: the default behaviour is to EXECUTE documents when doing a GET call against the Modules database. Thus doing a GET of http://yourserver:8000/your/module.xqy will EXECUTE it, not return its source.
You will notice the REST API has a uri query parameter. This EXECUTES the REST API code at /v1/documents, which in turn reads the document specified by the uri and database parameters and returns it.
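As a sketch of the same REST call from code rather than a browser (assuming Node 18+ with global fetch in an ES module, and an app server configured for basic authentication; MarkLogic defaults to digest auth, and the host, credentials, and module URI are placeholders):
// Read a module's source from the Modules database via the REST API.
const res = await fetch(
    'http://yourserver.com:8000/v1/documents?uri=/your/module.xqy&database=Modules',
    {
        headers: {
            // assumes basic auth is enabled on this app server
            Authorization: 'Basic ' + Buffer.from('user:password').toString('base64')
        }
    });
console.log(await res.text()); // the module source (content-type application/text)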
I guess I can use:
xdmp:invoke("/pview/get-pview-browse-profiles.xqy",
    cts:and-query((
        cts:element-value-query(
            xs:QName("letter"), "*", "wildcarded"),
        cts:element-value-query(
            xs:QName("collection"), "*", "wildcarded"))))
although it doesn't return anything

Possibility for only the currently connected (not authenticated) user and an admin to read and write a certain location

Is there any way to write a security rule, or some other approach, that would make it possible for only the currently connected (not authenticated) user to read/write a certain location? An admin should also be able to read/write it.
Can a rule be written that disallows users from reading the complete list of entries and lets them read only the entry that matches some identifier passed from the client?
I'm trying to exchange some data between a user and a Node.js application through Firebase, and that data shouldn't be readable or writable by anyone other than the user and/or the admin.
I know that one solution would be for the user to request an auth token from my server and use it to authenticate with Firebase, which would make it possible to write a rule that prevents reads and writes. However, I'm trying to avoid the user connecting to my server, so this solution is not the first option.
This is in a way a session-based scenario, which is not available in Firebase, but I have some ideas that could solve this kind of problem if implemented before session management:
maybe letting the admin write into the /.info/ location, which is observed by the client for every change and can be read only by the active connection - if I understood correctly how .info works
maybe creating a .temp location for that purpose
maybe letting the admin and the connected client have more access to connection information, which would contain some connection-unique id that can be used to create a location with that name and use it inside a rule to prevent other users from reading and listening
Thanks
This seems like a classic XY problem (i.e. trying to solve the attempted solution instead of the actual problem).
If I understand your constraints correctly, the underlying issue is that you do not wish to have direct connections to your server. This is currently the model we're using with Firebase and I can think of two simple patterns to accomplish this.
1) Store the data in a non-guessable path
Create a UUID or GUID or, assuming we're not talking bank-level security here, just a plain Firebase ID (firebaseRef.push().name()). Then have the server and client communicate via this path.
This avoids the need for security rules, since the URLs are unguessable, or close enough to it in the case of the Firebase ID, for normal uses.
Client example:
var fb = new Firebase(MY_INSTANCE_URL + '/connect');
var uniquePath = fb.push();
var myId = uniquePath.name();
// send a message to the server
uniquePath.push('hello world');
From the server, simply monitor connect; each child that appears there is a new client:
var fb = new Firebase(MY_INSTANCE_URL + '/connect');
fb.on('child_added', newClientConnected);

function newClientConnected(snapshot) {
    snapshot.ref().on('child_added', function (ss) {
        // when the client sends me a message, log it and then return "goodbye"
        console.log('new message', ss.val());
        ss.ref().set('goodbye');
    });
}
In your security rules:
{
    "rules": {
        // read/write are false by default
        "connect": {
            // contents cannot be listed; no way to find out ids other than guessing
            "$client": {
                ".read": true,
                ".write": true
            }
        }
    }
}
2) Use Firebase authentication
Instead of expending so much effort to avoid authentication, just use a third-party service, like Firebase's built-in auth or Singly (which supports Firebase). This is the best of both worlds, and the model I use in most cases.
Your client can authenticate directly with one of these services, never touching your server, and then authenticate to Firebase with the token, allowing security rules to take effect.
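For illustration, a sketch using the legacy Firebase JS API of that era; the instance URL is a placeholder and AUTH_TOKEN stands for a token minted by the third-party auth service:
var ref = new Firebase('https://myinstance.firebaseio.com/');
// authenticate this client; afterwards security rules can reference auth
ref.auth(AUTH_TOKEN, function (error) {
    if (error) {
        console.log('authentication failed', error);
    } else {
        console.log('authenticated; secured paths are now readable/writable');
    }
});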

How do you secure the client-side MongoDB API?

I don't want all of my users to be able to insert/destroy data.
While there is no documented way to do this yet, here's some code that should do what you want:
Foo = new Meteor.Collection("foo");
...
if (Meteor.is_server) {
    Meteor.startup(function () {
        Meteor.default_server.method_handlers['/foo/insert'] = function () {};
        Meteor.default_server.method_handlers['/foo/update'] = function () {};
        Meteor.default_server.method_handlers['/foo/remove'] = function () {};
    });
}
This will disable the default insert/update/remove methods. Clients can try to insert into the database, but the server will do nothing, and the client will notice and remove the locally created item when the server responds.
insert/update/remove will still work on the server. You'll need to make methods with Meteor.methods that run on the server to accomplish any database writes.
All of this will change when the authentication branch lands. Once that happens, you'll be able to provide validators to inspect and authorize database writes on the server. Here's a little more detail: http://news.ycombinator.com/item?id=3825063
[UPDATE] There is now an official and documented Auth Package which provides different solutions to secure a collection.
On a CRUD level:
[Server] collection.allow(options) and collection.deny(options) restrict the default write methods on the collection. Once either of these is called on a collection, all write methods on that collection are restricted regardless of the insecure package.
And there is also the insecure package; removing it takes full write access away from the client.
Source: Getting Started with Auth (thanks to @dan-dascalescu)
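A short sketch of the allow pattern; the collection name and the ownership rule are made up for illustration:
Posts = new Meteor.Collection('posts');

if (Meteor.is_server) {
    Posts.allow({
        // only logged-in users may insert, and only documents they own
        insert: function (userId, doc) {
            return !!userId && doc.owner === userId;
        },
        update: function (userId, doc, fieldNames, modifier) {
            return doc.owner === userId;
        },
        remove: function (userId, doc) {
            return doc.owner === userId;
        }
    });
}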
[OLD ANSWER]
Apparently they are working on an Auth Package(?) that should prevent users from taking full control of the db as they can now. There is also a suggested workaround: define your own mutations (methods) and make them fail if they attempt an unauthorized action. I don't fully understand it yet, but I think this will often be necessary, since I doubt the Auth Package will let you implement the usual auth logic at a row level but probably only on the CRUD methods. We will have to see what the devs say.
[EDIT]
Found something that seems to confirm my thoughts:
Currently the client is given full write access to the collection. They can execute arbitrary Mongo update commands. Once we build authentication, you will be able to limit the client's direct access to insert, update, and remove. We are also considering validators and other ORM-like functionality.
Sources of this answer:
Accessing to DB at client side as in server side with meteor
https://stackoverflow.com/questions/10100813/data-validation-and-security-in-meteor/10101516#10101516
A more succinct way:
_.each(['collection1', 'collection2'], function (collection) {
    _.each(['insert', 'update', 'remove'], function (method) {
        Meteor.default_server.method_handlers['/' + collection + '/' + method] = function () {};
    });
});
Or, to make it more idiomatic, extend Meteor:
_.extend(Meteor.Collection.prototype, {
    remove_client_access: function (methods) {
        var self = this;
        if (!methods) methods = ['insert', 'update', 'remove'];
        if (typeof methods === 'string') methods = [methods];
        _.each(methods, function (method) {
            Meteor.default_server.method_handlers[self._prefix + method] = function () {};
        });
    }
});
Calls are simpler:
List.remove_client_access();                     // restrict all
List.remove_client_access('remove');             // restrict one
List.remove_client_access(['remove', 'update']); // restrict more than one
I am new to Meteor, but here are two points I have come across so far:
You can limit what a client can access in the database by adding parameters to the find command in the server-side publish function. Then, when the client calls Collection.find({}), the results returned correspond to what on the server side would be, for example, Collection.find({user: this.userId}) (see also Publish certain information for Meteor.users and more information for Meteor.user, and http://docs.meteor.com/#meteor_publish).
One thing that is built in (I have Meteor 0.5.9) is that the client can only update items by id, not using selectors. An error is logged to the console on the client if there is an attempt that doesn't comply: 403: "Not permitted. Untrusted code may only update documents by ID." (see Understanding "Not permitted. Untrusted code may only update documents by ID." Meteor error).
In view of number 2, you need to use Meteor.methods on the server side to make remote procedure calls available to the client via Meteor.call; a sketch follows.
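A minimal sketch of that pattern, using the Meteor API of this era; the collection, method name, and validation rule are placeholders:
Items = new Meteor.Collection('items');

if (Meteor.is_server) {
    Meteor.methods({
        // validate and authorize on the server before touching the database
        addItem: function (text) {
            if (typeof text !== 'string')
                throw new Meteor.Error(400, 'text must be a string');
            return Items.insert({ text: text, createdAt: new Date() });
        }
    });
}

// On the client: call the method instead of writing to the collection directly.
Meteor.call('addItem', 'hello world', function (error, id) {
    if (error) console.log('insert rejected', error);
});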

Adobe AIR HTTP Connection Limit

I'm working on an Adobe AIR application which can upload files to a web server, which is running Apache and PHP. Several files can be uploaded at the same time and the application also calls the web server for various API requests.
The problem I'm having is that if I start two file uploads, any other HTTP requests made while they are in progress will time out, which is a problem for the application and from a user's point of view.
Are Adobe AIR applications limited to 2 HTTP connections, or is something else probably the issue?
From searching about this issue I've not found much, but one article did indicate that it isn't limited to just two connections.
The file uploads are performed by calling the File class's upload method, and the API calls are made using the HTTPService class. The development web server I am using is a WAMP server; when the application is released it will be talking to a LAMP server.
Thanks,
Grant
Here is the code I'm using to upload the file:
protected function btnAddFile_clickHandler(event:MouseEvent):void
{
    // Create a new File object and display the browse file dialog
    var uploadFile:File = new File();
    uploadFile.browseForOpen("Select File to Upload");
    uploadFile.addEventListener(Event.SELECT, uploadFile_SelectedHandler);
}

private function uploadFile_SelectedHandler(event:Event):void
{
    // Get the File object which was used to select the file
    var uploadFile:File = event.target as File;
    uploadFile.addEventListener(ProgressEvent.PROGRESS, file_progressHandler);
    uploadFile.addEventListener(IOErrorEvent.IO_ERROR, file_ioErrorHandler);
    uploadFile.addEventListener(Event.COMPLETE, file_completeHandler);

    // Create the request URL based on the download URL
    var requestURL:URLRequest = new URLRequest(AppEnvironment.instance.serverHostname + "upload.php");
    requestURL.method = URLRequestMethod.POST;

    // Set the post parameters
    var params:URLVariables = new URLVariables();
    params.name = "filename.ext";
    requestURL.data = params;

    // Start uploading the file to the server
    uploadFile.upload(requestURL, "file");
}
Here is the code for the API calls:
private function sendHTTPPost(apiFile:String, postParams:Object, resultCallback:Function, initialCallerResultCallback:Function):void
{
    var httpService:mx.rpc.http.HTTPService = new mx.rpc.http.HTTPService();
    httpService.url = AppEnvironment.instance.serverHostname + apiFile;
    httpService.method = "POST";
    httpService.requestTimeout = 10;
    httpService.resultFormat = HTTPService.RESULT_FORMAT_TEXT;
    httpService.addEventListener("result", resultCallback);
    httpService.addEventListener("fault", httpFault);
    var token:AsyncToken = httpService.send(postParams);

    // Add the initial caller's result callback function to the token
    token.initialCallerResultCallback = initialCallerResultCallback;
}
If you are on a Windows system, Adobe AIR uses Microsoft's WinINet library to access the web. This library by default limits the number of concurrent connections to a single server to 2:
WinInet limits the number of simultaneous connections that it makes to a single HTTP server. If you exceed this limit, the requests block until one of the current connections has completed. This is by design and is in agreement with the HTTP specification and industry standards.
... Connections to a single HTTP 1.1 server are limited to two simultaneous connections
There is an API to change the value of this limit, but I don't know if it is accessible from AIR.
Since this limit also affects page-loading speed for websites, some sites use multiple DNS names for artifacts such as images, JavaScript files, and stylesheets so that a browser can open more parallel connections.
So if you control the server side, a workaround could be to create DNS aliases, like www.example.com for uploads and api.example.com for API requests.
So as I was looking into this, I came across this info about using File.upload() in the documentation:
Starts the upload of the file to a remote server. Although Flash Player has no restriction on the size of files you can upload or download, the player officially supports uploads or downloads of up to 100 MB. You must call the FileReference.browse() or FileReferenceList.browse() method before you call this method.
Listeners receive events to indicate the progress, success, or failure of the upload. Although you can use the FileReferenceList object to let users select multiple files for upload, you must upload the files one by one; to do so, iterate through the FileReferenceList.fileList array of FileReference objects.
The FileReference.upload() and FileReference.download() functions are nonblocking. These functions return after they are called, before the file transmission is complete. In addition, if the FileReference object goes out of scope, any upload or download that is not yet completed on that object is canceled upon leaving the scope. Be sure that your FileReference object remains in scope for as long as the upload or download is expected to continue.
I wonder if something there could be giving you issues with uploading multiple files. I see that you are using browseForOpen() instead of browse(). It seems like they probably do the same thing... but maybe not.
I also saw this in the File class documentation
Note that because of new functionality added to the Flash Player, when publishing to Flash Player 10, you can have only one of the following operations active at one time: FileReference.browse(), FileReference.upload(), FileReference.download(), FileReference.load(), FileReference.save(). Otherwise, Flash Player throws a runtime error (code 2174). Use FileReference.cancel() to stop an operation in progress. This restriction applies only to Flash Player 10. Previous versions of Flash Player are unaffected by this restriction on simultaneous multiple operations.
When you say that you let users upload multiple files, do you mean subsequent calls to browse() and upload(), or one call that includes multiple files? If you are trying to make multiple separate calls, that may be the issue.
Anyway, I don't know if this is much help. It definitely seems that what you are trying to do should be possible. I can only guess that what is going wrong is perhaps a problem with implementation. Good luck :)
Reference: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#upload()
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#browse()
Just because I was thinking about a very similar question due to an error in one of my own apps, I decided to write down the answer I found.
I instantiated 11 HTTPConnections and was wondering why my Flex 4 application stopped working and threw an HTTP error, although it had previously been working well with just 5 simultaneous HTTPConnections to the same server.
I tested this myself because I did not find anything regarding this in the Flex docs or on the internet.
I found that using more than 5 HTTPConnections was the reason for the Flex application throwing the runtime error.
I decided to instantiate the connections one after another as a temporary workaround: load the next one after the previous one has received its data, and so on (a sketch of the idea follows below).
That is of course only temporary, since one of the next steps will be to alter the server code so that it answers a single request with the results of queries against more than one table in one response. Of course the client application logic needs to be altered, too.
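The same "one after another" idea, sketched here in plain JavaScript rather than Flex; the URLs and the limit of 2 are placeholders, to be replaced with whatever your target runtime tolerates:
// Keep at most maxInFlight requests running; start the next one only
// when a previous one finishes (successfully or not).
function runQueued(urls, maxInFlight) {
    var queue = urls.slice();
    function next() {
        var url = queue.shift();
        if (!url) return;
        fetch(url)
            .then(function (res) { return res.text(); })
            .then(function () { console.log(url, 'done'); next(); })
            .catch(function (err) { console.log(url, 'failed', err); next(); });
    }
    // start up to maxInFlight parallel chains
    for (var i = 0; i < maxInFlight; i++) next();
}

runQueued(['https://api.example.com/a', 'https://api.example.com/b',
           'https://api.example.com/c'], 2);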
