I am trying to use Shareable State Persistence (SSP) within a Desire2Learn SCO. However, it does not seem to be supported. I get error 401 (Undefined Data Model Element) when I try to use one of the "ssp.*" elements with SetValue (e.g., ssp.allocate or ssp.data).
Does anybody know how to share custom data between SCOs within Desire2Learn?
I am fairly certain Desire2Learn doesn't support SSP. SCORM 2004 4th Edition has shared data buckets that allow you to share custom data across SCOs, but, to the best of my knowledge, Desire2Learn hasn't updated their system to support 4th Edition either.
SCORM Cloud supports both 4th Edition and SSP. You could look into using its SCORM Dispatch feature to run your content in the Cloud and then report the results to your Desire2Learn instance.
I am trying to see if an external API can be consumed from MicroStrategy. I am new to this; so far I have seen a connector in MicroStrategy that allows you to bring in data from a URL, but when things get more complex, like passing a specific header parameter, the connector is not useful.
Going through the documentation, I have also seen that they have internal APIs that any external application can consume to create reports outside of MicroStrategy or to join data hosted in MicroStrategy.
Their documentation for the internal APIs is below. I am sure the other direction is possible too; I just need a pointer or an example to understand how.
https://www.microstrategy.com/en/support/support-videos/how-to-use-the-rest-api-in-library
You can use XQuery for this. Take a look here:
https://www2.microstrategy.com/producthelp/Current/AdvancedReportingGuide/WebHelp/Lang_1033/Content/Using_XQuery_to_retrieve_data_from_a_web_service.htm#freeform_sql_4027597040_1133899
https://community.microstrategy.com/s/article/How-to-Create-a-Report-That-Dynamically-Retrieves-Data-From-a-Parameterized-Web-Service?language=en_US
I have samples for that; we can talk about it.
You can try the external data functionality provided by the REST API:
The Push Data API, which belongs to the Dataset API family, lets you make external data easily available for analysis in MicroStrategy. You use REST APIs to create and modify datasets using external data uploaded directly to the Intelligence Server.

By providing a simpler, quicker way to get data out and add data back in, the Push Data API makes it easier to use MicroStrategy as a high-performance data storage and retrieval mechanism and supports predictive workflow by machine learning, artificial intelligence, and data scientist teams. The ability to make external data easily available extends MicroStrategy's reach to new and complex data sources where code, rather than end-users, manages the data modeling/mapping flow. The Push Data API supports close integration with the ecosystem of third-party ETL tools because it allows them to push data directly into MicroStrategy while allowing the most optimal utilization of MicroStrategy's cube capabilities. The Push Data API provides these tools, whether they are analyst or IT-oriented, with the option to create and update datasets on the MicroStrategy Intelligence Server without requiring an intermediate step of pushing the data into a warehouse.
You can first make sure the data is ready in your local environment and then push it to the MSTR server as described in the instructions:
https://www2.microstrategy.com/producthelp/Current/RESTSDK/Content/topics/REST_API/REST_API_PushDataAPI_MakingExternalDataAvailable.htm
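As a rough illustration of that flow, here is a minimal Java sketch. The host name, auth token, project ID, table layout, and the exact payload shape (base64-encoded rows under a "data" key) are assumptions drawn from my reading of the Push Data API docs linked above, so verify them against your MicroStrategy version before relying on this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PushDataSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical Library server; replace with your own.
        String base = "https://mstr.example.com/MicroStrategyLibrary/api";

        // Rows to push, base64-encoded as the Push Data API expects (assumption).
        String rows = "[{\"Region\":\"North\",\"Revenue\":1000}]";
        String data = Base64.getEncoder()
                .encodeToString(rows.getBytes(StandardCharsets.UTF_8));

        // Dataset definition plus the encoded rows; check the payload shape
        // against the documentation page linked above.
        String body = "{\"name\":\"ExternalSales\",\"tables\":[{"
                + "\"name\":\"sales\","
                + "\"columnHeaders\":["
                + "{\"name\":\"Region\",\"dataType\":\"STRING\"},"
                + "{\"name\":\"Revenue\",\"dataType\":\"DOUBLE\"}],"
                + "\"data\":\"" + data + "\"}]}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(base + "/datasets"))
                .header("Content-Type", "application/json")
                // The token comes from a prior POST to /api/auth/login.
                .header("X-MSTR-AuthToken", "<auth token>")
                .header("X-MSTR-ProjectID", "<project id>")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Updating an existing dataset would be a similar request against the dataset's own URL rather than /datasets.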
I need to make a tool to search XML data in the BizTalk MessageBox.
How do I search all XML data related to, say, a common node called Employee ID across all data stored in the BizTalk MessageBox?
The BizTalk Message Box (BizTalkMsgBoxDb database) is a transient store for messages as they pass through BizTalk. Once a message has finished processing, it will be removed from the Message Box.
You probably want to research Business Activity Monitoring (BAM), which will allow you to capture message data as messages flow through BizTalk; the captured data can be exposed through its generic web-based portal. BAM is a big product in its own right, and I would suggest that you invest time in researching all of the available features to find the one that suits your particular scenario. There are many, many resources available; however, you might start by taking a look at Business Activity Monitoring. There is also a very good book specifically on BAM: Pro BAM in BizTalk Server 2009.
Alternatively, take a look at using the built-in BizTalk Administration Console tools for querying the Tracking database (BizTalkDTADb) which will hold messages for later reference based on your pre-defined configuration options. See Using BizTalk Document Tracking.
Finally, you could consider rolling your own message tracking solution, writing message contents to a SQL database table as messages are received in a pipeline, for example.
Check out the BizTalk Message Decompressor on CodePlex! I've been using this tool for a couple of years with excellent results. Since you're hitting the MessageBox directly, you should be very careful and be very familiar with the queries that you choose to execute.
As noted in a previous poster's answer, BAM and the integrated HAT queries in the admin console are the official, safest, and Microsoft-prescribed answers.
So I'm trying to figure out how much capability comes with InterSystems to send data to an XDS repository, specifically using the basic Ensemble package (no HSF). Assume it's not the one InterSystems delivers but an external XDS repository.
Is there a built-in way to send a large blob and wrap the ebRIM metadata around that blob?
As you can see at http://www.intersystemsbenelux.com/media/media_manager/pdf/1398.pdf, Ensemble does not natively support ebRIM, but it does support XML and XML schemas.
Maybe you could assemble an XML document and use that to wrap your blob content.
You can send that over whatever protocol your XDS system provides (xDBC, SOAP, file system, etc.). Take a look at the items listed in the sections "Ensemble Interoperability" and "Ensemble Adapter and Gateway Guides" of http://docs.intersystems.com/ens20122/csp/docbook/DocBook.UI.Page.cls for a full list of connectivity options.
There is the HealthShare Foundation product, which has XDS connectivity.
See this good answer on Google Groups: https://groups.google.com/forum/m/?fromgroups#!topic/Ensemble-in-Healthcare/h7R300H68KQ
Or see the HealthShare part of their website.
HSF (HealthShare Foundation) provides XDS.b connectivity for Query and Retrieve and also for the Provide and Register operation.
OK, so I re-read your question and have an answer for you. I think what you are trying to say is that you have Ensemble, not HSF, and you still want to be able to send documents (XDS Provide and Register).
I did some testing with the open-source integration engine Mirth and stumbled across an example channel of theirs that does a Provide and Register with straight-up SOAP calls to the endpoint.
Basically, build the required SOAP envelope accordingly, then send a PDF or document to the repository using MTOM.
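For a rough idea of what that hand-built call looks like outside Mirth, here is a hedged Java sketch using plain JAX-WS Dispatch with an MTOM binding. The endpoint URL is hypothetical, the mandatory ebRIM metadata is elided, and a real XDS.b repository will also expect WS-Addressing headers and proper XOP/MTOM packaging of the document content, which full SOAP stacks or IHE toolkits handle for you:

```java
// Structural sketch only -- NOT a working ITI-41 client.
import javax.activation.DataHandler;
import javax.mail.util.ByteArrayDataSource; // JavaMail, for the attachment source
import javax.xml.namespace.QName;
import javax.xml.soap.*;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;

public class ProvideAndRegisterSketch {
    public static void main(String[] args) throws Exception {
        String endpoint = "https://xds.example.org/repository"; // hypothetical

        QName service = new QName("urn:ihe:iti:xds-b:2007", "DocumentRepository_Service");
        QName port = new QName("urn:ihe:iti:xds-b:2007", "DocumentRepository_Port");
        Service svc = Service.create(service);
        svc.addPort(port, SOAPBinding.SOAP12HTTP_MTOM_BINDING, endpoint);
        Dispatch<SOAPMessage> dispatch =
                svc.createDispatch(port, SOAPMessage.class, Service.Mode.MESSAGE);

        SOAPMessage msg = MessageFactory
                .newInstance(SOAPConstants.SOAP_1_2_PROTOCOL).createMessage();

        // Root of the Provide and Register request; the required ebRIM
        // SubmitObjectsRequest metadata would be built out underneath it.
        msg.getSOAPBody().addBodyElement(new QName(
                "urn:ihe:iti:xds-b:2007", "ProvideAndRegisterDocumentSetRequest", "xdsb"));

        // Attach the document itself.
        byte[] pdf = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("report.pdf"));
        AttachmentPart doc = msg.createAttachmentPart(
                new DataHandler(new ByteArrayDataSource(pdf, "application/pdf")));
        doc.setContentId("<doc1@example.org>");
        msg.addAttachmentPart(doc);

        SOAPMessage response = dispatch.invoke(msg);
        response.writeTo(System.out);
    }
}
```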
This is how HealthShare earns its money: encapsulating all that manual construction of objects that need to be sent to endpoints.
Anyway, a screenshot of the Mirth channel destination may give you an idea:
http://www.integrationrequired.com/wp-content/uploads/2013/02/Capture.PNG
A lot of our use cases for BizTalk involve simply mapping and routing HL7 2.x messages from one system to another. Implementing maps and associating them with send/receive ports is generally straightforward, but we also need to do some content-based filtering on the sending side.
For example, we may want to send ADT A04 and ADT A08 messages to system X only if the sending facility is any of 200 facilities (out of a possible 1000 facilities in our organization), while system Y needs ADT A04, A05, and A08 for a totally different set of facilities, and only for renal patients.
Because we're just routing messages and not really managing business processes here, utilizing orchestrations for the sole purpose of calling out to the business rule engine is a little overkill, especially considering that we'd probably need a separate orchestration for each ADT type because of how the schemas work. Is it possible to implement filter rules like this without using orchestrations? The filters functionality of send ports looks a little too rudimentary for what we need, but at the same time I'd rather not develop and manage orchestrations.
You might be able to do this with property schemas...
You need to create a property schema and include the properties (from the other schemas) that you want to use for routing. Once you deploy the schema, those properties will be available for use as filters on the send port. Start from here; you should be able to find examples out there...
As others have suggested you can use a custom pipeline component to call the Business Rules Engine.
And rather than trying to create your own, there is already an open-source one available called the BizTalk Business Rules Engine Pipeline Framework.
By calling BRE from the pipeline you can create complex rules which then set simple context properties on which you can route your messages.
Full disclosure: I've worked with the author of that framework when we were both at the same company.
I am currently working on a system that generates product recommendations like those on Amazon: "People who bought this also bought this..."
Current Scenario:
Extract the client's Google Analytics data and insert it into a database.
On the client's website, when a product page loads, an API call is made to get recommendations for the product being viewed.
When the API receives the product ID in the request, it looks in the database, retrieves the recommended product IDs (using association rules), and sends them back as the response.
This list of product IDs is then processed on the client end to get the product details (image, price, etc.) and displayed on the website.
Currently I am using PHP and MySQL with the gapi package and a REST API, with storage on Amazon EC2.
My question is: if I have to choose among the following, which would be the best choice to implement the concept described above?
1. PHP with SimpleDB or BigQuery.
2. R with BigQuery.
3. RHIPE (R and Hadoop) with SimpleDB.
4. Apache Mahout.
Please help!
This isn't so easy to answer, because the constraints are fairly specialized.
The following considerations can be made, though:
BigQuery is not yet public. Thus, with a small user base, even if you are in the preview population, it will be harder to get advice on improvements.
Each of your options pairs a modeling system with a storage system. Apache Mahout is not a storage mechanism, so it won't necessarily work on its own. I used to believe that its machine learning implementations were a pastiche of a few Google Summer of Code projects, but I've updated that view on the suggestion of a commenter. It still looks like it has rather uneven and spotty coverage of different algorithms, and it's not particularly clear how the components are supported or maintained. I encourage an evangelist for Mahout to address this.
As a result, this eliminates the 1st, 2nd, and 4th options.
What I don't quite get is the need for a real-time server to utilize Hadoop and RHIPE. That should be done in your batch processing for developing the recommendation models, not in real-time. I suppose you could use RHIPE as a simple one-stop front end for firing off queries.
I'd recommend using RApache instead of RHIPE, because you can get your packages and models pre-loaded. I see no advantage to using Hadoop in the front end, but it would be a very natural back end system for the model fitting.
(Update 1) Other interface options include RServe (http://www.rforge.net/Rserve/) and possibly RStudio in server mode. There are R/PHP interfaces (see comments below), but I suspect it would be better to access R through HTTP or TCP/IP.
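If you do end up accessing R over TCP/IP, the calling side can stay thin. Here is a minimal sketch using Rserve's Java client; the pre-loaded model and the recommend() function are hypothetical stand-ins for whatever your R session actually exposes:

```java
import org.rosuda.REngine.REXP;
import org.rosuda.REngine.Rserve.RConnection;

public class RserveScoringSketch {
    public static void main(String[] args) throws Exception {
        // Connects to a running Rserve daemon (localhost:6311 by default).
        RConnection r = new RConnection();
        try {
            // Assumes the R session was started with the model already loaded;
            // recommend() is a hypothetical R function standing in for your
            // own scoring code.
            REXP result = r.eval("recommend(model, product_id = 12345, n = 5)");
            for (double id : result.asDoubles()) {
                System.out.println("recommended product: " + (long) id);
            }
        } finally {
            r.close();
        }
    }
}
```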
(Update 2) Addressing the whole process: the basic idea I see is that you could query the data from PHP and pass it to R, or, if you wish to query from within R, look at the link in the comments (to the OmegaHat tools) or post a new question about R & SimpleDB; I'm sure someone else on SO would be able to give better insight on this particular connection. RApache would let you instantiate many R processes already prepared with packages loaded and data in RAM; you would then only need to pass whatever data is needed for prediction. If your new data is a small vector, then RApache should be fine, and it seems this is the case for the data being processed in real time.
If you want a real-time API for recommendations based on data in a database, Apache Mahout does this directly. You want to use ReloadFromJDBCDataModel with a GenericItemBasedRecommender on top, and use the servlet-based wrapper in the examples module. It's probably a day or two of work to get familiar with the code and customize it to your needs, but it's pretty simple.
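To make that concrete, here is a minimal sketch of the in-memory, JDBC-backed setup just described, using Mahout's Taste API. The database URL, table and column names, and the choice of log-likelihood similarity are hypothetical; adjust them to your schema:

```java
import java.util.List;
import javax.sql.DataSource;
import org.apache.mahout.cf.taste.impl.model.jdbc.MySQLJDBCDataModel;
import org.apache.mahout.cf.taste.impl.model.jdbc.ReloadFromJDBCDataModel;
import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;

public class AlsoBoughtSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical MySQL source holding a user_id/product_id/preference table.
        MysqlDataSource ds = new MysqlDataSource();
        ds.setURL("jdbc:mysql://localhost/recs");
        ds.setUser("reader");
        ds.setPassword("secret");

        // ReloadFromJDBCDataModel caches the JDBC-backed model in memory;
        // calling refresh(null) re-reads the table.
        DataModel model = new ReloadFromJDBCDataModel(new MySQLJDBCDataModel(
                ds, "purchases", "user_id", "product_id", "preference", "ts"));

        GenericItemBasedRecommender recommender = new GenericItemBasedRecommender(
                model, new LogLikelihoodSimilarity(model));

        // "People who bought this also bought..." for product 12345.
        List<RecommendedItem> similar = recommender.mostSimilarItems(12345L, 5);
        for (RecommendedItem item : similar) {
            System.out.println(item.getItemID() + " score=" + item.getValue());
        }
    }
}
```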
When you get past about 100M data points, you would need to look at distributing the computation with Hadoop. That's a fair bit more complex. Mahout has a distributed recommender too, which you can customize.