Using the Trace Processing library, is it possible to use streaming as described here to parse IGenericEvents from a .etl file?
(I am a developer at Microsoft who works on the TraceProcessor project.)
That is not possible with the current implementation. It is meant to be transparent to users, but with the non-streaming data sources in TraceProcessor, some of the data is made available by parsing the ETW events and state models directly in TraceProcessor, while the rest is exposed as a .NET/managed TraceProcessor projection on top of native/C++ ETW processing done by Xperf. Likewise, the current implementation of the Windows Performance Analyzer (WPA) uses TraceProcessor as the data source for some of its tables, and Xperf as the source for the others.
In order to support streaming access in the current implementation of TraceProcessor, a data source has to be written both A) entirely within TraceProcessor (i.e. not in Xperf), and B) specifically to support streaming. We have typically only added this support when adding new data that wasn't already available in Xperf, or when we had other reasons to do a major rewrite of a data source.
Generic event support in TraceProcessor is currently built on top of the Xperf support, in part because there is some complicated logic required to parse the schemas for the event fields in one pass of the trace, and then populate the IGenericEvents in the next pass.
We don't currently have plans to invest in a streaming version of generic events, but if you are particularly interested, you could create an issue in our issues repo on GitHub and we could keep you posted if plans change.
I want to integrate eLearning into an existing system that I already have. I have been reading a lot about the two standards, SCORM and xAPI, but everything I have read covers theoretical differences, the pros and cons of each standard. What I want is the technical differences, from a developer's perspective, of implementing those standards in a system. What are the differences when implementing those standards from a development point of view? I just want the headlines of the process of implementing each standard. Any references or documentation about that would also be very helpful.
Also, would it be doable, or logical, to integrate one standard into the system and later on integrate the other one? For example, SCORM first, and then later, if needed, xAPI?
Let me start with the integration part. Yes, you can do SCORM and later integrate xAPI, though that might require retooling the SCORM course, or the LMS, to do the xAPI part. This is done in practice; a lot of what I do is integrate existing SCORM ecosystems with xAPI and LRSs.
As for differences in SCORM and xAPI, here's some high-level info.
SCORM is a set of specifications that defines the way to package content, the way to have your content report data, and the way an LMS launches and manages SCORM content and data. xAPI is a specification that defines a REST style API and JSON data format to track interactions/activity that happened in content.
As Andrew said, SCORM content finds an embedded API object in the browser DOM and uses that to communicate very structured and specific data to an LMS. xAPI uses a REST HTTP API to communicate various data in a well defined format.
Without some creative programming, SCORM is typically content that is delivered via a browser from a Learning Management System to the client, usually in the form of HTML, images, and videos. xAPI content can be those types of things as well, but since the API is HTTP/REST-like, supporting unmanaged content - simulators, phone apps, games - is a little easier.
SCORM's data model is very clearly defined in the specification and not normally extended. xAPI's data format is defined, but the actual data is much looser and open to the needs of the developer.
SCORM has been around since about the year 2000 in various versions. It is well supported in LMSs and content development tools. And many in the eLearning space know it. xAPI is newer. Support is growing as well as the number of folks who understand it, but it is still less supported than SCORM.
One final thing of note: the SCORM specs never defined a way to get data out of the LMS once the SCO attempt has ended. This made reporting and metrics difficult to do without getting the LMS vendor to build those features in. xAPI defines a GET endpoint to retrieve data (this might not be performant when you have hundreds of thousands to millions of data points, but, caveats about permissions aside, you can get the data back out). Some LRS vendors add reporting and analytics platforms, as well as ways to get your data into BI or data-analysis tools.
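For example, here is a minimal sketch of pulling statements back out of an LRS via that GET endpoint (the endpoint URL and credentials are placeholders):

```javascript
// Query an LRS for recent statements about one learner.
// The endpoint and Basic-auth credentials below are hypothetical.
const params = new URLSearchParams({
  agent: JSON.stringify({ mbox: "mailto:learner@example.com" }),
  limit: "50"
});

fetch("https://lrs.example.com/xapi/statements?" + params, {
  headers: {
    "X-Experience-API-Version": "1.0.3",           // required by the spec
    "Authorization": "Basic " + btoa("key:secret")
  }
})
  .then(res => res.json())
  .then(page => {
    // The result is { statements: [...], more: "<url>" } per the spec;
    // follow `more` to page through large result sets.
    console.log(page.statements.length, "statements returned");
  });
```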
There's more that you'll find as you get into the space, but those are some of the things off the top of my head.
I would recommend you read the xAPI spec first, mainly because it is more easily consumed. Then look at SCORM - it has different versions (1.2, and 2004 2nd through 4th editions).
As for implementing content,
SCORM: Figure out the version to build to, create the SCOs (the content units that report data to the LMS), have them find the API embedded in the HTML DOM, use the defined methods (Initialize, Terminate, SetValue, GetValue) to communicate with the LMS, then package it all up in a zip with an XML manifest and supporting XML schemas, and deploy to the LMS.
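For example, here is a minimal sketch of that discovery-and-reporting flow for SCORM 2004 (in SCORM 1.2 the object is named API instead of API_1484_11):

```javascript
// Walk up the frame hierarchy to find the SCORM 2004 API object
// the LMS has embedded, then report data through it.
function findAPI(win) {
  var attempts = 0;
  while (!win.API_1484_11 && win.parent && win.parent !== win && attempts < 10) {
    win = win.parent;
    attempts++;
  }
  return win.API_1484_11 || null;
}

var api = findAPI(window) || (window.opener ? findAPI(window.opener) : null);
if (api) {
  api.Initialize("");                                  // start the session
  api.SetValue("cmi.completion_status", "completed");  // report progress
  api.SetValue("cmi.score.scaled", "0.92");            // report a score
  api.Commit("");                                      // persist to the LMS
  api.Terminate("");                                   // end the session
}
```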
xAPI: Create your content, preferably supporting an xAPI launch mechanism like TinCan Launch, make REST calls to the LRS's xAPI endpoint using something like Fetch or Requests, and host/package/deploy as you determine.
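And a minimal sketch of sending an xAPI statement to an LRS with Fetch (again, the endpoint and credentials are placeholders):

```javascript
// Build a basic actor/verb/object statement and POST it to the LRS.
// The endpoint and Basic-auth credentials below are hypothetical.
const statement = {
  actor: { mbox: "mailto:learner@example.com", name: "Example Learner" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed",
    display: { "en-US": "completed" }
  },
  object: { id: "http://example.com/activities/intro-course" }
};

fetch("https://lrs.example.com/xapi/statements", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Experience-API-Version": "1.0.3",           // required by the spec
    "Authorization": "Basic " + btoa("key:secret")
  },
  body: JSON.stringify(statement)
})
  .then(res => res.json())
  .then(ids => console.log("stored statement ids:", ids)); // LRS returns the ids
```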
Looking at the specs is the driest, but most authoritative, way to learn about the different specs. There are also some very good articles and videos out there by various vendors and implementors.
The main technical difference is that SCORM uses a JavaScript API to communicate between a course, running in a window or frame wrapped by the LMS, and the LMS itself. xAPI uses a RESTful API to communicate with an LRS over HTTP.
Could you please let me know of any useful online resources to learn from and implement some scenarios, to explore this further? Thanks.
DataPower appliances have historically been, in order:
XML transformation acceleration devices (that used to be a thing; XSLT was too slow to process in software)
SSL offloading devices (again, that used to be a thing, same reason)
Web site and application gateways. Security for both web sites and web services, concentrated around HTTP and SOAP/XML application-layer mechanisms and standards (SSL/TLS, WS-Security, SAML, etc.), but also token management, security conversion... think "super SSO" plus application security gateway
More specialized integration tools: transformation of XML (with XSLT), transformation to/from non-XML formats (like CSV), database connections, integration patterns (like routing, composing, and a LOT more). Some called DataPower a lightweight ESB.
More specialized uses: B2B (EDI), JSON processing, REST/JSON support, API management (when used as the deployment point for API Connect)
Notice that each of the later features needs the former ones (the ESB capabilities are based on web services security, etc.).
As you may know, most DataPower development is done with transformations. The default, established language for them is XSLT (XQuery is also a historic, less popular option).
XSLT is both one of the most powerful and one of the most horrible languages to work with. Kind of like the Perl+regex of the XML world...
... but there is another problem with XSLT: it was not designed to work with JSON, which left the DataPower of 10 years ago heading for a fast retirement.
At first, IBM designed pseudo-XML ways of dealing with JSON. You could convert inbound JSON to XML and work with the JSON as XML in XSLT. The inverse operation was to use XSLT to generate JSON... it worked perfectly, but it kind of looked like old-school HTML/PHP merging code.
So IBM came up with a good idea: GatewayScript.
(Mostly based on many other good ideas)
GatewayScript is basically ECMAScript 2015 (ES6) + CommonJS 1.0 + many super popular JS crypto libraries.
ECMAScript is, obviously, better known as JavaScript.
Pertaining to your question, the main advantage of GatewayScript is that it enables easier JSON web services development for all the features in the list above, targeting modern REST/JSON APIs instead of older (but still good) SOAP/XML web services.
GatewayScript has now been around for years and is no longer a "beta" option.
Here are some other neat GatewayScript features:
Access to a DOM model, representing the incoming and outgoing versions of the document, in simple JS notation (see the sketch below)
Better errors in the logs when something does not work (you get the .js line number, unlike with XSLT errors)
Better debugging options (you can enable a line-by-line debugger)
Some examples from the web written in Node.js and other JS frameworks can work... which is amazing
A very useful IBM site (DataPower Playground) where you can learn and test GatewayScript examples without your own DataPower, à la w3schools
And more.
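To illustrate the first point, here is a minimal sketch (independent of any particular service configuration) of reading the incoming document and writing the outgoing one:

```javascript
// GatewayScript sketch: read the inbound payload as JSON via the
// session object's input context, then write a modified payload
// to the output context.
session.input.readAsJSON(function (error, json) {
  if (error) {
    // Trigger the error rule with a readable message.
    session.reject('Payload is not valid JSON: ' + error.message);
    return;
  }
  // Work with the document in plain JS notation.
  json.processedBy = 'gatewayscript-example';
  session.output.write(json);
});
```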
I hope this helps.
GhislainCote's answer is very complete, but basically GatewayScript is Node.js with an added framework for handling the session object, which will contain your data/payload.
There are also some special objects, e.g. service-metadata and header-metadata, which contain DataPower variables and headers.
Sample scripts are available in the store:///gatewayscript/ directory; see store:///healthcheck.js for one example.
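For instance, here is a small sketch (using variables from the standard var://service tree) of those modules working together:

```javascript
// Sketch: header-metadata sets HTTP headers on the current message;
// service-metadata reads and writes DataPower service variables.
var hm = require('header-metadata');
var sm = require('service-metadata');

// Read a built-in DataPower variable.
var txid = sm.getVar('var://service/transaction-id');

// Set a response header from it.
hm.current.set('X-Transaction-Id', String(txid));

// Skip the backside call so this script produces the response itself.
sm.setVar('var://service/mpgw/skip-backside', true);

session.output.write({ ok: true, transactionId: txid });
```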
Also review the Knowledge Center; it contains a lot of help and information about GatewayScript:
https://www.ibm.com/support/knowledgecenter/SS9H2Y_7.7.0/com.ibm.dp.doc/gatewayscript_model.html
GatewayScript is very powerful. I've coded support for AS2 de-/enveloping (for customers not having the B2B Module option) and RosettaNet handling in GatewayScript, so there is pretty much no limit to what you can achieve!
Can anyone explain Java callouts? A little help will do. I have several doubts regarding where to add the expressions and message flow JAR, and where to add my custom JAR.
Can I access the resources/java folder directly, and can I use it to store my data?
First, check the Apigee docs at
Customize an API using Java
http://apigee.com/docs/api-services/content/customize-api-using-java
Keep in mind Java Callouts are only supported in the paid, Apigee Edge product, not the free Developer platform.
As you decide how to use Java, you should consider this basic hierarchy of policy management:
Policy Configuration First: Apigee policy configurations are in broad use, and therefore tested daily by clients, and are the most performant option.
JavaScript Callout: For stuff you can't do in a standard policy there is JavaScript -- keep in mind this is "Compiled JavaScript", which means that at the time you deploy your project the JS gets interpreted by the Java Rhino engine and then runs like native code. Very fast, very scalable, and very easy to manage, as your code is all in plain text files (see the sketch below).
Java: You have to have a pretty compelling reason to use Java. The most common cases are where you have some complex connection that needs to be negotiated with custom encryption schemas, or you are manipulating binary content. While performant, it's the most difficult code to manage (you upload compiled JARs, so if someone takes over your work, the source code lives in a separate place from your deployment bundle), and it's the most difficult to debug in the event of a failure.
To your specific question: All Apigee variables are available in Java, and Java gives you pretty much god-like powers on the local server where the code is executed. Keep in mind, Apigee's physical architecture is distributed -- your JAR may run on different servers for different API calls, so any persistent data (that you might want to store locally) should really be put into a Key Value Map and read as needed. Keep your API development as stateless as possible.
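To make the variable-access and statelessness points concrete, here is a hypothetical sketch of a JavaScript callout (the variable and KVM names are made up) that reads configuration a KeyValueMapOperations policy has already loaded into a flow variable:

```javascript
// Apigee JavaScript callout sketch. Assumes an earlier
// KeyValueMapOperations policy populated 'config.api_key' from a
// Key Value Map, so the proxy itself stays stateless.
var apiKey = context.getVariable('config.api_key');
var caller = context.getVariable('request.queryparam.client_id') || 'anonymous';

if (!apiKey) {
  // Fail fast if required configuration is missing.
  throw new Error('Missing config.api_key from the key value map');
}

// Pass a derived value along to later policies or the target.
context.setVariable('derived.caller_tag', caller + ':' + apiKey.slice(0, 4));
```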
Hope that helps.
I'm writing a web app in Scala using the Play framework. I'd like to be able to push some binary data to my web server from another machine I'm using to do number crunching, and I'd like to do this over HTTP. Can anyone suggest the best way to do each side? Ideas that have occurred to me so far are:
Send the data up as a file upload via the usual play form processing. Nice on the (web) server side, but I'm not sure what libraries to use for pushing the data up from the (number crunching) client. In C/C++ I'd consider using Curl.
Send the data up as raw POST with the binary attached and encoded appropriately. Not sure how to do either side.
I've done each of the above on several occasions in Python and C++ (although not recently enough to remember how!), but I am not a web dev (more a general software engineer) and have only ever had control of one side before, so I have no idea what the best way to do this is.
Any thoughts appreciated.
Alex
It depends what platform (and language) you're already using for the number-crunching client part. If that 'client' is also using the Play framework (or at least has access to its libraries), then there are some very helpful tools for accessing web services (see here also).
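If the client isn't on the JVM, any HTTP library that can send a raw POST body will do. As a minimal sketch, here is the second approach (a raw binary POST) from a Node.js client; the upload URL is hypothetical, and this assumes Node 18+ where fetch is global:

```javascript
// Push a binary buffer to the Play endpoint as a raw POST body.
const fs = require('fs');

const data = fs.readFileSync('results.bin'); // binary payload to upload

fetch('http://webserver.example.com/upload', { // hypothetical Play route
  method: 'POST',
  headers: { 'Content-Type': 'application/octet-stream' },
  body: data
}).then(res => console.log('server responded:', res.status));
```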
Short Version (tl;dr):
Is there an open source or commercial engine that provides embeddable collaboration and microblogging functionality?
Long Version:
I am creating a niche application that has need of this functionality and do not want to reinvent the wheel. The following are must have requirements:
Data API only. My application is SaaS, and I want to build the functionality around the data. This eliminates most of the offerings out there (Facebook, Salesforce Chatter, Yammer, Present.ly, Teambox)
Does not require use of a built-in front end. I really just want an engine that will take care of the storage and events, and gives me a means of querying. Requiring the use of a specific front end renders it useless for embedding into my app. This eliminates everything else I have found (status.net, Yonkly, Jaiku)
Beyond standard updates and replies, it can handle custom events. For example, if I were embedding this into a logistics application, I could have the engine handle events like "shipped", "received", and "cancelled".
Beyond this, there are several nice-to-have features that a framework would have:
Should not require a specific platform or server technology to run (i.e. something like a RESTful API would be nice)
Should be message based so that commands that affect its state can come from any source
Should encapsulate its own storage so that external resources are not necessary (i.e. no database needed)
Should have pluggable, extensible UI components/widgets for web, mobile, and desktop clients
Should have search and retrieval APIs available for many languages/platforms
It seems that someone out there should have this already, or at least be in progress with it. Please point me in the right direction.
Since nobody had any answers and continued research did not find anything, I created a solution on my own called Collabinate. Updates can be found on Twitter, and the project itself is hosted on GitHub.