Is there a way to upload a .feature file to Rally against a story or feature, and also to publish the automated reports to Rally?
You are trying to link data that is incompatible. A story is a historical piece of data; it relates to the point in time when it was created.
A .feature file is a current piece of data; it documents the current state of your application.
If you try to link the two together, you end up either:
Freezing the feature files (this is very destructive)
Having stories with broken/inaccurate/misleading links, as the features are updated and modified to reflect the current state of the application
Both of the above are really bad things.
What you should link your story to is the pull request or the commits that implement the story. There you are linking data of the same type (both are historical). Note that these commits will include the changes made to the feature files while implementing the code, so you get an indirect connection of the kind you are looking for.
I have a couple of training courses that I sometimes provide to clients. They each consist of a few text pages, 20-25 videos, and a link out to take an exam in my LMS.
My preference has always been to provide embed links to the videos, as that allows us to easily push out updates; the client then embeds them in their own LMS / training package however they want. But two clients are requesting that the courses be delivered as a SCORM package to run on their LMS.
I'm familiar with authoring tools like Captivate and Articulate Storyline. I'm not a huge fan, as the results feel like canned PowerPoints, and I'm also not sure that's what the client wants.
Two questions:
(1) My understanding is that I can package a SCORM file manually. How does that content present itself when put into an LMS? Would it present slide-by-slide in a single panel (similar to how I see Storyline work), or is it distributed based on how the LMS is set up?
(2) Would doing it manually be advantageous in any way?
A couple of things to watch out for:
Simply zipping your content folder along with an imsmanifest.xml may not put the files at the root of the zip, where the LMS typically expects them. Instead you end up with folder/imsmanifest.xml rather than /imsmanifest.xml. Some LMSs do not care, but many do; see the packaging sketch at the end of this answer.
You can edit the imsmanifest.xml manually, but make a point of validating it afterwards. Editing it in an editor that gives no schema feedback can easily introduce validation issues.
Content packaging is only part of it; the LMS may have its own set of additional options, like how to launch the content (popup, new window/tab, or in an IFRAME). It may also have scoring/attempt settings and other options beyond what the SCORM CAM provides.
Aside from familiarizing yourself with the IMS manifest format and the cautionary items above, you can do this without too many hiccups.
There is a packager on https://cybercussion.com if you'd like to demo that after you have prepared your Shareable Content Object.
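To illustrate the manifest-at-root caution from earlier, here is a minimal Python sketch (the directory and file names are hypothetical) that zips the contents of a content folder rather than the folder itself, so imsmanifest.xml lands at the root of the package:

    import os
    import zipfile

    def package_sco(content_dir, zip_path):
        # Zip the CONTENTS of content_dir so imsmanifest.xml sits at the zip root.
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for root, _dirs, files in os.walk(content_dir):
                for name in files:
                    full = os.path.join(root, name)
                    # arcname is relative to content_dir, so you get
                    # /imsmanifest.xml, not /content_dir/imsmanifest.xml
                    zf.write(full, arcname=os.path.relpath(full, content_dir))

    # Example (hypothetical paths):
    # package_sco("my_course", "my_course_scorm12.zip")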
GL
We are currently working with the Gracenote Music API and are wondering if there is a full list of genres and mappings between the different hierarchies of genres. Ideally, we'd love a dump of those tables from the backend Gracenote system. If CSVs, text files, or even XML are easier to provide, we will figure out a way to import that data into our system.
If a full mapping isn't available, a list of top level genres would be very helpful.
I'm afraid there is no way to iterate the list of genres via the Web API. Most of the client SDKs have this capability.
It turns out that there are at least three sources for example code in the GNSDK:
Properly maintained samples in the "samples" directory. These compile into full applications with minimal effort (once you've settled on a makefile solution for your platform, as a complete Automake setup is not yet part of the package).
samples/code_snippets - These are useful to look at, but do not necessarily build into full apps, and may not be completely up to date with the SDK.
Code linked from the documentation. This is a problem if you downloaded the SDK as an archive and the documentation as a PDF, because the links resolve as relative file links, not HTTP links, and you won't have the files. You need to look at the HTML version of the documentation on the server to find them. They are also apparently outdated and will not build without some (relatively minor) rework, which can be done using the primary samples as a guide.
So, all of that said, what you want to look at in the GNSDK Developer's Guide is "Advanced Topics: Using Lists". You will want to read that entire section, then find and work with the sample application referenced on page 93.
To get the list of genres (or moods, or eras), you need to call the "fieldvalues" API; you can see how to do it here:
https://developer.gracenote.com/rhythm-api#attribute-station
This call will give you the list of supported genres:
https://cXXXXXXX.web.cddbp.net/webapi/json/1.0/radio/fieldvalues?fieldname=RADIOGENRE&client=CLIENT_ID&user=USER_ID
You can then use the returned IDs with pygn.createRadio().
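As a minimal Python illustration of that call (using the requests library; the cXXXXXXX host prefix, CLIENT_ID, and USER_ID are placeholders for your own credentials, and you should inspect the JSON yourself to confirm its exact shape):

    import requests

    # Placeholder host and credentials; use the values issued for your app.
    URL = "https://cXXXXXXX.web.cddbp.net/webapi/json/1.0/radio/fieldvalues"

    resp = requests.get(URL, params={
        "fieldname": "RADIOGENRE",
        "client": "CLIENT_ID",
        "user": "USER_ID",
    })
    resp.raise_for_status()

    data = resp.json()
    print(data)  # contains the supported genre IDs and names

    # The genre IDs in this response are what you then pass to
    # pygn.createRadio(), per the pygn documentation.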
I am working on a learning project for mobile devices that requires (or would at least be desirable) the ability to export to a SCORM-compatible format. I see that SCORM has a "Package Interchange Format" (PIF) based around a .zip file. I am new to SCORM and am trying to understand exactly what this file must contain. Specifically, is the PIF file just a format for generating interchangeable data between systems, or is it more complicated than that?
For some context, imagine the use case of a set of questions/sections that a user has to run through in a native mobile app, and at the end, we want to offer the user the ability to "export" their data in a SCORM-compliant fashion. Is this simply a matter of exporting information about a) the questions and b) the answers into some XML format, or is there more to it? I notice a lot of the documentation around SCORM seems to focus on JavaScript and HTML. Is SCORM HTML-specific, or are native apps reconcilable with SCORM, at least from the export perspective?
Apologies if any of this is basic stuff. Just trying to wrap my head around the standard and how it does or does not apply to what I'm doing.
The PIF is really a very small detail of SCORM packaging. It only says that you can distribute your content in zip format, not what the zip should contain.
What a SCORM (1.2) file should contain is described in much, much detail in the SCORM CAM book. To summarize very quickly, you need:
All the files necessary for the content to run (images, HTML files, JavaScript files, CSS, etc.)
A file called imsmanifest.xml that describes a few things about your content, the files it contains, and possibly how they interact with the LMS they run on. It can vary from very simple to very complicated; a minimal skeleton follows this list.
Optionally, metadata in XML format
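For orientation, here is a minimal SCORM 1.2 imsmanifest.xml skeleton (identifiers, titles, and the launch file are placeholders; real manifests usually also carry the schema/schemaversion metadata and schema-location entries shown in the SCORM CAM examples):

    <?xml version="1.0" encoding="UTF-8"?>
    <manifest identifier="com.example.course" version="1.0"
              xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
              xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
      <organizations default="ORG-1">
        <organization identifier="ORG-1">
          <title>My Course</title>
          <item identifier="ITEM-1" identifierref="RES-1">
            <title>Lesson 1</title>
          </item>
        </organization>
      </organizations>
      <resources>
        <resource identifier="RES-1" type="webcontent"
                  adlcp:scormtype="sco" href="index.html">
          <file href="index.html"/>
        </resource>
      </resources>
    </manifest>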
So, SCORM does not care whether and where you include your questions and answers; it doesn't know about them. That is your content's responsibility: the content itself has to include them and present them to the user when run. What SCORM can do is make your content communicate with the LMS you're running it on, so that the results of those questions are persisted.
For now, I'd suggest that you have a look at some existing SCORM files to get an idea of what the imsmanifest.xml file should look like, then study the SCORM CAM book, and things will get rolling.
The trouble with SCORM is that it has to be launched from within an LMS. If you're building an external app that has to communicate with an LMS, take a look at either LTI (http://www.imsglobal.org/toolsinteroperability2.cfm) or the Tin Can API (http://tincanapi.com/).
SCORM 2004 sample: https://github.com/cybercussion/SCOBot/
You zip the contents of the directory. Some LMSs expect the imsmanifest.xml to be located in the root of the zip.
Some people are using native apps with an LMS and loading the SCOs into an HTML view, but as stated above, SCORM expects JavaScript-to-JavaScript communication.
In most cases, the work of creating a document belongs to a specific process, and each document serves as input or output for another. This means that if all the documents of one process are not completed, work on the next process cannot be started and its documents cannot be created. For example, in development, the detailed-design process can be started only after the basic-design process is completed.
In Alfresco, I can define a fixed directory structure as a space template and create directories using that template. But I cannot define the order of the work process, and I don't know how to proceed.
Could you let me know how to implement or customize this?
You may have the directory structure created, but you probably need some workflows defined. Workflows do basically what you need: when one user creates a document and finishes their part of the process, they can mark their part of the work done and pass it to the next step.
You can learn some basics about workflows on the Alfresco wiki, of course.
I suggest that you read the documents in the Custom Workflow Guide and maybe this material.
After that, if you don't understand much, take a look at the samples, then try modifying one of the samples and eventually design your own workflow.
I'm looking for the best solution to allow our users to upload XLS spreadsheets so that the data can be used to populate tables in our data warehouse (DW).
Our users are heavy BusinessObjects (BO) users, and BO lets you export to XLS. When they have data in a spreadsheet that needs to be loaded into the DW, they need a process to upload the data from the XLS into the DW's database. As a result, we end up with many of these "interfaces" when I think what we really need is a programmatic, automated feed. Using Excel as a data source for inter-system feeds just seems, in my gut, like a bad idea.
Question #1: I'd like to see if you agree and why or why not.
OK, there is no swimming against that tide, so I now take it as a given that XLS uploads are here to stay for us. Now I need to find the best solution. First, I'll explain what we do now and then what I don't like about it:
Via web pages, we provide empty XLS files (no rows) with a defined set of columns. Each file is intended to be used to update a different destination table. In each spreadsheet is an "Upload" button. Pushing the Upload button runs a macro in the spreadsheet that serializes the contents of the file to CSV and FTPs the data to a server folder. Periodically, a scheduler fires off an Informatica ETL job that uses the CSV file as input, loads the data into a custom XLS-specific staging table and then, if the records pass edits, into the appropriate target table. Any errors encountered are logged to an error table. For each XLS file uploaded, the data ends up in a separate staging table and error table specific to that file.
Some of the things I don't like about our process:
1) The macro code in the XLS is too exposed (it includes passwords, for example), can be tampered with, and it is hard to ensure that users are using the latest XLS templates.
2) Business-rule edits are placed in the ETL program, where they probably should be, but because we would like to catch errors as early as possible, i.e., in the spreadsheet, edits are also added to the macro code. This duplicates the business edits; I want these rules in one place, centrally controlled. IMHO, putting any macro code in the XLS introduces a maintenance issue, even calls to stored procedures (some of which we have) or calls to web services (we haven't yet tried calling .NET web services from XLS macros).
3) Every XLS upload template has its own process, with a distinct set of staging and error tables and a custom screen for reporting the errors encountered. It seems like we need a more generalized, reusable solution.
Besides often getting data exported to XLS from BO, the users also like Excel because it is easier for editing a large number of records and less clunky than editing individual records via a web interface.
This is the general direction that I am thinking:
First, I want the users to have the editing ease of Excel, but without embedded macros in the spreadsheet. I experimented with FarPoint's grid with Excel compatibility...
http://www.fpoint.com/netproducts/spreadweb/tour/excel.aspx
...and I found that it was quite easy to let a user open an XLS file that resides on their PC, have it open in a browser, and easily access the data from server-side .NET web code. Excel isn't running locally in their browser, but the functionality of Excel is reproduced, presumably through a lot of client-side scripting that I expect would be a real pain to duplicate myself. You can even cut and paste from a local spreadsheet into the web spreadsheet. This sounds great; my biggest problem is cost. Our company is near death and won't allow us to purchase any new software.
Next, I want to identify the common components across all spreadsheet upload processing and come up with generic processing code. For example, I imagine a table that defines each of our spreadsheets and its format, including the column names and data-type definitions, perhaps in terms of their destination columns instead of hard-coding. Based on this template definition I can generate XLS templates for download, and I can also perform simple generic edits to ensure that the data entered matches the table definition. One common web page can be used to present the data, report data-type mismatch errors, and allow the user to correct them. I would also define one common "staging" table for the data, using a narrow table with columns such as submission #, row number, column name, and value. No more "custom everything" is the goal.
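A rough sketch of that generic idea (Python with openpyxl, purely illustrative; the template definition, names, and types below are invented, and in practice the definition would come from the template table just described):

    from openpyxl import load_workbook

    # Hypothetical template definition, normally loaded from the definitions table.
    TEMPLATE = [("CustomerId", int), ("Region", str), ("Revenue", float)]

    def stage_workbook(path, submission_id):
        # Returns generic (submission, row, column, value) records plus errors.
        ws = load_workbook(path).active
        staged, errors = [], []
        # min_row=2 skips the header row.
        for row_num, row in enumerate(ws.iter_rows(min_row=2, values_only=True), start=2):
            for (name, typ), value in zip(TEMPLATE, row):
                try:
                    staged.append((submission_id, row_num, name, typ(value)))
                except (TypeError, ValueError):
                    errors.append((submission_id, row_num, name, "expected " + typ.__name__))
        return staged, errors

Every template then shares the same staging and error structures instead of each upload getting its own custom tables.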
Next I need to decide where to put the business rules. My department's management firmly believes that all loading of data should be done by Informatica ETL batch processes and that the rules/edits therefore belong "in Informatica". I have zero experience with the Informatica tools; I am more of a .NET guy. I am therefore unsure how these rules are implemented, but I suspect they are not reusable in the sense that a .NET web page could validate a particular record against them. You see, in some cases, when the user is not performing a bulk upload, they can edit a specific record, and I would like the same edits that are applied by the ETL bulk-insert process to be applied to an individual update of a single record via a web page. Is the solution to write a single web service or stored procedure that can be called both from the web page updating a single record and thousands of times, once per record, in a bulk upload? The latter sounds inefficient.
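One way to avoid the duplication, sketched below with invented names and a single toy rule, is to keep all business edits in one routine that both paths call; the bulk loader calls it in memory once per row within a single batch operation, rather than making a network round trip per record:

    def validate_record(record):
        # Single source of truth for the business edits; returns a list of errors.
        errors = []
        if record.get("Revenue", 0) < 0:
            errors.append("Revenue must be non-negative")  # example rule only
        return errors

    def validate_batch(records):
        # Bulk path: one call validates the whole upload.
        return {i: errs for i, rec in enumerate(records)
                if (errs := validate_record(rec))}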
Your thoughts on anything above would be very much welcomed.
From a cost perspective, the effort you'd need to go through to re-create spreadsheet functionality on the web will exceed the cost of FarPoint or other controls. Even if you only made $20 an hour, do you think you could complete a working product in under two weeks? I think you have the facts on your side when you discuss the maintenance issues of allowing ETL functionality to live in Excel: you have twice the work to maintain the transformation rules. I think you need to convince management that in order to create a maintainable, robust solution you need some flexible utilities.
FarPoint is a good choice. There is also SpreadsheetGear, a .NET engine that interprets Excel macros and can run on a web server. It has a Win32 control that lets you create a WinForms solution with very Excel-like interface functionality; last time I checked, there was no web control for the product. It does an excellent job of providing Excel capability for processing large amounts of data.
Good luck. I think you will find a good solution, since you seem to have a good grasp of the pros and cons of all the different potential approaches.