We have a Direct Call Rule in a report suite. Among other things, it is supposed to track a variable belonging to another report suite.
Is there a special function which allows something like:
_doStuff(reportSuite, prop,...)
I intended to place it inside the Sequential JavaScript in the Third Party Tags section.
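One approach, assuming the legacy AppMeasurement API is available on the page, is to grab a tracker instance for the other report suite with `s_gi()` and fire a custom link call from it. The suite ID, prop, and helper name below are placeholders; `s_gi` is Adobe's documented factory, but whether it is exposed in your DTM setup is an assumption worth verifying. To keep the sketch self-contained, a stub stands in for Adobe's real `s_gi`:

```javascript
// Sketch: track a prop on a second report suite from a Direct Call Rule.
// 'secondsuiteid' and 'prop1' are placeholders for your actual suite/variable.
// On a real page, s_gi comes from Adobe's AppMeasurement library; this stub
// stands in for it so the sketch is runnable on its own.
function s_gi(reportSuiteId) {          // stand-in for Adobe's s_gi()
  return {
    account: reportSuiteId,
    linkTrackVars: '',
    tl: function (obj, linkType, linkName) {
      // AppMeasurement would send the image request here; the stub just
      // echoes back what would be sent so we can inspect it.
      return {
        account: this.account,
        vars: this.linkTrackVars,
        linkType: linkType,
        linkName: linkName
      };
    }
  };
}

// Hypothetical helper: set one variable on another suite and fire a link call.
function trackToSuite(suiteId, propName, value) {
  var s2 = s_gi(suiteId);               // tracker bound to the other suite
  s2[propName] = value;
  s2.linkTrackVars = propName;          // limit the hit to this variable
  return s2.tl(true, 'o', 'cross-suite call');
}

var result = trackToSuite('secondsuiteid', 'prop1', 'someValue');
```

With the real library, the `s2.tl(true, 'o', ...)` call sends a custom-link hit to the second suite without affecting your primary tracker object.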
Thanks
Related
In Robot Framework, I have seen the term TAG. What is it used for?
When and where can we use tags?
Can we create our own tags, and how?
From the User Guide:
Tags are shown in test reports, logs and, of course, in the test data, so they provide metadata to test cases.
Statistics about test cases (total, passed, failed) are automatically collected based on tags.
With tags, you can include or exclude test cases to be executed.
With tags, you can specify which test cases are considered critical.
And here is how I use them:
Mark test cases that must not be re-run at the end
Mark test cases that may be run in parallel
Add the defect ID as a tag, so I know which test cases should pass after a fix
We have several test components grouped. I'd like to do some parameter validation at the beginning and skip the component altogether when certain conditions are met. I wanted to use ExitComponent for this, but I found that it leaves not only the component but the whole group.
I really do not want to wrap my whole component in an extensive if-else statement, which is the only solution I can see right now.
Example:
'Skip component if value is empty
If Parameter("Par1") = "" Then
    'Cannot use ExitComponent as I do not want to leave the whole component group
    ?????
End If
'Start processing data in the component
Does anyone have an idea?
The approach in BPT is to use the ALM wizards and forms to create and configure almost all aspects of your tests. If you select a flow or a test case, you can configure the run condition of each subcomponent/flow in the Test Script tab. As the linked documentation explains, you can do this based on parameters.
Here is the tutorial for setting Run Conditions.
P.S.: In case you have to check complex things and not simple parameters: create a component that checks the complex stuff (the relation of stellar objects to the sun; just kidding, of course some AUT-specific condition) and shares the info with the world via an Output Parameter. The subsequent components can then react to that parameter.
I'm working on a project that requires a unique "enrollment" id inside a file inside a SCORM package. Something that works like this:
<script src="...?enrollmentid=1234567890"></script>
I have figured out that I should be able to obtain a student_id, but this is too broad an identifier for this use. The id I use must describe a single student/course enrollment uniquely, as a student could enroll in multiple courses, and a course could have multiple students enrolled.
The id could be a composite of other fields, like student_id + course id + enrollment date, but I can't see any way to get those sorts of details from the LMS either.
Is what I'm trying to do possible?
Unfortunately, SCORM 1.2 and even 2004 do not include things like enrollment date, course ID, or SCO title/structure unless they are pumped in via Launch Data, which comes by way of the imsmanifest.xml at authoring time. And these are things you would need to provide.
cmi.core.student_id is the only unique value you'll get directly from SCORM. The LMS was given no way to also expose any tier IDs or internals it used when it imported the course. Unless it (unreliably) places them in the launch parameters, or you have a way of doing some probing with JavaScript (also unreliable), you'll need to consider other options.
Launch Data (cmi.launch_data) would probably be the easiest way to gain access to any values you want to pass through to the SCO, but this relies heavily on the authoring process of the SCO and its imsmanifest.xml. An LCMS setup, or some mechanism in an authoring tool, could enable this capability.
I add this below the <title/> tag in the imsmanifest.xml:
<!-- Launch Data Example uses Querystring format name=value&name=value -->
<adlcp:dataFromLMS><![CDATA[name=value]]></adlcp:dataFromLMS>
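If the LMS provides launch data in that name=value&amp;name=value format, the SCO can parse it after reading cmi.launch_data. A minimal sketch: the API lookup is omitted, and the sample string below stands in for what LMSGetValue('cmi.launch_data') would return:

```javascript
// Parse SCORM launch data supplied in querystring format
// ("name=value&name=value") into a plain object.
function parseLaunchData(raw) {
  var result = {};
  if (!raw) return result;
  var pairs = raw.split('&');
  for (var i = 0; i < pairs.length; i++) {
    var idx = pairs[i].indexOf('=');
    if (idx === -1) continue;                 // skip malformed pairs
    var key = decodeURIComponent(pairs[i].slice(0, idx));
    var val = decodeURIComponent(pairs[i].slice(idx + 1));
    result[key] = val;
  }
  return result;
}

// In a real SCO this string would come from the LMS, e.g.:
// var raw = API.LMSGetValue('cmi.launch_data');
var data = parseLaunchData('enrollmentid=1234567890&courseid=BIO101');
// data.enrollmentid is '1234567890', data.courseid is 'BIO101'
```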
When I say "unreliable", I mean that unless you can state definitively where this content is running, and that the LMS will never change, you won't be able to obtain the info you want in any reliable way.
Is there any way to store a pre-test score in SCORM 2004? I have developed a module in ActionScript 2. The pre-test and post-test use the same question set.
The client is now demanding that the pre-test scores be stored along with the post-test scores. Is there any way this can be done? Which value should/can I set for this?
I have spent the last two days trying to find a solution. Is there any way to set a custom variable? Or is there a preset variable name that I missed?
It sounds like you're trying to put the pre-test and the post-test in the same SCO. The cleanest way to report separate pre- and post-test results would be to put the pre- and post-tests in their own SCOs, by editing your package's imsmanifest.xml file (see more on content packaging here: http://scorm.com/scorm-explained/technical-scorm/content-packaging/). You can link back to the same content multiple times in the same manifest and include query string parameters, which your content then reads and uses to know what mode it's in (pre-test vs. post-test).
That said, a lot of people avoid using multiple SCOs so they don't have to think about how their LMS or SCORM manages those SCOs. Using only a single SCO gives your content a lot of control, but the trade-off is that it looks like one monolithic item to the LMS, so reporting on multiple tests can't be as nice. There is no specific pre-test variable because SCORM was designed on the assumption that pre-tests would go in their own SCO, so there was no need for such a variable.
What you can do in a single SCO is create additional named objectives and interactions. If you just want the score for the pre-test, that's going to look better, but if you're tracking responses to each question you'll wind up with a list of items like "PreTest question 1, PreTest question 2"... continuing to "PostTest question 1, PostTest question 2"... The naming scheme is up to you, of course, but the constraint is that you're dealing with one list of objectives and interactions and can only differentiate them by name.
An example course using objectives and interactions: http://scorm.com/scorm-explained/technical-scorm/golf-examples/#advancedruntime
Some tips on what tests should report (and how): http://scorm.com/blog/2010/11/4-things-every-scorm-test-should-do-when-reporting-interactions/
Sorry for not being more step-by-step, but as you can see you have a couple of options, each of which involves a little more detail than I can really put in one answer.
While you didn't describe the structure of your course, I'll respond based on the possibilities. I am assuming you have a pre-test, content, and a post-test:
You have one large SCO, which contains the pre-test, content, and post-test:
If you need the info only for your course to use and display within the course:
You can save the pre-test and post-test scores in cmi.suspend_data. Most people store this information as name/value pairs, up to 64K characters. With the SetValue command you can do:
rtn = your_api.SetValue('cmi.suspend_data', 'pretest=69,' + oldSuspendData)
Again, you would only use this if your course needed to display this information within the course only and take action on it based upon the pretest results. Obviously, you should code to make sure you get clean data back and handle any odd conditions, like no data. If however, the client wants the data in the LMS and visible to the LMS admins, you'd need to look at the option below
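A sketch of that name/value-pair pattern follows, assuming a SCORM 2004 API object has already been located (`your_api` below is a hypothetical handle; only the string handling is shown as runnable code):

```javascript
// Parse "name=value,name=value" pairs out of cmi.suspend_data.
function parsePairs(suspendData) {
  var result = {};
  if (!suspendData) return result;
  var items = suspendData.split(',');
  for (var i = 0; i < items.length; i++) {
    var idx = items[i].indexOf('=');
    if (idx !== -1) result[items[i].slice(0, idx)] = items[i].slice(idx + 1);
  }
  return result;
}

// Set (or overwrite) one pair, preserving whatever else is already stored.
function setPair(suspendData, name, value) {
  var pairs = parsePairs(suspendData);
  pairs[name] = String(value);
  var parts = [];
  for (var key in pairs) {
    if (Object.prototype.hasOwnProperty.call(pairs, key)) {
      parts.push(key + '=' + pairs[key]);
    }
  }
  return parts.join(',');
}

// Store the pre-test score alongside existing data:
// var old = your_api.GetValue('cmi.suspend_data');     // hypothetical API handle
var updated = setPair('bookmark=page3', 'pretest', 69); // 'bookmark=page3,pretest=69'
// your_api.SetValue('cmi.suspend_data', updated);
// your_api.Commit('');
```

Reading the score back later is just `parsePairs(your_api.GetValue('cmi.suspend_data')).pretest`.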
If you need the LMS admins to have access to the pretest/post test scores:
You'll really need to separate each SCO (pre-test, content, post-test), but you won't be able to communicate those scores between SCOs through SCORM; i.e., the post-test won't know what the pre-test score is. You can look at nice examples of how to separate your content into SCOs at the URL below. You can easily share the pre- and post-test HTML/SWF, passing a query string to the HTML or using the launch data in the manifest to tell your code which test it is. From my experience, some LMSs will not pass the query string, so you should use both.
Simple MultiSCO: http://scorm.com/wp-content/assets/golf_examples/PIFS/ContentPackagingOneFilePerSCO_SCORM20043rdEdition.zip
If you need to know the pre-test score AND have the info sent to the LMS like a SCO:
SCORM 2004 offers no way around this issue. I would first tell the client about the complications. If they still need this hybrid solution, I would use AJAX to securely communicate the learner ID, course ID (if any), SCO ID (pre-test/content/post-test), and score to a server where they can be retrieved. Cookies are a no-no because they assume you'll be on the same machine between SCOs. Additionally, if xAPI is a possibility, you could do this much more easily.
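A sketch of that server round-trip, with entirely hypothetical endpoint and field names; building the payload is the runnable part, and the network call is left as an illustrative comment:

```javascript
// Build the cross-SCO score report that each SCO would POST to your server.
function buildScoreReport(learnerId, courseId, scoId, score) {
  return {
    learnerId: learnerId,
    courseId: courseId,
    scoId: scoId,        // e.g. 'pretest', 'content', 'posttest'
    score: score,
    recordedAt: new Date().toISOString()
  };
}

var report = buildScoreReport('student-42', 'BIO101', 'pretest', 69);

// Inside the SCO, something like (endpoint is hypothetical):
// fetch('https://example.com/api/scores', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(report)
// });
```

The post-test SCO could then GET the stored pre-test score from the same server, bypassing SCORM's lack of cross-SCO communication.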
We are employing BDD and using SpecFlow to drive our development (ATDD).
Our QA team would like to define their own end-to-end regression tests (in Gherkin/SpecFlow) and re-use the Steps we have already defined.
(Please note - I know that this is not a great example but it should provide enough details)
A test may include..
Log in
Search for a product
Select a product to buy
Create an order
Select delivery option.
Submit the order.
Cancel the order.
This would suggest a scenario like:
Given I am logged in
When I Search for a product
And I Select a product to buy
And I Create an order
And I Select delivery option
And I Submit the order
And I Cancel the order
Then ??!!
This is clearly wrong, as we are not checking the output at each step.
So this may be resolved as a sequence of scenarios:
Scenario 1:
Given I am logged in
When I Search for a product
Then I see a list of products
Scenario 2:
When I select a product to buy
Then I can create an order
Scenario 3:
When I create the order
And I Select delivery option
Then I can submit the order
etc etc
The main issue with this is that there seems to be no way to specify the order/sequence in which the scenarios run (a characteristic of NUnit?). Because there are dependencies between scenarios (they do not start from a known starting point), they must be run in sequence.
My questions are:
a) Are we trying to fit a square peg in a round hole?!
b) Does anyone know if there is a way to use SpecFlow/Gherkin in this way?
c) Or does anyone know what alternatives there are?
Many thanks!
I would say that you are writing your scenarios at the wrong abstraction level. But that depends on what you want to use them for.
If you want to write test scripts, then you are on the right track... but it will be a nightmare to maintain: the first case (one long script) will be very brittle, and the second case (several scenarios) needs a guaranteed execution order. Both are discouraged and considered anti-patterns.
I would suggest that you merge these with the ATDD tests you are writing: talk to the test department to get their view on the matter and include the test cases they need to ensure the system is thoroughly tested. Who knows? You might even learn something from each other :P
And when you write those "specifications" (as I prefer to call them), write them at a higher level. So instead of writing:
Given I am logged in
When I Search for a product
And I Select a product to buy
And I Create an order
And I Select delivery option
And I Submit the order
you write something like
When I submit an order for product 'Canned beans'
In the step definition behind that step, you perform all of that automation (logging in, browsing to the product page, selecting the delivery options, submitting the order).
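SpecFlow bindings are written in C#, but the composition idea is language-neutral. Here it is sketched in JavaScript with hypothetical helper names; each low-level helper stands in for your existing page-object or step code, recording what it did so the flow is visible:

```javascript
// Session object whose low-level actions mirror the imperative steps
// (log in, search, select, create order, choose delivery, submit).
function makeSession() {
  var actions = [];
  return {
    actions: actions,
    logIn: function () { actions.push('log in'); },
    searchForProduct: function (name) { actions.push('search: ' + name); },
    selectProduct: function (name) { actions.push('select: ' + name); },
    createOrder: function () { actions.push('create order'); },
    selectDeliveryOption: function () { actions.push('select delivery'); },
    submitOrder: function () { actions.push('submit order'); }
  };
}

// The single declarative step — "When I submit an order for product '<name>'" —
// composes all of the imperative steps behind the scenes.
function submitOrderForProduct(session, productName) {
  session.logIn();
  session.searchForProduct(productName);
  session.selectProduct(productName);
  session.createOrder();
  session.selectDeliveryOption();
  session.submitOrder();
}

var session = makeSession();
submitOrderForProduct(session, 'Canned beans');
// session.actions now lists the whole flow, ending with 'submit order'.
```

The scenario stays readable at the business level while the plumbing lives in one reusable binding.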
All of this is covered in these great articles on how to write maintainable UI automation tests:
http://gojko.net/2010/04/13/how-to-implement-ui-testing-without-shooting-yourself-in-the-foot-2/
http://elabs.se/blog/15-you-re-cuking-it-wrong
http://www.marcusoft.net/2011/04/clean-up-your-stepsuse-page-objects-in.html
http://dhemery.com/pdf/writing_maintainable_automated_acceptance_tests.pdf
http://gojko.net/2010/01/05/bdd-in-net-with-cucumber-part-3-scenario-outlines-and-tabular-templates/
http://chrismdp.github.com/2011/09/layers-of-abstraction-writing-great-cucumber-code/
http://benmabey.com/2008/05/19/imperative-vs-declarative-scenarios-in-user-stories.html
http://mislav.uniqpath.com/2010/09/cuking-it-right/
I hope this helps