Continue after failed assertion - automated-tests

Once an assertion fails (typically on an API response), the remaining test steps in the test case do not get executed. How do I get SoapUI to continue on and complete the rest of the test steps?
I am also looking for a way to retry that step again with the same set of data, or, if that is not possible, to skip it and proceed to the next set of items. Any ideas on this?

1- Double-click on the test case (not the test step) to open its options and find the option described below.

To continue executing tests after a failed test step you need to disable the "Fail on error" option in the TestCase options.
To control the test execution flow you can use a "Conditional Goto" test step (the easiest way) or a "Groovy Script" test step if you need more complex logic (sketched below).
Detailed instructions here: http://www.soapui.org/Functional-Testing/controlling-flow.html
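As a rough sketch of the "Groovy Script" approach, a script step placed after the request can look at the previous result and either re-run the failed step with the same data or jump past it. The step names "API Request" and "Next Step" below are placeholders, not anything SoapUI defines:
// minimal sketch for a SoapUI "Groovy Script" test step (placeholder step names)
def lastResult = testRunner.results.last()            // result of the step that just ran
if (lastResult.status.toString() == "FAILED") {
    // retry the failed step once with the same data
    testRunner.runTestStepByName("API Request")
    // or skip it and carry on with a later step instead:
    // testRunner.gotoStepByName("Next Step")
}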

In ReadyAPI (SoapUI Pro), right-click on the Test Case and click on Options, then disable "Abort test if an error occurs".

In the SoapUI free version, go to the test case and click on the gear (settings) icon, which opens a pop-up, and uncheck the option "Abort test if error occurs".

Related

Why does Progress go back to the initial screen after a session crash?

Hello all and thanks for viewing this question,
I have a program that users get access to via a login screen. Once the user's credentials have been validated on the login screen, the main program is called (from the login screen) and the login screen disappears. All good. However, if the session crashes (or I press CTRL-PAUSE), the main program is terminated and I end up at the initial login screen. I'd have assumed that after a session crash, Progress (11.4) should take me back to the OS (Windows Server 2012), but not back to the initial screen. I have tried placing QUIT in different areas of the program, but Progress still takes me back to the initial screen, while I need it to quit completely. Any thoughts would be greatly appreciated. Thanks!
It's the AVM's default behavior to rerun the startup procedure after a STOP condition has occurred that was not handled.
You can add an
ON STOP UNDO, RETURN "stopped".
option to a DO, FOR or REPEAT block close to where your "crash" happens. Then the calling procedure can check for a RETURN-VALUE of "stopped".
Assuming you are on a recent version (OpenEdge 12.x), you can also use CATCH Blocks for Progress.Lang.Stop:
CATCH stopcon AS Progress.Lang.Stop:
QUIT.
END CATCH.
I think that your use of the word "crashed" is very, very confusing. If your session actually "crashes" in the usual sense that _progres (or prowin if this is Windows) terminates, then you would not have any locked records remaining. You would also have a protrace file that would help you to identify where the issue occurs.
Incidentally, you could add error logging to the client startup to determine where the errors that QXtend cannot find are occurring:
_progres dbname -p startup.p -clientlog logname.log
You have not shared any code so I can only guess but, presumably, you are running your login program via the -p startup parameter.
Correct me if I am wrong but something along these lines:
_progres dbname -p startup.p
The startup program then runs whatever it runs to get you logged in and run the application. Maybe something like this:
/* startup.p
*/
message "(re)starting!".
pause.
run value( "login.p" ).
run value( "stuff.p" ).
message "all done".
pause.
quit.
And:
/* login.p
*/
message "hello, logging in!".
pause.
return.
Along with:
/* stuff.p
*/
message "hello, doing stuff!".
pause.
run value( "notthere.p" ).
message "hello, doing more stuff!".
pause.
return.
At some point an error occurs (you seem to want to call this a "crash"). I have arranged for a serious error to occur when stuff.p tries to "run notthere.p". So if you run my example you will see the behavior that you have described - your session "crashes", the startup procedure re-runs, and you get to the login screen again.
To change that and trap the error simply wrap a "DO ON STOP" around the RUN statements. Like this:
/* startup.p
*/
message "(re)starting!".
pause.
do on error undo, leave
on endkey undo, leave
on stop undo, leave
on quit undo, leave: /* "leave", exits this block when one of the named conditions arises */
run value( "login.p" ).
run value( "stuff.p" ).
/* we just leave because we finished normally */
end.
message "all done".
pause.
quit.
You mention QXtend so I am guessing that MFG/Pro is involved. If you cannot directly modify the MFG/Pro startup procedure (as I recall that would be "-p mfg.p") just adapt the code above to be a "shim" that runs mfg.p from within the "DO ON STOP..." block.
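If you do end up needing such a shim, a minimal sketch along the lines of the startup.p example above could look like this (shim.p is a made-up name; adjust to your environment):
/* shim.p - start the client with "-p shim.p" instead of "-p mfg.p"
*/
do on error  undo, leave
   on endkey undo, leave
   on stop   undo, leave
   on quit   undo, leave:  /* leave this block when any of these conditions arises */
  run value( "mfg.p" ).
end.
quit.  /* return to the OS instead of letting the AVM re-run the startup procedure */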
I believe I have found a way to quit the initial login screen when it appears as the result of a session crash, by using the ETIME function. Thanks again, Mike, for your response.

NetLogo: Can't "stop" forever button from another procedure?

I have simplified my problem below. I want to stop the execution of the forever button "go" when there are no robots left, and I want to call this from another procedure ("test" in this case) like so:
to go
test
end
to test
if not any? robots [ stop ]
end
The reason for this is that I want to call stop where the robot dies such that I can send an appropriate user message.
Sadly, you must re-organize your code so that you call if not any? robots [ stop ] in your go procedure itself (see the sketch after the quoted documentation) in order for the following to apply:
See the documentation:
A forever button can also be stopped from code. If the forever button
directly calls a procedure, then when that procedure stops, the button
stops. (In a turtle or patch forever button, the button won’t stop
until every turtle or patch stops – a single turtle or patch doesn’t
have the power to stop the whole button.)
Ref: http://ccl.northwestern.edu/netlogo/docs/programming.html#buttons
stop This agent exits immediately from the enclosing procedure, ask,
or ask-like construct (e.g. crt, hatch, sprout). Only the enclosing
procedure or construct stops, not all execution for the agent.
Ref: http://ccl.northwestern.edu/netlogo/docs/dict/stop.html
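A minimal sketch of that re-organization, with the check (and the user message) moved into go so that stop actually halts the forever button; the message wording is just a placeholder:
to go
  if not any? robots [
    user-message "All robots have died."   ; placeholder wording
    stop                                   ; stop called directly in go stops the forever button
  ]
  test
end

to test
  ; per-tick behaviour goes here; it no longer tries to stop the button itself
end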
One alternative, hacky solution, which I'm tempted not to post, is the following, where test raises an error that go catches and turns into a user message before stopping:
to go
  carefully [ test ] [ user-message error-message  stop ]  ; report the error text, then stop the button
end
to test
if not any? robots [ error "no more robots!" ]
end

tSQLt - How to output a custom failure or success message?

We are using the tSQLt framework and have the below code in the test.
IF @count > 0
  EXEC tSQLt.Fail;
ELSE
  EXEC tSQLt.AssertEquals 1, 1;
I am interested to know how we can display a custom test success or failure message when this test gets executed?
tSQLt.fail takes up to 10 parameters that all get concatenated into a custom failure message.
You also do not need the call to tSQLt.AssertEquals as it, in your case, literally does nothing.
BTW, asserting a count is in almost all cases a bad idea, as it does not really tell you anything about the result. If you get the correct count back, you could still have wrong data. And if you get the incorrect count, you don't have any additional info on what went wrong.
Have a look at tSQLt.AssertEqualsTable or tSQLt.AssertEmptyTable instead.
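For example, a minimal sketch of the failure branch with a custom message (the wording and the @count variable are placeholders; tSQLt.Fail simply concatenates its parameters into the failure message). As far as I know there is no equivalent hook for a custom success message: a test that raises no failure is just reported as passed.
IF @count > 0
  EXEC tSQLt.Fail 'Expected no offending rows, but found ', @count, ' row(s).';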

Set log level for built in keywords in robotframework

In Robot Framework, it looks like messages for keywords like "=" (variable assignment) are logged by default at the 'INFO' log level. Ex:
<Test case>
${xyz} = "hello"
Would log message with:
'INFO': ${xyz} = "hello"
I would like to lower the log level for this to 'DEBUG' or 'TRACE', but can't seem to find a way to do it in the source code.
Any advice on this?
Have you tried executing the test with the whole run brought to a deeper level ('DEBUG' or 'TRACE') by adding -L trace or -L debug to your test call? For instance: robot -L trace mytest.robot
Also, you can set the log level in the code, like this:
Test Setup    Set Log Level    TRACE
Then, in the log.html file, a visible log level dropdown is shown in the upper right corner. This allows users to remove messages below the chosen level from the view, which can be especially useful when running tests at the TRACE level.
for more information see: http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#visible-log-level
The keyword's source code is defined like this:
def log(self, message, level='INFO', html=False, console=False,
        repr=False, formatter='str'):
    u"""Logs the given message with the given level.

    Valid levels are TRACE, DEBUG, INFO (default), HTML, WARN, and ERROR.
    Messages below the current active log level are ignored. See
    `Set Log Level` keyword and ``--loglevel`` command line option
    for more details about setting the level.
    """
Usage example:
Log    your message: ${message}    level=DEBUG

How to download HTML Report from HP ALM Performance Center 11.0 using rest API

I want to download HTML default report for a test run from Performance Center storage (using Rest API). Actually I need just summary.html file.
I was using the following steps in PC 11.5:
Request test scenarios:
http://{server:port}/qcbin/rest/domains/{domain}/projects/{project}/tests?fields=id,last-modified,name,owner&query={subtype-id[=PERFORMANCE-TEST]}&page-size=max
Let user choose the scenario (id) and request all its runs:
http://{server:port}/qcbin/rest/domains/{domain}/projects/{project}/runs?page-size=max&fields=id,owner,pc-start-time,duration,status,test-id&query={test-id[=234]}
Let user choose the run (id) and request Report (result entity):
http://{server:port}/qcbin/rest/domains/{domain}/projects/{project}/results?page-size=max&query={run-id[=123];name[=Reports]}&fields=id,name
Request "summary.html" file using file-id taken from previous step response:
http://{server:port}/qcbin/rest/domains/{domain}/projects/{project}/results/{file-id}/storage/report/summary.html
However, it is not working with Performance Center 11.0; it fails at the last step:
qccore.general-error
Not Found
I guess it is because the path of the report was changed.
Can someone tell me the path for summary.html in Performance Center 11.0?
I've been able to have a little bit of success with this. Rather than use the request you are using above, I used the following:
http://{server:port}/qcbin/rest/domains/{domain}/projects/{project}/results/{file-id}/logical-storage/
This gave me a zip file, which contained the report inside it.
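A minimal sketch of fetching and unpacking that zip with Python's requests library, assuming the logical-storage URL above and a sign-in against the standard /qcbin/authentication-point/authenticate endpoint; the server, credentials, domain, project and result id are all placeholders:
import io
import zipfile

import requests

BASE = "http://almserver:8080/qcbin"  # placeholder server and port

session = requests.Session()
# ALM/PC REST calls need an authenticated session; basic auth against the
# authentication point sets the cookies that the later requests reuse.
session.get(BASE + "/authentication-point/authenticate", auth=("user", "password"))

# result id taken from the .../results?... query described earlier
url = BASE + "/rest/domains/DEFAULT/projects/MyProject/results/123/logical-storage/"
response = session.get(url)
response.raise_for_status()

# the body is a zip archive with the HTML report inside (summary.html among the files)
with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
    archive.extractall("pc_report")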
