Report generation fails

We are trying to use Karate/Gatling for performance tests, and very often the run succeeds but the results are not generated, failing with this error:
java.lang.IllegalStateException: cannot create children while terminating or terminated
at akka.actor.dungeon.Children.makeChild(Children.scala:270)
I can see the simulation log at profilessimulation-xxx/simulation.log. When I try to generate the report using ./gatling.sh -ro /simulation.log, I get this error:
Exception in thread "main" java.lang.NumberFormatException: For input string: "profilessimulation"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
Here is part of the simulation.log:
RUN ic.ecx.automation.testcases.profilesSimulation profilessimulation 1553697537449 null 2.0
USER test 1 START 1553697537633 1553697537633
USER test 2 START 1553697537893 1553697537893
USER test 3 START 1553697537994 1553697537994
USER test 4 START 1553697538095 1553697538095
USER test 5 START 1553697538204 1553697538204
USER test 6 START 1553697538304 1553697538304
USER test 7 START 1553697538403 1553697538403
USER test 8 START 1553697538503 1553697538503
USER test 9 START 1553697538603 1553697538603
USER test 10 START 1553697538703 1553697538703
REQUEST test 3 GET /ecx/v3/l2/buyer/connections?pageSize=20 1553697539459 1553697541977 OK
REQUEST test 6 GET /ecx/v3/l2/buyer/connections?pageSize=20 1553697539460 1553697541977 OK
REQUEST test 5 GET /ecx/v3/l2/buyer/connections?pageSize=20 1553697539460 1553697541977 OK
REQUEST test 4 GET /ecx/v3/l2/buyer/connections?pageSize=20 1553697539460 1553697541977 OK
REQUEST test 1 GET /ecx/v3/l2/buyer/connections?pageSize=20 1553697539459 1553697541977 OK
REQUEST test 7 GET /ecx/v3/l2/buyer/connections?pageSize=20 1553697539459 1553697544333 OK
REQUEST test 10 GET /ecx/v3/l2/buyer/connections?pageSize=20 1553697539460 1553697546110 OK
REQUEST test 2 GET /ecx/v3/l2/buyer/connections?pageSize=20 1553697539459 1553697546120 OK
REQUEST test 9 GET /ecx/v3/l2/buyer/connections?pageSize=20 1553697539460 1553697546130 OK
Any idea how to generate the report from the simulation log?

It would be great if you could narrow down this problem and help us replicate it: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
Just a guess, but in case the report is not getting time to be written: I believe Gatling has a concept of after hooks: https://gatling.io/docs/current/general/simulation_structure/#hooks
So maybe you could add a Thread.sleep in the after hook to make sure Gatling terminates gracefully. If you are passing values to pauseFor(), try omitting them or using Nil.
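A minimal sketch of that idea in the Gatling Scala DSL; the class name and the five-second pause are illustrative, not a verified fix:

import io.gatling.core.Predef._

class ProfilesSimulation extends Simulation {

  // ... scenario definition and setUp(...) as usual ...

  // Runs once after the simulation finishes: a crude grace period so the
  // engine is not torn down before the results are flushed to disk.
  after {
    Thread.sleep(5000) // hypothetical 5 s; tune to your environment
  }
}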

Related

About expected timeouts in RobotFramework SSHLibrary

I’m a newb with respect to Robot Framework.
I’m writing a test procedure that is expected to
connect to another machine
perform an image update (which causes the unit to close all services and reboot itself)
re-connect to the unit
run a command that returns a known string.
This is all supposed to happen within the __init__.robot module
What I have noticed is that I must invoke the upgrade procedure in a synchronous, blocking way, like so:
Execute Command    sysupgrade upgrade.img
This succeeds in upgrading the unit, but the Robot Framework script hangs while executing the command. I suspect this works because it keeps the SSH session alive long enough for the upgrade to reach the critical point where the session is closed by the remote host; the host expects this, the upgrade continues, and the closure does not cause the upgrade to fail.
But the remote host appears to close the SSH session in such a way that the Robot Framework script does not detect it, and the script hangs indefinitely.
Trying to work around this, I tried invoking the remote command like so:
Execute Command    sysupgrade upgrade.img &
But then the update fails, because the connection appears to drop and leaves the upgrade procedure incomplete.
If instead I execute it like this:
Execute Command    sysupgrade upgrade.img &
Sleep    600
Then this also fails, for some reason I am unable to deduce.
However, if I invoke it like this:
Execute Command    sysupgrade upgrade.img    timeout=600
Then the command succeeds in updating the unit, and after the set timeout period the Robot Framework script does indeed resume; but since it has hit the timeout, the test has (from the point of view of Robot Framework) failed.
But this is actually an expected failure and should be ignored. The rest of the script would then reconnect to the host and continue the remaining test(s).
Is there a way to treat the timeout condition as non-fatal?
Here is the code. As explained above, the __init__.robot initialization module is expected to perform the upgrade and then reconnect, leaving the other xyz.robot files to run and continue testing the applications.
The __init__.robot file:
*** Settings ***
Library           OperatingSystem
Library           SSHLibrary
Suite Setup       ValidationInit
Suite Teardown    ValidationTeardown

*** Keywords ***
ValidationInit
    Enable SSH Logging    validation.log
    Open Connection    ${host}
    Login    ${username}    ${password}
    # Upload the firmware to the unit.
    Put File    ${firmware}    upgrade.img    scp=ALL
    # Perform firmware upgrade on the unit.
    Log    "Launch upgrade on unit"
    Execute Command    sysupgrade upgrade.img    timeout=600
    Log    "Restart comms"
    Close All Connections
    Open Connection    ${host}
    Login    ${username}    ${password}

ValidationTeardown
    Close All Connections
    Log    "End tests"
This should work:
Comment    Change ssh client timeout configuration
Set Client Configuration    timeout=600
Comment    "Launch upgrade on unit"
SSHLibrary.Write    sysupgrade upgrade.img
SSHLibrary.Read Until    expectedResult
Close All Connections
You can use Run Keyword And Ignore Error to ignore the failure, as sketched below. Alternatively, use the Write keyword, as above, if you do not care about the execution result.
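A minimal sketch of the Run Keyword And Ignore Error approach, reusing the upgrade step from the __init__.robot file above (the variable names are illustrative):

${status}    ${output} =    Run Keyword And Ignore Error
...    Execute Command    sysupgrade upgrade.img    timeout=600
Log    Upgrade step finished with status ${status}

Run Keyword And Ignore Error always passes and returns the wrapped keyword's status ('PASS' or 'FAIL') plus its message or return value, so the suite can reconnect and carry on.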

How to change session's status from running to succeeded when a condition is met in Informatica?

I'm having a problem changing the status of a session from running to succeeded when a condition is met.
For example, I have a workflow as below:
start ----+----> workA
          |----> workB
          |----> timer_20mins
From the diagram above, workA and workB run concurrently, as does the timer. If the sessions succeed before the 20 minutes configured in the timer, the status of the timer should change from running to succeeded. I tried a post-session success command, but it's still not working. How should I rectify this?
Have you reviewed my previous answer available here?
This problem is very similar; you just need the decision to fire once both of your sessions are done, so there should be one more Decision task, this time with Treat the input links as set to AND.
Briefly, the flow should look like this:
Start --->s_sessionA---\
  |                     >---> Decision [AND] ---\
  |--->s_sessionB------/                         \
  |                                               >---> Decision [OR] ---(False)---> Control Task [Fail parent]
  |--->timer-------------------------------------/

Robot Framework: [Errno 10057] A request to send or receive data was disallowed when running a Robot script to automate mainframe testing

I got the error [Errno 10057] after trying to run a Robot script to automate testing against a mainframe.
*** Settings ***
Library    Mainframe3270

*** Test Cases ***
Example
    Open Connection    HostTest
    Change Wait Time    0.9
    Set Screenshot Folder    C:\\Temp\\IMG
    ${value}    Read    2    13    10
    Take Screenshot
    Close Connection
error: [Errno 10057] A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied
I don't know what is wrong in my Robot code or configuration; can anyone explain, please? This is my first time using Robot Framework with a mainframe.

Running concurrent users with Taurus using a Robot-Framework Script

I have prepared a Robot test script and now I'm trying to run it in multiple browsers (at the same time) using BlazeMeter Taurus. The Taurus YAML file looks like the code below.
I have used the same method with JMeter, and Taurus runs smoothly with JMeter as expected.
execution:
- concurrency: 5
  executor: selenium
  runner: robot
  ramp-up: 50s
  hold-for: 2h
  scenario:
    script: WebFlow.robot
reporting:
- console
- final-stats
- blazemeter
I'm expecting it to start 5 browser windows and run the Robot script concurrently. But even though concurrency is 5, it opens browsers one at a time, and only after the whole Robot script has finished does it start the browser a second time.
In Taurus you can easily create multiple execution instances, which will run Robot scripts in parallel and aggregate the results into a single report as expected. For example:
execution:
- executor: robot
  concurrency: 1
  iterations: 5
  scenario:
    script: /tools/robot/phx-read-1.robot
- executor: robot
  concurrency: 1
  iterations: 5
  scenario:
    script: /tools/robot/phx-read-2.robot
- executor: robot
  concurrency: 1
  iterations: 5
  scenario:
    script: /tools/robot/phx-read-3.robot
- executor: robot
  concurrency: 1
  iterations: 5
  scenario:
    script: /tools/robot/phx-read-4.robot
- executor: robot
  concurrency: 1
  iterations: 5
  scenario:
    script: /tools/robot/phx-read-5.robot
reporting:
- console
- final-stats
- blazemeter
Yes, you have to specify it many times, but that is easily scripted. In my case I actually need different scripts anyway, and Taurus nicely aggregates everything.
I would recommend checking out the pabot library for Robot Framework; I have used it before for running tests in parallel and it worked great.
Pabot is a parallel executor for Robot Framework tests: with Pabot you can split one execution into multiple and save test execution time.
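A minimal usage sketch (the process count and test directory are illustrative):

pabot --processes 5 tests/

This splits the suites under tests/ across five parallel processes and merges the outputs into a single report.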

How to extend timeout when waiting for citrus async action to complete?

I'm using Citrus to test a process that invokes a callback after performing several steps.
I've got the following sequence working:
-> httpClient kicks process
<- SUT answers OK
<-> Several Additional Steps
<- SUT invokes httpServer
-> httpServer answers OK
I'm now trying to make it more generic by using the Citrus async container to wait for the SUT invocation in parallel with the Additional Steps execution.
async(
<- SUT invokes httpServer
-> httpServer answers OK
)
-> httpClient kicks process
<- SUT answers OK
<-> Several Additional Steps
The problem I'm facing is that after the last additional step executes, the async container does not seem to wait long enough for my SUT to invoke it. It seems to wait a maximum of 10 seconds.
See below the output and the code snippet (without the additional steps, to keep it simple).
14:14:46,423 INFO port.LoggingReporter|
14:14:46,423 DEBUG port.LoggingReporter| TEST STEP 3/4 SUCCESS
14:14:46,423 INFO port.LoggingReporter|
14:14:46,423 DEBUG port.LoggingReporter| TEST STEP 4/4: echo
14:14:46,423 INFO actions.EchoAction| VM Creation processInstanceID: 3543
14:14:46,423 INFO port.LoggingReporter|
14:14:46,423 DEBUG port.LoggingReporter| TEST STEP 4/4 SUCCESS
14:14:46,530 DEBUG citrus.TestCase| Wait for test actions to finish properly ...
14:14:47,530 DEBUG citrus.TestCase| Wait for test actions to finish properly ...
14:14:48,530 DEBUG citrus.TestCase| Wait for test actions to finish properly ...
14:14:49,528 DEBUG citrus.TestCase| Wait for test actions to finish properly ...
14:14:50,529 DEBUG citrus.TestCase| Wait for test actions to finish properly ...
14:14:51,530 DEBUG citrus.TestCase| Wait for test actions to finish properly ...
14:14:52,526 DEBUG citrus.TestCase| Wait for test actions to finish properly ...
14:14:53,529 DEBUG citrus.TestCase| Wait for test actions to finish properly ...
14:14:54,525 DEBUG citrus.TestCase| Wait for test actions to finish properly ...
14:14:55,525 DEBUG citrus.TestCase| Wait for test actions to finish properly ...
14:14:56,430 INFO port.LoggingReporter|
14:14:56,430 ERROR port.LoggingReporter| TEST FAILED StratusActorSSL.SRCreateVMInitGoodParamCentOST004 <com.grge.citrus.cmptest.stratus> Nested exception is: com.consol.citrus.exceptions.CitrusRuntimeException: Failed to wait for nested test actions to finish properly
at com.consol.citrus.TestCase.finish(TestCase.java:266)
Code snippet:
async().actions(
    http().server(extServer)
        .receive()
        .post("/api/SRResolved")
        .contentType("application/json;charset=UTF-8")
        .accept("text/plain,application/json,application/*+json,*/*"),
    http().server("extServer")
        .send()
        .response(HttpStatus.OK)
        .contentType("application/json")
);
http()
    .client(extClientSSL)
    .send()
    .post("/bpm/process/key/SRCreateVMTest")
    .messageType(MessageType.JSON)
    .contentType(ContentType.APPLICATION_JSON.getMimeType());

http()
    .client(extClientSSL)
    .receive()
    .response(HttpStatus.CREATED)
    .messageType(MessageType.JSON)
    .extractFromPayload("$.processInstanceID", "processId");
echo(" processInstanceID: ${processId}");
Another update; hopefully this might help other Citrus users.
I finally implemented the behaviour I wanted, using the parallel Citrus container as shown below. Nevertheless, I'll leave this question open for a few days, as this does not answer my initial question.
parallel().actions(
    sequential().actions(
        http().server(extServer)
            .receive()
            .post("/api/SRResolved")
            .contentType("application/json;charset=UTF-8")
            .accept("text/plain,application/json,application/*+json,*/*"),
        http().server("extServer")
            .send()
            .response(HttpStatus.OK)
            .contentType("application/json")
    ),
    sequential().actions(
        http()
            .client(extClientSSL)
            .send()
            .post("/bpm/process/key/SRCreateVMTest")
            .messageType(MessageType.JSON)
            .contentType(ContentType.APPLICATION_JSON.getMimeType()),
        http()
            .client(stratusClientSSL)
            .receive()
            .response(HttpStatus.CREATED)
            .messageType(MessageType.JSON)
            .extractFromPayload("$.processInstanceID", "processId"),
        echo("VM Creation processInstanceID: ${processId}")
    )
);
The more I think about it, the more I believe this is a bug: when using async (as described above), I would expect the async part (and thus the test) to continue until the timeout given on the HTTP server (in my case 60 seconds) expires, or until the expected request is received from the SUT, rather than stopping after an arbitrary 10-second delay following the end of the non-async part of the test case, unless I missed something about the async container's features and objectives.
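For reference, the 60-second server timeout mentioned above is the kind of setting that normally lives on the endpoint definition. Below is a minimal sketch, assuming Spring Java config and the Citrus endpoint builder; the port is hypothetical, and note that this governs how long receive actions wait for an incoming request, not the 10-second grace period Citrus grants unfinished actions when the test ends:

import com.consol.citrus.dsl.endpoint.CitrusEndpoints;
import com.consol.citrus.http.server.HttpServer;
import org.springframework.context.annotation.Bean;

@Bean
public HttpServer extServer() {
    return CitrusEndpoints.http()
        .server()
        .port(8080)        // hypothetical port
        .timeout(60000L)   // receive actions on this server wait up to 60 s
        .autoStart(true)
        .build();
}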
