How to clear an error message that occurs in a Corda flow test - corda

I am trying to run tests that I created on Corda 3.3 against Corda 4.1.
I have 2 test cases for testing the flow.
In the first test I expect a failure that comes from the contract,
and the result of the first test is correct, as I expected.
But the error from the first test is sent to the flow hospital,
and that error then shows up in the second test.
The error from the first test does not actually affect the second test's result, but it makes the second test slow.
I really don't know how to clear the error message before running the second test.
If someone has any idea, please let me know. Thank you.
Note: if you have a way that does not stop the nodes and re-create the mock nodes before running a new test, that would also be the solution I am looking for.
==============================
I have 6 tests in one file.
First I tried to create the network once and use that network for all 6 tests; this way I can reduce the time needed to initialise the network,
but I then need to clear the database after each test finishes to avoid creating duplicate data.
Everything worked until I switched to Corda 4.1.
On 4.1, I don't know why, but the way I cleared the database on Corda 3.3 no longer works like before (on 4.1 it takes a long time to truncate the tables),
so I had to switch to creating the network before each test and stopping it after each test finishes.
This way takes more time to initialise the network (around 20-30 seconds per test).
The point that surprised me is that after I finish 5 tests, the 6th test takes a long time (the log shows housekeeper cleaning); it needs 6 minutes to finish,
but when I run only that test, it finishes in 1 minute.
My questions are:
1. How do I clear everything after each test finishes?
2. Is there another way to initialise the network once and use it for every test? And how do I clear the database and messages after each test finishes?

The actual cause of the exception is not visible here.
But be aware that on Corda 4.x you have to put
subFlow(ReceiveFinalityFlow(otherPartySession))
as the last operation in the responder flow.
Not sure if this helps.
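To make that concrete, here is a minimal sketch of a Corda 4.x responder that ends with ReceiveFinalityFlow, assuming the usual SignTransactionFlow pattern; the class names (ExampleFlow, ExampleResponder) are illustrative, not taken from the question:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.flows.InitiatedBy
import net.corda.core.flows.ReceiveFinalityFlow
import net.corda.core.flows.SignTransactionFlow
import net.corda.core.transactions.SignedTransaction

// Hypothetical responder to an initiating flow named ExampleFlow.
@InitiatedBy(ExampleFlow::class)
class ExampleResponder(private val otherPartySession: FlowSession) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        // Check and sign the proposed transaction.
        val signedTx = subFlow(object : SignTransactionFlow(otherPartySession) {
            override fun checkTransaction(stx: SignedTransaction) {
                // Responder-side checks on the transaction go here.
            }
        })
        // On Corda 4.x the responder must finish by receiving the finalised transaction.
        return subFlow(ReceiveFinalityFlow(otherPartySession, expectedTxId = signedTx.id))
    }
}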

It sounds like you are sharing state between tests, which is generally bad practice. Consider creating a MockNetwork in JUnit's @Before method, or use the DriverDSL to create an isolated network for each test case.
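As a rough sketch of the @Before approach on Corda 4.x (the CorDapp package name is a placeholder, not from the question), a fresh MockNetwork per test could look like this:

import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNetworkParameters
import net.corda.testing.node.StartedMockNode
import net.corda.testing.node.TestCordapp
import org.junit.After
import org.junit.Before

class FlowTests {
    private lateinit var network: MockNetwork
    private lateinit var nodeA: StartedMockNode
    private lateinit var nodeB: StartedMockNode

    @Before
    fun setup() {
        // A fresh, isolated network for every test, so no errors,
        // messages, or database rows leak between tests.
        network = MockNetwork(MockNetworkParameters(
            cordappsForAllNodes = listOf(TestCordapp.findCordapp("com.example.contracts"))
        ))
        nodeA = network.createPartyNode()
        nodeB = network.createPartyNode()
    }

    @After
    fun tearDown() {
        // Stopping the nodes discards their message queues and databases.
        network.stopNodes()
    }
}

This trades the 20-30 seconds of per-test network start-up the asker mentions for guaranteed isolation: a stray error from one test cannot reach the flow hospital of the next.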

Related

Python's unittest framework: how to uniformly modify the execution result of each test case in the tearDownClass method?

I am using Python's unittest framework. Verifying each test case takes the same amount of time (obtaining the product log to determine whether the case succeeded or not), so I want to extract this time-consuming success/failure check out of the individual tests and put it in tearDownClass: obtain the product logs once there and compare the result of each test case one by one, which can save a lot of test execution time.
The core requirement is: in tearDownClass (or elsewhere), modify the recorded execution result of each test case after the fact, changing success to failure.

Restarting from where the recorder left off, and the iteration number

I have 2 questions on the case recorder.
1. I am not sure how to restart an optimization from where the recorder left off. I can read in the case recorder's SQL file, etc., but I cannot see how this can be fed back into the Problem to restart.
2. This question may be due to my lack of knowledge of Python, but how can one access the iteration number from within an OpenMDAO component? (One way is to read the SQL file that is constantly being updated, but there should be a more efficient way.)
You can re-load a case via the load_case method on the problem.
See the docs for it here.
I'm not completely sure what you mean by accessing the iteration count, but if you just want to know the number of times your components are called, you can add a counter to them yourself.
There is no programmatic API for accessing the iteration count in OpenMDAO as of version 2.3.

How does between-graph replication with asynchronous updates run in TensorFlow?

We focus on this situation: between-graph replication with asynchronous updates.
The following figure shows how it works. I understand that each worker sends gradients to every PS and receives the updated parameters from every PS. (Is that correct? Is that path 1 and path 2?)
But can anyone explain in detail what the numbers 1, 2, 3, 4, 5 and 6 in the figure mean, respectively?
Thank you in advance!

Custom code on the updateHandler from HKWorkoutSession

My question is plain and simple: can I run custom code in the updateHandler when I've started an HKWorkoutSession and am listening for heart rate samples? (Even when the Watch is locked by the "wrist down" movement.)
If this is possible, what are my limitations?
I'm interested in processing the heart rate data as my code receives it. I don't have a device yet, so I haven't been able to test it.
I would love your thoughts on this if anyone has experimented with an actual device.
Yes, you can do this. I've had it append every HKSample that came back from my query onto an array, so when I resume, the array is much larger. However, the UI won't update this way; on resume you need to update it with the values you've received from the updateHandler.
Whether I should be doing this, or how far it can be pushed, I'm not sure.
Update
In the latest Xcode 7 beta you can get simulated workout data, so you won't need to install the watchOS 2 beta on your device.

How to use the VSTS Load Test goal-based load pattern to achieve a constant tests-per-second rate

I am using Visual Studio TS Load Test for running a WebTest (one client/controller hitting one server). How can I configure a goal-based load pattern to achieve a constant tests-per-second rate?
I tried to use the counter 'Tests/Sec' under 'LoadTest:Test' but it does not seem to do anything.
I've recently tested against the Tests/Sec counter, and I've confirmed it working.
For the settings on the Goal Based Load Pattern, I used:
Category: LoadTest:Test
Counter: Tests/Sec
Instance: _Total
When the load test starts, verify it doesn't show an error about not being able to access that performance counter.
Tests I ran for my own needs:
1. Set the Initial User Load quite low (10) and gave it 5 minutes to see if it would reach the target Tests/Sec and stabilise. In my case, it stabilised after about 1 minute 40.
2. Set the Maximum User Count Increment/Decrement to 50. It turns out the user load would yo-yo up and down quite heavily, as it would keep trying to play catch-up (as the tests took 10-20 seconds each).
3. Set the Initial User Load quite close to the 'answer' from test 1, and watched it make small but constant adjustments to the user volume.
Note: when watching the stats, watch the value under "Last". I believe the "Average" is averaged over a relatively long period, and may appear out of step with the target.
