Python unittest: how can I uniformly modify the result of each test case in tearDownClass? - python-unittest

I am using Python's unittest framework. Each test case spends roughly the same amount of time fetching the product log to determine whether it passed, so I want to pull that expensive check out of the individual tests and into tearDownClass: fetch the product logs once, then compare each test case's outcome against them one by one. This would save a lot of test execution time.
The core requirement is: in tearDownClass (or somewhere else, after the tests have run), modify the recorded result of each test case individually, e.g. changing a success into a failure.
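One way to structure the deferred check (a minimal sketch, not tied to any particular product; fetch_product_log and the marker strings are hypothetical placeholders) is to have each test record the log marker it expects in a class-level dict instead of checking immediately, then fetch the product log once in tearDownClass and verify every entry there:

import unittest

def fetch_product_log():
    # Hypothetical placeholder: fetch the product log once for the whole class.
    return "case_one_ok\ncase_two_ok"

class ProductTests(unittest.TestCase):
    # Maps test id -> marker that must appear in the product log.
    expected_markers = {}

    def test_case_one(self):
        # Do the real work here, but defer the log check:
        # just record what tearDownClass should look for.
        self.expected_markers[self.id()] = "case_one_ok"

    def test_case_two(self):
        self.expected_markers[self.id()] = "case_two_ok"

    @classmethod
    def tearDownClass(cls):
        log = fetch_product_log()  # one expensive call for all tests
        missing = [test_id for test_id, marker in cls.expected_markers.items()
                   if marker not in log]
        # Raises once for the whole class; it does not flip each
        # individual test to "failed" in the report.
        if missing:
            raise AssertionError("log check failed for: " + ", ".join(missing))

if __name__ == "__main__":
    unittest.main()

Note that an assertion raised in tearDownClass is reported as a single error against the class rather than flipping each already-passed test to failed; actually rewriting per-test results after the fact generally means running the suite programmatically and post-processing the unittest.TestResult object, which is a more invasive approach than this sketch.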

Related

How to create a "background timer" on Robot Framework?

I am struggling to create (and to find similar examples of) what I understand as a "background timer" in Robot Framework.
My scenario is:
I have a test case to execute, and at a certain point I give one input to my system. I then need to keep testing other functionality along the way, but 30 minutes after that initial input I have to check whether the expected outcome has occurred.
For example:
Run Keyword And Continue On Failure My_Keyword
#This keyword should trigger the system timer.
Run Keyword And Continue On Failure Another_Keyword
#This one does not relate with the timer but must also be executed.
Run Keyword And Continue On Failure My_Third_Keyword
#Same for this one.
Run Keyword And Continue On Failure Check_Timer_Result
#This one shall see if the inputs from the first keyword are actually having effect.
Since I know the Sleep keyword will pause the execution and wait, I have been thinking about using some logic plus the BuiltIn Get Time keyword, but I am wondering whether this is the most efficient way.

How to clear an error message that occurs in a Corda flow test

I am trying to run tests that I created on Corda 3.3 against Corda 4.1.
I have two test cases for testing the flow.
In the first test I expect a failure that comes from the contract,
and the result of the first test is correct, as I expected,
but the error raised in the first test was sent to the hospital flow,
and that error then shows up in the second test.
The error from the first test does not actually affect the second test's outcome, but it makes the second test slow.
I really don't know how to clear the error message before running the second test.
If someone has any idea, please let me know, thank you.
Note: if there is a way that does not require stopping the nodes and re-creating the mock nodes before each new test, that is the solution I am looking for.
==============================
I have 6 tests in one file.
At first I created the network once and reused it for all 6 tests; this reduced the time spent initialising the network,
but I needed to clear the database after each test finished to avoid creating duplicate data.
Everything worked until I moved to Corda 4.1.
On 4.1, the approach I used to clear the database on Corda 3.3 no longer works as before (on 4.1 truncating the tables takes a long time),
so I had to change to creating the network per test and stopping it after each test finishes.
This way it takes more time to initialise the network (around 20-30 seconds per test),
and what surprised me is that after 5 tests have finished, the 6th test takes a very long time (the log shows the house keeper cleaning): it takes 6 minutes to finish,
but when I run only that test it finishes in 1 minute.
My questions are:
1. How do I clear everything after each test finishes?
2. Is there another way to initialise the network once and use it for every test, and how do I clear the database and messages after each test finishes?
The actual cause of the exception is not visible from what you posted.
But be aware that for Corda 4.x you have to put
subFlow(ReceiveFinalityFlow(otherPartySession))
as the last operation.
Not sure if this helps.
It sounds like you are sharing state between tests, which is generally bad practice. Consider creating a MockNetwork in JUnit's @Before method, or use the DriverDSL to run each test case in isolation.

Design of a JSR352 batch job: are several steps a better design than one large batchlet?

My JSR352 batch job needs to read from a database and then, depending on the result, flow down one of two pathways, each of which involves some further if/else scenarios. I wonder what the pros and cons are of writing a single step with one large batchlet versus several steps consisting of smaller batchlets. The job does not involve chunk steps with a chunk size larger than 1, as it needs to persist each read result immediately, in case there is one, before proceeding to other logic. The job will be run using Control-M; I wonder whether using multiple smaller steps provides more control points.
From that description, I'd suggest the following.
Benefits of more, fine-grained steps
1. Restart
After a job failure, the default behavior on restart is to begin executing at the step where the previous job execution failed. So breaking the job up into more steps allows you to avoid writing the logic to resume where you left off and avoid re-processing, and may save execution time in the process.
2. Reuse
By encapsulating a discrete function as its own batchlet, you can potentially compose other steps in other jobs (or even later in this job) implemented with this same batchlet.
3. Extract logic into XML
By moving the conditional flow into the transition elements of the job definition XML (JSL), e.g. <next on="RC1" to="step3"/>, you can introduce changes at a standard control point, without having to go into the Java source and find the right place.
Final Thoughts
You'll have to decide if those benefits are worth it for your case.
One more thought
I wouldn't automatically rule out the chunk step just because you are using a 1-item chunk, if you can still find benefits from the checkpointing or even possibly the skip/retry. (But that's probably a separate question.)

Pintool - How can I traverse through all traces (even the ones that have already been executed once)?

I'm trying to count how many times a BBL is executed over the whole program run, but apparently TRACE_AddInstrumentFunction skips traces that have already been executed once. Does anyone have any ideas?
Pin instrumentation works in two phases. The instrumentation phase is called when new code is encountered, and allows you to insert analysis callbacks. Analysis callbacks are called every time the code is encountered.
I strongly recommend reading the first bit of the pin manual to understand the difference between instrumentation and analysis functions.
The instrumentation call lets you insert those callbacks; in simpler terms, it is where you place analysis calls on each unit of instrumentation, which can be an Instruction, a Trace, or a Routine. Now, specific to your question: counting BBLs is easy, but note that Pin follows a different definition of a BBL than the textbook one. To count how many times a BBL (per Pin's definition) is executed, simply register a Trace instrumentation callback and, for every BBL in the trace, insert an analysis call that increments a counter; that gives you the BBL count.
If you want to go by the textbook definition of a BBL (one entry, one exit), which implies a BBL ends at a branch-or-call instruction, insert the call using the IsBranchOrCall API and increment the BBL counter in the callback function.
I recommend trying both of them and figuring out the difference between the two definitions.

How to write integration tests for systems that interact asynchronously

Assume that I have a function called PlaceOrder which, when called, inserts the order details into a local DB and puts a message (the order details) onto a TIBCO EMS queue.
Once the message is received, a TIBCO BW process invokes some other system (say, ExternalSystem) to pass on the order details.
The way I wrote my integration test is:
Call PlaceOrder
Sleep, and check the details exist in the local DB
Sleep, and check the details exist in ExternalSystem
Is the above approach correct? The test gives me confidence that the end-to-end integration is working, but is there a better way to test this scenario?
The problem you describe is quite common, and your approach is a very typical solution.
The problem with this solution is that if the delay is too short, your tests may sometimes pass and sometimes fail; if the delay is very long, you are just wasting time waiting, and across many tests that adds up to a lot of delay. But unless you can get some signal telling you the order has arrived in the database, you just have to wait.
You can reduce the delay by doing lots of checks at short intervals. If your order is still not there after a timeout, the test fails.
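For example, a small polling helper along these lines (a sketch in Python; wait_until, the timeout values, and order_exists_in_db / order_in_external_system are illustrative names, not part of any real API) checks frequently and gives up after a deadline:

import time

def wait_until(condition, timeout=30.0, interval=0.5):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds have elapsed. Returns True on success."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Hypothetical usage in a test:
# place_order(order)
# assert wait_until(lambda: order_exists_in_db(order.id)), "order never reached the DB"
# assert wait_until(lambda: order_in_external_system(order.id), timeout=60.0), \
#     "order never reached ExternalSystem"

This way the test finishes as soon as the order shows up, and only spends the full timeout in the failure case.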
In "Growing Object-Oriented Software, Guided by Tests"*, there is a chapter on this very subject, so you might want to get a copy if you will be doing a lot of this sort of testing.
"There are two ways a test can observe the system: by sampling its observable
state or by listening for events that it sends out. Of these, sampling is
often the only option because many systems don’t send any monitoring
events. It’s quite common for a test to include both techniques to interact
with different “ends” of its system"
(*) http://my.safaribooksonline.com/book/software-engineering-and-development/software-testing/9780321574442
