How to Know If in the Last Repeat of Repeat Keyword - robotframework

Robot Framework's Repeat Keyword allows you to keep running a keyword for a certain number of times or for a certain amount of time. I want to have one teardown for all intermediate repeats but a different, final teardown at the end of the test. Is there an internal variable or something else under the hood that I can check to see if the last repeat is being run?

I highly doubt it.
But there's a workaround: write a regular FOR loop with an index and call Run Keyword inside it, then make the teardown calls conditional on the index.
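For illustration, here is the same index-based idea packaged as a tiny Python keyword library instead of an in-test FOR loop (the file and keyword names below are my own placeholders, not an established API): it repeats a keyword a given number of times and runs a different teardown keyword after the last repetition.

# last_repeat_helper.py - hypothetical helper library; names are placeholders
from robot.libraries.BuiltIn import BuiltIn

def repeat_keyword_with_final_teardown(times, keyword, intermediate_teardown, final_teardown):
    """Run `keyword` the given number of times; after each intermediate
    repetition run `intermediate_teardown`, after the last one run
    `final_teardown` instead."""
    builtin = BuiltIn()
    times = int(times)
    for index in range(times):
        builtin.run_keyword(keyword)
        if index == times - 1:  # this is the last repetition
            builtin.run_keyword(final_teardown)
        else:
            builtin.run_keyword(intermediate_teardown)

Imported as a library, this could then be called from the test as, for example, Repeat Keyword With Final Teardown    5    My Keyword    Intermediate Teardown    Final Teardown.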

Related

How to create a "background timer" on Robot Framework?

I am struggling to create (and to find similar examples of) what I understand as a "background timer" in Robot Framework.
My scenario is:
I have a test case to execute, and at a certain point I give one input to my system. Then I need to keep testing other functionality along the way, but 30 minutes after that initial input, check whether the expected outcome is happening or not.
For example:
Run Keyword And Continue On Failure    My_Keyword
# This keyword should trigger the system timer.
Run Keyword And Continue On Failure    Another_Keyword
# This one does not relate to the timer but must also be executed.
Run Keyword And Continue On Failure    My_Third_Keyword
# Same for this one.
Run Keyword And Continue On Failure    Check_Timer_Result
# This one shall see if the inputs from the first keyword are actually having effect.
Since I know the keyword "Sleep" will pause the execution and wait, I have been thinking about using some logic plus the BuiltIn Get Time keyword, but I am wondering whether this is the most efficient way.
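One way to avoid blocking the whole suite with Sleep is a tiny custom Python keyword library that records the start time when the input is given and, when the check is due, waits only for whatever is left of the window. This is just a sketch of that idea; the names below are placeholders, not an existing library.

# background_timer.py - hypothetical keyword library; names are placeholders
import time

_start_times = {}

def start_background_timer(name="default"):
    """Record the moment the initial input was given."""
    _start_times[name] = time.monotonic()

def wait_for_background_timer(minutes, name="default"):
    """Block only for whatever remains of the window, then return so the
    expected outcome can be verified."""
    elapsed = time.monotonic() - _start_times[name]
    remaining = float(minutes) * 60 - elapsed
    if remaining > 0:
        time.sleep(remaining)

My_Keyword would then call Start Background Timer, the other keywords run as usual, and Check_Timer_Result would start with Wait For Background Timer    30 before verifying the outcome.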

Python's unittest framework: how to uniformly modify the execution result of each test case in the tearDownClass method?

I am using Python's unittest framework. Each test case spends the same amount of time on the same operation (obtaining the product log to determine whether it succeeded or not), so I want to extract that time-consuming success/failure check and put it in tearDownClass: in that method, the product logs are obtained once, and the outcome of each test case is compared one by one, which can save a lot of test execution time.
The core requirement is: in tearDownClass or elsewhere, modify the execution result of each test case one by one after the fact, changing success to failure where needed.

Design of JSR352 batch job: Are several steps a better design than one large batchlet?

My JSR352 batch job needs to read from a database and then, depending on the result, flow down one of two pathways, each of which involves some more if/else scenarios. I wonder what the pros and cons would be of writing a single step with a large batchlet versus several steps consisting of smaller batchlets. This job does not involve chunk steps with a chunk size larger than 1, as it needs to persist the read result immediately, if there is any, before proceeding to other logic. The job will be run using Control-M; I wonder if using multiple smaller steps provides more control points.
From that description, I'd suggest these:
Benefits of more, fine-grained steps
1. Restart
After a job failure, the default behavior on restart is to begin executing at the step where the previous job execution failed. So breaking the job up into more steps allows you to avoid writing the logic to resume where you left off and avoid re-processing, and may save execution time in the process.
2. Reuse
By encapsulating a discrete function as its own batchlet, you can potentially compose other steps in other jobs (or even later in this job) implemented with this same batchlet.
3. Extract logic into XML
By moving the transition logic into the transition elements, and extracting the conditional flow (e.g. <next on="RC1" to="step3"/>, etc.)
into the job definition XML (JSL), you can introduce changes at a standard control point, without having to go into the Java source and find the right place.
Final Thoughts
You'll have to decide if those benefits are worth it for your case.
One more thought
I wouldn't automatically rule out the chunk step just because you are using a 1-item chunk, if you can still find benefits from the checkpointing or even possibly the skip/retry. (But that's probably a separate question.)

Pintool- How can I traverse through all traces (even the ones that have already been executed once)?

I'm trying to count how many times a BBL is executed over the whole program run, but apparently TRACE_AddInstrumentFunction skips traces that have already been executed once. Does anyone have any ideas?
Pin instrumentation works in two phases. The instrumentation phase is called when new code is encountered, and allows you to insert analysis callbacks. Analysis callbacks are called every time the code is encountered.
I strongly recommend reading the first bit of the pin manual to understand the difference between instrumentation and analysis functions.
The instrumentation callback is where you insert those analysis calls. In simpler terms, it lets you place a function call before each unit of code, and you can choose that unit to be an Instruction, a Trace, or a Routine. Now, specific to your question: Pin follows its own definition of a BBL, but finding the number of times a BBL (per Pin's definition) is executed is easy. You can simply register a Trace instrumentation call and, for every BBL, increment a counter in the analysis call, and you will get the BBL count.
If you want to go by the textbook definition of a BBL (one entry, one exit), which implies a BBL ends at a branch-or-call instruction, insert a per-instruction call using the INS_IsBranchOrCall API and increment the BBL counter in the callback function.
I recommend trying both of them and figuring out the difference between the two definitions.

Having two sets of input combined on hadoop

I have a rather simple Hadoop question which I'll try to present with an example.
Say you have a list of strings and a large file, and you want each mapper to process a piece of the file together with one of the strings, in a grep-like program.
How are you supposed to do that? I am under the impression that the number of mappers is a result of the input splits produced. I could run subsequent jobs, one for each string, but it seems kinda... messy?
Edit: I am not actually trying to build a MapReduce version of grep; I used it as an example of having 2 different inputs to a mapper. Let's just say that I have lists A and B and would like a mapper to work on 1 element from list A and 1 element from list B.
So given that the problem has no data dependency that would require chaining jobs, is my only option to somehow share all of list A with all mappers and then feed 1 element of list B to each mapper?
What I am trying to do is build some type of prefix look-up structure for my data. So I have a giant text and a set of strings. This process has a strong memory bottleneck, therefore I was after 1 chunk of text / 1 string per mapper.
Mappers should be able to work independently and without side effects. The parallelism can be that a mapper tries to match a line against all patterns. Each input is then processed only once!
Otherwise you could multiply each input line by the number of patterns, process each line with a single pattern, and run the reducer afterwards. A ChainMapper is the solution of choice here. But remember: a line will appear twice if it matches two patterns. Is that what you want?
In my opinion you should prefer the first scenario: each mapper processes a line independently and checks it against all known patterns.
Hint: You can distribute the patterns to all mappers with the DistributedCache feature! ;-) The input should be split with a line-based InputFormat.
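To make that first scenario concrete, here is a rough Hadoop Streaming sketch in Python (Streaming is my assumption for the sake of a short example; with the Java API you would read the file from the DistributedCache in setup() instead). The pattern file is shipped to every mapper, e.g. with -files patterns.txt, and each mapper checks every line of its split against all patterns.

# streaming_mapper.py - hypothetical streaming mapper; file name is a placeholder
import sys

# patterns.txt is distributed alongside the job, so every mapper sees the full list
with open("patterns.txt") as f:
    patterns = [line.strip() for line in f if line.strip()]

for line in sys.stdin:
    line = line.rstrip("\n")
    for pattern in patterns:
        if pattern in line:
            # emit the pattern and the matching line, tab-separated
            print(pattern + "\t" + line)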
A good friend had a great epiphany: what about chaining 2 mappers?
In the main method, run a job that fires up a mapper (no reducer). The input is the list of strings, and we can arrange things so that each mapper gets one string only.
In turn, the first mapper starts a new job, where the input is the text. It can communicate its string by setting a variable in the context.
Regarding your edit:
In general a mapper is not used to process two elements at once; it should only process one element at a time. The job should be designed in such a way that there could be a mapper for each input record and it would still run correctly!
Of course it is fine for the mapper to need some supporting information to process its input. This information can be passed in via the job Configuration (Configuration.set(), for example). A larger set of data should be passed via the DistributedCache.
Did you have a look at one of these options?
I'm not sure if I fully understood your problem, so please check for yourself whether that would work ;-)
BTW: An appreciative vote for my well-investigated previous answer would be nice ;-)
