Maximum limit of started keywords exceeded - robotframework

I'm new to Robot Framework and I'm trying to write a simple data-driven test using Excel. This message is displayed.

Your test case is named Open Browser, and the first keyword in that test case is also Open Browser. This creates an endless loop, so RF detects it and stops execution at some point.
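For illustration, here is a minimal sketch of how this kind of recursion typically arises (SeleniumLibrary stands in for whatever library provides the real keyword): a user keyword that shadows the library's Open Browser and calls it by its unqualified name resolves to itself, because same-file keywords take precedence over library keywords:

    *** Settings ***
    Library    SeleniumLibrary

    *** Keywords ***
    Open Browser
        [Arguments]    ${url}    ${browser}
        # Resolves to this user keyword itself, not to SeleniumLibrary,
        # so it recurses until RF aborts with the error above.
        Open Browser    ${url}    ${browser}

The fix is to rename the test case or keyword, or to qualify the call explicitly as SeleniumLibrary.Open Browser.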
Similar topics on SO:
RobotFramework: Maximum limit of started keywords exceeded
and a thread on github:
https://github.com/robotframework/robotframework/issues/1961

Related

First-time errors in Azure Log Analytics or Application Insights

How can we find first-time errors in Log Analytics or Application Insights? There could be errors that get repeatedly written to log files, but I want to find errors that do not match this known pattern and send an alert when one happens.
Or search for these 'different'/irregular errors (not necessarily first-time) during a particular hour or custom time range.
I am thinking of the following options as solutions (existing or common errors can run up to 100-500 lines):
running a saved Kusto query with a list of hardcoded error messages (resulting mismatches can be listed as new errors);
creating a data table with all existing/common errors (in Kusto, but will it stay stored?) and doing the same as above (join);
or using Logic Apps: store the known errors in Table Storage, retrieve the table first, and run the join query against Log Analytics.
Has anyone done this before, and what would you suggest is the best option?
Possible options:
If the known errors list is small, you can adjust your Kusto query on Log Analytics/Application Insights to exclude those common errors and create an alert based on your alert logic (custom query, number of results, threshold, period (grain) and frequency); see the sketch below.
If you want to maintain the errors list within Log Analytics/Application Insights as custom events, you can call the respective APIs and ingest the data; see the documentation reference for the same. Once you have your list of errors as custom logs/events, you can write your Kusto query accordingly.
For dynamic thresholds and smart detection, see whether these fit your requirement:
Metric Alerts with Dynamic Thresholds in Azure Monitor
Smart Detection in Application Insights
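As a rough sketch of the first option, assuming workspace-based Application Insights (the AppExceptions table, its columns, and the patterns below are assumptions to adapt to your schema), the known errors can live in an inline datatable and be excluded from the alert query:

    // Known/common error patterns; these could equally come from externaldata()
    // or from an ingested custom table, as in the second option above.
    let KnownErrors = datatable(Pattern: string) [
        "Connection timeout",   // placeholder pattern
        "Deadlock victim"       // placeholder pattern
    ];
    AppExceptions
    | where TimeGenerated > ago(1h)
    | where not(OuterMessage has_any (KnownErrors))
    | summarize NewErrorCount = count() by OuterMessage

An alert rule that fires when this query returns any rows then covers the "new/irregular error" case.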
Hope the above information helps!

XProc - Pausing a pipeline and continuing it when a certain event occurs

I'm fairly new to XProc and XPath, but I've been asked to solve the following problem:
Step 2 receives data via the secondary port from step 1. Step 2 contains a p:for-each, which saves a document into a folder for each element that passes through the for-each. (Part A)
These documents (let's say I receive 6 documents from the for-each) lie in the same directory, get filtered by p:directory-list, and are eventually stored in one single document containing the full path of every document the for-each created. (Part B)
So far, so good.
The problem is that Part A seems to be too slow: Part B already tries to read the data Part A stores while the directory is still empty. In other words, I have a performance/synchronization problem.
And now comes the question:
Is it possible to let the pipeline wait and to let it continue as soon as a certain event occurs?
That's what I'm imagining:
Part B waits as long as necessary until the directory in which Part A stores the data is no longer empty. I read something about dbxml:breakpoint, but unfortunately I couldn't find more information than the name and a short description of what it seems to do:
Set a breakpoint, optionally based upon a condition, that will cause pipeline operation to pause at the breakpoint, possibly requiring user intervention to continue and/or issuing a message.
It would be awesome if you know more about it and could give an example of how it's used. It would also help if you know a workaround or another way to solve this problem.
UPDATE:
After searching Google for half an eternity, I found SMIL, whose timesheets seem to do the trick. Does anyone have experience with throwing XML / XProc and SMIL together?
Back towards the end of 2009 I proposed the concept of 'Orchestrating XProc with SMIL' (http://broadcast.oreilly.com/2009/09/xproc-and-smil-orchestrating-p.html) in a blog post on the O'Reilly Network.
However, I'm not sure that this (XProc + Time) is the solution to your problem. It's not entirely clear to me from your description what's happening. Are you implying that you're trying to write something to disk and then read it in a subsequent step? You need to keep the data in the pipeline in order to ensure you can connect outputs to subsequent inputs.
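If the data can stay in the pipeline, here is a minimal XProc 1.0 sketch of that idea (the element name item is a placeholder): the sequence produced by the p:for-each is consumed directly by p:wrap-sequence, so there is no p:store / p:directory-list round trip and therefore no race against the filesystem:

    <p:declare-step xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
      <p:input port="source"/>
      <p:output port="result"/>

      <!-- Part A: process each element; the results stay on the output port -->
      <p:for-each name="part-a">
        <p:iteration-source select="//item"/>
        <p:identity/>  <!-- stand-in for the real per-document processing -->
      </p:for-each>

      <!-- Part B: merge the sequence into one document, no directory polling -->
      <p:wrap-sequence wrapper="collection"/>
    </p:declare-step>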

Loadrunner Flex protocol - issue with amf call

I am trying to create a load test for a Flex application, but an error appears during replay and I can't understand why it's happening. I have done all the correlations for DSID, and I'm using the latest version of the Externalizable Objects:
- flex-messaging-common
- flex-messaging-core
- flex-messaging-data
- flex-messaging-data-req
- flex-messaging-opt
- flex-messaging-proxy
- flex-messaging-remoting
I have re-recorded the script and tried to find differences, but I didn't find anything significant. It's very confusing because the first 20 amf_calls work fine.
Error Output:
Error:Decoding of AMF message failed. Error is : ReadValue Failed due Insufficient data to read at location 6
I'm using LoadRunner 12, the Flex protocol, and IE 9.
Edit: the line before the error:
Warning:HTTP status code 500 was returned by the server
HTTP 500 is typically associated with two states:
(1) A missed correlation.
(2) The application "coming off the tracks": an unexpected response is sent back, but no check is made for an acceptable response. One or two requests downstream you will have a 500 thrown for a request which is out of context with the state of the data or the application.
Record the application twice and compare the scripts. Make sure you have handled all of the dynamic data.
Record a third time, changing the data record used. Make sure you have covered all of the business-process-specific dynamic elements.
Record a fourth time with a different set of sign-on credentials. Compare to the previous recordings and make sure you have covered all of the dynamic elements related to user credentials.
Odds are you have missed a dynamic element related to the business process which is outside of the Flex session and state items.
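As a sketch of what correlating such an element looks like in the script (the boundaries, parameter name, and URL below are placeholders you would derive from diffing the recordings):

    // Register the capture BEFORE the request whose response carries the value
    web_reg_save_param_ex(
        "ParamName=CorrToken",
        "LB=token\">",
        "RB=<",
        SEARCH_FILTERS,
        "Scope=Body",
        LAST);

    web_url("getSession", "URL=http://server/app/session", LAST);

    // Then substitute the recorded literal in the later call with {CorrToken}

Once the 500 is gone, the AMF decoding error likely goes with it: the client was probably trying to decode an HTML error page as an AMF message.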

Explanation of XDMP-EXTIME in MarkLogic

I need a lucid explanation of why XDMP-EXTIME happens in MarkLogic. In my case it's happening during a search (a read operation). The exception message prints a line from the code:
XDMP-EXTIME: wsssearch:options($request, $req-config) -- Time limit exceeded
This gives me the impression that execution does not go beyond that line, but it seems to be a pretty harmless line of code: it does not fetch any data from the DB, it just sets certain search options. How can I pinpoint which part of the code is causing this? I have heard that increasing the max time limit of the task server solves such problems, but that's not an option for me. Please let me know how such problems are tackled. It would be very hard for me to show you the code base, but I'm still hoping to hear something helpful from you guys.
The error message can sometimes put you on the wrong track because of lazy evaluation: the execution can actually be further down the road than the error message seems to indicate. It could be one line further, it could be several. Look for where the returned value is being used.
Profiling can sometimes help getting a clearer picture of where most time is spent, but the lazy evaluation can throw things off here as well.
The bottom-line meaning of the message is pretty simple: the execution of your code takes too long. The actual search in which the options are being used is the most likely candidate for where it goes wrong.
If you are using cts:search or search:search under the covers, that should normally perform well. A search typically gets slow when you end up returning many results, e.g. when you don't apply pagination; search:search does apply it by default.
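For example, a minimal sketch of predicate-based pagination with cts:search (the word query is a placeholder):

    (: fetch only the first 10 results instead of the full sequence :)
    cts:search(fn:collection(), cts:word-query("example"))[1 to 10]

Because cts:search is evaluated lazily, the predicate keeps MarkLogic from materializing the full result set.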
A search can also get slow if you are running it in update mode: you could end up with MarkLogic trying to apply many (unnecessary) read locks. Put the following declaration in your search endpoint code, or in the XQuery main module that does the search:
declare option xdmp:update "false";
HTH!
You could try profiling the code to see what specifically is taking so long. This might require temporarily increasing the session time limit to prevent the timeout from occurring while you profile. Note that unless the code is being executed on the Task Server via xdmp:spawn or xdmp:spawn-function, you would need to increase the value on the App Server hosting the script.
If your code is in a module, the easiest thing to do is call the function that times out from Query Console using the Profile tab. Alternatively, you could begin the function with prof:enable(xdmp:request()) and later write the contents of prof:report(xdmp:request()) to a file on the filesystem, or insert it somewhere in the database.
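A minimal sketch of that second approach (the search expression and output path are placeholders for your own code; profiling must be allowed on the App Server):

    xquery version "1.0-ml";

    let $_ := prof:enable(xdmp:request())
    (: stand-in for the code that times out :)
    let $result := cts:search(fn:collection(), cts:word-query("example"))[1 to 10]
    let $report := prof:report(xdmp:request())
    return (
      xdmp:save("/tmp/search-profile.xml", $report),
      $result
    )

Sorting the report by shallow time usually points straight at the expensive expression.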

QTP Retrieve code that has already been executed

I'm trying to figure out a way to reflectively look at the code that I've executed in a QTP script. The idea is that when I encounter a crash, a recovery scenario captures an error message and sends it to QC as a defect. If I could see the code I've already executed, then in theory I could also include the steps to reproduce the defect.
Any thoughts?
Option 1: Movie recording and playback
QTP 11 (finally) has a feature for demands like that: take a look at Tools > Options > Run > Screen capture. "Save movie to results" there allows you to record exactly what happened. The resulting movie is part of the run result, i.e. if you submit a bug with this run result, the movie will be included.
Normally I would not use such a feature, because you would have to record the movie all the time just to have it in the rare case that an error occurs; you would end up with big run results containing movies nobody wants to see. But in this regard HP has done the job right: you can select in the dialog to save the movie to the results only if an error occurs. And, to avoid saving the whole boring part of the test execution that contained no errors while still seeing the critical steps, you can specify to keep only the last N kB of the movie, so you will always see what led up to the error.
Option 2: "Macro" recording and playback
You could, in theory, create your own playback methods for all test objects (registering functions via RegisterUserFunc) and make them save the call info into some data structure before performing the playback step (by calling the original playback function).
Then, still in theory, you could create a nice little playback engine that iterates over that data structure and executes exactly the playback steps that were recorded previously; a rough sketch of the logging half follows below.
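Here is that sketch for a single test-object method (the names are illustrative; you would register a wrapper for every method you want captured):

    ' Running log of executed playback steps, for the QC defect description
    Dim stepLog
    stepLog = ""

    Function LoggedSet(obj, value)
        ' Record the step before executing it
        stepLog = stepLog & "Set """ & value & """ on " & obj.ToString() & vbCrLf
        ' Inside a registered function, this invokes the ORIGINAL Set method
        obj.Set value
    End Function

    RegisterUserFunc "WebEdit", "Set", "LoggedSet"

The recovery scenario can then append stepLog to the defect it submits to QC.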
I've done similar stuff to repeat bundles of playback steps after changing the AUT config, in order to iterate a given playback over various configs without changing the code that does the original playback.
But hey, this is quite some work, and lots of things can go wrong: the AUT must be in the same initial state upon playback as during the "recording of playback", and this includes all relevant databases and subsystems of your testing environment. Usually this is not an easy task in large projects and not worth the trouble (we are talking about re-creating the original initial config just to reproduce one single bug).
So I suggest you check out the movie feature, i.e. option 1. It does not play back the steps in the AUT, but it shows exactly what happened during the original playback.
