QTP Retrieve code that has already been executed - reflection

I'm trying to figure out a way to reflectively look at the code that I've executed in a QTP script. The idea here is, when I encounter a crash, have a recovery scenario that captures an error message and sends it to QC as a defect. If I can see the code I've already executed, then I could also include the steps to reproduce the defect, in theory.
Any thoughts?

Option 1: Movie recording and playback
QTP 11 (finally) has a feature for demands like that: take a look at Tools > Options > Run > Screen capture. "Save movie to results" there allows you to record exactly what happened. The resulting movie is part of the run result, i.e. if you submit a bug with this run result, the movie will be included.
Normally I would not use such a feature, because you would have to record the movie on every run just to have it available in case of error. You would end up with big run results that contain movies nobody wants to see, just to have them in the rare cases that an error occurred and a defect is created. But:
In this regard, HP has done the job right: you can select in the dialog to save the movie to the results only if an error occurs. And, to avoid saving the whole boring part of the test execution that did not contain errors, yet still seeing the critical steps that led to the error, you can specify to keep only the last N kB of the movie, so you will always see what led to the error.
Option 2: "Macro" recording and playback
You could, in theory, create your own playback methods for all test objects (registering functions via RegisterUserFunc), and make them save the call info into some data structure before doing the playback step (by calling the original playback function).
Then, still in theory, you could create a nice little playback engine that iterates over that data structure and executes exactly the playback steps that were recorded previously.
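For illustration, here is a minimal VBScript sketch of that idea for a single test object class and method (WebButton.Click); the "recorded" steps are just appended to a string here, while a real implementation would persist a richer data structure:
' Collects a human-readable trace of every Click that is played back
Dim stepLog : stepLog = ""

Function LoggedClick(obj)
    stepLog = stepLog & obj.GetROProperty("name") & ".Click" & vbCrLf
    ' Inside a registered function, calling the method on the test object
    ' invokes the original (built-in) playback step, not this wrapper again
    obj.Click
End Function

' Route all WebButton.Click calls through the wrapper above
RegisterUserFunc "WebButton", "Click", "LoggedClick"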
I've done similar stuff to repeat bundles of playback steps after changing the AUT config, in order to iterate a given playback over various configs without changing the code that does the original playback.
But hey, this is quite some work, and lots of things can go wrong: the AUT must be in the same initial state upon playback as during the "recording" of the playback. This includes all relevant databases and subsystems of your testing environment. Usually this is not an easy task in large projects and not worth the trouble (we are talking about re-creating the original initial config just to reproduce one single bug).
So I suggest you check out the movie feature, i.e. option 1. It does not play back the steps in the AUT, but it shows exactly what happened during the original playback.

Related

Can you get a log file of 'reads' on specific RECID(Tablename) in Progress-4GL/Openedge at RunTime without access to Source Code?

I want to know which tables are being read by a query.
for each Customer where CustomerID = 12345.
Eventually this customer will be found in the example above, but Progress must 'read' many tables before getting to customer 12345.
How do I know exactly which tables are read (By CustomerID), prior to getting to customer 12345?
NOTE: I do not have access to modify the code being run for this selection. Ideally I would run a separate set of code, executed at the same time as the customer query above, to track the reads.
EDIT: More clearly - Can you track reads from a given program (.p) OR ProcessID and output either a RECID or the PrimaryKey to a file?
I understand the information is being read off the disk and probably stored in a database buffer. So how would I get at the information in the database buffer?
You seem to be mixing up a few different things.
In a situation like your example, where you FIND a specific record in one, and only one, table, there is just a single record read. Progress will find that record by first scanning a relevant index. That might be 2 or 3 "logical reads" of the b-tree to get to the proper node. The record block and index blocks may or may not be read from disk - that depends on what has happened previously.
There are "Virtual System Tables" available that can tell you how many READ operations take place against a particular table or index. But they do not trace the specific ROWID or other identifying data. _TableStat and _IndexStat are aggregates for all users on the system, _UserTableStat and _UserIndexStat are specific to a particular user's activity. You do need to set the -tablerangesize and -indexrangesize parameters adequately to take advantage of these.
If you have enabled the table and index statistics then you can use a tool like ProTop - http://protop.wss.com to get insight into this activity. Or you can write your own code.
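If you go the "write your own code" route, a rough, untested sketch against the VSTs might look like this (field names quoted from memory, so verify them against your database; remember -tablerangesize must cover the tables you care about):
/* Rough sketch: cumulative read counts per application table from _TableStat.
   _UserTableStat / _UserIndexStat give the same numbers per connected user. */
FOR EACH _TableStat NO-LOCK:
    FIND _File WHERE _File._File-Number = _TableStat._TableStat-Id NO-LOCK NO-ERROR.
    IF AVAILABLE _File THEN
        DISPLAY _File._File-Name FORMAT "x(30)"
                _TableStat._TableStat-Read.
END.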
OpenEdge Auditing does not track reads. That would be prohibitively expensive.
It's probably not really a good idea but, in theory, you could write FIND triggers for the tables you are interested in. That doesn't require access to the application source but you would need a development license. It will probably kill performance to do this though - so unless this is a non-production test environment that you just want to fiddle with I wouldn't really do that.
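If you did want to experiment with that in a sandbox anyway, a schema FIND trigger procedure is roughly this shape (a rough, untested sketch; it needs a development license, gets attached to the table via the Data Dictionary, and will slow every read of that table):
/* Hypothetical FIND trigger for the Customer table; field names follow the
   example in the question. Fires on every read of a Customer record. */
TRIGGER PROCEDURE FOR FIND OF Customer.

OUTPUT TO VALUE("customer-reads.log") APPEND.
PUT UNFORMATTED "RECID " STRING(RECID(Customer))
    " CustomerID " STRING(Customer.CustomerID) SKIP.
OUTPUT CLOSE.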
You mention wanting to know how you got to that point. That sounds more like you might need to have a "4gl trace". One easy way to get the stack trace of a running process is to execute:
$DLC/bin/proGetStack PID (UNIX)
or
%DLC%\bin\proGetStack PID (Windows)
This command will generate a "protrace.pid" file containing a 4gl stack trace and other interesting information.
There are also more complicated ways to get that info like using PROMON and the "client statement cache" or setting various log entry types at session startup. But proGetStack is pretty convenient and requires no code or scripting changes.
Some great options from Tom above. And all of them may be relevant to you. The one he only skirts around is the logging options. I feel obliged to expand on this because I'm giving a talk on it in a couple of weeks!
Assuming you are running a modern version of Progress, or even 10.2B08, you have client logging available to you. Start your session with these additional options:
-clientlog "\somefolder\somefile.txt"
-logentrytypes "QryInfo:3"
This will log the details of all the queries in your session to the file you specified above. If you navigate to the point in the system where you want to analyse your query, then empty the log file and save it, you can run the offending query and see all the detail you need.
The output tells you all sorts of useful info, including the number of reads on each table, compared with the number returned to the user. You also get the index selected.
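For example, a Windows GUI session might be started like this (executable, database and startup procedure names are placeholders):
prowin32 -db sports2000 -p start.p -clientlog "c:\temp\qryinfo.log" -logentrytypes "QryInfo:3"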
Using Tom's advice and/or this will get you what you need.

Force couchbase to update view index while integration testing

I'm looking to integration test my Couchbase implementation and I'm running into a problem with the eventually-consistent nature of Couchbase. In production it's perfectly OK for my data to be stale, but at test time I'd like to insert some data and then verify that I'm getting it back via my various services. This doesn't work if the data is stale, because my test expectations can't account for that.
I can work around this by setting staleState to false in the Couchbase client, but that means that every test I have is going to trigger a rebuild of the indexes, which increases their running time.
Is there a way to force Couchbase to trigger a one-time rebuild of the indexes for a design doc? Essentially, I'd like to upload all of my test data, trigger a rebuild and then execute my test cases.
Also, if there's a better pattern for integration testing with Couchbase, I'd love to hear it.
Thanks,
M.
Couchbase will only rebuild the view indexes when stale=false is set if there is actually more data that needs to go into the index. Your first stale=false may take some time, but the rest of the calls should be fast even with stale=false set as long as you're not putting more data into your cluster.
For all of the subsequent calls there will be a small (millisecond or smaller) delay due to the index checking to make sure it is up to date. If you don't want this you can just run the queries with stale=true, and again, as long as you're not inserting any more data, you should get the correct results.
The last thing to note is that view index builds are incremental so they never rebuild the entire index.
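For illustration, a test-setup sketch along these lines, assuming the legacy 1.x Java SDK (bucket, design document and view names are placeholders; other SDKs expose an equivalent stale option):
import com.couchbase.client.CouchbaseClient;
import com.couchbase.client.protocol.views.Query;
import com.couchbase.client.protocol.views.Stale;
import com.couchbase.client.protocol.views.View;
import com.couchbase.client.protocol.views.ViewResponse;
import java.net.URI;
import java.util.Arrays;

public class ViewWarmupTest {
    public static void main(String[] args) throws Exception {
        CouchbaseClient client = new CouchbaseClient(
                Arrays.asList(URI.create("http://127.0.0.1:8091/pools")), "test-bucket", "");

        // 1. Insert the test fixtures here, e.g. client.set("doc::1", 0, json).get();

        // 2. A single stale=false query forces the incremental index update for this view.
        View view = client.getView("test_design", "by_id");
        client.query(view, new Query().setStale(Stale.FALSE));

        // 3. Later queries can use the default staleness and still see the fixtures,
        //    as long as no further data has been written.
        ViewResponse results = client.query(view, new Query().setStale(Stale.OK));
        System.out.println(results.size());

        client.shutdown();
    }
}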

xProc - Pausing a pipeline and continue it when certain event occurs

I'm fairly new to xProc and xPath, but I've been asked to solve the following problem:
Step 2 receives data via its secondary port from step 1. Step 2 contains a p:for-each, which saves a document into a folder for each element that passes through the for-each. (Part A)
These documents (let's say I receive 6 documents from the for-each) lie in the same directory, get filtered by p:directory-list and are eventually stored in one single document containing the whole path of every document the for-each created. (Part B)
So far, so good.
The problem is that Part A seems to be too slow. Part B already tries to read the data Step A stores while the directory is still empty. Meaning, I'm having a performance / synchronization problem.
And now comes the question:
Is it possible to let the pipeline wait and to let it continue as soon as a certain event occurs?
That's what I'm imagining:
Step B waits as long as necessary until the directory which Step A stores the data in is no longer empty. I read something about dbxml:breakpoint, but unfortunately I couldn't find more information than the name and a short description of what it seems to do:
Set a breakpoint, optionally based upon a condition, that will cause pipeline operation to pause at the breakpoint, possibly requiring user intervention to continue and/or issuing a message.
It would be awesome if you know more about it and could give an example of how it's used. It would also help if you know a workaround or another way to solve this problem.
UPDATE:
After searching Google for half an eternity, I found SMIL, whose timesheets seem to do the trick. Does anyone have experience with throwing XML / xProc and SMIL together?
Back towards the end of 2009 I proposed the concept of 'Orchestrating XProc with SMIL' http://broadcast.oreilly.com/2009/09/xproc-and-smil-orchestrating-p.html in a blog post on the O'Reilly Network.
However, I'm not sure that this (XProc + Time) is the solution to your problem. It's not entirely clear to me, from your description, what's happening. Are you implying that you're trying to write something to disk and then read it back in a subsequent step? You need to keep stuff in the pipeline in order to ensure you can connect outputs to subsequent inputs.
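As a rough illustration of that last point (step names and content are made up), two steps can be wired together directly with p:pipe instead of going through the file system:
<p:declare-step xmlns:p="http://www.w3.org/ns/xproc" version="1.0" name="main">
  <p:output port="result">
    <p:pipe step="step-b" port="result"/>
  </p:output>

  <!-- "Step A": produces a document on its result port -->
  <p:identity name="step-a">
    <p:input port="source">
      <p:inline><doc>produced by step A</doc></p:inline>
    </p:input>
  </p:identity>

  <!-- "Step B": reads Step A's output directly, no directory polling needed -->
  <p:identity name="step-b">
    <p:input port="source">
      <p:pipe step="step-a" port="result"/>
    </p:input>
  </p:identity>
</p:declare-step>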

Explanation of XDMP-EXTIME in Marklogic

I need a lucid explanation of why XDMP-EXTIME happens in MarkLogic. In my case it's happening during a search (read operation). In the exception message a line from the code is being printed:
XDMP-EXTIME: wsssearch:options($request, $req-config) -- Time limit exceeded
This gives me the impression that the execution does not go beyond that line. But it seems to be a pretty harmless line of code: it does not fetch any data from the DB, it just sets certain search options. How can I pinpoint which part of the code is causing this? I have heard that increasing the max time limit of the task server solves such problems, but that's not an option for me. Please let me know how such problems are tackled. It would be very, very hard for me to show you the code base, but I'm still hoping to hear something helpful from you guys.
The error message can sometimes put you on the wrong foot because of lazy evaluation. The execution can actually be further down the road than the error message seems to indicate. Could be one line, could be several. Look for where the returned value is being used.
Profiling can sometimes help getting a clearer picture of where most time is spent, but the lazy evaluation can throw things off here as well.
The bottom-line meaning of the message is pretty simple: the execution of your code takes too long. The actual search in which the options are being used is the most likely candidate of where it goes wrong.
If you are using cts:search or search:search under the covers, then that should normally perform well. A search typically gets slow when you end up returning many results, e.g. when you don't apply pagination. search:search does apply pagination by default, however.
A search can also get slow if you are running your search in update mode. You could potentially end up having MarkLogic trying to apply many (unnecessary) read locks. Put the following declaration in your search endpoint code, or xquery main module that does the search:
declare option xdmp:update "false";
HTH!
You could try profiling the code to see what specifically is taking so long. This might require increasing the session time limit temporarily to prevent the timeout from occurring while profiling. Note that unless this is being executed on the Task Server via xdmp:spawn or xdmp:spawn-function, you would need to increase the value on the App Server hosting the script.
If your code is in a module, the easiest thing to do is make a call to the function that times out from Query Console using the Profile tab. Alternatively, you could begin the function with prof:enable(xdmp:request()) and later output the contents of prof:report(xdmp:request()) to a file on the filesystem, or insert it somewhere in the database.
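For example, a minimal sketch of that second approach (the cts:search below just stands in for whatever search code is actually slow, and the output path is arbitrary):
xquery version "1.0-ml";

(: Profile the current request and save the report to the filesystem :)
let $_       := prof:enable(xdmp:request())
let $results := cts:search(fn:doc(), cts:word-query("example"))[1 to 10]
let $report  := prof:report(xdmp:request())
return (
  xdmp:save("/tmp/search-profile.xml", document { $report }),
  $results
)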

How to build large/busy RSS feed

I've been playing with RSS feeds this week, and for my next trick I want to build one for our internal application log. We have a centralized database table that our myriad batch and intranet apps use for posting log messages. I want to create an RSS feed off of this table, but I'm not sure how to handle the volume- there could be hundreds of entries per day even on a normal day. An exceptional make-you-want-to-quit kind of day might see a few thousand. Any thoughts?
I would make the feed a static file (you can easily serve thousands of these), regenerated periodically. Then you have a much broader choice, because the generation doesn't have to run in under a second; it can even take minutes. And users still get perfect download speed and a reasonable update frequency.
If you are building a system with notifications that must not be missed, then a pub-sub mechanism (using XMPP, one of the other protocols supported by ApacheMQ, or something similar) will be more suitable than a syndication mechanism. You need some measure of coupling between the system that is generating the notifications and the ones that are consuming them, to ensure that consumers don't miss notifications.
(You can do this using RSS or Atom as a transport format, but it's probably not a common use case; you'd need to vary the notifications shown based on the consumer and which notifications it has previously seen.)
I'd split up the feeds as much as possible and let users recombine them as desired. If I were doing it I'd probably think about using Django and the syndication framework.
Django's models could probably handle representing the data structure of the tables you care about.
You could have a URL that catches everything, like: r'/rss/(?(\w*?)/)+' (I think that might work, but I can't test it now so it might not be perfect).
That way you could use URLs like:
http://feedserver/rss/batch-file-output/
http://feedserver/rss/support-tickets/
http://feedserver/rss/batch-file-output/support-tickets/ (both of the first two combined into one)
Then in the view:
def get_batch_file_messages():
    # Grab all the recent batch file messages here.
    # Maybe cache the result and only regenerate every so often.
    pass

# Other feed functions here.

feed_mapping = {'batch-file-output': get_batch_file_messages, }

def rss(request, *args):
    items_to_display = []
    for feed in args:
        items_to_display += feed_mapping[feed]()
    # Process and return the feed here.
Having individual, chainable feeds means that users can subscribe to one feed at a time, or merge the ones they care about into one larger feed. Whatever's easier for them to read, they can do.
Without knowing your application, I can't offer specific advice.
That said, it's common in these sorts of systems to have a level of severity. You could have a query string parameter that you tack on to the end of the URL to specify the severity. If set to "DEBUG" you would see every event, no matter how trivial. If you set it to "FATAL" you'd only see the events that were "System Failure" in magnitude.
If there are still too many events, you may want to sub-divide your events into some sort of category system. Again, I would make this a query string parameter.
You can then have multiple RSS feeds for the various categories and severities. This should allow you to tune the level of alerts you get to an acceptable level.
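Sticking with the Django approach from the earlier answer, a hypothetical view honouring such parameters might look something like this (LogEntry and render_feed are made-up names standing in for your model and however the feed itself is rendered):
SEVERITY_ORDER = ['DEBUG', 'INFO', 'WARN', 'ERROR', 'FATAL']

def log_feed(request):
    # ?severity=ERROR shows ERROR and FATAL only; ?category=batch narrows further.
    min_severity = request.GET.get('severity', 'DEBUG').upper()
    if min_severity not in SEVERITY_ORDER:
        min_severity = 'DEBUG'
    allowed = SEVERITY_ORDER[SEVERITY_ORDER.index(min_severity):]

    entries = LogEntry.objects.filter(severity__in=allowed)
    category = request.GET.get('category')
    if category:
        entries = entries.filter(category=category)

    return render_feed(request, entries[:50])  # build and return the RSS here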
In this case, it's more of a manager's dashboard: how much work was put into support today, is there anything pressing in the log right now, and for when we first arrive in the morning as a measure of what went wrong with batch jobs overnight.
Okay, I decided how I'm gonna handle this. I'm using the timestamp field for each column and grouping by day. It takes a little bit of SQL-fu to make it happen since of course there's a full timestamp there and I need to be semi-intelligent about how I pick the log message to show from within the group, but it's not too bad. Further, I'm building it to let you select which application to monitor, and then showing every message (max 50) from a specific day.
That gets me down to something reasonable.
I'm still hoping for a good answer to the more generic question: "How do you syndicate many important messages, where missing a message could be a problem?"
