I'm fairly new to XProc and XPath, but I've been asked to solve the following problem:
Step 2 receives data from step 1 via a secondary port. Step 2 contains a p:for-each, which saves a document into a folder for each element that passes through the for-each.
(Part A)
These documents (let's say I receive 6 documents from the for-each) sit in the same directory, get filtered by p:directory-list, and are eventually combined into one single document containing the full path of every document the for-each created. (Part B)
So far, so good.
The problem is that Part A seems to be too slow: Part B already tries to read the data Part A stores while the directory is still empty. In other words, I have a performance / synchronization problem.
And now comes the question:
Is it possible to let the pipeline wait and continue as soon as a certain event occurs?
That's what I'm imagining:
Part B waits as long as necessary, until the directory that Part A stores the data in is no longer empty. I read something about dbxml:breakpoint, but unfortunately I couldn't find more information than the name and a short description of what it seems to do:
Set a breakpoint, optionally based upon a condition, that will cause pipeline operation to pause at the breakpoint, possibly requiring user intervention to continue and/or issuing a message.
It would be awesome if you know more about it and could give an example of how it's used. It would also help if you know a workaround or another way to solve this problem.
UPDATE:
After searching Google for half an eternity, I found SMIL, whose timesheets seem to do the trick. Does anyone have experience with throwing XML / XProc and SMIL together?
Back towards the end of 2009 I proposed the concept of 'Orchestrating XProc with SMIL' http://broadcast.oreilly.com/2009/09/xproc-and-smil-orchestrating-p.html in a blog post on the O'Reilly Network.
However, I'm not sure that this (XProc + Time) is the solution to your problem. It's not entirely clear to me from your description what's happening. Are you implying that you're trying to write something to disk and then read it in a subsequent step? You need to keep stuff in the pipeline in order to ensure you can connect outputs to subsequent inputs.
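If the documents Part A produces can stay in the pipeline, the synchronization problem disappears, because the connection between output and input defines the ordering. A minimal XProc 1.0 sketch, where the step name and the select expression are assumptions (your real pipeline will differ):

    <p:for-each name="part-a">
      <p:iteration-source select="//item"/>  <!-- assumed selector -->
      <p:output port="result" sequence="true"/>
      <!-- whatever processing produces each of the 6 documents -->
      <p:identity/>
    </p:for-each>

    <!-- Part B consumes the sequence directly: no p:store, no
         p:directory-list, and nothing to wait for -->
    <p:wrap-sequence wrapper="collection"/>

If you really do need the files on disk as well, you can still p:store each one inside the for-each and pass the stored URIs along as that step's output.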
I want to know which tables are being read by a query:
for each Customer where CustomerID = 12345.
Eventually this customer will be found, but Progress must 'read' many tables before getting to customer 12345.
How do I know exactly which tables are read (by CustomerID) before getting to customer 12345?
NOTE: I do not have access to modify the code being run for this selection. Ideally I would run a separate piece of code, executed at the same time as the customer query above, to track the reads.
EDIT: More clearly - can you track reads from a given program (.p) or process ID and output either a RECID or the primary key to a file?
I understand the information is being read off the disk and probably stored in a database buffer. So how would I get at the information in the database buffer?
You seem to be mixing up a few different things.
In a situation like your example, where you FIND a specific record in one, and only one, table, there is just a single record read. Progress will find that record by first scanning a relevant index. That might be 2 or 3 "logical reads" of the b-tree to get to the proper node. The record block and index blocks may, or may not, be read from disk - that depends on what has happened previously.
There are "Virtual System Tables" available that can tell you how many READ operations take place against a particular table or index. But they do not trace the specific ROWID or other identifying data. _TableStat and _IndexStat are aggregates for all users on the system, _UserTableStat and _UserIndexStat are specific to a particular user's activity. You do need to set the -tablerangesize and -indexrangesize parameters adequately to take advantage of these.
If you have enabled the table and index statistics then you can use a tool like ProTop - http://protop.wss.com to get insight into this activity. Or you can write your own code.
OpenEdge Auditing does not track reads. That would be prohibitively expensive.
It's probably not really a good idea but, in theory, you could write FIND triggers for the tables you are interested in. That doesn't require access to the application source but you would need a development license. It will probably kill performance to do this though - so unless this is a non-production test environment that you just want to fiddle with I wouldn't really do that.
You mention wanting to know how you got to that point. That sounds more like you might need to have a "4gl trace". One easy way to get the stack trace of a running process is to execute:
$DLC/bin/proGetStack PID (UNIX)
or
%DLC%\bin\proGetStack PID (Windows)
This command will generate a "protrace.pid" file containing a 4gl stack trace and other interesting information.
There are also more complicated ways to get that info like using PROMON and the "client statement cache" or setting various log entry types at session startup. But proGetStack is pretty convenient and requires no code or scripting changes.
Some great options from Tom above, and all of them may be relevant to you. The one he only skirts around is the logging options. I feel obliged to expand on this because I'm giving a talk on it in a couple of weeks!
Assuming you are running a modern version of Progress, or even 10.2B08, you have client logging available to you. Start your session with these additional options:
-clientlog "\somefolder\somefile.txt"
-logentrytypes "QryInfo:3"
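A complete startup might look something like this (the executable, database, procedure and log file names are just placeholders for your environment):

    prowin32.exe -db mydb -p start.p -clientlog "c:\temp\qryinfo.log" -logentrytypes "QryInfo:3"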
This will log all the info about all the queries in your session to the file you specified above. If you navigate to the point in the system where you want to analyse your query, then empty the logfile and save it, you can run the offending query and see all the detail you need.
The output tells you all sorts of useful info, including the number of reads on each table, compared with the number returned to the user. You also get the index selected.
Using Tom's advice and/or this will get you what you need.
With "allow_versions" set to "FALSE" or to "TRUE", how does Swift respond, in both cases, to the scenario where a file is being overwritten while a delete request comes in simultaneously (overwrite first, then deletion)?
Please share your thoughts.
Many thanks!
The timestamp assigned to the request coming in to the proxy is what will ultimately decide which "last" write wins.
If you have a long running upload and issue the delete during, the tombstone will have a later timestamp and will eventually take precedence even if the upload finishes after.
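For example (the timestamps are made up for illustration):

    t=1000.00001  PUT /v1/AUTH_test/c/obj     <- long-running upload starts
    t=1000.00002  DELETE /v1/AUTH_test/c/obj  <- tombstone gets the later timestamp
    t=1000.00099  the PUT finally completes   <- its data carries the older timestamp

    result: the tombstone wins, and the object data is eventually removed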
When using the container versioning feature, overwriting an object in a versioned container will cause the object data to be COPY'd off the current tip before the PUT data is sent to the storage node with the assigned timestamp. For deletes in a versioned container, the "previous version" is discovered at the time the request is made (and is subject to eventual consistency in the container listing), but it is only deleted from the archive once it has been copied into the current location for the object.
More information about object versioning is available here:
http://docs.openstack.org/developer/swift/overview_object_versioning.html
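Versioning is enabled per container with the X-Versions-Location header, as those docs describe; a rough sketch (the token, endpoint and container names are placeholders):

    # create the archive container, then point the live container at it
    curl -X PUT -H "X-Auth-Token: $TOKEN" http://swift.example/v1/AUTH_test/versions
    curl -X PUT -H "X-Auth-Token: $TOKEN" \
         -H "X-Versions-Location: versions" \
         http://swift.example/v1/AUTH_test/container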
Well, here comes a quick summary. It's still a very high-level view, but I hope it helps in understanding how this works under the hood.
The diagram (linked below) sets two simultaneous scenarios (A and B) against the enabling/disabling of the Swift object versioning feature. The outcome of each scenario is shown in the diagram.
Download the diagram.
Please share your thoughts, if any.
I'm trying to figure out a way to reflectively look at the code that I've executed in a QTP script. The idea here is, when I encounter a crash, have a recovery scenario that captures an error message and sends it to QC as a defect. If I can see the code I've already executed, then I could also include the steps to reproduce the defect, in theory.
Any thoughts?
Option 1: Movie recording and playback
QTP11 (finally) has a feature for demands like that: Take a look at Tools, Options, Run, Screen capture. "Save movie to results" there allows you to record exactly what happened. The resulting movie is part of the run result, i.e. if you submit a bug with this run result, the movie will be included.
I would not use such a feature by itself, because you would have to always record the movie just to have it in case of an error. You would end up with big run results that contain movies nobody wants to see, just to have them in the rare cases that an error occurred and a defect is created. But:
In this regard, HP has done the job right: you can select in the dialog to save the movie to the results only if an error occurs. And, to avoid saving the whole boring part of the test execution that did not contain errors while still seeing the critical steps, you can specify to keep only the last N kB of the movie, so you will always see what led up to the error.
Option 2: "Macro" recording and playback
You could, in theory, create your own playback methods for all test objects (registering functions via RegisterUserFunc), and make them save the call info into some data structure before doing the playback step (by calling the original playback function).
Then, still in theory, you could create a nice little playback engine that iterates over that data structure and executes exactly the playback steps that were recorded previously.
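A rough VBScript sketch of the recording half (the WebButton/Click pair is just an example; you would register such a wrapper for every test-object class and method you care about):

    Dim recordedSteps : recordedSteps = ""

    Function LoggedClick(obj)
        ' remember the step before playing it back
        recordedSteps = recordedSteps & obj.ToString() & ".Click" & vbCrLf
        obj.Click  ' inside a registered function this calls the original Click
    End Function

    RegisterUserFunc "WebButton", "Click", "LoggedClick"

The replayer would then iterate over recordedSteps (or a richer data structure) and execute the recorded steps one by one.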
I've done similar stuff to repeat bundles of playback steps after changing the AUT config, iterating a given playback over various configs without changing the code that does the original playback.
But hey, this is quite some work, and lots of things can go wrong: the AUT must be in the same initial state upon playback as during the "recording of playback". This includes all relevant databases and subsystems of your testing environment. Usually, this is not an easy task in large projects and not worth the trouble (we are talking about re-creating the original initial config just to reproduce one single bug).
So I suggest you check out the movie feature, i.e. option 1. It does not play back the steps in the AUT, but it shows what happened during the original playback -- exactly.
On a new website, I have a huge form (meaning really big, it needs at least 15-20 minutes to finish), which configures the whole website for one client for the next year.
It's distributed across several tabs (it's a wizard). Every time we go to the next tab, it makes a regular (non-AJAX) call to the server that generates the next "page". The information entered so far is stored in the session (an object with a custom binder).
Everything was working fine until we tested it today with real data. Real data requires thought and work to find the correct elements, and that takes time.
The problem we get is that the View receives a Model that is partially empty. The session duration is set to 1440 minutes (and in IIS too). For now, what I know is that I get a NullException the first time I try to access the Model in my view.
I've been checking the controller for about an hour, but it's just impossible that it returns a null model. If I enter all the data very fast, I don't have any problem (but then it's random data).
For now I have only managed to reproduce this problem on the IIS server, and I'm checking the elmah logs to debug it, so it's not easy to reproduce.
Do you have any idea how I should debug this? I'm a little lost here.
I think you should assume the session does not offer reliable persistence. I am not sure about the details, but I guess it starts freeing some elements when it exceeds its memory limit.
You will be safer if you use a database to store that information, or you could introduce your own implementation for persisting state.
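For example, moving the session out of process is mostly a web.config change (the connection string here is a placeholder, and everything you put in the session must then be serializable):

    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=.;Integrated Security=SSPI"
                  timeout="1440" />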
In addition to the answer provided by Ufuk:
You can easily send an AJAX request every minute that does nothing at all; by doing this the session won't expire, and the site will continue to run for extended periods.
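A minimal sketch, assuming jQuery is available and that a trivial KeepAlive action exists on the server (both are assumptions; the URL is a placeholder):

    // ping the server every minute so the session timeout is reset
    setInterval(function () {
        jQuery.get("/Home/KeepAlive");
    }, 60000);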
The problem was that the session didn't have enough space, I think. I temporarily resolved my problem by restarting the application pool. I'm still searching for a solution that doesn't imply changing all this code. Maybe another session state mode, but then I need to make my models serializable.
I've received a rush project (asp.net c# framework v2), of course due on Monday morning. It's a very simple project -- add a "Request Quote" page to an existing site. Basically, collect some info and then email someone at the company the contents of the form, and show the user a "Thank You" page. Simple as pie...until I just read the last requirement.
Every form submission is supposed to have a "Confirmation Number", starting with 10000, and each one successively increments the number by one.
Sounds so easy, right? Well, I don't have a database in this site. No idea why - I really don't know anything else about the site other than the info I need to fulfill the requirements.
So, with this in mind, and realizing that this is less than optimal, I guess my only solution is to make a text file (yuck), read it in, get the last number used, add one, and write that back to the text file. Which of course leaves a lot to be desired with respect to "locking" of the file so I don't get duplicate numbers.
Anyone have any suggestions for me? I'm open to anything at this point...
First, I'm assuming each process has some sort of unique identifier available. This could be a process ID, or something else that is guaranteed to be unique without race conditions. Then:
1. Write this value to a file with the same name, so that you don't have a race creating that file.
2. Move it to a file called "lock" if that doesn't exist, and check that this happened successfully by looking at the contents.
3. Now read the value from the confirmation number file.
4. Check that the "lock" file has your unique identifier in it -- this ensures there wasn't a race from two people trying to move their file to "lock".
5. If it is yours, write back the incremented number and your unique ID to the confirmation number file.
6. Check again that you hold the lock and that the confirmation number file has your ID.
7. Then remove the lock file.
Should this fail, just sit around waiting for the lock file to disappear, then try again.
This should allow you to write to a file in a race-free manner, and it ensures that the sequence number always increases by one.
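A rough C# sketch of that scheme (the file names and retry delay are arbitrary choices; File.Move throws if the destination already exists, which is what provides the lock):

    using System;
    using System.IO;
    using System.Threading;

    public static class ConfirmationCounter
    {
        public static int Next(string dir)
        {
            string counterFile = Path.Combine(dir, "confirmation.txt");
            string lockFile = Path.Combine(dir, "lock");
            string myId = Guid.NewGuid().ToString();

            while (true)
            {
                string myFile = Path.Combine(dir, myId);
                File.WriteAllText(myFile, myId);   // unique name, so no race creating it
                try
                {
                    File.Move(myFile, lockFile);   // fails if someone else holds the lock
                }
                catch (IOException)
                {
                    File.Delete(myFile);
                    Thread.Sleep(50);              // wait for the lock to go away, then retry
                    continue;
                }

                if (File.ReadAllText(lockFile) == myId)  // confirm we really own the lock
                {
                    int next = 10000;
                    if (File.Exists(counterFile))
                        next = int.Parse(File.ReadAllText(counterFile)) + 1;
                    File.WriteAllText(counterFile, next.ToString());
                    File.Delete(lockFile);         // release the lock
                    return next;
                }

                Thread.Sleep(50);                  // someone beat us to it; try again
            }
        }
    }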
You could keep track of the confirmation number in the Windows Registry, but given the option it's probably easier to use a simple text file as you suggest. Make sure you clean up (i.e. Dispose) the file handles each time to release any file locks.
I would think of XML as a good solution.
This post might make XML even easier for you.