How to add a pause between requests? - paw-app

I have a set of requests I run together as a group. One of them starts some async operations on the server. I need to insert an n-second pause between this request and the next to give those async operations time to complete. How can I do this?

Unfortunately, it isn't possible yet with Paw, though we're going to bring a proper testing flow (with assertions, request flow, waits, etc.) in a future release.
As a temporary workaround, you could add a dummy request to a "sleep" endpoint such as http://httpbin.org/delay/3, which simply waits 3 seconds before responding.
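Outside Paw, the same effect is just a client-side pause between calls. A minimal sketch, assuming the requests are plain zero-argument callables (this is illustrative glue, not Paw's API):

```python
import time

def run_with_pause(request_fns, pause_seconds):
    """Run a list of zero-argument request callables in order,
    pausing between each one to give server-side async work time to finish."""
    results = []
    for i, send in enumerate(request_fns):
        results.append(send())
        if i < len(request_fns) - 1:
            time.sleep(pause_seconds)  # the n-second pause from the question
    return results
```

Within Paw itself, the dummy request to http://httpbin.org/delay/3 achieves the same thing: that endpoint blocks for 3 seconds server-side before responding.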

Related

How to call .xqy page from other .xqy page in MarkLogic?

Can I call a .xqy page from another .xqy page in MarkLogic?
There are several ways to execute another .xqy, but the most obvious is probably xdmp:invoke. That calls the .xqy, waits for its results, and returns them on the spot in your code. You can also call a single function using the combination of xdmp:function and xdmp:apply. You could also mess around with xdmp:eval, but that is usually a last resort.
Another strategy could be to use xdmp:http-get, but then the execution runs in a different transaction, so it would always commit. You would also need to know the URL of the other .xqy, which requires knowing whether, and how, URLs are rewritten in the app server (they are not by default).
Running another .xqy without waiting for its results is also possible, with xdmp:spawn. That is particularly useful for dispatching heavy loads such as content processing; dispatching batches of 100 to 1,000 documents is quite common. Keep an eye on the task queue size, though.
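A minimal XQuery sketch of the three invocation styles above; the module paths, QName, and prefix are hypothetical and would need to match your own app:

```xquery
(: Synchronous: run another module, wait, and get its results in-line.
   External variables are passed as alternating QName/value pairs. :)
let $result := xdmp:invoke("/modules/other.xqy",
                           (xs:QName("input"), "some-value"))

(: Call one function from a library module (the prefix must be declared). :)
let $f := xdmp:function(xs:QName("lib:process"), "/modules/lib.xqy")
let $applied := xdmp:apply($f, "some-value")

(: Fire-and-forget: queue a module on the task server. :)
return xdmp:spawn("/modules/heavy-batch.xqy")
```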
HTH!

Okay to POST with an empty body, pass data in the response?

I want a client to be able to request (via HTTP) whichever document is at the head of a server's output queue, with the understanding that if retrieval is successful, the document will then automatically be deleted from the queue. There is never more than one client per server, but the client could be multithreaded. There is no random access to the queue; only the head item can be retrieved (and deleted). I have not found this scenario discussed either here or elsewhere on the web. Here are the various approaches I can think of:
(1) The client could send a GET request. But GET is not supposed to have side effects, so this doesn't seem like a good idea.
(2) The client could send two requests, a GET to retrieve the document at the head of the queue and a DELETE (with an empty or ignorable URL) to delete the document at the head of the queue. But this requires two calls, which could cause various problems, especially if more than one thread/process in the client is trying to retrieve files.
(3) The client could send a POST request with an empty body; if there is a document at the head of the queue, the server will return a response whose body contains the document, and will also delete the document from the queue. This is somewhat counterintuitive in that it doesn't match the mental model of posting data and receiving a simple return code, but otherwise I like it. I'm not worried about the response getting lost in transit and the document going missing; I expect the connection to be safe enough to prevent this.
It would be nice if there were another HTTP method to handle this situation, but since there isn't, I think (3) is the best approach. Agree? Disagree?
UPDATE: Added (4) after reading Dan675's post below.
(4) The client could send a DELETE request, to which the server could send a response with the document in the body (and delete the document from the queue, of course). Again, this is slightly counterintuitive (you don't usually say "delete the item on top of the stack for me, please" when you want to retrieve it), but would work.
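Whichever verb is chosen in (3) or (4), the server side boils down to one atomic retrieve-and-delete on the head of the queue. A minimal sketch of that semantics (the class and status-code pairs are illustrative, not a real framework):

```python
import threading
from collections import deque

class DocumentQueue:
    """Sketch of options (3)/(4): retrieval and deletion of the head document
    happen in one atomic step, so a single POST (or DELETE) request is safe
    even with a multithreaded client."""

    def __init__(self):
        self._items = deque()
        self._lock = threading.Lock()

    def push(self, doc):
        with self._lock:
            self._items.append(doc)

    def pop_head(self):
        """Return (status, body): 200 with the head document, 404 if empty."""
        with self._lock:
            if not self._items:
                return 404, None
            return 200, self._items.popleft()
```

The lock is what makes the single-call design safe: two client threads can never receive the same document.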
It should be done in two calls: the first to GET it, then one to DELETE it.
If the DELETE succeeds, the client's request is valid; otherwise, just treat the whole request as failed and try to get whatever is at the top of the queue again. This will cause some additional overhead due to failed requests, but I would not recommend doing either of the other options.
I guess another way of doing this would be to first do a PUT to mark the topmost item as 'reserved' in some way, then do a GET and DELETE. In doing it this way, it may be possible to traverse this server-side queue and look for the topmost item that is not 'reserved'.
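That reserve-then-retrieve-then-delete flow can be sketched like this; the token scheme is an illustration of the idea, not a standard:

```python
import threading
import uuid
from collections import deque

class ReservableQueue:
    """Sketch of the reserve/GET/DELETE flow: PUT reserves the first
    unreserved item, GET reads it, DELETE removes it once safely received."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = deque()  # each entry is [doc, reservation_token or None]

    def push(self, doc):
        with self._lock:
            self._items.append([doc, None])

    def reserve(self):
        """PUT: mark the topmost unreserved item and hand back a token."""
        with self._lock:
            for entry in self._items:
                if entry[1] is None:
                    entry[1] = uuid.uuid4().hex
                    return entry[1]
            return None  # nothing left to reserve

    def get(self, token):
        """GET: read the reserved item without removing it."""
        with self._lock:
            for doc, tok in self._items:
                if tok == token:
                    return doc
            return None

    def delete(self, token):
        """DELETE: remove the reserved item once the client has it."""
        with self._lock:
            for i, entry in enumerate(self._items):
                if entry[1] == token:
                    del self._items[i]
                    return True
            return False
```

A real server would also expire stale reservations so a crashed client doesn't block the queue forever.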

Strategy for sending updates to a view while performing a large operation

This might be a very easy one for many of you, but I'm stuck trying to figure out a strategy for rendering updates to the view while the server is performing a time-consuming operation.
Here is the case: I have a view with a button that says "Approve". This needs to call some action or background process that performs a heavy operation, which might take 20-30 seconds.
During that time I want to update the view with some kind of processing GIF animation and append messages like "performing operation A", "performing operation B", and so on.
What is the best strategy for achieving this?
Here's an answer you might not like: don't even bother trying to get progress updates from the server.
Take a look at this task from the commercial point of view. The purpose of providing feedback is to give users the warm and fuzzy feeling that they have not been forgotten and that the task they asked their computer to do has not been abandoned. How much cost are you willing to incur to deliver this feature?
The simplest such device is the humble progress bar. Even though most experienced users would not trust it to tell them when a task will finish, they do still trust that if it's moving, something is happening.
My solution would be to post off an async operation to the server to kick the operation off, then show a progress bar that is entirely managed by JavaScript. It starts off rapidly but slows down as it progresses, such that it never actually completes but does appear to be making progress. When the async operation completes, briefly show the progress bar reaching completion, then remove it.
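That never-quite-finishing curve is easy to model with an exponential approach to 100%; a sketch of the math (the time constant is arbitrary, and in the real UI this would run in JavaScript):

```python
import math

def fake_progress(elapsed_seconds, time_constant=10.0):
    """Purely client-side progress value in [0, 1): rises quickly at first,
    then slows, approaching but never reaching 100%. When the real async
    operation completes, the UI snaps the bar to 100% and removes it."""
    return 1.0 - math.exp(-elapsed_seconds / time_constant)
```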
The cost of other solutions is much, much greater, but their benefit over this approach is almost negligible, if not actually negative; after all, they are complex to implement and more likely to go wrong.
I must admit that I haven't tried it myself, but I am willing to. I think SignalR could be worth a try.
While this sounds good in theory, I would make this a background operation that then sends a notification to the user via email or SMS when the task is done.
Otherwise you need to:
- set up a cache (not the ASP.NET cache) on the server to store the current state of the long-running process
- set up a JS timer to poll the server
- update the UI with the current state stored in the cache.
Not impossible, but a lot of moving parts.
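Those three moving parts can be sketched together; here the cache entry, the background job, and the JS polling timer are all stood in for by plain Python (the step names echo the question):

```python
import threading
import time

progress_state = {"step": "queued"}   # stands in for the server-side cache entry
state_lock = threading.Lock()

def long_running_task():
    """The background operation, updating its status as it goes."""
    for step in ("performing operation A", "performing operation B", "done"):
        with state_lock:
            progress_state["step"] = step
        time.sleep(0.05)  # stand-in for real work

def poll_until_done(interval=0.02, timeout=2.0):
    """What the JS timer would do: poll the state and 'update the UI'
    (here, collect the steps) until the job reports it is done."""
    seen = []
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with state_lock:
            step = progress_state["step"]
        if not seen or seen[-1] != step:
            seen.append(step)  # this is where the UI would be updated
        if step == "done":
            break
        time.sleep(interval)
    return seen
```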

When to update index with Lucene.NET? Async or not?

Is it generally fast enough to make simple updates synchronously? For instance, with an ASP.NET web app, if I change a person's name... will I have any issues just updating the index synchronously as part of the "Save" mechanism?
OR is the only safe way to have some other asynchronous process to make the index updates?
We do updates both synchronously and asynchronously, depending on the kind of action the user is performing. We implemented the synchronous indexing so that it uses the asynchronous code and just waits a while for its completion. We only wait 2 seconds, which means that if it takes longer, the user will not see the update, but normally they will.
We configured logging so we would be notified whenever the "synchronous" indexing took longer than we waited, to get an idea of how often that happened. We hardly ever go over the 2-second limit.
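That wait-briefly-for-the-async-update pattern can be sketched like this; the names and the simulated indexing delay are stand-ins, not Lucene.NET code:

```python
import concurrent.futures
import time

executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def index_update(doc_id, duration):
    """Stand-in for the asynchronous index update."""
    time.sleep(duration)
    return doc_id

def save_and_wait(doc_id, duration, wait_seconds=2.0):
    """Kick off the async update, then block briefly so fast updates look
    synchronous to the user; slow ones simply keep running in the background."""
    future = executor.submit(index_update, doc_id, duration)
    try:
        future.result(timeout=wait_seconds)
        return "indexed"   # the user sees the fresh index immediately
    except concurrent.futures.TimeoutError:
        return "pending"   # log this; the update finishes asynchronously
```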
If you are using a full-text session, you don't need to update indexes explicitly; the full-text session takes care of indexing updated entities.

How do I increase information in ASP.NET Trace

I've made some performance improvements to my application's backend, and to show the benefit to the end users of the GUI, we've been using the Trace.axd page to take timings. (The frontend is .NET 1.1 and the backend is Java, connected via web services.)
However, these timings show no difference between the old and the new backends.
By putting a breakpoint in the backend and holding a request there for 30 seconds, I can see from Trace.axd that the POST is taking 3ms, and the GET is taking 4s. I'm missing about 26s...
The POST is where the performance improvement should be, but the timing on the Trace page seems to only include the time it takes to send the request, not the time it takes to return.
Is there a way to increase the granularity of the information in the trace to include the whole of the request? Or is there another way to take the measurements I need?
OK, I kind of got what I wanted in the end. The problem is that the IIS trace doesn't include the time the POST takes to return.
I found that I could use Trace.Write() to add custom entries to the trace log, and even add a category, using Trace.Write(string category, string message).
Adding a call to Trace.Write() in my code that executes after the POST has completed gives me a better figure.
Still, it's not ideal, as it's custom, and it's down to me to put it as near to the end of the POST cycle as possible.
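Whatever the framework, the underlying fix is to take the closing timestamp only after the response has been completely handled. A minimal sketch of that bracketing (the trace list is a stand-in for ASP.NET's Trace.Write):

```python
import time
from contextlib import contextmanager

trace_log = []  # stand-in for the ASP.NET trace output

@contextmanager
def trace_timing(category):
    """Time everything inside the block, mirroring a Trace.Write(category, ...)
    pair placed before the request is sent and after the response is fully read."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        trace_log.append((category, elapsed))
```

The point is to wrap both the send and the full read of the response body in one block, so the closing entry lands after the POST has actually returned.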
I'm not sure how you're making the requests on the .NET side, but I'll assume there's an HttpWebRequest involved somewhere. I'd expect HttpWebRequest.GetResponse() to return as soon as it has received the response headers. That way, you can start processing the start of a large response while the rest is still downloading. If your trace messages are immediately before and after a call to GetResponse, you therefore wouldn't see the whole execution time of the backend. Can you add a trace message immediately after you close the response?
It is abnormal that the trace output doesn't show all the time you spent at the breakpoint. Did you check the total time column to see if it matches the time you spent in the request? Don't forget that one of the columns only shows the time elapsed since the preceding trace statement.
If you want more granular data in the trace output, you can add your own. The TraceContext class has two methods, Warn and Write, that add lines to the output (Warn adds them in red).
The TraceContext is accessible from every page or control: just use this.Trace.Warn() or this.Trace.Write() (and I think it is also accessible through the HttpContext class).
