Classic ASP: multiple server-side asynchronous calls [closed] - asp-classic

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 2 years ago.
I have a classic ASP page that performs a lengthy check: it calls the same VBScript function 20 times, each time with a different argument. The function returns either True or False depending on the argument, and each call takes 5-6 seconds, so the whole run takes roughly 20 × 5 = 100 seconds. That is not an acceptable time for the user to wait. I improved things a bit by calling Response.Flush() after each function call so the intermediate results are published to the browser immediately. BUT I believe the best solution would be to run the function 20 times asynchronously and collect all the responses into an array. I have found that there is a way to make asynchronous calls in server-side ASP, much like AJAX on the client side, but all the examples make only a single async call using MSXML2.ServerXMLHTTP or Microsoft.XMLHTTP.
XMLHTTP classic asp Post
https://gist.github.com/micahw156/2968033
How do I fire an asynchronous call in asp classic and ignore the response?

I think you need to rethink your approach to this. Instead of calling the function in server-side ASP code, why not make the calls from JavaScript? You could display all 20 lines in a table with the arguments shown, and each row would be a separate async API call to your function. The endpoint could be a simple .asp page which takes whatever arguments are necessary to run the function and returns the result.
Then your html table can be updated as the function returns.
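A minimal sketch of that idea, assuming a hypothetical check.asp endpoint that takes an arg parameter and returns "true" or "false", and table rows with ids row-0 through row-19 (all of these names are assumptions, not part of the original question):

```javascript
// Hypothetical endpoint: check.asp?arg=<value> returns "true" or "false".
function buildCheckUrl(arg) {
  return 'check.asp?arg=' + encodeURIComponent(arg);
}

// Fire all checks in parallel; each table row updates as its result arrives,
// so the slowest call no longer blocks the other nineteen.
function runChecks(args) {
  args.forEach(function (arg, i) {
    fetch(buildCheckUrl(arg))
      .then(function (res) { return res.text(); })
      .then(function (text) {
        // Row ids "row-0" ... "row-19" are an assumption for this sketch.
        var cell = document.getElementById('row-' + i).cells[1];
        cell.textContent = (text.trim() === 'true') ? 'OK' : 'Failed';
      });
  });
}

// Only kick off the requests in a browser; the URL helper works anywhere.
if (typeof document !== 'undefined') {
  runChecks(['arg1', 'arg2' /* ... up to 20 arguments */]);
}
```

With this layout the total wait is governed by the slowest single call (5-6 seconds) rather than the sum of all twenty.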

Related

ASP.NET AJAX Progress Indication with Async Server Calls

I know this question has been asked before... a lot, in fact. But I can't seem to get the wheels turning on this thing. To be honest I'm a bit lost on mating client-side and server scripting, and the examples I've seen are either far too simplistic or way above my head.
Goal:
My goal is to take a long-running process I've written in VB.NET on the server, which happens to be loop-based, calculate a percentage complete (I know the range of the index values), and relay that back to the user by some means.
Idea:
As the loop iterates I want to pass back up to the client an integer of percent complete or poll it from the client.
What I've done:
Is very limited; I have little to no experience here. I've been doing a lot of googling and I've played with the UpdatePanel and UpdateProgress controls from AJAX a bit, but this method so far seems to lean towards an indicator, like a GIF.
As always any help is appreciated, and if I can answer any questions I will.
Have you considered using an inline frame (iframe) to host your long-running process and report status back to the client via the Response object of the long-running .aspx page?
If so, then I suggest you read Easy incremental status updates for long requests.
The example uses a button as the display for the progress after the user clicks it, but you could direct output to another DOM element if you wish.
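The incremental-status technique boils down to the server flushing small script chunks that call a function in the parent page as the loop advances. A rough client-side sketch (the element id and function name are assumptions for illustration):

```javascript
// Percentage helper: i loop iterations done out of total.
function percentComplete(i, total) {
  return Math.round((i / total) * 100);
}

// The long-running page flushes chunks like
//   <script>parent.updateProgress(42);</script>
// after each iteration; this is the receiving function in the parent page.
// The "progress" element id is an assumption for this sketch.
function updateProgress(percent) {
  document.getElementById('progress').textContent = percent + '% complete';
}
```

Since you know the range of the index values, percentComplete gives you the integer the server should emit at each iteration.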

How to get the value of hidden field with xquery in Marklogic? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I have a hidden field in an .xqy page. Now I want to get its value in the same page through XQuery code. The page is not refreshing, and I do not want to use JavaScript. Is there any way to get the value of the hidden field with XQuery without submitting the page?
Instead of simple HTML output you could use XForms. Some extensive documentation about XForms is available at http://en.wikibooks.org/wiki/XForms
You can then use an XForms processor (e.g. XSLTForms or betterForms), which can run server- and client-side. This allows you to get the value of any field (not just hidden fields) with pure X-technologies. XForms also includes MVC by default, which is quite nice. However, depending on your project and the amount of code already existing you might have to change a lot, as it is a complete technology. But normally this is the way to go if you want to avoid JavaScript and use X-technologies instead.
You would need XQuery capabilities within the browser. MarkLogic runs server-side, so needs a round-trip (e.g. a submit). But you could have a look at XQiB: http://www.xqib.org/
HTH!
This question is vague enough to be interpreted several different ways: you will get better answers if you ask better questions.
But I'll be a little more optimistic than Geert. If the form field is in an HTML form built by a server XQuery module, the data for it must be available to that module while it is building the form. Arrange your code so that you can use it for whatever is needed, before returning the completed page.
You might be looking for some magical way to write XPath against the half-complete results of the query that's actually running. That isn't possible without some work on your part. You could arrange your code so that the form or the hidden field is a node returned by some other function, and write XPath relative to it to retrieve its value. If the hidden field was built using a request parameter from the previous HTTP request, you could call xdmp:get-request-field again. The point is to arrange your code so that you have the data you need, when you need it.
The XQuery code at https://github.com/mblakele/cq might have some useful examples. It plays all sorts of games with form fields, both with and without JavaScript.

Caliburn Micro - is it possible to intercept calls to execute a command?

I want to add error handling to my view-models so that when a command is executed and an exception thrown, the error is handled gracefully and a modal dialog displayed.
I've got this working but my approach is too wordy. Errors are trapped within a command and then published via an IObservable. A behavior subscribes to the errors, creating an appropriate view model and passing it to the WindowManager. While it works, I'd prefer something more declarative.
Instead I want to decorate or intercept calls to commands (bound to a button) and provide generic error handling. The try-catch might call out to a method on the view model or command that is decorated with a Rescue attribute.
I understand this is possible within Caliburn but can it be done with Micro? Perhaps there's an alternative approach?
Have a look at this question I asked on SO and subsequently answered with help via the CM codeplex forum.
I slightly modified the RescueAttribute of this CM filters implementation to allow the error handling routine to be executed as a coroutine.
This in combination with the ShowModal IResult available in some of the samples should get you what you want.

Using .aspx pages as an HTML template outside of an ASP.NET 3.5 HTTP request

I need to generate a block of HTML for use by an asynchronous operation triggered by an HTTP request (I am calling the Facebook API in response to an HTTP request, with the HTML block as a parameter). I already have an .aspx page that generates the HTML required, and would like to reuse that code.
I see three options, none of which I want to do:
Re-write the functionality currently in the .aspx page into a .NET function that returns the HTML. I don't want to spend the time re-writing it unless absolutely necessary. Also, the .NET code to produce the HTML will be much less maintainable than the .aspx markup to do so (yes, even with XML literals).
When I need the block of HTML, make an HTTP request to the .aspx page on the local server. The inefficiency of this does not concern me, but the design compromise does. Because of how the application is structured, I would have to litter my .aspx code with:
if (localRequest) {
    doOneThing();
} else {
    doTheOtherThing();
}
which I don't want to do.
Create an ASP.NET application host to spit out these chunks of HTML. I'd imagine that this would improve on the efficiency of option 2, but not the complexity.
Are there other alternatives? The ideal would be instantiating the .aspx page class and executing it with a mocked up HttpRequest or HttpContext. Can this be done, and is it worth the hassle?
There are two related but distinct parts to this problem:
a) how do you ensure that an asynchronous operation has a valid HttpContext?
b) how can you get the HTML output of an ASPX execution returned as a string?
For (a), it depends on how you're invoking the async operation. Unfortunately, in .NET there are quite a few ways to do async operations. But if you want to propagate HttpContext to the async code, there's only one good option: the Event-based Asynchronous Pattern. Although IMHO the event-based async pattern has some drawbacks (e.g. no "wait" operations, hard to sync multiple threads, need to refactor your code, etc.) it does a really cool thing of integrating cleanly with ASP.NET async pages and ensuring that the right context is set up when your callback gets control.
So in other words, propagating context only works (without doing a lot of extra work, that is) if you're playing by the rules set up for ASP.NET Async Pages. Here's an article on async pages if you're not familiar. Here's another post that is useful. In a nutshell, you split page processing into three stages:
1) set up for long-running operations
2) kick off long-running operations (e.g. to get your expensive data)
3) ASP.NET will call your Page_PreRenderComplete handler once all long-running operations are complete. From there, you can bind your data and render your HTML.
What may make this hard is that often you'll need to re-factor existing code since you need to segregate fetching the data from binding the data.
Now, on to (b) above: once you have context, the other question is how to get your page output into a string. There are a few ways to do this too, but perhaps the easiest is to encapsulate the stuff you want to output into a user control (.ASCX) and then follow the instructions in this blog post: http://stevesmithblog.com/blog/render-control-as-string/. See this post if you need data binding too.

Design Decision - Javascript array or http handler

I'm building a Web Page that allows the user to pick a color and size. Once they have these selected I need to perform a lookup to see if inventory exists or not and update some UI elements based on this.
I was thinking that putting all the single-product data into a multidimensional JavaScript array (there are only 10-50 records for any page instance) and writing some client-side routines around it would be the way to go, for two reasons: one, it keeps the UI fast, and two, it minimizes callbacks to the server. What I'm worried about with this solution is code smell.
As an alternative I'm thinking about a more AJAX-purist approach using HTTP handlers and JSON, or perhaps a hybrid with a bit of both. My question is: what are your thoughts on the best solution to this problem using the ASP.NET 2.0 stack?
[Edit]
I also should mention that this page will be running in a SharePoint environment.
Assuming the data is static, I would vote option #1. Storing and retrieving data elements in a JavaScript array is relatively foolproof and entirely within your control. Calling back to the server introduces a lot of possible failure points. Besides, I think keeping the data in-memory within the page will require less code overall and be more readable to anyone with a more than rudimentary understanding of JavaScript.
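A minimal sketch of option #1, with the server emitting the records into the page as a JavaScript array. The field names and sample data here are illustrative assumptions, as is the button id:

```javascript
// Inventory rows the server would emit into the page, e.g. inside a
// <script> block when rendering. Sample data is illustrative only.
var inventory = [
  { color: 'red',  size: 'M', qty: 3 },
  { color: 'red',  size: 'L', qty: 0 },
  { color: 'blue', size: 'M', qty: 7 }
];

// Return the quantity on hand for a color/size pair, or 0 if not listed.
function quantityFor(items, color, size) {
  for (var i = 0; i < items.length; i++) {
    if (items[i].color === color && items[i].size === size) {
      return items[i].qty;
    }
  }
  return 0;
}

// UI hook: enable or disable the "Add to cart" button based on stock.
// The "addToCart" id is an assumption for this sketch.
function updateAvailability(color, size) {
  var inStock = quantityFor(inventory, color, size) > 0;
  document.getElementById('addToCart').disabled = !inStock;
}
```

With 10-50 records the linear scan is more than fast enough, and no request leaves the page when the user changes a selection.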
I'm against AJAX for such tasks, and vote for (and have implemented) the first option.
As far as I understand, you won't create code smells if the JS part is generated by your server-side code.
From a user's point of view, AJAX is an experience-killer for wireless browsing, since any little glitch or failed request will break the interaction or simply lengthen it by factors of 20(!).
I've implemented even more records than yours in my site, and the users love it. Since some of my users browse from internet cafés or dubious hotel wifi, it wouldn't work otherwise.
Besides, AJAX makes your server-vs-client interaction code much more complex, IMO, which is the trickiest part in web programming.
I would go with your second option by far. As long as the AJAX call isn't performing a long running process for this case, it should be pretty fast.
The application I work on does lots with AJAX and HttpHandler, and our calls execute fast. Just ensure you are minimizing the size of your JSON returned in the response.
Go with your second option. If there are that few items involved, the AJAX call should perform fairly well. You'll keep your code off the client side, hopefully prevent any browser based issues that the client side scripting might have caused, and have a cleaner application.
EDIT
Also consider that client-side script can be modified by the user. If there's no other validation occurring on the user's selection, this could allow them to configure a product that is out of stock.
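The handler approach from option #2 keeps the authoritative stock check on the server. A sketch, assuming a hypothetical inventory.ashx handler that returns JSON like {"qty": 3} (the handler name, parameters, and response shape are all assumptions):

```javascript
// Build the URL for the hypothetical inventory handler.
function inventoryUrl(color, size) {
  return 'inventory.ashx?color=' + encodeURIComponent(color) +
         '&size=' + encodeURIComponent(size);
}

// Ask the server whether the selected combination is in stock.
// Returns a promise resolving to true/false.
function checkInventory(color, size) {
  return fetch(inventoryUrl(color, size))
    .then(function (res) { return res.json(); })
    .then(function (data) { return data.qty > 0; });
}

// Usage (browser only):
//   checkInventory('red', 'M').then(function (ok) { /* update UI */ });
```

Because the quantity comes from the server on every selection, a user tampering with page script can't make an out-of-stock item appear available.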
