Postman Collection Run does not pause for setTimeout calls - collections

I put a simple setTimeout(function () { ... }, 10000) call in the Tests section of a request.
Works fine when I run the step by itself.
When I do a Collection Run, the step just gets executed and Postman moves on without pausing.
Is this by design?
I'd rather not have to put a delay of X seconds for every step.

It works as expected; check the console to see when the request was sent.
Make sure the request is saved: the orange indicator shows that it is unsaved, so you have to save it.
Use the snippet below in the pre-request script:
let moment = require("moment"); // moment is bundled with the Postman sandbox
console.log("before:", moment());
setTimeout(function () { console.log("after:", moment()); }, 10000);
and check the timestamps in the console.
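If your version of the collection runner does not wait for asynchronous timers, a brute-force alternative (not part of the answer above, just a sketch) is to block the pre-request script synchronously:

// Hedged sketch: delay the request by blocking the pre-request script.
// This holds the sandbox for the full duration, so use it with care.
const delayMs = 10000; // illustrative value
const start = Date.now();
while (Date.now() - start < delayMs) {
  // busy-wait until delayMs has elapsed
}
console.log("delayed request by", Date.now() - start, "ms");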

Related

Why does Progress go back to the initial screen after a session crash?

Hello all and thanks for viewing this question,
I have a program that users get access to via a login screen. Once the user's credentials have been validated on the login screen, the main program is called (from the login screen) and the login screen disappears. All good. However, if the session crashes (or I press CTRL-PAUSE), the main program is terminated and I end up at the initial login screen. I'd have assumed that after a session crash, Progress (11.4) should take me back to the OS (Windows Server 2012), but not back to the initial screen. I have tried placing QUIT in different areas of the program, but Progress still takes me back to the initial screen, while I need it to quit completely. Any thoughts would be greatly appreciated. Thanks!
It's the AVM's default behavior to rerun the startup procedure after a STOP condition has occurred that was not handled.
You can add an
ON STOP UNDO, RETURN "stopped" .
option to a DO, FOR or REPEAT block close to where your "crash" happens. The calling procedure can then check whether the RETURN-VALUE is "stopped".
Assuming you are on a recent version (OpenEdge 12.x), you can also use CATCH Blocks for Progress.Lang.Stop:
CATCH stopcon AS Progress.Lang.Stop:
QUIT.
END CATCH.
I think that your use of the word "crashed" is very, very confusing. If your session actually "crashes" in the usual sense that _progres (or prowin if this is Windows) terminates, then you would not have any locked records remaining. You would also have a protrace file that would help you to identify where the issue occurs.
Incidentally, you could add error logging to the client startup to determine where the errors that QXtend cannot find are occurring:
_progres dbname -p startup.p -clientlog logname.log
You have not shared any code so I can only guess but, presumably, you are running your login program via the -p startup parameter.
Correct me if I am wrong but something along these lines:
_progres dbname -p startup.p
The startup program then runs whatever it runs to get you logged in and run the application. Maybe something like this:
/* startup.p
*/
message "(re)starting!".
pause.
run value( "login.p" ).
run value( "stuff.p" ).
message "all done".
pause.
quit.
And:
/* login.p
*/
message "hello, logging in!".
pause.
return.
Along with:
/* stuff.p
*/
message "hello, doing stuff!".
pause.
run value( "notthere.p" ).
message "hello, doing more stuff!".
pause.
return.
At some point an error occurs (you seem to want to call this a "crash"). I have arranged for a serious error to occur when stuff.p tries to "run notthere.p". So if you run my example you will see the behavior that you have described - your session "crashes", the startup procedure re-runs, and you get to the login screen again.
To change that and trap the error simply wrap a "DO ON STOP" around the RUN statements. Like this:
/* startup.p
*/
message "(re)starting!".
pause.
do on error undo, leave
on endkey undo, leave
on stop undo, leave
on quit undo, leave: /* "leave", exits this block when one of the named conditions arises */
run value( "login.p" ).
run value( "stuff.p" ).
/* we just leave because we finished normally */
end.
message "all done".
pause.
quit.
You mention QXtend so I am guessing that MFG/Pro is involved. If you cannot directly modify the MFG/Pro startup procedure (as I recall that would be "-p mfg.p") just adapt the code above to be a "shim" that runs mfg.p from within the "DO ON STOP..." block.
I believe I have found a way to quit the initial login screen when it appears as the result of a session crash, by using the ETIME function. Thanks again, Mike, for your response.

NetLogo: Can't "stop" forever button from another procedure?

I have simplified my problem below. I want to stop the execution of the forever button "go" when there's no robots, and I want to call this from another procedure ("test" in this case) like so:
to go
  test
end

to test
  if not any? robots [ stop ]
end
The reason for this is that I want to call stop where the robot dies such that I can send an appropriate user message.
Sadly, you must re-organize your code so that you call if not any? robots [ stop ] in your go procedure, in order for the following to apply:
See the documentation:
A forever button can also be stopped from code. If the forever button
directly calls a procedure, then when that procedure stops, the button
stops. (In a turtle or patch forever button, the button won’t stop
until every turtle or patch stops – a single turtle or patch doesn’t
have the power to stop the whole button.)
Ref: http://ccl.northwestern.edu/netlogo/docs/programming.html#buttons
stop: This agent exits immediately from the enclosing procedure, ask, or ask-like construct (e.g. crt, hatch, sprout). Only the enclosing procedure or construct stops, not all execution for the agent.
Ref: http://ccl.northwestern.edu/netlogo/docs/dict/stop.html
One alternative, hacky solution (which I'm tempted not to post) is the following, where you raise an error that then stops the button:
to go
  carefully [ test ] [ error-message stop ]
end

to test
  if not any? robots [ error "no more robots!" ]
end

Why is Puppeteer failing simple tests with: "waiting for function failed: timeout 500ms exceeded"?

While trying to set up some simple end-to-end tests with Jest and Puppeteer, I've found that any test I write will inexplicably fail with a timeout.
Here's a simple example test file, which deviates only slightly from Puppeteer's own example:
import puppeteer from 'puppeteer';

describe('Load Google Puppeteer Test', () => {
  test('Load Google', async () => {
    const browser = await puppeteer.launch({
      headless: false
    });
    const page = await browser.newPage();
    await page.goto('https://google.co.uk');
    await expect(page).toMatch("I'm Feeling Lucky");
    await browser.close();
  });
});
And the response it produces:
TimeoutError: Text not found "I'm Feeling Lucky"
waiting for function failed: timeout 500ms exceeded
I have tried adding in custom timeouts to the goto line, the test clause, amongst other things, all with no effect. Any ideas on what might be causing this? Thanks.
What I would say is happening here is that toMatch expects the text to be displayed as page text. In your case, however, the text you want to verify is associated with a button (its value attribute).
You should try something like this:
await expect(page).toMatchElement('input[value="I\'m Feeling Lucky"]');
Update 1:
Another possibility (and it's one you've raised yourself) is that the verification is timing out before the page has a chance to load. This is a common issue, from my experience, with executing code in headless mode. It's very fast. Sometimes too fast. Statements can be executed before everything in the UI is ready.
In this case you're better off adding some waitForSelector statements throughout your code as follows:
await page.waitForSelector('input[value="I\'m Feeling Lucky"]');
This will ensure that the selector you want is displayed before carrying on with the next step in your code. By doing this you will make your scripts much more robust while maintaining efficiency - these waits won't slow down your code. They'll simply pause until puppeteer registers the selector you want to interact with / verify as being displayed. Most of the time you won't even notice the pause as it will be so short (I'm talking milliseconds).
But this will make your scripts rock solid while also ensuring that things won't break if the web page is slower to respond for any reason during test execution.
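Putting the two suggestions together, the test might look something like the sketch below, assuming the expect-puppeteer setup from the question and the same button selector as above:

import puppeteer from 'puppeteer';

describe('Load Google Puppeteer Test', () => {
  test('Load Google', async () => {
    const browser = await puppeteer.launch({ headless: false });
    const page = await browser.newPage();
    await page.goto('https://google.co.uk');
    // Wait until the button is actually present before asserting on it.
    await page.waitForSelector('input[value="I\'m Feeling Lucky"]');
    await expect(page).toMatchElement('input[value="I\'m Feeling Lucky"]');
    await browser.close();
  });
});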
You're probably using the 'expect-puppeteer' package, which provides the toMatch expectation. This is not a small deviation. The weird thing is that your default timeout isn't 30 seconds, as the package's default would be; check that.
However, to fix your issue:
await expect(page).toMatch("I'm Feeling Lucky", { timeout: 6000 });
Or set the default timeout explicitly using:
page.setDefaultTimeout(timeout)
See the Puppeteer documentation for page.setDefaultTimeout.
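A minimal sketch of how those two pieces could fit into the original test, assuming the jest-puppeteer/expect-puppeteer setup from the question (timeout values are illustrative):

import puppeteer from 'puppeteer';

describe('Load Google Puppeteer Test', () => {
  test('Load Google', async () => {
    const browser = await puppeteer.launch({ headless: false });
    const page = await browser.newPage();
    // Raise the default timeout for this page's navigation and wait operations.
    page.setDefaultTimeout(60000);
    await page.goto('https://google.co.uk');
    // The expectation timeout can still be passed explicitly, as shown above.
    await expect(page).toMatch("I'm Feeling Lucky", { timeout: 6000 });
    await browser.close();
  });
});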

Always times out at 10 seconds regardless of setting

I am running a simple WebDriverIO script, and inserting any amount of async behaviour is making it time out at the 10 sec threshold (or before?). I want to control the timeout setting, but no matter what I try I cannot increase it.
As I am using ChromeDriver, not all Selenium settings are applicable, and setting browser.timeouts('implicit', 30000) (or script or pageLoad) will throw an error: unknown error: unknown type of timeout:pageLoad.
The only other timeouts I have found are
mochaOpts.timeout
waitforTimeout
This is my test:
it.only('should be able to register', ()=>{
  // Mocha timeout
  this.timeout(50000)
  browser.url('/encounter/new');
  browser.waitUntil( function() {
    return browser.isExisting('[name=lastName]');
  });
  browser.setValue('#problem', 'something fishy');
  // this is problematic: comment this out and everything works
  // also works with very small timeouts
  browser.executeAsync(function(done){
    setTimeout(done, 1000);
  });
  browser.click('#appdetailsheader button');
  console.log(browser.getUrl(), browser.log('browser'))
  browser.waitUntil( function() {
    return !browser.isExisting('[name=lastName]');
  });
  console.log(browser.getTitle(), browser.getUrl());
  console.log(browser.log('browser'))
});
I can totally understand your frustration. WebdriverIO is extremely modular and configurable, but this comes with an increased level of complexity, which often leads to confusion.
For this:
// Mocha timeout
this.timeout(50000);
!!! This has no effect because you are setting your Mocha timeout inside an arrow function, which Mocha discourages: arrow functions don't bind their own this, so this.timeout() never reaches the Mocha test context. Read more about it here.
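For illustration, the same test declared with a regular function, so that this.timeout() actually reaches the Mocha context (only the relevant lines are shown):

it.only('should be able to register', function () {
  // 'this' is now the Mocha test context, so the 50 s timeout takes effect.
  this.timeout(50000);
  browser.url('/encounter/new');
  // ...rest of the test as in the question...
});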
Solution (pick as applicable to your setup):
run your script with WebdriverIO test-runner and set mochaOpts: { timeout: <desiredTimeout>}, or you can even override it from your test run: wdio wdio.config.js --mochaOpts.timeout=<desiredTimeout>;
set your timeout in your root describe statement, or even better, in a before hook: before(function() { this.timeout(<desiredTimeout>); (...)});;
if you're running your test case using Mocha directly, pass the timeout either into your CLI command (if you're using it to run your tests): mocha yourTestFile.js --timeout <desiredTimeout>, or change its value in your mocha.opts file;
Note: I'm sure there are even more ways to do this, but these are a few that worked for me.
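For reference, the test-runner option might look roughly like this in the config file (a sketch; the value is illustrative):

// wdio.config.js (excerpt, file name as used in the commands above)
exports.config = {
  // Mocha-specific options passed through by the WebdriverIO test-runner.
  mochaOpts: {
    timeout: 50000
  }
};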
For this:
browser.waitUntil( function() {
  return browser.isExisting('[name=lastName]');
});
!!! This will always wait for the existence of the element with attribute name="lastName" for the default 1000 ms before timing out. This value can be changed via waitforTimeout.
Solution (pick as applicable to your setup):
explicitly give your waitUntil.../waitfor... commands the timeout: browser.waitUntil( function() { return browser.isExisting('[name=lastName]');}, <desiredTimeout>, <errorMessage>);;
run your script with WebdriverIO test-runner and set waitforTimeout: <desiredTimeout>, or you can even override it from your test run: wdio wdio.config.js --waitforTimeout=<desiredTimeout>;
Finally, I tried to run a few test cases with obscene timeout values (50000 ms) and it worked as expected for every one of the issues you mentioned above.
waitforTimeout example:
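Something along these lines in the test-runner config (a sketch; the value is illustrative):

// wdio.config.js (excerpt)
exports.config = {
  // Default timeout, in milliseconds, for all waitFor*/waitUntil commands.
  waitforTimeout: 50000
};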
Logs (1 failing (57s)):
[chrome #0-0] ConnectWeb Devices Content Test Suite
[chrome #0-0] 1) "before all" hook
[chrome #0-0]
[chrome #0-0]
[chrome #0-0] 1 failing (57s)
[chrome #0-0]
[chrome #0-0] 1) ConnectWeb Devices Content Test Suite "before all" hook:
[chrome #0-0] Oups! An error occured.
Timed out waiting for element ('span[connectqa-device="events"]') to exist
Note: I've never used Selenium timeouts (implicit, pageLoad, script) in WebdriverIO before, but I've never needed them, as waitforTimeout and Mocha's timeout have been more than effective for my testing scenarios.
Small mention: the statement "inserting any amount of async behaviour is making it time out at the 10 sec threshold" is not true. First off, WDIO is completely asynchronous. You might be using the sync: true flag, but behind the scenes, everything is still async.
This is a vast topic and I tried to cover as much as possible given the information at hand. Sorry if I didn't completely answer your question. Tell me in the comments and I'll try to update the answer with the relevant info.
Hope it helps. Cheers!

How to wait for a Blaze template to be rendered before asserting in mocha-web-velocity?

I have some mocha-web-velocity tests that need the template to be rendered.
I can use setTimeout:
setTimeout(function() {
  chai.assert.equal($(".text-center").html(), "Something");
  done();
}, 1500);
This works, but I'd rather not depend on timeouts and instead make the assertion in the rendered callback:
Template.deliver.rendered = function() {
  chai.assert.equal($(".text-center").html(), "Send a deliveqewrry");
  done();
};
This only works partially: the error message gets logged to the browser's console and the results UI shows an error, but the error displayed in the UI says that the timeout has been reached (in the browser console I get the correct message).
Why is the behaviour different between these two approaches?
What is the best way to make my tests wait for templates to be rendered?
