I am trying to run certain integration test cases using Nightwatch.js and Sauce Labs. Currently, a new browser window opens for each test case, so the test cases take a long time.
I need to run the test cases in the same browser window while still displaying a result for each test case on Sauce Labs.
Below is the code similar to what I need to run.
module.exports = {
    beforeEach: (browser, done) => {
        // login
    },
    'Test-1': browser => {
        browser
            .page
            .testPage()
            .navigate()
            .end()
    },
    'Test-2': browser => {
        browser
            .page
            .testPage()
            .navigate()
            .end()
    },
    afterEach: (browser, done) => {
        // logout
    },
}
If I remove .end() from Test-1, Sauce Labs runs both tests in one browser window, but it shows only a single test result, named Test-2.
Nightwatch handles starting and stopping the WebDriver process automatically by default, but you can disable this and manage it yourself by setting the webdriver start_process configuration option to false in your nightwatch.json file:
"webdriver" : {
"start_process": false
}
Nightwatch reference documentation: https://nightwatchjs.org/gettingstarted#configuration
However, you probably shouldn't do that.
First of all, it means more work for you -- you will have to manage starting and stopping sessions yourself in each test spec, including passing in the configuration.
Secondly, sessions in Sauce Labs are meant to be started and stopped for each test. Sauce will only recognize one completed test -- passed or failed -- per session.
Finally, it's not good practice to lump tests together in a single session -- your browser may be in an unexpected state, with different cookies and cache, and your app may have lingering settings from previous tests. By creating a new session in Sauce Labs you ensure that everything is "clean" and that you can reproduce any scenario exactly.
I understand the desire to try to save time because it takes longer to start each session individually, as well as to go through whatever preparatory steps are needed to get to a certain state within your application. But you'd be better off being able to configure that state by calling an API, setting a cookie, or whatever it takes rather than having your browser in an unknown state.
This is why Nightwatch restarts the browser between tests automatically.
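For example, if your app keeps its session in a cookie, you could seed that state directly in beforeEach instead of walking through the UI login. This is only a sketch -- the URL, cookie name, and token value below are placeholders, and it assumes your app accepts a pre-generated session token:
module.exports = {
    beforeEach: browser => {
        browser
            // placeholder URL -- the cookie must be set on your app's domain
            .url('https://your-app.example.com')
            .setCookie({
                name: 'session_token',              // placeholder cookie name
                value: 'pre-generated-test-token',  // e.g. obtained from a login API beforehand
            });
    },
};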
I want to run a TestCafe script on Sauce Labs without using the provided plugin.
Say I have a test that runs on Chrome using TestCafe on my local machine. Now I want to trigger the same test, with certain browser capabilities, on Sauce Labs.
Is it possible or not? If not, why?
Sauce Labs provides a grid -- can't a browser with the right capabilities be driven on that grid via the remote Sauce Labs URL?
I tried to create a configuration file by mapping the test, defining capabilities, and triggering it against the Sauce Labs URL.
My test, which I want to run on the Sauce Labs grid, is below:
import Page from './page-model'; // page model defining the selectors used below

fixture `My first fixture`
    .page `http://devexpress.github.io/testcafe/example/`;

const page = new Page();

test('My first test', async t => {
    await t
        .typeText(page.nameInput, 'Peter Parker')
        .click(page.macOSRadioButton)
        .click(page.featureList[0].checkbox)
        .click(page.interfaceSelect)
        .click(page.interfaceSelectOption.withText('Both'))
        .expect(page.nameInput.value).contains('Peter');
});
There is no way to do that. These are completely different environments.
The TestCafe script is executed by Node.js, and Sauce Labs provides only the browsers.
testcafe-browser-provider-saucelabs performs a lot of service work: it sets up a tunnel between your computer and the Sauce Labs virtual machines, launches the specified browsers, passes the test URLs to them, and so on.
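For reference, this is roughly what invoking that provider looks like through TestCafe's programmatic API. A minimal sketch, assuming the provider package is installed and SAUCE_USERNAME and SAUCE_ACCESS_KEY are set in the environment; the browser alias string is just an example:
const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe();
    const failedCount = await testcafe
        .createRunner()
        .src('test.js')                                  // the test file shown above
        .browsers('saucelabs:Chrome@latest:Windows 10')  // example provider browser alias
        .run();
    await testcafe.close();
    process.exit(failedCount ? 1 : 0);
})();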
I've configured Firebase A/B testing. Everything works fine, except that no users show up in the console.
I can see in the UI and in the logs that the A/B test is applied.
Moreover, as suggested in another Stack Overflow topic, activateFetched is also invoked after a successful fetch.
I've also referenced:
Firebase Remote Config A/B testing shows no results after 24 hours
Firebase Remote Config results on initial request
Remote Config A/B Test does not provide results on iOS
But none of those work in my case.
Is there anything missing, or anything else I need to check, so that the client can report the A/B test result to the Firebase console?
Thanks for your help.
Code snippet:
[FIRApp configure];
FIRRemoteConfigSettings *configSettings = [[FIRRemoteConfigSettings alloc] initWithDeveloperModeEnabled:YES];
[[FIRRemoteConfig remoteConfig] setConfigSettings:configSettings];
[[FIRRemoteConfig remoteConfig] fetchWithExpirationDuration:duration completionHandler:^(FIRRemoteConfigFetchStatus status, NSError *error) {
    if (status == FIRRemoteConfigFetchStatusSuccess) {
        BOOL configFound = [[FIRRemoteConfig remoteConfig] activateFetched];
        // ...
    }
}];
A couple of things to check or take note of:
Make sure you're using and have deployed the latest Remote Config SDK. Earlier versions don't work with A/B test experiments.
Be sure to verify your experiment on a test device by following the documentation here
It can take a couple days for data to come in for your experiment.
Please call the functions in the following order:
fetch()
Call activateFetched() in the completion handler of fetch().
Fire the activation event. If you need to fire the activation event immediately after activateFetched(), add a delay of a few seconds first, because activateFetched() processes asynchronously and may not have finished before the activation event fires. (A sketch of this order follows below.)
Once done, test a running experiment on a test device. In the debug logs, search for the string "exp_X", where X is the experiment ID; you will find the experiment ID in the URL of the experiment. If the experiment ID appears in the debug logs while the code executes on the test device, the device was included in the experiment.
Also, if the experiment setup is correct, the running experiment will show one active experiment user in the console.
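Putting that order into code -- a minimal sketch, assuming an Analytics event serves as the activation event; the event name experiment_activation and the two-second delay are placeholders:
[[FIRRemoteConfig remoteConfig] fetchWithExpirationDuration:0
                                          completionHandler:^(FIRRemoteConfigFetchStatus status, NSError *error) {
    if (status == FIRRemoteConfigFetchStatusSuccess) {
        [[FIRRemoteConfig remoteConfig] activateFetched];
        // activateFetched processes asynchronously, so give it a moment
        // before firing the activation event (the delay is a placeholder).
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)),
                       dispatch_get_main_queue(), ^{
            [FIRAnalytics logEventWithName:@"experiment_activation" parameters:nil];
        });
    }
}];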
Bug:
I'm consistently getting error code -1009 "The Internet connection appears to be offline." errors when making URLSession requests in an Apple Watch extension on an Apple Watch Series 3 when connected to the Internet only via LTE.
Steps to Reproduce:
Install the app.
Configure your device so that it's only on LTE.
Verify your LTE connection (e.g., by using iMessage).
Launch the app.
Initialize a URLSession using the .default or .ephemeral session configuration.
Make a data task request for any known-good https URL.
Expected Behavior:
The request manages to reach the destination.
Observed Behavior:
The request fails immediately with error code -1009 "The Internet connection appears to be offline."
Code Sample:
let config = URLSessionConfiguration.ephemeral
let sesh = URLSession(configuration: config)
let url = URL(string: "https://google.com")!
sesh.dataTask(with: url) { (_, _, error) in
    print(error as Any)
}.resume()
NOPE: SEE UPDATE #3 BELOW. The crucial missing element: you must set the waitsForConnectivity flag on your session configuration to true.
let config = URLSessionConfiguration.ephemeral
config.waitsForConnectivity = true
let sesh = URLSession(configuration: config)
let url = URL(string: "https://google.com")!
sesh.dataTask(with: url) { (_, _, error) in
    print(error as Any)
}.resume()
If you do not set that flag, the requests fail immediately because LTE access isn't available instantly but only after the briefest of delays. Setting this flag to true makes the requests work. In my testing there even seems to be no appreciable difference in timing between enabling the waitsForConnectivity over LTE and making the same request without enabling waitsForConnectivity but conducted over WiFi, almost like the waiting period enabled by waitsForConnectivity in some scenarios is a next-turn-of-the-runloop kind of situation.
Update #1
I am unable to make any requests over LTE. When waitsForConnectivity is set to true, the requests just time out according to the timeout properties of the session config. When waitsForConnectivity is false, the requests fail immediately. I'll update my question and answer when I have more information. I'm waiting on a response to an Apple TSI request, which usually takes several days.
Update #2
Adding to the mystery, the same sample code runs fine over cellular on two other developers' hardware. I know that my hardware is good because Apple's apps run fine over LTE on it (phone calls rolling down the highway with nothing but my watch in the car). So there's something really fishy going on. I've asked Apple DTS to look into this, and they can't reproduce the issue either. I'll follow up with them as soon as I can.
Update #3
Sometime in the intervening weeks after I last updated this post, cellular requests started working in my apps. I didn't change anything about my watch, no software updates, no resets, nothing. I didn't even recompile the code; the same build is still on my watch as previously. It just started working as expected, same as it did on other developers' devices.
The only thing strange I noticed is that I got three, back-to-back, identical SMS messages from AT&T notifying me that my Apple Watch is now linked to my iPhone number. Which is strange, because that linkage supposedly occurred the night I unboxed my phone, not two months later. I have no idea if this is related to my issue. All I know is that cellular requests are now working.
I had the same problem but was developing an app for the iPhone. This is what finally solved it. I set the configuration object's property:
config.allowsCellularAccess = true
This is very confusing, because the Apple documentation states that this property defaults to true... but in my case it did not. Also, even though I am using "background tasks," which are always meant to wait for connectivity, I set waitsForConnectivity = true as well, just in case.
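In context, the configuration looked roughly like this -- a sketch, assuming a background session; the identifier is a placeholder:
let config = URLSessionConfiguration.background(withIdentifier: "com.example.app.bg") // placeholder identifier
config.allowsCellularAccess = true  // documented as the default, but set it explicitly
config.waitsForConnectivity = true  // background sessions wait anyway; set just in case
let session = URLSession(configuration: config)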
Just in case someone runs into this error but has everything set up correctly: I was running my project from Xcode onto a real device but couldn't get past the internet connectivity issue.
In the code there was a check for __DEV__ to determine which API URL to use.
I was building this for running, not testing, so I assumed __DEV__ would be false, but it was not; I had to change that check so it used a non-localhost API URL.
Even if you are injecting your URL, the code might not grab the correct one depending on whether it thinks it is a DEV build or not.
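To illustrate the kind of check involved -- a hypothetical sketch; the variable name and both URLs are placeholders:
// __DEV__ can still be true for a device build launched from Xcode,
// so don't rely on it alone to pick the API URL.
const API_URL = __DEV__
    ? 'http://localhost:3000'      // placeholder local dev URL (unreachable from a real device)
    : 'https://api.example.com';   // placeholder production URL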
I want to load test an enterprise Web application (which I did not build), using a Visual Studio 2010 Ultimate Load Test. I want each virtual user to log in at the beginning and log out at the end of their run of random tests. I have properly configured the load test to do so. However, there is a complication: the session key is injected into the URL, like this:
http://ProductName/(S(ilv3lp2yhbqdyr2tbcj5mout))/System/Container.aspx
I converted the Visual Studio WebTests to coded tests, and then retrofit them with code that uses the session-specific URL. This works fine. What I need to do is persist this session encoded URL across the various tests that specific virtual user runs, starting with the login WebTest class, to the logout WebTest class.
The individual WebTest classes are capable of logging in and out at the beginning and end of each test. However, this is not an accurate representation of normal use. This application emulates a mainframe terminal, and never cuts the connection or session between Web browser requests. Each session is one long, interactive HTTP conversation, much as a mainframe terminal interacts with, for example, an IBM AS/400. Users typically log in to the mainframe at the beginning of the day, and (should) log out at the end of the day. Likewise, this Web application maintains the session until the user logs out or the IIS session timeout occurs. Therefore, it is important that I keep the same session key in the URL between all tests, to ensure memory leaks and other nasty bugs don't accumulate.
Please share your thoughts!
Problem 1: persist the session id across test iterations
You can store data in the 'user context', which persists across test iterations. It is found in the WebTestContext under the name '$LoadTestUserContext'. (Note that this context parameter only appears in load test runs, not in standalone web test runs.)
// within WebTestPlugin.PreRequest() or MyExtractionRule.Extract()
// e is the corresponding eventargs object...
LoadTestUserContext userContext = (LoadTestUserContext)e.WebTest.Context["$LoadTestUserContext"];
...
// setting the value in the user context (i.e. in the extraction rule)
userContext["sessionId"] = "(extracted session id)";
...
// getting the value from the user context (i.e. in WebTestPlugin PreWebTest event)
e.WebTest.Context["sessionId"] = userContext["sessionId"];
You'll have to add a WebTestPlugin (that fetches the value from the user context into the web test context) to all of your web tests to make the value available across all tests; a sketch of such a plugin follows below.
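A minimal sketch of that plugin, assuming the context parameter names used above; the class name SessionIdPlugin is a placeholder:
using Microsoft.VisualStudio.TestTools.WebTesting;
using Microsoft.VisualStudio.TestTools.LoadTesting;

// Hypothetical plugin: copies the persisted session id from the
// load-test user context into the web test context before each test.
public class SessionIdPlugin : WebTestPlugin
{
    public override void PreWebTest(object sender, PreWebTestEventArgs e)
    {
        object userContextObj;
        if (e.WebTest.Context.TryGetValue("$LoadTestUserContext", out userContextObj))
        {
            var userContext = (LoadTestUserContext)userContextObj;
            if (userContext.ContainsKey("sessionId"))
            {
                e.WebTest.Context["sessionId"] = userContext["sessionId"];
            }
        }
    }
}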
Problem 2: Login/Logout only at start and end of load test
extract the login and logout functionality into their own separate tests (remember that the logout test also needs the WebTestPlugin that fetches the stored sessionId)
in the Load Test, the Edit Test Mix dialog lets you specify an Initialize and Terminate test: set these to the Login and Logout tests you just created
in the Load Test Scenario, set "Percentage of New Users" to 0.
Some additional explanation of the "Percentage of New Users" setting
The "Percentage of New Users" setting is poorly named and does not indicate its full behaviour.
When a "New User" starts a test iteration, it takes a new $WebTestUserId (and gets a new fresh user context, which you don't want)
When a non-"New User" starts a test iteration, it keeps the same old $WebTestUserId (and the old user context, which you do want)
So far so good. But the unexpected part is this:
Each "New User" executes the following during a load test:
Initialize > web test iteration > Terminate
A non-"New User" executes the following for the entire duration of the load test:
Initialize > iteration1 > iteration2 > ... > iterationN > Terminate
In other words, "New Users" are constantly logging in and out (which you don't want). Non-"New Users" only login and logout once in the entire load test, and continually run test iterations for the duration (which you do want).
I have a PHP script which does what the accepted answer described here does.
It doesn't work unless I add the following before fclose($fp):
while (!feof($fp)) {
$httpResponse .= fgets($fp, 128);
}
Even a blank for loop would do the job instead of the above!
But what's the point? I wanted async calls :(
To add to my pain, the same code runs fine without the above snippet in an Apache-driven environment.
Does anybody know if Nginx or PHP-FPM has a problem with such requests?
What you're looking for can only be done on Linux-flavor systems with a PHP build that includes the Process Control functions (the PCNTL extension).
You'll find its documentation here:
http://php.net/manual/en/book.pcntl.php
Specifically what you want to do is "fork" a process. This creates an identical copy of the current PHP script's process including all memory references and then allows both scripts to continue executing simultaneously.
The "parent" script is aware that it is still the primary script. And the "child" script (or scripts, you can do this as many times as you want) is aware that is is a child. This allows you to choose a different action for the parent and the child once the child is spun off and turned into a daemon.
To do this, you'd use something along these lines:
$pid = pcntl_fork(); // store the process ID of the child when the script forks
if ($pid == -1) {
    die('could not fork'); // a -1 return value means the process could not fork properly
} else if ($pid) {
    // a positive process ID is only returned in the parent script;
    // this is the main script that can output to the user's browser
} else {
    // this is the child script executing; any output from this script
    // will NOT reach the user's browser
}
That will enable a script to spin off a child process that can continue executing alongside (or long after) the parent script outputs its content and exits.
You should keep in mind that these functions must be compiled into your PHP build, and that the vast majority of hosting companies will not allow access to them on their servers. To use these functions, you will generally need a Virtual Private Server (VPS) or a dedicated server. Not even cloud hosting setups will usually offer these functions, as when used incorrectly (or maliciously) they can easily bring a server to its knees.
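A quick way to check whether your build includes them -- a small sketch using function_exists:
<?php
// PCNTL is often compiled out on shared hosts, so check for it
// before attempting to fork.
if (!function_exists('pcntl_fork')) {
    die('PCNTL functions are not available in this PHP build');
}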