I have a requirement to open 50 to 100 URLs at once and verify the login for each URL. All URLs belong to the same app, but each is hosted for a different customer. How can I open multiple browsers, say 20 to 50, with different URLs using Selenium WebDriver? I tried TestNG with the parallel attribute set to "tests" and instantiated the driver object in @BeforeTest, but after two browsers open I get a Selenium exception saying the browser closed or died for the third browser.
My code is below:
// Shared counter (java.util.concurrent.atomic.AtomicInteger); the original
// method-local int would always print 1.
private static final AtomicInteger browserCount = new AtomicInteger();

@Test
@Parameters({ "url" })
public void testParallel(String url) {
    try {
        driver.get(url);
        System.out.println("Browser count: " + browserCount.incrementAndGet());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I think it is not possible to use multiple IEDriver instances in parallel on the same machine using the Java bindings. (I remember reading somewhere that the .NET bindings support parallel IE instances.)
As per the official IEDriver documentation: "Unlike other WebDriver classes, there should only ever be a single InternetExplorerDriver instance at one time for some language bindings. If you need to run more than one instance of the InternetExplorerDriver at a time, consider using the RemoteWebDriver and virtual machines." Refer here.
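As a rough sketch of that recommendation (the hub address below is a placeholder), each IE instance would live on its own grid node or VM:

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

// Talks to a grid hub, which forwards the session to an IE node.
// Note: new URL(...) throws MalformedURLException, so handle or declare it.
WebDriver driver = new RemoteWebDriver(
        new URL("http://hub-host:4444/wd/hub"), // placeholder hub address
        DesiredCapabilities.internetExplorer());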
This should work with FirefoxDriver provided you have got your testng.xml right. If you want it on IE, you should consider setting up a grid and launching IE nodes on different machines so that the runs can happen in parallel.
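As a minimal testng.xml sketch (the suite name, test names, class name, and URLs are illustrative placeholders), parallel="tests" gives each <test> its own thread, so each one can own its own driver:

<suite name="parallel-login-suite" parallel="tests" thread-count="20">
  <test name="customer-1">
    <parameter name="url" value="https://customer1.example.com"/>
    <classes><class name="com.example.LoginTest"/></classes>
  </test>
  <test name="customer-2">
    <parameter name="url" value="https://customer2.example.com"/>
    <classes><class name="com.example.LoginTest"/></classes>
  </test>
</suite>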
Why do you need to open them all at once? Selenium is not designed for load testing. If you want to check how your application or server behaves under load, have a look at JMeter instead.
For a test like that I would recommend not using a real browser per se, but the HtmlUnitDriver instead (which is essentially a headless browser). There is also a thing called GhostDriver that might accomplish something similar. You could still use a remote Grid node+hub, but you don't need to in order to accomplish your goal.
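A minimal sketch of swapping in HtmlUnitDriver (the URL is a placeholder; the boolean constructor argument enables JavaScript):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

// Headless "browser": no window is opened, so many instances can run side by side.
WebDriver driver = new HtmlUnitDriver(true); // true = enable JavaScript
driver.get("https://customer1.example.com"); // placeholder URL
System.out.println(driver.getTitle()); // e.g. verify the login page loaded
driver.quit();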
Selenium can do load testing in that respect. Also, I wouldn't use TestNG: instead, I would use Gradle or Maven, because they have JUnit forking and multi-threading capability built in. In Gradle or Maven, create a task that filters for certain test classes and then forks processes to run them multi-threaded, as sketched below. I created an example here.
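As a rough build.gradle sketch of that idea (the class-name filter and fork count are assumptions, not taken from the linked example):

test {
    include '**/*ParallelTest.class' // filter which test classes to run
    maxParallelForks = 4             // fork up to 4 JVMs and spread tests across them
}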
I have a NaCl program that uses nacl_io. I'm looking for a way to run automated tests on it without relying on a browser. As long as I'm not using sockets, using sel_ldr to run the executable seems to do the job; I can use mkdir() and create files, for example.
However, calling socket() in my program fails with "Permission denied".
Does the fact that I'm using sockets mean that I must run my tests in a browser?
If so, what is the best way to automate this kind of test?
There is no way to use real sockets from sel_ldr, but you can use test fakes instead.
For the nacl_io tests, we use fake Pepper interfaces that have simple implementations. See https://code.google.com/p/chromium/codesearch#chromium/src/native_client_sdk/src/tests/nacl_io_test/fake_ppapi/.
We haven't yet implemented a fake socket interface, but it should be possible. You would need to implement the following interfaces:
PPB_MESSAGE_LOOP_INTERFACE_1_0
PPB_TCPSOCKET_INTERFACE_1_1
PPB_UDPSOCKET_INTERFACE_1_0
Then, when initializing nacl_io in your tests, pass in your own PPB_GetInterface callback:
const void* my_get_interface(const char* interface_name) {
  if (strcmp(interface_name, PPB_MESSAGE_LOOP_INTERFACE_1_0) == 0) {
    return my_fake_message_loop_interface;
  } else if (...) {
    ...
  }
  return NULL; /* unknown interface */
}

nacl_io_init_ppapi(pp_instance, my_get_interface);
I created a Web Performance Test for a site, and on its own it works fine. It's a simple test that logs in and checks the navigation. Running that test by itself succeeds every time, but the problem shows up when I call it from a LoadTest. I created a load test containing only this web performance test, and it fails every time right after logging in, with this error:
The server committed a protocol violation. Section=ResponseStatusLine
I've researched this error a lot, and everyone suggests that inserting this statement:
<system.net>
<settings>
<httpWebRequest useUnsafeHeaderParsing="true"/>
</settings>
</system.net>
in the web.config file solves the issue. But QA is usually separate from DEV, and we have no access to their code. I'm also wondering how the test can work when executed individually but not in a load test. I thought the problem might be the number of users or the load pattern, so I changed my initial Step load pattern to a Constant load pattern with only one user. Still, the same error causes the test to fail. Did anyone have a similar issue? If you need any more data, just let me know.
EDIT: When I specified a proxy (localhost:8888, for Fiddler) in the performance test that the load test uses, the issue didn't occur, but the load test was too slow.
I got exactly the same problem. My test environment uses SSL and is load balanced with an F5 load balancer; I was not getting the problem in a non-load-balanced configuration.
A webtest, when run on its own, does not cache dependent requests, whereas the load test does cache them, hence the different behaviour you encountered.
To get around this problem you need to create a plug-in that forces dependent requests not to be cached in the load test. The following article tells you how to create a plug-in:
http://msdn.microsoft.com/en-us/library/ms243191.aspx
Plug-in code required:
using System;
using Microsoft.VisualStudio.TestTools.WebTesting;

namespace DisableCache
{
    public class DisableCache : WebTestPlugin
    {
        public override void PostRequest(object sender, PostRequestEventArgs e)
        {
            // Mark every dependent request (images, CSS, scripts, ...) as
            // non-cacheable, so the load test fetches them the same way the
            // standalone web test does.
            foreach (WebTestRequest dependentRequest in e.Request.DependentRequests)
            {
                dependentRequest.Cache = false;
            }
        }
    }
}
In the past, I got this error because of an extra \n in the URL.
In my case, it was caused by a dynamic data source; the parameters in the data source needed to be cleaned.
Are you using a DataSource in your test?
I have the following code in my test class (Java), but the timeout doesn't seem to work (it has no effect at all). I've tested it with really slow connections, and I expect it to fail after 5 seconds, but it waits for the page to load indefinitely; sometimes it comes back in 8-10 seconds and the test passes because the page has actually loaded, just not within the time I specified. Any idea why the page load timeout command is not doing what it is supposed to do?
protected static WebDriver driver;

driver = new FirefoxDriver();
driver.manage().timeouts().pageLoadTimeout(5, TimeUnit.SECONDS);
driver.get("http://www.google.com");
I'm using Selenium 2.20.0.
Thanks in advance.
Then report it as an issue:
http://code.google.com/p/selenium/issues/list
pageLoadTimeout makes no sense without an "unstable" Firefox profile.
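If that is the case, a sketch of what is meant, assuming the Selenium 2.x-era "webdriver.load.strategy" preference (an assumption on my part; verify it against your Selenium version):

import java.util.concurrent.TimeUnit;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;

FirefoxProfile profile = new FirefoxProfile();
// "unstable" stops driver.get() from blocking until the full page load,
// which is what lets pageLoadTimeout actually take effect.
profile.setPreference("webdriver.load.strategy", "unstable");
WebDriver driver = new FirefoxDriver(profile);
driver.manage().timeouts().pageLoadTimeout(5, TimeUnit.SECONDS);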
Probably, you will have to either download the plugin mentioned on the Selenium download page, or write a loop that polls until the element is found and only then breaks out, making use of try-catch blocks as well (see the sketch below).
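A minimal sketch of such a polling loop; the By.id("someId") locator and the 5-second cap are illustrative placeholders:

import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Polls for an element instead of relying on pageLoadTimeout.
WebElement waitForElement(WebDriver driver) throws InterruptedException {
    long deadline = System.currentTimeMillis() + 5000; // placeholder 5 s cap
    while (true) {
        try {
            return driver.findElement(By.id("someId")); // placeholder locator
        } catch (NoSuchElementException e) {
            if (System.currentTimeMillis() > deadline) {
                throw e; // give up once the deadline passes
            }
            Thread.sleep(250); // short pause before the next poll
        }
    }
}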
Does WebDriver maintain unique browser sessions by default when used with multiple threads, i.e., multiple tests in parallel? If not, how do I make it maintain unique sessions?
By using TestNG, we can open multiple browser sessions (Firefox) and run tests.
I am closer to "no": if I run my tests in Selenium Grid and some browser window "dies" (hangs up unexpectedly, because I am a bad programmer), restarting the tests causes my webapp to tell me "another user with the same user name is already logged in".
But in plain Selenium WebDriver, calling driver = new FirefoxDriver(); always created a new session.
You can use a Grid configuration; it is exactly what you need. See here an example of a parallel test run.
Grid can support multiple sessions. You configure this when you register your node with the hub, using the parameters -maxSession x -browser browserName=firefox,maxInstances=x, where x is the desired number of sessions.
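For example, a node registration along these lines (the hub address and the value 5 are placeholders):

java -jar selenium-server-standalone.jar -role node \
     -hub http://hub-host:4444/grid/register \
     -maxSession 5 -browser browserName=firefox,maxInstances=5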
I have a PHP script which does what the accepted answer described here suggests.
It doesn't work unless I add the following before fclose($fp):
while (!feof($fp)) {
    $httpResponse .= fgets($fp, 128);
}
Even a blank for loop would do the job instead of the above! But what's the point? I wanted async calls :(
To add to my pain, the same code runs fine without the above snippet in an Apache-driven environment.
Does anybody know whether Nginx or php-fpm has a problem with such requests?
What you're looking for can only be done on Linux-flavoured systems with a PHP build that includes the Process Control functions (the PCNTL library).
You'll find its documentation here:
http://php.net/manual/en/book.pcntl.php
Specifically, what you want to do is "fork" a process. This creates an identical copy of the current PHP script's process, including all memory references, and then allows both scripts to continue executing simultaneously.
The "parent" script is aware that it is still the primary script, and the "child" script (or scripts; you can fork as many times as you want) is aware that it is a child. This allows you to choose a different action for the parent and the child once the child is spun off and turned into a daemon.
To do this, you'd use something along these lines:
$pid = pcntl_fork(); // store the child's process ID when the script forks
if ($pid == -1) {
    die('could not fork'); // -1 means the process could not fork properly
} else if ($pid) {
    // Parent: $pid holds the child's PID. Only this branch can output
    // to the user's browser.
} else {
    // Child: any output from this branch will NOT reach the user's browser.
    // Do the background work here, then exit.
}
That enables a script to spin off a child process that can continue executing alongside (or long after) the parent script outputs its content and exits.
Keep in mind that these functions must be compiled into your PHP build, and the vast majority of hosting companies will not allow access to them on their servers. To use these functions, you generally need a Virtual Private Server (VPS) or a dedicated server. Even cloud hosting setups will not usually offer them, because if used incorrectly (or maliciously) they can easily bring a server to its knees.