How to run tests in the same browser in parallel using Selenium WebDriver? - webdriver

We have a set of around 2000 automated test cases that we need to run daily against every new build. It currently takes 4 hours to finish the run on one machine. To reduce this, we are planning to run the tests in batches (500 per batch) on the same machine by starting multiple browsers of the same type, say 4 Firefox sessions per test suite, so the run can finish in about 1 hour. Is it possible to achieve this using Selenium WebDriver and TestNG? Please suggest.

It is possible using Selenium Grid and TestNG. Grid can distribute your tests across as many machines or browser instances as you require. To get started, refer to the Grid2 documentation.
You might need to change your driver instantiation code a bit to use RemoteWebDriver instead of the concrete drivers, but that should be straightforward if driver instantiation is isolated in your framework. TestNG and Grid can help, provided your tests are written to support parallel execution.
For TestNG, refer to parallel running in TestNG.
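As a rough sketch of that setup (the hub URL and class name are placeholders, and this assumes the Selenium 2.x-era Grid API), you would set parallel="tests" and thread-count="4" on the <suite> element in testng.xml and give each thread its own RemoteWebDriver:

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public class ParallelFirefoxTest {

    // One driver per thread, so concurrently running tests never share a browser session.
    private static final ThreadLocal<WebDriver> driver = new ThreadLocal<WebDriver>();

    @BeforeMethod
    public void startBrowser() throws Exception {
        // Points at a local Grid hub; replace with the address of your hub.
        driver.set(new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"),
                DesiredCapabilities.firefox()));
    }

    @AfterMethod
    public void stopBrowser() {
        driver.get().quit();
        driver.remove();
    }
}

The hub then hands each session to whichever node has a free Firefox slot, so four <test> blocks in the suite translate into four concurrent browsers.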

If you are using Python, the best way to go is to use py.test to drive the tests. To distribute the tests, the pytest-xdist plugin can be used.
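For example (the tests/ path is just a placeholder), installing the plugin and passing -n spreads the run across four worker processes:

pip install pytest-xdist
py.test -n 4 tests/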
Either way, for both Java and Python you can use Jenkins to run and distribute your tests (using the Selenium plugin).

Related

Run hydra configured project with SLURM and Horovod

Right now, I am using Horovod to run distributed training of my PyTorch models. I would like to start using Hydra config for the --multirun feature and enqueue all jobs with SLURM. I know there is the Submitit plugin, but I am not sure how the whole pipeline would work with Horovod. Right now, my command for training looks as follows:
CUDA_VISIBLE_DEVICES=2,3 horovodrun -np 2 python training_script.py \
--batch_size 30 \
...
Say I want to use hydra --multirun to run several multi-GPU experiments. I want to enqueue the runs with SLURM, since my resources are limited and the jobs would mostly have to run sequentially, and I want to use Horovod to synchronize the gradients of my networks. Would this setup work out of the box? Would I still need to specify CUDA_VISIBLE_DEVICES if SLURM took care of the resources? How would I need to adjust my run command or other settings to make this setup work? I am especially interested in how the multirun feature handles GPU resources. Any recommendations are welcome.
The Submitit plugin does support GPU allocation, but I am not familiar with Horovod and have no idea whether it can work in conjunction with it.
One new feature of Hydra 1.0 is the ability to set or copy environment variables from the launching process.
This might come in handy in case Horovod relies on certain environment variables; see the docs for more information.
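As a sketch of how that feature could be used here (assuming the hydra.job.env_set / hydra.job.env_copy keys from the Hydra 1.0 docs; the variable names below are only illustrative), you could add something like this to your primary config:

hydra:
  job:
    env_set:
      NCCL_DEBUG: INFO        # set a variable for every launched job
    env_copy:
      - CUDA_VISIBLE_DEVICES  # copy a variable from the launching shell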

Conduct a Speed Test of Internet Connection In an automatic manner

I would like to conduct extensive tests of my new internet connection using speedtest.net.
Is there some way that I can completely automate the process, so that the speed test is run automatically at a fixed interval and a snapshot of the screen is taken and stored on my system?
I found a good repository that does basically exactly what you're asking, but better (it runs multiple tests and can be run from the command line).
I would recommend using the code (Python) from https://github.com/Janhouse/tespeed.
You can run it repeatedly using a cron job, and easily email the results to yourself via the crontab file.
For easy step-by-step instructions, I found http://www.pythonforbeginners.com/code-snippets-source-code/command-line-speedtest-net-via-tespeed/ helpful.
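For example, a crontab entry along these lines (the interpreter and script paths are assumptions, and the address is a placeholder) would run the test hourly and have cron mail you whatever the script prints:

MAILTO=you@example.com
# every hour, on the hour
0 * * * * /usr/bin/python /opt/tespeed/tespeed.py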

Jmeter console manipulation for automation purposes

I am pretty much a newbie on this topic.
I want to know how JMeter can be driven from the console (bash or cmd).
My goal, for a start, is to understand how to run my testplan.jmx against several URLs. For this I have added "server" and "port" parameters to my test plan.
How can I change these parameters from the console and then run JMeter?
Moreover, I would like to ask you guys to suggest any free online tutorials where I can learn more about JMeter in non-GUI mode, and about integrating JMeter with different frameworks for automated testing.
Thank you very much indeed.
See:
http://jmeter.512774.n5.nabble.com/How-to-Run-Jmeter-in-command-line-td2640725.html
You can launch your test plan from the command line, specifying parameters, like:
jmeter -n -t plan.jmx -Jmy_url=http://www.firsturl.com
Inside your test plan you'd reference that command-line parameter as ${__P(my_url)}.
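With the "server" and "port" properties from the question, a non-GUI run that also writes a result file might look like this (the file names are placeholders):

jmeter -n -t testplan.jmx -Jserver=www.firsturl.com -Jport=8080 -l results_firsturl.jtl

The HTTP Request sampler's Server Name and Port fields would then hold ${__P(server)} and ${__P(port)}, and you repeat the command with different -J values for each URL.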
In terms of capturing results when running in non-gui mode, you may want to see:
http://blogs.amd.com/developer/2009/03/31/using-apache-jmeter-in-non-gui-mode/
Personally, my experience is with using the GUI to write and run test plans, but this approach seems workable.

MsTest noob - how to set up testing infrastructure the right way

We are a MSFT shop with a far-reaching MSDN license.
After many years of doing things wrong, we finally have to start doing automated testing.
My group is the guinea pig for this. We need to create what was not there before. We looked at the multitude of options out there. Some people get by just fine with open-source alternatives such as CC.Net, Bamboo, MbUnit, etc. We want to give MsTest, Coded UI, and Team Build a good try; we might as well, because of the MSDN licensing and our MSFT focus.
The plus and minus of doing things the MSFT way is that MSFT makes monolithic things. You have to install various tools that play nicely with each other, but not necessarily with outsiders. The plus is that when things are done correctly, it should all function rather smoothly. There is the option of gated check-ins, of using TFS to store the reports, etc.
Frankly, I am confused by all of the options. Our traditional build system was hacked together from a bunch of Perl, batch scripts, and executables; the build team has now switched to Team Build, which ought to be cleaner, but for the most part it is just a wrapper around the same old Perl crap.
I am inclined to hack things together for testing too, because I can at least see what the pieces are. So, I envision the poor man's version as:
* A dedicated fast computer to run tests
* Some script to copy build files (test code as well as product code) over to that computer.
* A batch/Perl script that runs mstest.exe from the command line and executes a few test batches, filtered by category, against some of the test DLLs (the product is so huge that we do want to organize tests by various categories); a sketch of such a command follows this list.
* Some script that will invoke the latter script remotely from the build server using psexec.exe (http://technet.microsoft.com/en-us/sysinternals/bb897553), grab the XML output from a shared drive, and then send an email with the results to those who are interested.
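For what it's worth, the kind of invocation I have in mind would look roughly like this (the container, category, and result file names are made up):

mstest /testcontainer:Product.Tests.dll /category:"Smoke" /resultsfile:smoke_results.trx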
This can probably work, but then I have to worry about how well error handling can work with so many potential points of failure. It would be nice to configure things the "right way", taking advantage of whatever MSFT has cooked up. I am just not sure where to turn for a good guide. Have you done something like this?
Eventually we will want to have a farm of test computers, in case we run out of the allotted time. Something else of concern: for Coded UI tests to succeed, I think a user has to be logged in, so I am not sure psexec will be of much help there.
Can you share your positive/negative experience, point me to a good guide perhaps? Thanks!
Here are some tips off the top of my head if you want to get started with testing using the MS tools:
If you have an MSDN subscription, install a test rig by installing the Test Controller on your network and the Test Agent service on each of the machines that will be collecting diagnostic data. See the following link for reference: http://msdn.microsoft.com/en-us/library/dd293551.aspx.
Add a Test Project to your solution. See the first part of the following blog post: http://blogs.microsoft.co.il/blogs/eranruso/archive/2010/03/27/visual-studio-2010-coded-ui-test-user-guide-create-a-simple-coded-ui-test.aspx.
Automated test options can be configured through the .testsettings file(s) that are added automatically when you add a Test project (you can also manually add these files to your solution).
Install Team Foundation Server (2010 recommended) in order to take advantage of automating your tests with a daily build. You will also need TFS 2010 if you want to use the VS2010 Test Manager tool to define test environments and plan manual tests (these can be fully automated with Coded UI). Customize your new automated build to set up and deploy your application after the build, and set the build to run tests. Deployment will likely not be necessary for unit tests, but it will be for Web Performance and Coded UI test types.
If you have VS Ultimate or Test Professional licenses, you can also go further and set up virtual test labs using "Lab Management" features.

What alternatives exist for running QTP tests in batch?

We are in the process of implementing automated regression testing for our applications, and are looking for a solid batch-testing utility. We have QuickTest Professional 10.0, and it comes bundled with 'Test Batch Runner', which appears to be deprecated. Previous versions had 'Multi-Test Manager', which has been discontinued as well.
What alternatives exist, if any?
The canonical way to do this is via Quality Center. If you don't have QC, you can use QTP's automation object model from a .vbs file. The documentation for this is available under Start -> Programs -> QuickTest Professional -> Documentation -> Automation Object Model Reference.
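A minimal .vbs sketch of that automation object model approach (the test path is a placeholder) looks something like this:

Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch                      ' start QuickTest
qtApp.Visible = True
qtApp.Open "C:\Tests\LoginTest"   ' open a saved test
qtApp.Test.Run                    ' run it and wait for completion
qtApp.Test.Close
qtApp.Quit

You can loop over a list of test paths in the same script to get a basic batch runner.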
QTP 10 works excellently with Multi-Test Manager v8.2.4.
We use it for our project (we previously used it with QTP 9.2).
Try googling for an installer (if you don't have one); it should be free, just no longer supported by HP.
From my WinRunner days I used test driver scripts very extensively, with great success, due to the following benefits:
* non-programming testers can easily create and maintain batches, as they are stored in XML format
* test input files are externally configurable through mapping
* a variety of customization parameters are supported, from login credentials to prefixes and switches
* test dependencies can be established, so that if a critical test case fails, the whole branch of dependent test cases is skipped
Now I continue using test drivers and introducing them to clients.
The test driver approach has been adopted not only by client companies that do not use Quality Center; others have followed it as well, because it gives much more flexibility and robustness in automated test plan execution.
Thank you,
Albert Gareev
http://automationbeyond.wordpress.com
I echo Motti... if I get your question right, you can also take a look at the link below:
Work with Test Batch Runner
