I tried to run Serverspec tests as shared behaviors on multiple servers, following http://serverspec.org/advanced_tips.html. When I run the tests, I get a "No examples found" message every time. How can I solve this problem?
If you are using Test Kitchen, you can create a helpers directory and place all your shared behaviours in there; a rough sketch of the idea is shown below.
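For instance, a shared behaviour can live in a helper file that every host spec pulls in via spec_helper (a minimal sketch only; the paths, the :exec backend, and the nginx checks are assumptions for illustration, not your actual setup):

# spec/spec_helper.rb  -- loads serverspec and every shared behaviour
require 'serverspec'
set :backend, :exec   # or :ssh when targeting remote hosts
Dir[File.expand_path('helpers/**/*.rb', __dir__)].each { |f| require f }

# spec/helpers/web_server.rb  -- the shared behaviour itself
shared_examples 'a web server' do
  describe service('nginx') do
    it { should be_running }
  end
  describe port(80) do
    it { should be_listening }
  end
end

# spec/web01/web_spec.rb  -- a per-host spec that reuses it
require 'spec_helper'
describe 'web01' do
  it_behaves_like 'a web server'
end

Note that the shared behaviour file must actually be loaded (here via spec_helper) and that it_behaves_like has to sit inside a describe block, otherwise RSpec will report that it found no examples.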
Here is a good example:
http://artplustech.com/automated-infrastructure-testing-with-serverspec/
It is quite complex, because it shows the power of writing customized tests for specific apps installed on the infrastructure, which depend on the environment.
I am trying to run my automated test cases, which are deployed on a virtual machine, and to trigger them with the Octopus Deploy tool. I installed the test agent and the Octopus Tentacle on my machine. Octopus triggers the DLLs for the automated test cases just fine, but when it tries to run the test cases it gives me the error below:
Microsoft.VisualStudio.TestTools.UITest.Extension.UITestException: To run tests that interact with the desktop, you must set up the test agent to run as an interactive process. For more information, see "How to: Set Up Your Test Agent to Run Tests That Interact with the Desktop" (http://go.microsoft.com/fwlink/?LinkId=255012)
Error 01:59:38
If you are running the tests as part of your team build, you must also set up the build agent to run as an interactive process. For more information, see "How to: Configure and Run Scheduled Tests After Building Your Application" (http://go.microsoft.com/fwlink/?LinkId=254735)
I set up my password in the test agent and configured it to run as an interactive process, but I am still facing the same issue.
I am triggering my DLLs through Octopus as shown below:
& "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe" "C:\MyWebaPP\Automated_test\Automated_test.dll"
I have tried every approach I could find. Please help me out with this.
Thanks in advance!
We recently encountered the same problem.
During our research, we found this post on the Octopus support forum:
http://help.octopusdeploy.com/discussions/questions/5080-tentacle-running-interactive-tests
We also contacted Octopus Deploy by mail, and they essentially gave us the same response.
While we had no luck with the "scheduled task for test run" approach, we eventually managed to get it working by running the Octopus Tentacle as a process rather than a service.
The challenge here was making sure the Tentacle would start when our test machine started. We wanted this to happen automatically, so RDPing in and starting the process every time was out of the question (this also caused some additional problems for the UI test run...).
The final working solution was to schedule a task that would start the Octopus Tentacle as an interactive process whenever the machine booted (i.e. run Tentacle.exe directly), and then make sure we never RDP to this machine. Make sure the task has sufficient privileges, and that it "Runs whether the user is logged in or not". Also, remember to disable automatic startup of the Octopus Tentacle Service.
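For reference, the boot-time task can be registered along these lines (a sketch only; the install path, instance name, and service account are assumptions, so adjust them to your setup):

schtasks /Create /TN "Octopus Tentacle (interactive)" /SC ONSTART /RL HIGHEST ^
  /RU MYDOMAIN\TestRunner /RP ^
  /TR "\"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe\" run --instance=Tentacle"

Running Tentacle.exe with the run command keeps it in the foreground as an ordinary process instead of a Windows service, which is what lets the UI tests interact with the desktop.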
Edit: We had some trouble making this solution work across all our environments. It seems that for security reasons, newer versions of Windows are quite skeptical about allowing scheduled tasks to start interactive processes when there is no user logged on.
We did another search for possible solutions and came across FireDaemon Pro (commercial software), which allows us to register interactive Windows services that run under a domain user. We are not quite sure how it works, but it seems to be able to run a UI from a Windows service in session 0 (the UI is also isolated). The Octopus Tentacle starts without complaint, and the UI tests run the way we want them to.
We have a set of around 2000 automated test cases, and we need to run them daily on every new build that comes up. It currently takes 4 hours to finish the tests on one machine. To reduce this, we are planning to run the tests in batches (500 per batch) on the same machine by starting multiple browsers of the same type, say 4 Firefox sessions per test suite, so the run can finish in about 1 hour. Is it possible to achieve this using Selenium WebDriver and TestNG? Please suggest.
It is possible using Selenium Grid and TestNG. Grid can help distribute your tests across various machines or browser instances as you require. To get started, refer to Grid2.
I think you might need to change your driver instantiation code a bit to use RemoteWebDriver instead of the concrete drivers, but that should be straightforward if the driver instantiation code in your framework is isolated. TestNG and Grid can help, provided your tests are written to support parallel execution.
For TestNG, you can refer to the documentation on parallel running in TestNG.
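As a rough sketch of the change (the hub URL and the per-method setup/teardown are assumptions, not your framework's actual code), the tests mostly just need to ask the Grid hub for a session:

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public class GridTestBase {
    protected WebDriver driver;

    @BeforeMethod
    public void setUp() throws Exception {
        // Request a Firefox session from the Grid hub instead of newing up FirefoxDriver.
        driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"),
                                     DesiredCapabilities.firefox());
    }

    @AfterMethod
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}

In testng.xml you would then set parallel="tests" (or "methods") and thread-count="4" on the suite element, so that four browser sessions run at the same time.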
If you are using Python, the best way to go is to use py.test to drive the tests. To distribute the tests, the pytest-xdist plugin can be used.
In any case, for both Java and Python you can use Jenkins to run and distribute your tests (using the Selenium plugin).
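For completeness, with pytest-xdist installed the split is a one-liner (the test directory name is just a placeholder):

py.test -n 4 tests/

The -n 4 option spreads the collected tests across four worker processes.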
I have created tests using Selenium 2, and I'm also using the Selenium standalone server to run the tests.
The problem is that if I run one test, it works. If I run multiple tests, some of them fail. If I then run a failed test on its own, it works.
Could the tests be running on threads?
I've used the NUnit GUI and TeamCity to run the tests; both give the same results: different tests fail, and when run again, other tests fail.
Any thoughts?
EDIT
The tests shouldn't depend on one another. The database is emptied and repopulated for every test.
I guess the only problem could be that the database is not emptied correctly ... but then if I run the same test multiple times it should also fail.
EDIT2
The tests fail with "element not found".
I'll try adding a "WaitForElement" that retries every few milliseconds, and maybe that will fix it.
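Something along these lines is what I have in mind (just a sketch; the helper name, timeout, and polling interval are arbitrary):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class WaitExtensions
{
    // Polls for the element every 250 ms until it appears or the timeout expires.
    public static IWebElement WaitForElement(this IWebDriver driver, By locator, int timeoutSeconds = 10)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds))
        {
            PollingInterval = TimeSpan.FromMilliseconds(250)
        };
        wait.IgnoreExceptionTypes(typeof(NoSuchElementException));
        return wait.Until(d => d.FindElement(locator));
    }
}

// Usage: var button = driver.WaitForElement(By.Id("submit"));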
Without knowing the exact errors that are thrown, it's hard to say. The usual causes of flakiness are waits that aren't given a decent time, or a web server that can't handle that many requests.
If the DB is on the same machine as the web server (and why shouldn't it be, on a build box), clearing it out can be intensive.
I would recommend going through each of the errors and making the test bulletproof against that one before moving to the next. I know people who run their tests all the time without flakiness, so it's definitely an environmental thing that can be sorted out.
I know I'm a bit late to the party here, but are you using a single window to run your tests? I had a similar issue: the site I'm testing has only one page-load event, so waiting for elements or pausing the test became very dodgy, and different tests passed each time. Adding a ton of wait times didn't work at all until I just opened a new "clean" browser for each test. Testing does get slower, but it worked.
We are a MSFT shop with a far-reaching MSDN license.
After many years of doing things wrong, we finally have to start doing automated testing.
My group is the guinea pig for this. We need to create what was not there before. We looked at the multitude of options out there. Some people get by just fine with open-source alternatives such as CC.Net, Bamboo, MbUnit, etc. We want to give MSTest, Coded UI, and Team Build a good try; we might as well, given the MSDN licensing and MSFT focus.
The plus and minus of doing things the MSFT way is that MSFT makes monolithic things. You have to install various tools that play nicely with each other, but not necessarily with outsiders. The plus is that when things are done correctly, it should all function rather smoothly. There is the option of gated check-ins, of using TFS to store the reports, etc.
Frankly, I am confused by all of the options. Our traditional build system was hacked together with a bunch of perl, batch scripts, and executables, but now the build team has switched to Team Build, which ought to be cleaner, but for the most part it is just a wrapper around the same old perl crap.
I am inclined to hack things together for testing too, because I can at least see what the pieces are. So, I envision the poor man's version as:
* A dedicated fast computer to run tests
* Some script to copy build files (test code as well as product code) over to that computer.
* A batch/perl script which would run mstest.exe from the command line and execute a few test batches, using some by-category filter, within some test DLLs (the product is so huge that we do want to organize tests by various categories).
* Some script which will invoke the latter script remotely from the build server using psexec.exe (http://technet.microsoft.com/en-us/sysinternals/bb897553), as well as grab the XML output from a shared drive and then send out an email with the results to those who are interested (a rough sketch of these last two steps follows below).
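Roughly what I have in mind for those two steps (machine names, categories, and paths are made up):

rem On the test machine: run one category of tests and keep the .trx results file
"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe" ^
  /testcontainer:D:\drop\Product.Tests.dll ^
  /category:"Smoke" ^
  /resultsfile:D:\results\smoke.trx

rem On the build server: kick the batch off remotely and collect the output
psexec \\TESTBOX01 -u MYDOMAIN\buildsvc -p <password> cmd /c D:\scripts\run-tests.bat
copy \\TESTBOX01\results\smoke.trx \\fileshare\build-results\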
This can probably work, but then I have to worry about how well error handling can work with so many potential points of failure. It would be nice to configure things the "right way", taking advantage of whatever MSFT has cooked up. I am just not sure where to turn for a good guide. Have you done something like this?
Eventually we will want to have a farm of test computers if we run out of the allotted time. Something else of concern: for Coded UI tests to succeed, I think a user has to be logged in, so I am not sure psexec will be of much help there.
Can you share your positive/negative experience, point me to a good guide perhaps? Thanks!
Here are some tips off the top of my head if you want to get started with testing using the MS tools:
If you have an MSDN subscription, install a test rig by installing the Test Controller on your network and the Test Agent service on each of the machines that will be collecting diagnostic data. See the following link for reference: http://msdn.microsoft.com/en-us/library/dd293551.aspx.
Add a Test Project to your solution. See the first part of the following blog post: http://blogs.microsoft.co.il/blogs/eranruso/archive/2010/03/27/visual-studio-2010-coded-ui-test-user-guide-create-a-simple-coded-ui-test.aspx.
Automated test options can be configured through the .testsettings file(s) that are added automatically when you add a Test project (you can also manually add these files to your solution).
Install Team Foundation Server (2010 recommended) in order to take advantage of automating your tests with a daily build. You will also need TFS 2010 if you want to use the VS2010 Test Manager tool to define test environments and plan manual tests (these can be fully automated with Coded UI). Customize your new automated build to set up / deploy your application after the build, and set the build to run tests. Deployment will likely not be necessary for unit tests, but it will be for Web Performance and Coded UI test types.
If you have VS Ultimate or Test Professional licenses, you can also go further and set up virtual test labs using "Lab Management" features.
We are in the process of implementing automated regression testing for our applications, and we are looking for a solid batch-testing utility. We have QuickTest Professional 10.0, and it comes bundled with 'Test Batch Runner', which appears to be deprecated. Previous versions had 'Multi-Test Manager', which has been discontinued as well.
What alternatives exist, if any?
The canonical way to do this is via Quality Center. If you don't have QC, you can use QTP's automation object model from a .vbs file. The documentation for this is available under Start -> Programs -> QuickTest Professional -> Documentation -> Automation Object Model Reference.
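A bare-bones batch driver along these lines should give the idea (the test paths are placeholders, and error handling and run-settings options are left out):

' run_batch.vbs -- open and run a list of QTP tests one after the other
Dim qtApp, qtTest, testPath
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True

For Each testPath In Array("C:\Tests\Login", "C:\Tests\Checkout")
    qtApp.Open testPath, True              ' open the test read-only
    Set qtTest = qtApp.Test
    qtTest.Run                             ' runs synchronously with the default results location
    WScript.Echo testPath & ": " & qtTest.LastRunResults.Status
    qtTest.Close
Next

qtApp.Quit
Set qtTest = Nothing
Set qtApp = Nothing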
QTP 10 works very well with Multi-Test Manager V8.2.4.
We use it for our project (we previously used it with QTP 9.2).
Try googling for an installer (if you don't have one); it should be free, just no longer supported by HP.
Since WinRunner times I have used Test Driver scripts very extensively, with great success, due to the following benefits:
non-programming testers can easily create/maintain batches, as they are stored in XML format (see the illustrative sketch after this list)
test input files are externally configurable through mapping
a variety of customization parameters supported, from login credentials to prefixes and switches
test dependencies can be established, so that if a critical test case fails, the whole branch of dependent test cases is skipped.
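Purely as an illustration of the concept (the element and attribute names here are invented, not an actual tool's format), such a driver batch might look like:

<TestBatch name="SmokeRun" login="qa_user" environment="staging">
  <TestCase id="TC001" path="Tests/Login" critical="true" />
  <TestCase id="TC002" path="Tests/CreateOrder" dependsOn="TC001" />
  <TestCase id="TC003" path="Tests/CancelOrder" dependsOn="TC002" />
  <Mapping inputFile="data/orders.xls" mapTo="OrderData" />
</TestBatch>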
Now I continue using Test Drivers and introducing them to clients.
The Test Driver approach has been adopted not only by client companies that do not use Quality Center; others followed it because it gives much more flexibility and robustness in automated test plan execution.
Thank you,
Albert Gareev
http://automationbeyond.wordpress.com
I echo Motti... if I get your question right, you can also look at the link below:
Work with Test Batch Runner