I have created tests using Selenium 2, and I'm using the Selenium standalone server to run them.
The problem is that if I run one test, it works. If I run multiple tests, some of them fail. If I then re-run a failed test, it passes.
Could the tests be running on threads?
I've used the NUnit GUI and TeamCity to run the tests ... both give the same result: different tests fail; run again, and other tests fail.
Any thoughts?
EDIT
The tests shouldn't depend on one another. The database is emptied and repopulated for every test.
I guess the only problem could be that the database is not emptied correctly ... but then running the same test multiple times should also make it fail.
EDIT2
The tests fail with "element not found".
I'll try adding a "WaitForElement" helper that retries every few milliseconds; maybe that will fix it.
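For reference, a rough sketch of the kind of helper I have in mind. I'm sketching it with the Ruby Selenium bindings for brevity (my suite is .NET, where WebDriverWait plays the same role), and the URL, locator, and timeout below are made-up placeholders:

require 'selenium-webdriver'

# Poll for an element instead of failing on the first "element not found".
# Wait#until retries the block every `interval` seconds and only raises a
# timeout error once `timeout` seconds have passed.
def wait_for_element(driver, locator, timeout: 10)
  wait = Selenium::WebDriver::Wait.new(timeout: timeout, interval: 0.25)
  wait.until { driver.find_element(locator).displayed? }
end

driver = Selenium::WebDriver.for :firefox
driver.get 'http://localhost/my-app'           # placeholder URL
wait_for_element(driver, id: 'order-total')    # placeholder locator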
Without knowing the exact errors that are thrown, it's hard to say. The usual causes of flakiness are waits that aren't set to a sensible length, or a web server that can't handle that many requests.
If the DB is on the same machine as the web server (and why shouldn't it be, on a build box?), clearing it out can be resource-intensive.
I would recommend going through each of the errors, making the test bulletproof against that one, and then moving on to the next. I know people who run their tests all the time without flakiness, so it's definitely an environmental thing that can be sorted.
I know I'm a bit late to the party here, but are you using a single window to run your tests? I had a similar issue: the site I'm testing has only one page-load event, so waiting for elements or pausing the test became very dodgy, and different tests passed each time. Adding a ton of wait time didn't help at all until I simply opened a new, "clean" browser for each test. Testing gets slower, but it worked.
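For illustration, here's roughly what "one clean browser per test" looks like. I'm sketching it with the Ruby bindings and RSpec hooks purely as an example; in your NUnit setup the per-test setup/teardown methods would play the same role, and the browser choice here is arbitrary:

require 'rspec'
require 'selenium-webdriver'

RSpec.configure do |config|
  # Start a brand-new browser before every test and quit it afterwards,
  # so no windows, cookies, or half-loaded pages leak between tests.
  config.before(:each) { @driver = Selenium::WebDriver.for :firefox }
  config.after(:each)  { @driver.quit if @driver }
end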
We have a test suite that runs for 30 minutes or more, and it is not uncommon to see failures appear in the progress output early in the run, with the details only printed once the whole suite has finished.
I generally don't want to stop on the first failure (--stop-on-failure), but in this specific case I also don't want to wait another 10-15 minutes for the suite to finish. If I press Ctrl+C, the process stops immediately and I don't see any messages at all.
I'm specifically interested in the format PHPUnit uses in the console, which I find very useful. For example, logging to a file with --testdox-text produces a nice but not very detailed list, while logging with --log-teamcity is verbose and quite technical.
Is there a way to see "console-like" PHPUnit messages before the test suite fully finishes?
Update: I've also opened an issue in the official repo.
Maybe you could register a listener in your phpunit.xml file and implement its addFailure() method to print some info to the console or to a file as each failure happens...
P.S.: No, I do not want to debug my script. It is pretty awesome.
The problem is the application under test: if I place a few orders, it crashes. So what I want to do is this: mid-execution, when I see that the application has crashed, pause the test script, bring the application back up and running, and then resume the test.
I know this isn't the point in time when I should be running test scripts, since the application isn't stable enough, but the developers are working on it and hopefully they will fix it soon. I'm just curious whether there is a solution, because I couldn't find one. Of course, I could build bringing the application back up after a crash into the tests themselves, but that is not what I want to do.
My system:
OS: Linux Mint
Tests: Watir (Ruby) + Cucumber on Chrome
I run the tests from a Linux terminal using Cucumber tags.
I just want to know, in general, if there is any way to pause and resume execution. For example, when I want to stop all the tests, I interrupt from the command line with Ctrl+C. Is there a similar command to pause and then resume?
Okay, since you want a "general" answer, here goes...
Based on your context, you are looking for a "crashed" condition in your project.
My own approach to solving this problem would be to write a helper method that looks for this condition and, if it holds, "pauses". For example...
def pause_if_crashed
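  # if the product price is missing, assume the app has crashed and
  # give it time to be brought back up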
  sleep 30 if @browser.product_price.nil?
end
Then I would sprinkle this helper method in likely "crash" spots in my other functional methods.
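For example, a hypothetical functional method might look like this (place_order and the locators are made up for illustration):

def place_order(item_name)
  @browser.link(text: item_name).click
  pause_if_crashed  # the app tends to die right after an order is placed
  @browser.button(id: 'checkout').click
end

If 30 seconds isn't always enough, the sleep could be replaced with a loop that polls until the condition clears.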
Without more specifics about your needs, this is about as helpful as I can get, I think.
Since I started using git, I've had my RPROMPT set up to show the current branch. I've recently been using some of the "fancy" scripts that show staged/unstaged file counts and other useful at-a-glance info. (https://github.com/olivierverdier/zsh-git-prompt/tree/master)
After using this for a week or two, its performance started to bother me.
Are there faster ways of getting this info, or are there ways to write the RPROMPT asynchronously? I don't want to wait to type a command while the RPROMPT is computed, and I'd be perfectly happy with it popping in slightly later than my main PROMPT.
No offense to the aforementioned script; it's great. I'm just impatient.
We have some unreliable tests - unreliable for environmental reasons.
We'd like to see a history of which tests have failed the most often, so we can drill into why and fix the environment issue that causes that particular failure or class of failure.
Is this possible in TeamCity 6.0.3?
We know about individual test history (although that page is really hard to remember how to find!), but that pre-supposes we already know what we're actually trying to find out.
If you go to the "Current problems" tab for a project, there is a link like "tests failed within 120 hours" at the top. There are some statistics there that may be relevant to what you're looking for.
UPDATE: In newer versions of TeamCity, this page is no longer available. But there is a new Flaky tests tab, which shows information about tests that fail unpredictably, and this page includes test failure counters.
Each time I run a test using TestDriven, it creates another "rocket" icon on my system tray. I have to manually do right-click Quit to get rid of them. How can I avoid this?
Check for any open file handles you may be creating in your tests. Depending on the size of your test suite, that may be too time-consuming and tedious. There's an option in TestDriven.Net's settings to turn off caching the test process between test runs, which seems designed for exactly the situation you're seeing. From their documentation:
Cache test process between test runs
By default the external test process will be cached when the ‘Run Test(s)’ command is used. This process appears in the tool tray as a rocket icon which can be used to kill the process. This is fine unless one of your tests starts leaking native resources (such as leaving open a file handle). The best solution is to fix the resource leak, but you now have the option to work around the issue by killing the test process at the end of each test run. This can be useful if the resource leak is in a 3rd-party DLL which can't easily be changed.
From here: http://weblogs.asp.net/nunitaddin/archive/2008/12/03/testdriven-net-options-pane.aspx
I realize you asked this a year ago, so you may have already figured out a way to fix the problem. In that case, I would ask that you let us know what you did.