When using an automated build system, is it usually a source control commit that triggers the tests (though I assume this can be configured not to run on every commit in a large team)? How come build applications have actions for source code check-ins? Is there any need for this? So to summarise, is a build script executed on a source control commit or at a certain time every day?
Also, the term "break the build" - does this mean code is committed to source control and, when the build is executed, it fails because the code doesn't pass a unit test or a code coverage tool reports results below a certain threshold?
Finally, what does a "step" mean (e.g. a one-step build)?
Thanks
So to summarise, is a build script executed on a source control commit or at a certain time every day?
This depends. Some teams use a commit in the version control system as the trigger; some teams use a time-based event as the trigger (e.g. each hour). If you run the build after each change, you get immediate feedback. If you let some time pass between two builds, you delay that feedback and, in case of a build failure, it's harder to identify the change(s) that caused it. It may require more investigation.
Just to clarify the vocabulary: for me, "the build" is actually the script/tool that automates all the things that need to be done (compiling, running tests, etc). Running this automated build continuously is what people call "continuous integration". And triggering a build on an event (time-based or on commit), pulling the sources from the repository, running the build script and notifying people in case of failure is the responsibility of a "continuous integration engine".
Also, the term "break the build" - does this mean code is committed to source control and, when the build is executed, it fails because the code doesn't pass a unit test or a code coverage tool reports results below a certain threshold?
This is very binary indeed: the build passes, or it doesn't. When it doesn't, there can be many reasons: the code didn't compile, a test failed, a quality check failed (coding standards, code coverage, etc). If you commit some code that causes a build failure (whatever the reason is), then you "broke the build".
Finally, what does a "step" mean (e.g. a one-step build)?
In my opinion, a one-step build means that you can build your entire application, run the tests, run the quality checks, produce reports, assemble the application, deploy it, etc. with one command. This is a synonym for an automated build (if you can't run your build in one step, i.e. if it requires human intervention, then it isn't fully automated).
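As an illustration, here is a minimal sketch of such a one-step build driver written as a small console program. The dotnet commands and the solution name are placeholders for whatever your toolchain actually uses; the point is simply that one command runs every stage and the first failing stage breaks the build.

```csharp
// One-step build sketch: a single entry point runs every stage, and the first
// non-zero exit code fails ("breaks") the whole build. Commands are placeholders.
using System;
using System.Diagnostics;

class OneStepBuild
{
    static int Run(string fileName, string arguments)
    {
        using (var p = Process.Start(new ProcessStartInfo(fileName, arguments) { UseShellExecute = false }))
        {
            p.WaitForExit();
            return p.ExitCode;
        }
    }

    static int Main()
    {
        var steps = new[]
        {
            ("dotnet", "build MySolution.sln"),   // compile
            ("dotnet", "test MySolution.sln"),    // run tests and quality checks
            ("dotnet", "pack MySolution.sln")     // assemble the application
        };

        foreach (var (file, args) in steps)
        {
            Console.WriteLine("Running: " + file + " " + args);
            if (Run(file, args) != 0)
                return 1;   // any failing stage breaks the build
        }

        Console.WriteLine("Build succeeded");
        return 0;
    }
}
```

In practice the single entry point is usually an existing build tool (MSBuild, Maven, sbt, make, a shell script, ...) rather than hand-written code, but the behaviour is the same: one command, a binary pass/fail result.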
Also, the term "break the build" - does this mean code is committed to source control and, when the build is executed, it fails because the code doesn't pass a unit test or a code coverage tool reports results below a certain threshold?
This could mean different things depending on company, project or team.
Usually "build" is some reference (usually automated) procedure which is either succeeds or not.
Thus "breaking the build" is doing something that leads failing of this reference procedure.
It could include or exclude unit-tests running, or regression test running, or deployment of your product, or whatever your team thinks should never fail.
In UFT 14.50 (but I don't think this is version-specific), I face the following problem:
For action-based GUI tests, I can configure for each test using File/Settings/Run what should happen in case of error:
For BPT GUI-based components, I cannot; there is no "Run" section:
Also in the application area for the component, which would be the second place where it would make sense to put this setting, there is no such setting.
I do understand that I can set this setting programmatically using <App>.<TestOrComponent>.Settings.Run.OnError.
I also understand that one can configure this setting in the execution settings for each component call in a BPT test flow script or BPT test script, but what about interactive component execution for debugging purposes? Do I have to change this setting in every component programmatically at runtime if I want to define or change it when doing debug runs in the UFT IDE? Is this correct?
Bonus question: what is the rationale for hiding this setting for a given component?
It's a good question, and one I had to think about as there are quite a few components involved.
You build your code for BPT application areas in UFT, but the BPT test itself is designed to be managed and executed from ALM. Around version 11 or 12, HP (the vendor at the time) updated the remote agent to have "debug mode" (the top option in these settings):
If you've not seen this window, you get it by right-clicking the remote agent in your system tray:
(you can also set it by updating the mic.ini file - shout if you need more info on this)
I've not used BPT for about a decade, but I am very aware of that option in the remote agent. Potentially, the run option you're after doesn't exist because, for BPT and its ALM dependency, it's now all controlled through that debug run setting.
With it CHECKED, when you run a test from ALM you get the popup on error.
With it UNCHECKED, when you run a test from ALM and it hits an issue, the error popup is suppressed.
You don't need to set your options programmatically.
Bonus points answer: logically this makes sense, as you're potentially kicking off an entire test set from ALM, and if one bad object/line in the first test blocked an entire overnight run you'd be fairly angry. At least this way your local machine can do a debug run while all remote execution machines have it unchecked, so they just keep going. It becomes a machine configuration and not a script configuration.
If this doesn't work as you expect, there are other ways, such as using a common function library with environment variables to set everything to debug mode or to carry-on mode.
I have a TFS server (on-premises, version 15.105.25910.0) with build and release management definitions. One of the definitions deploys a web site and the test assemblies, and then runs my MSTest-based Selenium tests. Most pass, some are not run, and a few fail.
When I attempt to view the test results in the TFS web portal, the view of "failed" test results fails and shows the following error message:
can't run your query: bad json escape sequence: \p. path 'build.branchname', line 1, position 182.
Can anyone explain how this fault arises? Or, more to the point, what steps might I take to either diagnose this further or correct the fault?
The troublesome environment and its "Run Functional Tests" task are shown below
Attempted diagnostics
As suggested by Patrick-MSFT, I added the requisite three steps to a build (the one that builds the Selenium tests):
1. Windows machine file copy (copy the MSTest assembly containing the Selenium tests to c:\tests on a test machine)
2. Visual Studio test agent deploy (to the same machine)
3. Run functional tests (the assembly copied in step 1)
The tests run (with the same mix of pass, fail and skipped), but the test results can be browsed just fine with the web portal's test links.
Results after hammering the same test into a different environment to see how that behaves...
Well, the same 3 steps (targeting the same test machine) in a different environment work as expected: same mix of results, but the view shows the results without errors.
To be clear, this is a different (pre-existing) environment in the same release definition, targeting the same test PC. It would seem the issue is somehow tied to that specific environment. So how do I fix that?
So next step, clone the failing environment and see what happens. Back later with the results.
Try to run the tests with the same settings in a build definition instead of a release. This could narrow down whether the issue is related to your tests or to the task configuration.
Double-check that you have used the right settings for the related tasks. You could refer to the related tutorial for Selenium testing on MSDN: Get started with Selenium testing in a continuous integration pipeline.
Try to run the same release in another environment.
Also go through your log files to see if there is any related info for troubleshooting.
Is there a way to prevent a build step from failing the overall build?
I have one (sbt) build step that should fail the build if the tests fail, and then another step that might exit with a non-zero status code, should only run if the first step was successful, but should not have an impact on the overall build status. Is that at all possible?
No, the TeamCity developers still haven't introduced editable conditions for build steps. You can use a PowerShell script to run your command and control the outgoing status of the step (for example, always exit with 0 so the step never fails the build).
You might be able to do this by modifying the teamcity-info.xml file for the build.
I have created tests using Selenium 2; I'm also using the Selenium standalone server to run the tests.
The problem is that if I run one test, it works. If I run multiple tests, some of them fail. If I then try to run a failed test, it works.
Could the tests be running on threads?
I've used the NUnit GUI and TeamCity to run the tests; both give the same results: different tests fail, and when run again, other tests fail.
Any thoughts?
EDIT
The tests shouldn't depend on one another. The database is emptied and repopulated for every test.
I guess the only problem could be that the database is not emptied correctly ... but then if I run the same test multiple times it should also fail.
EDIT2
The tests fail with "element not found".
I'll try and add a "WaitForElement" that retries every few milliseconds and maybe that will fix it.
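For reference, a minimal sketch of such a helper using WebDriverWait from the Selenium .NET support library (OpenQA.Selenium.Support.UI); the timeout and polling interval shown here are arbitrary placeholders:

```csharp
// "WaitForElement" sketch: retry the lookup until the element appears or the
// timeout expires, instead of failing on the first "element not found".
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class WaitHelpers
{
    public static IWebElement WaitForElement(IWebDriver driver, By locator, int timeoutSeconds = 10)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds))
        {
            PollingInterval = TimeSpan.FromMilliseconds(250)   // retry every 250 ms
        };
        wait.IgnoreExceptionTypes(typeof(NoSuchElementException));
        return wait.Until(d => d.FindElement(locator));
    }
}
```

Usage would be something like WaitHelpers.WaitForElement(driver, By.Id("loginButton")) in place of a bare driver.FindElement call (the locator is just an example); an explicit wait per lookup tends to be far more reliable than sprinkling fixed sleeps through the tests.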
Without knowing the exact errors that are thrown, it's hard to say. The normal causes of flakiness tend to be waits that are not set to a decent time, or a web server that can't handle that many requests.
If the DB is on the same machine as the web server (and why shouldn't it be, on a build box?), it can be intensive to clear it out.
I would recommend going through each of the errors, making the tests bulletproof against that one, and then moving on to the next. I know people who run their tests all the time without flakiness, so it's definitely an environmental thing that can be sorted.
I know I'm a bit late to the party here, but are you using a single window to run your tests? I had a similar issue: since the site I'm testing has only one page load event, waiting for elements or pausing the test became very dodgy and I had different tests passing each time. Adding a ton of wait time didn't work at all until I just opened a new "clean" browser for each test. Testing does get slower, but it worked.
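If it helps, here is a rough sketch of that pattern with NUnit and the Selenium .NET bindings; the fixture name, ChromeDriver and the URL are just placeholders for your own setup:

```csharp
// Fresh browser per test: no window or session state is shared between tests.
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class MyPageTests   // hypothetical fixture name
{
    private IWebDriver driver;

    [SetUp]
    public void StartBrowser()
    {
        driver = new ChromeDriver();          // brand new browser for every test
    }

    [TearDown]
    public void StopBrowser()
    {
        driver.Quit();                        // close it even when the test failed
    }

    [Test]
    public void PageLoads()
    {
        driver.Navigate().GoToUrl("http://localhost/myapp");   // placeholder URL
        Assert.That(driver.Title, Is.Not.Empty);
    }
}
```

The trade-off is exactly the one described above: each test pays the browser start-up cost, but in exchange no test can be poisoned by whatever the previous one left behind.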
We have some unreliable tests - unreliable because of environmental reasons.
We'd like to see a history of which tests have failed the most often, so we can drill into why and fix the environment issue that causes that particular failure or class of failure.
Is this possible in TeamCity 6.0.3?
We know about individual test history (although that page is really hard to remember how to find!), but that pre-supposes we already know what we're actually trying to find out.
If you go to the "Current problems" tab for a project, there is a link like "tests failed within 120 hours" at the top. There are some statistics there which may be relevant to what you're looking for.
UPDATE: in newer versions of TeamCity, this page is not available. However, there is a new "Flaky tests" tab, which shows information about tests that fail unpredictably, and this page includes test failure counters.