I wrote a functional test for my Symfony application and I run it from the command prompt. Now I want to get the result of the functional test as a report (in a text file or Word file, etc.).
How can I resolve this issue?
Using:
$ php symfony test:functional --xml=log.xml
you can retrieve an XML file.
If you want to integrate your test suite in a continuous integration
process, use the --xml option to force the test:functional task to generate a
JUnit compatible XML output.
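If you want a plain-text report rather than the raw XML, you can post-process log.xml yourself. Here is a minimal sketch using PHP's SimpleXML, assuming the standard JUnit layout of <testsuite> elements carrying name/tests/failures attributes (file names are just examples):
<?php
// summarize.php - condense a JUnit-style log.xml into a plain-text report
$xml = simplexml_load_file('log.xml');
$report = fopen('report.txt', 'w');
foreach ($xml->xpath('//testsuite') as $suite) {
    fprintf($report, "%s: %d tests, %d failures\n",
        (string) $suite['name'], (int) $suite['tests'], (int) $suite['failures']);
}
fclose($report);
Run php summarize.php after the test task has written log.xml.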
I'm trying to build and deploy an ASP.NET web application via TeamCity and WebDeploy.
Before you ask - I found several similar questions, but none of them worked in my case.
I'm trying to pass TeamCity parameters to MSBuild. I have a build template which defines the parameters as empty, and the build configurations override them.
I tried system properties, but they didn't work for me. What's even worse, TeamCity doesn't log MSBuild parameter values, so I can't take a look at them.
Here's the example of how I pass parameters to MSBuild in my build template:
/property:MsDeployServiceUrl=https://$(deploy_vm_name):8172/MsDeploy.axd /property:DeployIisAppPath=$(deploy_app_name) /property:SkipExtraFilesOnServer=True /property:UserName=$(deploy_username) /property:Password=$(deploy_password)
According to the documentation, the syntax is correct.
Parameters are system.deploy_app_name, system.deploy_username, system.deploy_password, system.deploy_vm_name.
The error message I get - C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\Web\Microsoft.Web.Publishing.targets(4115, 5): Invalid Web Deploy service URL.
I'm using TeamCity version 10.0.2 with MsBuild version 14.
Any suggestions? What did I miss?
So the correct move was to specify system parameters named exactly after the MSBuild parameters, and then not mention those parameters in the MSBuild step at all. After I did that, everything worked.
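For example, instead of passing /property: arguments, I defined system parameters whose names match the MSBuild properties (values here are placeholders):
system.MsDeployServiceUrl = https://myvm:8172/MsDeploy.axd
system.DeployIisAppPath = MyApp
system.UserName = deployuser
system.Password = secret
TeamCity forwards every system.* parameter to MSBuild as a property automatically, so $(MsDeployServiceUrl) and the rest resolve inside Microsoft.Web.Publishing.targets with no /property: arguments in the build step.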
I recognize it's not a very flexible solution, since you might have several MSBuild steps, but if anyone knows a better one - please share it.
I think that because you're defining these properties via arguments in a build step, you need to use the typical %teamcity.parameter% syntax instead of the $(msbuild_parameter) syntax you're currently using.
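For example, the arguments from your template would then read (same parameters, referenced with TeamCity's %...% syntax):
/property:MsDeployServiceUrl=https://%system.deploy_vm_name%:8172/MsDeploy.axd /property:DeployIisAppPath=%system.deploy_app_name% /property:SkipExtraFilesOnServer=True /property:UserName=%system.deploy_username% /property:Password=%system.deploy_password%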
Or just skip setting them on the command line entirely. You should be able to resolve the TeamCity system parameters inside the MSBuild script itself using the $(msbuild_parameter) syntax.
From the documentation you linked:
For MSBuild (Visual Studio 2005/2008 Project Files) use $(). Note that MSBuild does not support names with dots ("."), so you need to replace "." with "_" when using the property inside a build script.
You aren't inside a build script, you're outside the script defining property arguments.
I have used --dryrun along with my pybot command; I want to know exactly what it validates in a test case or in a library.
It parses all of the test suites and runs the tests, but without executing any keywords; the keywords are only parsed and validated. The main problems it catches, as listed by the user guide, are:
Using keywords that are not found.
Using keywords with wrong number of arguments.
Using user keywords that have invalid syntax.
In addition to these failures, normal execution errors are shown, for example, when test library or resource file imports cannot be resolved.
For more information see http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#dry-run
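As a quick illustration (the file name and the typo are contrived), consider this suite in demo.robot:
*** Test Cases ***
Dry Run Demo
    Log    Hello World
    Lgo    Hello World
Running pybot --dryrun demo.robot finishes almost instantly: the Log line passes without actually logging anything, and the misspelled Lgo line fails with a "No keyword with name 'Lgo' found" error.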
I feel like I've tried everything. We currently have a setup where, upon check-in to TFS, we force a build on CruiseControl.NET. In our solution we use the Chutzpah JS Test Adapter. We were able to successfully use Chutzpah.console.exe to fail the build if any of the JS tests fail; now we would like to fail the build based on coverage. I cannot find any way to have Chutzpah.console.exe output coverage to the XML file it generates.
I thought I could solve the problem by writing my own .xsl that would parse _Chutzpah.coverage.html. I was going to convert that to XML using the JUnit format that CruiseControl.NET can already interpret. Since I just care about failing the build, I was going to make the output of my transform look like more failed unit tests; in the XSL I would set the attribute failures > 0:
<?xml version="1.0" encoding="UTF-8" ?>
<testsuites>
<testsuite name="c:\build\jstest\jstest\tests\TestSpec2.js" tests="1" failures="1">
<testcase name="Coverage" time="46" />
</testsuite>
</testsuites>
But I really can't, since the incoming HTML has self-closing tags.
So now I want to just run Chutzpah.console.exe, pipe the output to a file (the console output does display the total average coverage), read that value, and fail the build if it drops below a threshold.
Is there a better option? Am I missing something? I don't know that much about CruiseControl.NET.
I think outputting to a file and parsing that is the only option left.
A pity that the coverage info is not in the XML file :-(
This is in fact more a problem with Chutzpah than with CCNet.
Think of CCNet as an upgraded task scheduler: it has a lot of options, but it relies on the input it receives from the called program. If that program cannot provide the data, you're stuck with these kinds of workarounds :-(
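If you go that route, here is a rough PowerShell sketch of such a wrapper (the test path is an example, and the regex is a guess - check which coverage line your Chutzpah version actually prints and tighten it accordingly):
# run-jstests.ps1 - fail the build when average coverage drops below a threshold
$threshold = 80
$output = & .\Chutzpah.console.exe tests\ /coverage
$output | Set-Content chutzpah.log
$line = $output | Select-String -Pattern 'coverage.*?(\d+(\.\d+)?)\s*%' | Select-Object -First 1
if (-not $line) { Write-Error 'No coverage figure found in the output'; exit 1 }
$coverage = [double]$line.Matches[0].Groups[1].Value
if ($coverage -lt $threshold) {
    Write-Error "Coverage $coverage% is below the $threshold% threshold"
    exit 1   # a non-zero exit code fails the CCNet exec task
}
CCNet's <exec> task treats a non-zero exit code as a build failure, so this slots straight into the existing build.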
There is another option, but it may be more of a trip than you're interested in embarking upon. SonarQube is a server-based tool for managing and analyzing code quality. It has a plugin called "Build Breaker", which allows you to fail the build if any number of code quality metrics are not met, including unit test coverage. Like I said, it's kind of involved because you have to set up a server and learn a new tool, but it's good to have anyway.
I use Chutzpah with the /lcov command line option so that it outputs the coverage to a file, then tell Sonar, in that project's Sonar configuration, to find the coverage in that file. Then you add a step to your CruiseControl.NET build process to run the Sonar analysis, and if you have configured the Build Breaker plugin correctly, it will fail the build in CruiseControl.NET when coverage is not at the level you specified.
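For reference, the wiring looks roughly like this (paths are examples, and the exact property name depends on the version of Sonar's JavaScript plugin - check its docs):
Chutzpah.console.exe tests\ /coverage /lcov lcov.dat
# sonar-project.properties
sonar.projectKey=myteam:jstests
sonar.sources=tests
sonar.javascript.lcov.reportPath=lcov.dat
With Build Breaker configured, an alert on the coverage metric then fails the Sonar analysis step, which CruiseControl.NET in turn reports as a failed build.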
I recently created a Meteor package and want to write some tests. What my package basically does is let users insert the template {{> abc}} and get an HTML element printed on the page.
With TinyTest, all you can do is test the package's API using something like test.equal(actual, expected, message, not). However, I need to test whether the element was successfully printed on the page. Furthermore, I will pass the template some parameters and need to test those too.
It seems like I'd have to create a dummy app, run bash to initiate the app, and test whether the elements can be found on the page. So should I only use TinyTest to test the API, and write my own tests (somehow!) for the templating? If not, how should I go about it?
I read something about Blaze.toHTML, but I cannot find anything about it in the documentation, nor in its source page.
I think TinyTest is great for starting with unit testing, but what you need sounds more like an integration test.
I would recommend you look into the following links for more info on testing with Meteor, especially with Velocity - Meteor's official testing framework:
Announcing Velocity: the official testing framework for Meteor applications
Velocity
The Meteor Testing Manual
You can create a demo application, and run integration tests using Mocha or Jasmine.
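That said, if all you need is to assert on the rendered markup, you can get a fair way inside TinyTest itself with Blaze.toHTMLWithData, which renders a template plus a data context to a plain HTML string. A minimal sketch (Template.abc and the data context are placeholders for your package's actual template):
// in your package's tests file
Tinytest.add('abc - renders the expected element', function (test) {
  var html = Blaze.toHTMLWithData(Template.abc, { title: 'Hi' });
  // crude but effective: assert on the markup string
  test.isTrue(html.indexOf('Hi') !== -1, 'output should contain the passed title');
});
For anything reactive, or anything that must live in a real DOM, the Velocity-based integration tests above are the better fit.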
We are looking forward to using Squash TA to manage our tests. The problem we are facing is that we already have a big collection of automated tests and aren't able to make them run via Squash TM using Squash TA.
Our tests use JUnit + Selenium WebDriver + the Spring Framework.
Currently we launch our automated tests via Maven (on the command line), and we have a Jenkins server running them regularly.
We tried to reuse our tests in a Squash TA project, putting them in src/squashta/resources/selenium/java
But the code in this folder doesn't even support Java packages. It's as if the Java in the example isn't real Java, but a fake Java parsed by Squash TA.
Is there any means of using such already existing tests with Squash (TA/TM)?
Or any alternatives you know of that could do the job? (We are currently using TestLink and must change.)
If your Selenium test is in:
src/squashTA/resources/selenium-test/src/main/java/org/squashtest/ta/selenium/PetStoreTest.java
then, with such a structure, the test automation script to run the Selenium test (which is in the package org.squashtest.ta.selenium) is:
TEST :
LOAD selenium-test/src/test AS seleniumTestSource
CONVERT seleniumTestSource TO script.java(compile) AS seleniumTestCompiled
CONVERT seleniumTestCompiled TO script.java.selenium2(script) USING $(org.squashtest.ta.selenium.PetStoreTest) AS seleniumTest
EXECUTE execute WITH seleniumTest AS seleniumResult
ASSERT seleniumResult IS success
If your Selenium test has dependencies on other libraries (like Spring in your case), you have to add those dependencies as dependencies of the squash-ta-maven-plugin in the pom.xml of your Squash TA project.
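For example, a sketch of that plugin section (coordinates and versions are illustrative - check them against your Squash TA release):
<plugin>
  <groupId>org.squashtest.ta</groupId>
  <artifactId>squash-ta-maven-plugin</artifactId>
  <version>1.7.0</version>
  <dependencies>
    <!-- makes Spring available when Squash TA compiles and runs the test -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>4.2.5.RELEASE</version>
    </dependency>
  </dependencies>
</plugin>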