I just want to know something about Katalon Studio. I have not worked in automation testing before, but now I have an assignment about testing in Katalon.
My client wants to test in Katalon, but his requirement is that test cases run automatically on every build. He also doesn't want to install the Katalon IDE or any library; he just wants a reference he can add to every build so that all the test cases run automatically on every dev build.
Is this possible using Katalon? Kindly help me, please. Thanks.
You have to establish a full CI pipeline for your requirements. My advice is to use Katalon with Jenkins and your developers' code repository (perhaps Git or SVN). Then you are able to implement a master/slave pipeline, where you can execute your Katalon scripts on a slave every time the dev team builds.
See:
Katalon/Jenkins Tutorial
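In practice the Jenkins job just calls Katalon's console mode as a build step after each dev build. A minimal sketch, where the project path and test suite name are placeholders for your own:

katalon -noSplash -runMode=console -projectPath="C:\Tests\MyProject.prj" -testSuitePath="Test Suites/Regression" -browserType="Chrome" -executionProfile="default"

The runner returns a non-zero exit code when tests fail, so Jenkins can mark the build result accordingly.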
I have a ReportPortal installation running on a Windows box. I am planning to use it as a dashboard to look at unit test and other automated test results. I understand that ReportPortal integration with unit test frameworks is done at the logger level, so the test app itself can send results back to the dashboard.
I have a scenario where the test application is an exe that I want to launch by sending a command from the dashboard to the system under test.
Are there any provisions for doing it?
Do I have to build an agent that talks to ReportPortal using its API for this?
Thanks
No, nothing similar at the moment.
It is a pretty popular request, so we have it in the backlog, but we are still focusing on test report aggregation first. The other types of functionality will come later.
I have a TFS server (on-premises, version 15.105.25910.0) with build and release management definitions. One of the definitions deploys a web site and the test assemblies, and then runs my MSTest-based Selenium tests. Most pass, some are not run, and a few fail.
When I attempt to view the test results in the TFS web portal the view of "failed" test results fails and it shows the following error message:
can't run your query: bad json escape sequence: \p. Path 'build.branchname', line 1, position 182.
Can anyone explain how this fault arises? Or, more to the point, what steps might I take to either diagnose this further or correct the fault?
The troublesome environment and its "Run Functional Tests" task are shown below.
Attempted diagnostics
As suggested by Patrick-MSFT, I added the requisite three steps to a build (the one that builds the Selenium tests):
1. Windows Machine File Copy (copy the MSTest assembly containing the Selenium tests to c:\tests on a test machine)
2. Visual Studio Test Agent Deploy (to the same machine)
3. Run Functional Tests (the assembly shipped in step 1)
The tests run (with the same mix of pass, fail, and skipped), but the test results can be browsed just fine with the web portal's test links.
Results after hammering the same test into a different environment to see how that behaves...
Well, the same three steps (targeting the same test machine) in a different environment work as expected: the same mix of results, but the view shows the results without errors.
To be clear this is a different (pre-existing) environment in the same release definition, targeting the same test PC. It would seem the issue is somehow tied to that specific environment. So how do I fix that then?
So next step, clone the failing environment and see what happens. Back later with the results.
Try to run the tests with the same settings in a build definition instead of a release. This could help narrow down whether the issue is related to your tests or to the task configuration.
Double-check that you have used the right settings for the related tasks. You can refer to the tutorial for Selenium testing on MSDN: Get started with Selenium testing in a continuous integration pipeline.
Try to run the same release in another environment.
Also go through your log files to see if there is any related info for troubleshooting.
I am trying to run my automated test cases, deployed on a virtual machine, by triggering them with the Octopus Deploy tool. I installed the test agent and the Octopus Tentacle on my machine. Octopus triggers the DLLs for the automated test cases very well, but when Octopus tries to run the test cases it gives me the error below:
Microsoft.VisualStudio.TestTools.UITest.Extension.UITestException: To run tests that interact with the desktop, you must set up the test agent to run as an interactive process. For more information, see "How to: Set Up Your Test Agent to Run Tests That Interact with the Desktop" (http://go.microsoft.com/fwlink/?LinkId=255012)
Error 01:59:38
If you are running the tests as part of your team build, you must also set up the build agent to run as an interactive process. For more information, see "How to: Configure and Run Scheduled Tests After Building Your Application" (http://go.microsoft.com/fwlink/?LinkId=254735)
I set up my password in the test agent and set it as an interactive process, but I am still facing the same issue.
I am triggering my DLLs as below through Octopus.
& "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe" "C:\MyWebaPP\Automated_test\Automated_test.dll"
I have tried each and every way I found. Please help me out with this.
Thanks in advance!!
We recently encountered the same problem.
During our research, we found this post on the Octopus support forum:
http://help.octopusdeploy.com/discussions/questions/5080-tentacle-running-interactive-tests
We also contacted Octopus Deploy by mail, and they essentially gave us the same response.
While we had no luck with the "scheduled task for test run" approach, we eventually managed to get it working by running the Octopus Tentacle as a process rather than a service.
The challenge here was making sure the Tentacle would start when our test machine started. We wanted this to happen automatically, so RDPing in and starting the process every time was out of the question (this also caused some additional problems for the UI test run...).
The final working solution was to schedule a task that would start the Octopus Tentacle as an interactive process whenever the machine booted (i.e. run Tentacle.exe directly), and then make sure we never RDP to this machine. Make sure the task has sufficient privileges, and that it "Runs whether the user is logged in or not". Also, remember to disable automatic startup of the Octopus Tentacle Service.
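For reference, this is roughly how such a task can be registered from an elevated prompt; the task name, install path, instance name, and service account below are placeholders for your own:

schtasks /Create /TN "Start Octopus Tentacle" /SC ONSTART /RU MYDOMAIN\svc-octopus /RP * /TR "\"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe\" run --instance=Tentacle"

Tentacle.exe run starts the Tentacle as a console process rather than as the Windows service, which is what the UI tests need.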
Edit: We had some trouble making this solution work across all our environments. It seems that for security reasons, newer versions of Windows are quite skeptical about allowing scheduled tasks to start interactive processes when there is no user logged on.
We did another search for possible solutions and came across FireDaemon Pro (commercial software), which allows us to register interactive Windows services that run under a domain user. We are not quite sure how it works, but it seems to be able to run a UI from a Windows service in session 0 (the UI is also isolated). The Octopus Tentacle starts without complaint, and the UI tests run the way we want them to.
I have written my tests in such a way that they interact with the application under test, which is deployed somewhere in the cloud (remote). I need to measure the code coverage for these tests.
I am using Maven, and I intend to use JaCoCo + Sonar for the code coverage.
What do I need to add to my POM file to get the coverage?
Do I need to change anything for the JVM where my application under test is running?
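From what I gather so far, the remote JVM would need the JaCoCo agent attached in TCP server mode, something along these lines (paths, host, and port are just examples):

java -javaagent:/path/to/jacocoagent.jar=output=tcpserver,address=*,port=6300 -jar my-app.jar

and then the build would pull the coverage data after the tests with the jacoco-maven-plugin dump goal, roughly:

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>pull-remote-coverage</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>dump</goal>
      </goals>
      <configuration>
        <!-- host and port where the remote JaCoCo agent is listening -->
        <address>my-remote-host</address>
        <port>6300</port>
        <destFile>${project.build.directory}/jacoco-it.exec</destFile>
      </configuration>
    </execution>
  </executions>
</plugin>

Is that the right direction?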
I have tried multiple solutions on the internet but none of them have given me any clear solution. I could be missing the point.
Can someone please guide me through step by step? Help would be much appreciated.
I am writing e2e tests for a JS application at the moment. Since I am not a JS developer, I investigated this topic for a while and ended up with the following setup:
Jasmine2 as testing framework
grunt as "build-tool"
protractor as test runner
jenkins as CI server (already in use for plenty of Java projects)
Although the application under test is not written in Angular, I decided to go for protractor, following a nice guide on how to make protractor run nicely even without Angular.
Writing some simple tests and running them locally worked like a charm. In order to implicitly wait for some elements to show up in the DOM, I used the following code in my conf.js:
onPrepare: function() {
  // Poll up to 5 seconds for elements to appear before failing a lookup
  browser.driver.manage().timeouts().implicitlyWait(5000);
}
All my tests were running as expected, so I decided to go on to the next step, i.e. setting everything up on the CI server.
The development team of the application I want to test was already using grunt to build their application, so I decided to hook into that. The goal of my new grunt task is to:
assemble the application
start a local webserver running the application
run my protractor test
write some test reports
Finally I accomplished all of the above steps, but now I am dealing with a problem that I cannot solve and did not find any help googling. In order to run the protractor tests from grunt, I installed the grunt-protractor-runner.
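The relevant part of my grunt-protractor-runner configuration is a minimal sketch like this (file and target names are illustrative, not our exact ones):

protractor: {
  e2e: {
    options: {
      configFile: 'conf.js', // the conf.js shown above, including the onPrepare hook
      keepAlive: false       // let the grunt task fail when tests fail
    }
  }
}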
The tests are running, BUT the implicit wait is not working, causing some tests to fail. When I add explicit waits (browser.sleep(...)) everything is OK again, but that is not what I want.
Is there any chance to get implicitly waiting to work when using the grunt-protractor-runner?
UPDATE:
The problem does not have anything to do with the grunt-protractor-runner. When using a different webserver that I start up during my task, it works again. To be more precise: with the plugin grunt-contrib-connect the tests work; with the plugin grunt-php they fail. So I am looking for another PHP server for grunt now. I will keep updating this question.
UPDATE 2:
While looking for some alternatives, I finally decided to mock the PHP part of the app.