I'm trying to execute a group of tests. I saw in the Nightwatch documentation about groups and tasks, and I'm using tasks to run specific tests. The problem is that Nightwatch recognizes the test but does not execute it.
Nightwatch returns:
Starting selenium server in parallel mode... started - PID: 1404
If you want to upload Screenshots to S3
please set your AWS Environment Variables (see readme).
Started child process for: adquirentes\antecipacao
>> adquirentes\antecipacao finished.
As you can see, the test was started but not executed. How can I fix it?
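For reference, my understanding of what a group run needs is roughly this; folder and file names below are placeholders based on the output above, not my real files:

// nightwatch.conf.js (relevant part) -- a sketch, not my full config
module.exports = {
  src_folders: ['tests'],  // the group folders live below this folder
  // selenium and test_settings omitted here
};

// tests/adquirentes/antecipacao/algumTeste.js -- hypothetical spec file
module.exports = {
  'exemplo': function (browser) {
    browser
      .url('http://localhost:3000')  // placeholder URL
      .waitForElementVisible('body', 1000)
      .end();
  }
};

My understanding is that a group can then be run with something like nightwatch --group adquirentes/antecipacao, and that a spec file which doesn't export anything makes the child process finish without executing tests.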
I'm able to deploy a .NET Core console app on PCF which raises some internal events, runs for some time (with the help of Thread.Delay()), and exits. I want to be able to start and stop this app remotely from a Windows machine, using a batch file.
When I push this app to PCF I explicitly put the --no-start flag in the push command. The app gets deployed but doesn't start, and I can start it remotely with the cf start command. Once it exits successfully, PCF considers it crashed and tries to restart it, so in order to run it again I first need to cf stop and then cf start.
I need help understanding whether there is a better way to do this. Originally we were planning to use Tasks on PCF, but as per my understanding a Task is a command that runs against another application (please correct me if I am wrong).
Any thoughts will help.
Thanks in advance.
I modified my app logic to achieve this. I did the following:
Deployed the app with the --no-start flag
In the app entry method, I check the value of the arg passed from the command line:
if arg == required_key, run the job; else return
I do a cf start, which builds and stages the app; the app gets started but no results are displayed on the console
cf stop
cf run-task APP_NAME "dotnet appname.dll required_key"
The above task runs one time and destroys itself.
On-prem TFS 2015 Update 3.
I have multiple machines (different Operating Systems) that I want to run my tests on. I'm having issues getting this simple flow to work successfully. Here's what I've tried:
The Deploy Test Agent task on multiple machines is successful.
If I put multiple machines in one "Run Functional Tests" task, it will execute the test on only ONE of the machines from step 1 (and will complete successfully if this is the first task). Logs here: One Task
If I set up 2 separate tasks, one for each machine, the 1st task will execute successfully, but as seen in bullet 2, the test is run on ANY ONE of the machines from step 1 (NOT the specific machine specified for the task). In the example attached, the 1st task is set up to run on Win7, but the test was actually executed on the Win8 machine.
Then the 2nd task (which is set up to run against the Win10 machine) will not complete, no matter what machine or test I put in it. Logs for this scenario attached: Two Tasks
It seems that the PS script(s) for this task is broken in our environment.
Thanks!
The solution is to configure the test agents separately: configure one agent, then run the tests, then configure another agent and run the tests again.
I want to get a coverage report for my Protractor E2E UI tests against running Node code.
I have tried the following steps:
Using Istanbul, I instrumented the code on one of my app servers managed through Nginx.
istanbul instrument . --complete-copy -o instrumented
Stopped the actual Node code and started the instrumented code on the same port (port 3000), without changing the Nginx config, so that any traffic hitting that app server is directed to the instrumented code running on the same server.
Ran the Protractor end-to-end tests, which are on another machine. This is a local machine that I run the tests from, while the instrumented app is on a different server.
At the end of the run, I stopped the instrumented code.
Now:
- There is no coverage variable available
- There is no Coverage Folder
- No report generated
I thought the coverage report would be generated if the instrumented code was hit through the protractor script.
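From what I can tell, the counters for server-side code accumulate in a global __coverage__ object inside the instrumented Node process and are never written to disk on their own, so I suspect I need something like the sketch below in the server's entry point (the file path and the SIGINT hook are my assumptions):

var fs = require('fs');

process.on('SIGINT', function () {
  // istanbul keeps its counters on global.__coverage__ inside this process
  if (global.__coverage__) {
    // assumes a coverage/ folder already exists next to the entry point
    fs.writeFileSync('coverage/coverage-e2e.json',
                     JSON.stringify(global.__coverage__));
  }
  process.exit(0);
});

and then something like istanbul report --root coverage lcov to turn the JSON into a report. Please correct me if that is the wrong direction.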
I also googled around and found a plugin, "protractor-istanbul-plugin", but I'm not sure if this is what I should use.
My questions:
Is it even possible to generate a coverage report if the instrumented code is on a different server and the Protractor script is run from a different machine?
If it is possible, is my assumption wrong that a report would be generated when the instrumented code is hit?
Should I use the istanbul cover command here, and if yes, how?
My goal is to instrument the code after deploying to the QA environment and then trigger the Protractor script, which sits on another machine, against the QA environment running the instrumented code.
Thanks in Advance.
We use grunt-protractor-runner and have 49 specs to run.
When I run them on Sauce Labs, there are times it just runs x number of tests but not all. Any idea why? Are there any Sauce settings to be passed over apart from user and key in my protractor conf.js?
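This is roughly what my conf.js currently passes (spec paths and environment variable names here are placeholders):

// conf.js (relevant part)
exports.config = {
  sauceUser: process.env.SAUCE_USERNAME,
  sauceKey: process.env.SAUCE_ACCESS_KEY,
  framework: 'jasmine2',
  specs: ['specs/**/*.spec.js']
};

The output of a run looks like this: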
Using SauceLabs selenium server at http://ondemand.saucelabs.com:80/wd/hub
[launcher] Running 1 instances of WebDriver
Started
.....
Ran 5 of 49 specs
5 specs, 0 failures
This kind of output is usually produced when there are "focused" tests present in the codebase. Check if there are fdescribe or fit calls in your tests.
As a side note, to avoid focused tests being committed to the repository, we've used static code analysis: eslint with the eslint-plugin-jasmine plugin. Then we've added a "pre-commit" git hook with the help of the pre-git package, which runs the eslint task before every commit, preventing any code style violations from being committed to the repository.
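The eslint side of that is just a couple of lines; a minimal sketch (file name and rule level are one possible choice):

// .eslintrc.js -- minimal sketch, merge into your existing eslint config
module.exports = {
  env: { jasmine: true },
  plugins: ['jasmine'],
  rules: {
    // fails the lint run whenever fdescribe/fit are left in a spec
    'jasmine/no-focused-tests': 2
  }
};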
I am writing e2e tests for a JS application at the moment. Since I am not a JS developer I investigated this topic for a while and ended up with the following setup:
Jasmine2 as testing framework
grunt as "build-tool"
protractor as test runner
jenkins as CI server (already in use for plenty java projects)
Although the application under test is not written in Angular, I decided to go for Protractor, following a nice guide on how to make Protractor run nicely even without Angular.
Writing some simple tests and running them locally worked like a charm. In order to implicitly wait for some elements to show up in the DOM, I used the following code in my conf.js:
onPrepare: function() {
  browser.driver.manage().timeouts().implicitlyWait(5000);
}
All my tests were running as expected, so I decided to go to the next step, i.e. installation on the CI server.
The development team of the application I want to test was already using grunt to build their application, so I decided to hook myself into that. The goal of my new grunt task is to:
assemble the application
start a local webserver running the application
run my protractor test
write some test reports
Finally I accomplished all of the above steps, but I am now dealing with a problem I cannot solve and did not find any help googling it. In order to run the Protractor tests from grunt I installed grunt-protractor-runner.
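The relevant part of my Gruntfile looks roughly like this (task and file names are simplified here):

// Gruntfile.js -- simplified sketch; 'build' and 'serve' stand in for the real tasks
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-protractor-runner');

  grunt.initConfig({
    protractor: {
      options: {
        configFile: 'test/conf.js', // the config with the onPrepare shown above
        keepAlive: false            // let a failing test fail the grunt run
      },
      e2e: {}
    }
  });

  grunt.registerTask('test-e2e', ['build', 'serve', 'protractor:e2e']);
};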
The tests are running, BUT the implicit wait is not working, causing some tests to fail. When I added some explicit waits (browser.sleep(...)), everything is OK again, but that is not what I want.
Is there any chance to get implicit waits to work when using grunt-protractor-runner?
UPDATE:
The problem does not have anything to do with grunt-protractor-runner. When using a different webserver that I start up during my task, it works again. To be more precise: using the plugin "grunt-contrib-connect" the tests pass; using the plugin "grunt-php" the tests fail. So I am looking for another PHP server for grunt now. I will update this question.
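For comparison, the grunt-contrib-connect configuration that works looks roughly like this (port and base folder are placeholders):

// Gruntfile.js excerpt
connect: {
  app: {
    options: {
      port: 3000,       // placeholder port
      base: 'dist',     // folder with the assembled app, also a placeholder
      keepalive: false  // the server stays up only for the rest of the grunt run
    }
  }
}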
UPDATE 2:
While looking for alternatives, I considered a few options and finally decided to mock the PHP part of the app.