I have migrated one of my branches to Cypress 10.6
There are other branches which should continue to run on a lower Cypress version.
When I now run npx cypress open on one of the un-migrated branches, the Migration Assistant appears instead of the old Cypress test runner.
How can I get rid of the Migration Assistant and get back to the normal test runner?
I don't want to migrate those branches yet.
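One likely cause worth checking (an assumption about this setup, not something stated in the question): node_modules is not branch-specific, so the Cypress 10 binary installed on the migrated branch survives a checkout, and npx resolves that newer binary on the old branches too. A minimal, self-contained illustration (the directory and version numbers below are made up):

```shell
# node_modules survives a branch switch, so the branch's package.json and
# the installed Cypress binary can disagree (versions here are hypothetical):
mkdir -p /tmp/unmigrated-branch && cd /tmp/unmigrated-branch
cat > package.json <<'EOF'
{ "devDependencies": { "cypress": "9.7.0" } }
EOF

# The version this branch expects:
node -p "require('./package.json').devDependencies.cypress"

# If `npx cypress --version` reports 10.x instead, reinstall this
# branch's dependencies after switching: npm ci (or npm install)
```

If the two versions disagree, running npm ci after every branch switch keeps the installed binary in sync with whatever that branch's package.json pins.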
I've got Cypress open, running an e2e test that takes about 4 minutes; I launched it from the CLI with npx cypress open.
Whilst that is open and running, I would like to use the inspector tool from Cypress to build another test.
Whilst building e2e tests, the GUI does not let me open another browser and start a second test.
What would be a viable and repeatable way to run one test whilst using the GUI to build out another test with the inspector tool?
You can just run npx cypress open a second time and have two instances of Cypress working: one for the inspector tool and one for the test running in the background.
While I am aware that Hot Restart is in preview, it has been for quite a while now, and it has never really worked for me since the first time I tried it. I've waited for a few Visual Studio versions to come out, hoping that my issue would be fixed. Since I've kept an eye on the issue tracker and haven't seen anything similar, I tend to believe the problem is local to my setup.
Here is the situation:
I've enabled Hot Restart in Visual Studio, added my account, and picked the team (after I was added), and everything went fine without errors. I have a free Apple account, but after I was added to a team I was able to get automatic provisioning to work.
I've tried to set the debug build options similar to how they were set for the simulator.
I've disabled device-specific builds, as I saw in the docs that they can create problems.
I've installed iTunes just as specified in the docs, and when I connect my device to my Windows 10 PC, I can select it from the list.
So basically everything looks set up correctly. For testing I use an iPhone 7 on the latest iOS version.
Here is what happens:
I've selected different types of output logs, hoping to catch some error in there that could lead me to a solution.
When starting the build I can see "Building offline for a local device", and I can see my provisioning profile details under "Detected signed identity".
I see no apparent error while it compiles and does its job; no complaints about signing.
Then it gets to the IpaCopyToStagingDirectory step, where it starts to copy a bunch of files from bin\iPhone\Debug to Artifacts\app.iOS.dSYM, and I get the final error: Could not copy the file bin\iPhone\Debug\\app.iOS.ipa because it was not found
If I go to the bin folder, there are many files, including app.iOS.exe, but indeed there is no .ipa file. I also don't know why the path contains a double backslash.
Any ideas why it's not working?
I have a TFS (on premises version 15.105.25910.0) server with build and release management definitions. One of the definitions deploys a web site, the test assemblies and then runs my MSTest based Selenium tests. Most pass, some are not run, and a few fail.
When I attempt to view the test results in the TFS web portal the view of "failed" test results fails and it shows the following error message:
can't run your query: bad json escape sequence: \p. path 'build.branchname', line 1, position 182.
Can anyone explain how this fault arises? Or, more to the point, what steps might I take to either diagnose this further or correct the fault?
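For what it's worth, the message itself is a plain JSON parsing failure: \p is not a valid JSON escape sequence, which typically means a literal backslash (e.g. from a branch path) ended up unescaped inside the query JSON. A small illustration using node (the branch name below is hypothetical, not taken from this TFS instance):

```shell
# "\p" is not a legal JSON escape, so parsing this fails:
printf '%s' '{"build.branchname": "$/Project\path"}' \
  | node -e "JSON.parse(require('fs').readFileSync(0, 'utf8'))" 2>/dev/null \
  || echo "bad json escape sequence"

# A backslash must be doubled inside JSON for the value to parse cleanly:
printf '%s' '{"build.branchname": "$/Project\\path"}' \
  | node -e "const o = JSON.parse(require('fs').readFileSync(0, 'utf8')); console.log(o['build.branchname'])"
```

So the question becomes where that environment injects an unescaped backslash into the query for build.branchname that the other environments do not.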
The troublesome environment and its "Run Functional Tests" task are shown below
Attempted diagnostics
As suggested by Patrick-MSFT, I added the requisite three steps to a build (the one that builds the Selenium tests):
1. Windows machine file copy (copy the MSTest assembly containing the Selenium tests to c:\tests on a test machine)
2. Visual Studio test agent deploy (to the same machine)
3. Run functional tests (the assembly shipped in step 1)
The tests run (with the same mix of pass, fail, and skipped), but the test results can be browsed just fine via the web portal's test links.
Results after hammering the same test into a different environment to see how that behaves...
Well, the same 3 steps (targeting the same test machine) in a different environment work as expected: same mix of results, but the view shows results without errors.
To be clear, this is a different (pre-existing) environment in the same release definition, targeting the same test PC. It would seem the issue is somehow tied to that specific environment. So how do I fix that?
So next step, clone the failing environment and see what happens. Back later with the results.
Try to run the tests with the same settings in a build definition instead of a release. This could narrow down whether the issue is related to your tests or to the task configuration.
Double-check that you have used the right settings in the related tasks. You could refer to the related tutorial for Selenium testing on MSDN: Get started with Selenium testing in a continuous integration pipeline
Try to run the same release in another environment.
Also, go through your log files to see if there is any related info for troubleshooting.
We use grunt-protractor-runner and have 49 specs to run.
When I run them in Sauce Labs, there are times it runs only x of the tests but not all of them. Any idea why? Are there any Sauce settings to be passed apart from user and key in my protractor conf.js?
Using SauceLabs selenium server at http://ondemand.saucelabs.com:80/wd/hub
[launcher] Running 1 instances of WebDriver
Started
.....
Ran 5 of 49 specs
5 specs, 0 failures
This kind of output is usually produced when there are "focused" tests present in the codebase. Check whether there are fdescribe or fit calls in your tests.
As a side note, to avoid focused tests being committed to the repository, we've used static code analysis: ESLint with the eslint-plugin-jasmine plugin. Then we added a "pre-commit" git hook with the help of the pre-git package that runs the ESLint task before every commit, preventing any code-style violations from being committed to the repository.
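A minimal sketch of that setup: eslint-plugin-jasmine's no-focused-tests rule is what flags fdescribe/fit, but the exact config below (and writing it to a temp directory so the sketch is self-contained) is an assumption, not the answerer's actual file:

```shell
# Install once (not run here):
#   npm install --save-dev eslint eslint-plugin-jasmine pre-git

# Enable the rule that rejects fdescribe/fit
# (written to a temp dir here only to keep the sketch self-contained):
mkdir -p /tmp/lint-demo && cd /tmp/lint-demo
cat > .eslintrc.json <<'EOF'
{
  "plugins": ["jasmine"],
  "env": { "jasmine": true },
  "rules": {
    "jasmine/no-focused-tests": "error"
  }
}
EOF
cat .eslintrc.json
```

With that in place, eslint . fails whenever a focused test is present, so wiring it into the pre-commit hook blocks those commits.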
I'm trying to set up my integration flow, and I have some tests that are quite destructive, using the velocity-cucumber package.
The first issue I've found is that these tests are run against the standard Meteor DB, which is fine on localhost and dev, but not so great for production. As far as I can tell, velocity-cucumber doesn't do anything with mirrors yet.
Because of this I have two cases where I need Meteor to launch in a specific way.
1) On the CI server I need for JUST the tests to run then exit (hopefully with the correct exit code).
2) On the production server I need Meteor to skip all tests and just launch.
Is this currently possible with Meteor command-line arguments? I'm contemplating making demeteorize part of the process and then using standard Node.js testing frameworks.
To run velocity tests and then exit, you can allegedly run meteor with the --test option:
meteor run --test
This isn't working for me, but that's what the documentation says it is supposed to do.
To disable velocity tests, run meteor with the environment variable VELOCITY set to 0. This will skip setting up the mirror, remove the red/green dot, etc.:
VELOCITY=0 meteor run
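Putting the two modes together, a launch script could branch on an environment variable. The meteor invocations are the ones from this answer, while the wrapper itself (and echoing the commands instead of executing them, so the sketch runs anywhere) is just an assumption about how you might wire it up:

```shell
# Pick the launch command for each case; commands are echoed for illustration.
launch_cmd() {
  if [ "$1" = "ci" ]; then
    # CI server: run only the velocity tests, then exit
    echo "meteor run --test"
  else
    # production: skip all velocity tests and just launch
    echo "VELOCITY=0 meteor run"
  fi
}

launch_cmd ci    # → meteor run --test
launch_cmd prod  # → VELOCITY=0 meteor run
```

In a real CI script you would run the chosen command directly and let its exit status propagate to the CI server.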