I need to execute one automated test step while another step is still running, so two checks should run in parallel.
I can not use the AND option, because the two checks are two different steps in the code.
Example: check that the car is driving and the doors are closed at the same time.
Are there any options available to execute two Cucumber steps in parallel?
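One possible workaround (Cucumber itself has no built-in support for running two steps concurrently) is to fold both checks into a single step and run them on separate threads. A minimal sketch in Java with cucumber-jvm; isDriving() and areDoorsClosed() are hypothetical placeholders for your own checks against the system under test:

    import java.util.concurrent.CompletableFuture;
    import io.cucumber.java.en.Then;
    import static org.junit.Assert.assertTrue;

    public class CarSteps {

        // Both checks start at (almost) the same time on separate threads;
        // join() waits for each one to finish before asserting.
        @Then("the car is driving and the doors are closed")
        public void carIsDrivingAndDoorsAreClosed() {
            CompletableFuture<Boolean> driving =
                    CompletableFuture.supplyAsync(this::isDriving);
            CompletableFuture<Boolean> doorsClosed =
                    CompletableFuture.supplyAsync(this::areDoorsClosed);

            assertTrue("car is not driving", driving.join());
            assertTrue("doors are not closed", doorsClosed.join());
        }

        // Hypothetical helpers -- replace with real polling of the system under test.
        private boolean isDriving() { return true; }
        private boolean areDoorsClosed() { return true; }
    }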
Using rstan, I am running code that uses 4 cores in parallel. I have access to a computer with 32 cores and I need to run 3 instances of the same code on different datasets, and another 3 instances of a slightly different code on the same datasets, for a total of 6 models. I'm having a hard time figuring out the best way to accomplish this. Ideally, the computer would be running 4 cores on each model for a total of 24 cores in use at a time.
I've used the parallel package many times before but I don't think it can handle this kind of "parallel in parallel". I am also aware of the Jobs feature in RStudio but one of the good things about rstan is that it interactively shows you how the chains progress, so ideally I would like to be able to see these updates. Can this be accomplished by having 6 different RStudio sessions open at once? I tried running two at a time but I'm not sure if they run in parallel to each other as well, so any clarification would be great.
I would suggest using batch jobs instead. In principle, since you don't have that many models, you could simply write 6 different R scripts and store them as, e.g., model1.R, model2.R, ..., model6.R. With that, you could then submit the jobs from the command line like this:
R CMD BATCH --vanilla model1.R model1.Rout &
This will run the first script in batch mode and write stdout to a log file, model1.Rout. That way, you can inspect the state of the job by simply opening that file. Of course, you will need to run the above command for each model.
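To avoid typing that six times, a small shell loop over the naming scheme above will submit all the jobs at once (this assumes the scripts live in the current directory):

    for i in 1 2 3 4 5 6; do
      R CMD BATCH --vanilla model$i.R model$i.Rout &
    done

Each script then runs as its own background R process, and rstan's 4-core parallelism applies within each process, giving you the 24 cores in total that you're after.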
On-prem TFS 2015 Update 3.
I have multiple machines (different Operating Systems) that I want to run my tests on. I'm having issues getting this simple flow to work successfully. Here's what I've tried:
The Deploy Test Agent task is successful on multiple machines.
If I put multiple machines in one "Run Functional Tests" task, it will execute the test on only ONE of those machines from step 1 (and will complete successfully if this is the first task). Logs here: One Task
If I set up 2 separate tasks, one for each machine, the 1st task will execute successfully, but as seen in bullet 2, the test is run on ANY ONE of the machines from step 1 (NOT the specific one specified for the task). In the example attached, the 1st task is set up to run on Win7, but the test was actually executed on the Win8 machine.
Then the 2nd task (which is set up to run against the Win10 machine) will not complete, no matter what machine or test I put in it. Logs for this scenario attached: Two Tasks
It seems that the PS script(s) for this task is broken in our environment.
Thanks!
The solution is to configure the test agents separately: configure one agent, run the tests, then configure the next agent and run the tests again.
I would like to conduct extensive tests on my new internet connection using speedtest.net.
Is there some way that I can completely automate the process?
The speed test would be conducted automatically at a fixed interval of time, and a snapshot of the screen would then be taken and stored on my system.
I found a good repository that does basically exactly what you're asking, but better (it runs multiple tests and can be run from the command line).
I would recommend copying the code (python) from https://github.com/Janhouse/tespeed.
You can run this repeatedly using a cron job, and easily email the results to yourself using the crontab file.
For easy step-by-step instructions, I found http://www.pythonforbeginners.com/code-snippets-source-code/command-line-speedtest-net-via-tespeed/ helpful.
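As a sketch, a crontab entry along these lines would run the test every 30 minutes and append the results to a log (the paths, the interval, and the tespeed.py entry point are assumptions; adjust them to wherever you cloned the repository):

    */30 * * * * /usr/bin/python /home/you/tespeed/tespeed.py >> /home/you/speedtest.log 2>&1

If you drop the redirection and set MAILTO in the crontab instead, cron will email you the output of each run.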
We have a set of around 2000 automated test cases that we need to run daily on every new build that comes up. It currently takes 4 hours to finish the tests on one machine. In order to reduce this, we are planning to run the tests in batches (500 per batch) on the same machine by initiating multiple browsers of the same type, say 4 Firefox browser sessions per test suite, so the run can finish in about 1 hour. Is it possible to achieve this using Selenium WebDriver and TestNG? Please suggest.
It is possible using Selenium Grid and TestNG. Grid can help distribute your tests across various machines or browser instances as you require. To get started, refer to: Grid2
I think you might need to change your driver instantiation code a bit to use RemoteWebDriver instead of the concrete drivers, but that should be fine if the driver instantiation code in your framework is isolated. TestNG and Grid can help, provided your tests are well written to support parallel execution.
For TestNG, you can refer to parallel running in TestNG.
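As a sketch of the RemoteWebDriver change mentioned above (the hub URL and class name are assumptions; point it at wherever your Grid hub actually runs):

    import java.net.MalformedURLException;
    import java.net.URL;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class DriverFactory {
        // Instead of new FirefoxDriver(), ask the Grid hub for a session;
        // the hub routes it to a free Firefox instance on one of its nodes.
        public static WebDriver createFirefoxDriver() throws MalformedURLException {
            return new RemoteWebDriver(
                    new URL("http://localhost:4444/wd/hub"), // assumed hub address
                    DesiredCapabilities.firefox());
        }
    }

On the TestNG side, parallelism is then switched on in the suite file (e.g. parallel="tests" with a thread-count of 4), so that each test gets its own browser session.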
If you are using Python, the best way to go is to use py.test to drive the tests. To distribute the tests, the pytest-xdist plugin can be used.
Anyway, for both Java and Python you can use Jenkins to run/distribute your tests (using the Selenium plugin).
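With pytest-xdist installed, distribution is just a command-line flag, for example (the test path is a placeholder):

    py.test -n 4 tests/

Here -n 4 spawns four worker processes and shares the collected tests between them.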
When using an automated build system, it is usually a source control check-in which triggers the tests (but I assume this can be configured not to fire on every check-in in a large team). How come build applications have actions for source code check-ins? Is there any need for this? So to summarise, is a build script executed on a source control check-in or at a certain time every day?
Also, the term "break the build" - does this mean code is put into source control and, when the build is executed, it fails because the code does not pass a unit test, or a code coverage tool returns negative results below a certain threshold?
Finally, what does a step mean (e.g. a one-step build)?
Thanks
So to summarise, is a build script executed on a source control check-in or at a certain time every day?
This depends. Some teams use a commit in the version control system as trigger, some teams use a temporal event as trigger (e.g. each hour). If you run the build after each change, you get immediate feedback. If you let some time run between two builds, you delay that feedback and, in case of a build failure, it's harder to identify the change(s) that are the cause. It may require more investigation.
Just to clarify the vocabulary: for me, "the build" is actually the script/tool that automates all the things that need to be done (compiling, running tests, etc). Running this automated build continuously is what people call "continuous integration". And triggering a build on an event (time based or on commit), pulling the sources from the repository, running the build script, and notifying people in case of failure is the responsibility of a "continuous integration engine".
Also, the term "break the build" - does this mean code is put into source control and, when the build is executed, it fails because the code does not pass a unit test, or a code coverage tool returns negative results below a certain threshold?
This is very binary indeed: the build passes, or it doesn't. When it doesn't, there can be many reasons: the code didn't compile, a test failed, a quality check failed (coding standards, code coverage, etc). If you commit some code that causes a build failure (whatever the reason is), then you "broke the build".
Finally, what does a step mean (e.g. a one-step build)?
In my opinion, a one-step build means that you can build your entire application, run the tests, run the quality checks, produce reports, assemble the application, deploy it, etc with one command. This is a synonym for an automated build (if you can't run your build in one step, i.e. if it requires human intervention, then it isn't fully automated).
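As a concrete illustration (Maven is just one example of such a tool), the entire chain can be a single command:

    mvn clean install

which compiles the code, runs the tests, and packages and installs the artifact in one step; report generation and quality checks can be bound into the same lifecycle so the build stays a single step.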
Also, the term "break the build" - does this mean code is put into source control and, when the build is executed, it fails because the code does not pass a unit test, or a code coverage tool returns negative results below a certain threshold?
This could mean different things depending on company, project or team.
Usually "build" is some reference (usually automated) procedure which is either succeeds or not.
Thus "breaking the build" is doing something that leads failing of this reference procedure.
It could include or exclude unit-tests running, or regression test running, or deployment of your product, or whatever your team thinks should never fail.