Is there a way to generate separate TestExecution files when using multiple threads? - automated-tests

I am attempting to write a tool that will automate the generation of a Visual Studio test playlist based on failed tests from the SpecFlow report. We recently increased our testThreadCount to 4, and when using the LivingDocumentation plugin to generate the TestExecution.json file, it only generates a result for 1 in 4 tests. I think this is due to the thread count, so 4 tests are being seen as a single execution.
My aim is to generate a fully qualified test name for each of the failed tests using the TestExecution file, but this will not work if only 25% of the results are generated. Could I ask if anyone has an idea of a workaround for this?
<Execution stopAfterFailures="0" testThreadCount="4" testSchedulingMode="Sequential" retryFor="Failing" retryCount="0" />
These are our current execution settings in the .srprofile.

We made this possible with the latest version of SpecFlow and the SpecFlow+ LivingDoc Plugin.
You can configure the filename for the TestExecution.json via specflow.json.
Here is an example:
{
  "livingDocGenerator": {
    "enabled": true,
    "filePath": "TestExecution_{ProcessId}_{ThreadId}.json"
  }
}
ProcessId and ThreadId will be replaced with actual values, so you get a separate TestExecution.json for every thread.
You can then give the livingdoc CLI tool or the Azure DevOps task a list of TestExecution.json files.
Example:
livingdoc test-assembly BookShop.AcceptanceTests.dll -t TestExecution*.json
This generates one LivingDoc with all the test execution results combined.
Documentation links:
https://docs.specflow.org/projects/specflow-livingdoc/en/latest/LivingDocGenerator/Setup-the-LivingDocPlugin.html
https://docs.specflow.org/projects/specflow-livingdoc/en/latest/Guides/Merging-Multiple-test-results.html

Related

Script to guarantee app deploy using rsconnect::deployApp

I am able to deploy my shiny app with:
rsconnect::deployApp(appName = 'Test', launch.browser = FALSE, forceUpdate = T)
However, it does not always successfully deploy the app. I plan to have this run in a script as a Scheduled Task, and want to make sure the deployApp finishes successfully (if the process doesn't succeed, try again).
I imagine you could place this in a while loop, but I am not sure how to write code that would recognize whether the function executed successfully or failed. Anyone have ideas?
Error Messages:
Preparing to deploy application...DONE
Error: $ operator is invalid for atomic vectors
As I say in the comment above, I really don't think this is a good idea. To do it safely and robustly will take a lot of work. And the error message you quote above looks pretty "uncontrolled" to me, so I suspect it's got more to do with a problem in your app than a temporary issue with the publishing process. In which case, you will be in an infinite loop unless you take steps to prevent it. Have you investigated what your publish record and remote deployment log tell you?
That said, this would be my approach if I had to do it.
Create a flag, deploymentFlag say, in the global environment and set it to FALSE.
Write a function, onDeploymentFailure() say, which sets deploymentFlag back to FALSE (inside a function this needs R's global assignment operator, <<-).
Then wrap your call to deployApp in a while loop like this:
# Flag in the global environment; the failure callback reaches it with <<-.
deploymentFlag <- FALSE
onDeploymentFailure <- function(url = NULL) {
  deploymentFlag <<- FALSE
}

attempts <- 0
maxAttempts <- 3  # keep this very small, as discussed below

while (!deploymentFlag && attempts < maxAttempts) {
  attempts <- attempts + 1
  deploymentFlag <- TRUE  # assume success; the callback resets it on failure
  rsconnect::deployApp(
    appName = "Test",                 # plus your other arguments
    launch.browser = FALSE,
    forceUpdate = TRUE,
    on.failure = onDeploymentFailure,
    logLevel = "verbose",
    recordDir = "<some dir>"          # pick a directory for the publish record
  )
  if (!deploymentFlag) {
    # ...interrogate the publish record to try to determine what went wrong,
    # and correct it if possible...
  }
}
For safety, especially whilst developing and testing, I'd make sure that each attempt wrote a different publish log and I'd limit the maximum number of attempts to a very small number: 1 to start with, then 2 or 3 after I'd solved the initial problems, and so on.

Katalon- Parallel execution of multiple browsers through cmd

Through the Katalon Studio UI we are able to perform parallel execution of the test suites in a test suite collection for different browsers.
Problem: the same approach does not work when we try it through cmd. Below is the command we used:
katalon -noSplash -runMode=console -projectPath="<projectPath>" -retry=0 -testSuitePath="<testSuitePath>" -executionProfile="default" -browserType="Chrome,IE"
Note: it works fine with a single browser as the parameter.
Please let us know whether the above command is correct for multiple-browser execution.
Expected:
A single report folder containing the parallel execution results of both browsers.
You can do that by using Test Suite Collections.
Put your test case TC1 in a test suite TS1 (my test case is called "proba" in this example). Then create a test suite collection TSC1 and add the same test suite to it twice, changing the "Run with" parameter to Chrome and IE, respectively.
If you then generate the command line for this setup, you will get something like:
katalon -noSplash -runMode=console -consoleLog -projectPath="C:\Katalon Studio\PROJECT NAME\PROJECT NAME.prj" -retry=0 -testSuiteCollectionPath="Test Suites/TEST SUITE COLLECTION 1"

How to vary creation/not creation of node instances during "install" workflow?

The task is: we have a blueprint with all the needed node templates described in it, and we want to create a deployment that includes all these nodes, but we don't want all of them to be created during the "install" workflow.
For example, we need to install all nodes in the created deployment except a few, such as an OpenStack instance's volume.
But we know the volume may need to be created and added later, and we should keep the ability to do so.
Since the volume template expects some inputs (its name, for example), I want to pass 'null' as the input and NOT have the volume created during the "install" workflow.
Solutions such as maintaining many blueprint variants, or deleting some nodes after creation, are not acceptable.
Is this possible, and how can it be done?
I appreciate all your insights.
Thanks in advance!
We've got a similar sort of requirement. Our plan is to use Cloudify 3.4's scaling capability, which is intended for multiple instances but works just as well for 0 or 1 instances.
Supply 0 as the value for the number_of_nodes input to the blueprint below (only tested with a local cfy install, but it should be fine) and the create & start operations will not be called. To instantiate the node post-install, you'd use the built-in scale workflow; a sketch follows the blueprint. Alternatively, supply 1 at install time and the node will be created.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/3.4.1/types.yaml

inputs:
  number_of_nodes:
    default: 0

node_templates:
  some_vm:
    type: cloudify.nodes.Root
    capabilities:
      scalable:
        properties:
          default_instances: { get_input: number_of_nodes }
          max_instances: 1
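To bring the node up after install, you run the built-in scale workflow against the deployment. A rough sketch only (the parameter names and CLI flags here are my assumption from the 3.4 era; check cfy executions start --help for your version):

cfy executions start -w scale -d my_deployment -p '{"node_id": "some_vm", "delta": 1}'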

spring boot/spring web app embedded version number

What are the strategies to embed a unique version number in a Spring application?
I've got an app using Spring Boot and Spring Web.
It's matured enough that I want to version it and see the version displayed on screen at run time.
I believe what you are looking for is generating this version number at build time (usually by build tools like Ant, Maven, or Gradle) as part of their build task chain.
A quite common approach is to either put the version number into the MANIFEST.MF of the produced JAR and then read it from there, or to create a file inside the produced JAR that your application can read.
Another solution would be to use Spring Boot's banner customization options described here: http://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-spring-application.html#boot-features-banner
However, this will only allow you to change the Spring Boot banner.
Spring Boot also exposes the product version that is set in the MANIFEST.MF of your application. For this to work, you need to make sure the Implementation-Version attribute of the manifest is set.
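For example, it can then be read at runtime with the standard java.lang.Package API (a minimal sketch; MyApplication stands in for any class from your JAR, and the call returns null if the attribute is missing or the app is running unpackaged from an IDE):

// Reads the Implementation-Version attribute from the JAR's MANIFEST.MF.
String version = MyApplication.class.getPackage().getImplementationVersion();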
Custom solution for access anywhere in the code
Let's assume you would like to have a version.properties file in src/main/resources that contains your version information. The file contains placeholders instead of actual values, so that these placeholders can be expanded at build time:
version=${prodVersion}
build=${prodBuild}
timestamp=${buildTimestamp}
Now that you have a file like this, you need to fill it with actual data. I use Gradle, so there I would make sure that the processResources task, which automatically runs for builds, expands resources. Something like this should do the trick in the build.gradle file for Git-based code:
// Note: these imports assume the Eclipse JGit library is available on the
// buildscript classpath.
import org.codehaus.groovy.runtime.*
import org.eclipse.jgit.api.*

// Returns the abbreviated (7-character) commit hash of the current branch,
// or "UNKNOWN" if the repository cannot be read.
def getGitBranchCommit() {
    try {
        def git = Git.open(project.file(project.getRootProject().getProjectDir()))
        def repo = git.getRepository()
        def id = repo.resolve(repo.getFullBranch())
        return id.abbreviate(7).name()
    } catch (IOException ex) {
        return "UNKNOWN"
    }
}

// Expand the placeholders in version.properties while resources are copied.
processResources {
    filesMatching("**/version.properties") {
        expand (
            "prodVersion": version,
            "prodBuild": getGitBranchCommit(),
            "buildTimestamp": DateGroovyMethods.format(new Date(), 'yyyy-MM-dd HH:mm')
        )
    }
}

// Force the task to run on every build so the expanded values stay current.
processResources.outputs.upToDateWhen { false }
In the code above, the following is happening:
- We define a function that can take a build number out of the VCS (in this case Git). The commit hash is limited to 7 characters.
- We configure the processResources task to process the version.properties file and fill it with our variables.
- prodVersion is taken from the Gradle project version. It's usually set as version in the gradle.properties file (part of the general build setup).
- As a last step, we ensure that the file is always re-processed (Gradle has some mechanics to detect whether files need to be processed).
Considering you are on SVN, you will need a getSvnBranchCommit() method instead. You could, for instance, use SVNKit or similar for this.
The last thing missing now is reading the file for use in your application.
This can be achieved by simply reading a classpath resource and parsing it into java.util.Properties. You could take it one step further and create accessor methods for each field, e.g. getVersion(), getBuild(), etc.; a sketch follows.
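A minimal sketch of such an accessor (the class name and "UNKNOWN" fallbacks are mine, not part of the original setup):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Loads version.properties from the classpath once and exposes typed accessors.
public final class VersionInfo {

    private static final Properties PROPS = new Properties();

    static {
        try (InputStream in = VersionInfo.class.getResourceAsStream("/version.properties")) {
            if (in != null) {
                PROPS.load(in);
            }
        } catch (IOException e) {
            // Leave PROPS empty; the accessors fall back to "UNKNOWN".
        }
    }

    private VersionInfo() {}

    public static String getVersion()   { return PROPS.getProperty("version", "UNKNOWN"); }
    public static String getBuild()     { return PROPS.getProperty("build", "UNKNOWN"); }
    public static String getTimestamp() { return PROPS.getProperty("timestamp", "UNKNOWN"); }
}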
Hope this helps a bit (even though it may not be 100% applicable straight off).
Maven can be used to track the version number, e.g.:
<!-- pom.xml -->
<version>2.0.3</version>
Spring Boot can refer to the version, and expose it via REST using Actuator:
# application.properties
endpoints.info.enabled=true
info.app.version=#project.version#
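Note that the #project.version# placeholder is only replaced if Maven resource filtering is applied to application.properties (the # delimiter itself is build-specific configuration; spring-boot-starter-parent uses @..@ by default). A pom.xml sketch enabling filtering:

<!-- Enable filtering so build-time placeholders in src/main/resources
     (such as the version above) are substituted into the packaged files. -->
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>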
Then use Ajax to render the version in the browser, for example using Polymer iron-ajax:
<!-- about-page.html -->
<iron-ajax auto url="/info" last-response="{{info}}"></iron-ajax>
Application version is: [[info.app.version]]
This will then show in the browser as:
Application version is: 2.0.3
I'm sure you've probably figured something out since this is an older question, but here's what I just did, and it looks good. (Getting the version into the banner requires a lot of duplication.)
I'd recommend switching to Git (it's a great SVN client too), and then using this in your build.gradle:
// https://github.com/n0mer/gradle-git-properties
plugins {
    id "com.gorylenko.gradle-git-properties" version "1.4.17"
}

// http://docs.spring.io/spring-boot/docs/current/reference/html/deployment-install.html
springBoot {
    buildInfo() // create META-INF/build-info.properties
}

bootRun.dependsOn = [assemble]
And this in your Spring Boot application:
@Resource
GitProperties props;

@Resource
BuildProperties props2;
Or this way to expose those properties into the standard spring environment:
@SpringBootApplication
@PropertySources({
    @PropertySource("classpath:git.properties"),
    @PropertySource("classpath:META-INF/build-info.properties")
})
public class MySpringBootApplication {
and then referencing the individual properties as needed.
@Value("${git.branch}")
String gitBranch;

@Value("${build.time}")
String buildTime;

How to get and set the default output directory in Robot Framework (RIDE) at run time

I would like to move all my output files to a custom location: a Run directory created based on the date and time at run time. The output folder named by datetime is created in the test setup.
I have a function, "Process_Output_files", which will move the files to the Run folder (Run1, Run2, Run3 folders).
I have tried using the -d argument and calling "Process_Output_files" as the suite teardown to move the output files to the respective Run directory.
But I get the following error: "The process cannot access the file because it is being used by another process". I know this is because Robot Framework (RIDE) is currently using the files.
If I don't use the -d argument, the output files are saved in temp folders:
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\output.xml
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\log.html
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\report.html
My question is: is there a way to move the files to a custom location during run time, within Robot Framework?
You can use the following syntax in RIDE (Arguments:) to create the output in new folders dynamically:
--outputdir C:/AutomationLogs/%date:~-4,4%%date:~-10,2%%date:~-7,2% --timestampoutputs
The above syntax gives you output in folders like the ones below:
Output: C:\AutomationLogs\20151125\output-20151125-155017.xml
Log: C:\AutomationLogs\20151125\log-20151125-155017.html
Report: C:\AutomationLogs\20151125\report-20151125-155017.html
Hope this helps :)
I understand the end result you want is to have your output files in their custom folders. If this is your desire, it can be accomplished at runtime and you won't have to move them as part of your post processing. This will not work in RIDE, unfortunately, since the folder structure is created dynamically. I have two options for you.
Option 1: Use a script to kick off your tests
RIDE is awesome, but in my humble opinion one shouldn't be using it to run one's tests, only to build and debug them. Scripts are far more powerful and flexible.
Assuming you have a test, test2.txt, you wish to run, the script you use to do this could be something like:
from time import gmtime, strftime
import os
#strftime returns string representations of a date-time tuple.
#gmtime returns the date-time tuple representing greenwich mean time
dts=strftime("%Y.%m.%d.%H.%M.%S", gmtime())
cmd="pybot -d Run%s test2"%(dts,)
os.system(cmd)
As an aside, if you do intend to do post processing of your files using rebot, be aware you may not need to create intermediate log and report files. The output.xml files contain everything you need, so if you don't want to create superfluous files, use --log NONE --report NONE
Option 2: Use a listener to do post processing
A listener is a program you write that responds to events (x_start, x_end, etc.). The close() event is akin to a teardown function and is the last thing called. So, assuming you have a function moveFiles(), you simply need to create a listener class (myListener), define its close() method to call your moveFiles() function, and tell your test run to report to the listener with the argument --listener myListener. A minimal sketch is below.
This option should be compatible with RIDE though I admit I have never tried to use listeners with the IDE.
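A sketch of such a listener (file myListener.py; moveFiles() is the hypothetical post-processing function from above, assumed to be importable):

# myListener.py -- listener API version 2
from postprocessing import moveFiles  # hypothetical module holding your moveFiles()

class myListener:
    ROBOT_LISTENER_API_VERSION = 2

    def close(self):
        # Called once, after the whole test execution has ended.
        moveFiles()

You would then run with something like: pybot --listener myListener.py test2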
Alternatively, you can write a custom run script that handles moving the files after the test execution. In that case the files are no longer in use by pybot.
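A sketch of such a script (building on the Option 1 script above; the file names are the Robot Framework defaults and the Run folder naming is illustrative):

from time import gmtime, strftime
import os
import shutil

dts = strftime("%Y.%m.%d.%H.%M.%S", gmtime())
rundir = "Run%s" % (dts,)

# Run the suite first; pybot has exited by the time the call returns,
# so the output files are no longer locked.
os.system("pybot test2")

# Move the default output files into the run-specific folder.
os.makedirs(rundir)
for name in ("output.xml", "log.html", "report.html"):
    if os.path.exists(name):
        shutil.move(name, os.path.join(rundir, name))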
