I'm trying to generate an aggregate Surefire report using maven-surefire-report-plugin, but the report contains only the previous execution's results. That is, if I execute the tests for the Nth time, the report displays the (N-1)th execution's results. When mvn clean install wipes all the target directories, the report shows 0 results.
The aggregate Surefire report is built from the generated TEST-*.xml files under ${basedir}/target/surefire-reports. Currently a TEST-TestSuite.xml file is generated under each module, so setting the aggregate parameter to true should generate an aggregate report by reading these TEST-*.xml files from each module.
The project's test design uses separate modules, as shown below.
├── 1. Scenario
|   ├── 1.1 Sub-scenario
|   |   ├── 1.1.1 Test scenario
|   |   |   ├── src/test
|   |   |   ├── pom.xml
|   |   |   ├── target
|   |   |   |   ├── surefire-reports
|   |   |   |   |   ├── TEST-*.xml
├── target
|   ├── aggregate report (surefire.html)
├── pom.xml (parent)
maven-surefire-plugin is configured inside each module POM, and a TEST-*.xml file is created for each module separately.
maven-surefire-report-plugin is configured in the parent POM and is supposed to generate the aggregate report for all modules by reading the TEST-*.xml files in each module.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-report-plugin</artifactId>
  <version>2.4.2</version>
  <inherited>true</inherited>
  <configuration>
    <aggregate>true</aggregate>
    <outputDirectory>${basedir}/target/aggregate-surefire-report</outputDirectory>
  </configuration>
  <executions>
    <execution>
      <phase>install</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
The expected result is an aggregate report of the current execution, but the actual result contains the previous execution's results.
Does anyone know why this issue occurs?
The solution I found was to execute the Maven goals separately: mvn clean test, then mvn surefire-report:report-only. This generates the report for the run we intended. The likely cause is that the parent module is built first in the Maven reactor, so a report execution bound to the parent's install phase runs before the modules' tests in the current build and only sees the TEST-*.xml files left over from the previous run.
I want to create a make rule from rules_foreign_cc.
But even the minimal example below is causing issues for me.
With the following setup:
.
├── BUILD (empty)
├── hello
│ ├── BUILD.bazel
│ ├── hello.c
│ ├── Makefile
│ └── WORKSPACE (empty)
└── WORKSPACE
WORKSPACE:
workspace(name = "test")
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_foreign_cc",
sha256 = "2a4d07cd64b0719b39a7c12218a3e507672b82a97b98c6a89d38565894cf7c51",
strip_prefix = "rules_foreign_cc-0.9.0",
url = "https://github.com/bazelbuild/rules_foreign_cc/archive/refs/tags/0.9.0.tar.gz",
)
load("#rules_foreign_cc//foreign_cc:repositories.bzl", "rules_foreign_cc_dependencies")
# This sets up some common toolchains for building targets. For more details, please see
# https://bazelbuild.github.io/rules_foreign_cc/0.9.0/flatten.html#rules_foreign_cc_dependencies
rules_foreign_cc_dependencies()
local_repository(
name = "hello",
path = "hello",
)
hello/BUILD.bazel:
load("#rules_foreign_cc//foreign_cc:defs.bzl", "make")
filegroup(
name = "hellosrc",
srcs = glob([
"**",
]),
)
make(
name="hello_build",
lib_source=":hellosrc",
out_bin_dir="",
out_binaries=["hello_binary"],
targets=["all"],
)
hello/Makefile:
all:
	gcc hello.c -o hello_binary

clean:
	rm -f hello_binary
hello/hello.c:
#include <stdio.h>

int main() {
    printf("hello\n");
    return 0;
}
and running
bazel build @hello//:hello_build
I'm getting
INFO: Analyzed target @hello//:hello_build (43 packages loaded, 812 targets configured).
INFO: Found 1 target...
ERROR: /home/timotheus/.cache/bazel/_bazel_timotheus/a791a0a19ff4a5d2730aa0c8954985c4/external/hello/BUILD.bazel:10:5: output 'external/hello/hello_build/hello_binary' was not created
ERROR: /home/timotheus/.cache/bazel/_bazel_timotheus/a791a0a19ff4a5d2730aa0c8954985c4/external/hello/BUILD.bazel:10:5: Foreign Cc - Make: Building hello_build failed: not all outputs were created or valid
Target @hello//:hello_build failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 12.426s, Critical Path: 12.22s
INFO: 7 processes: 5 internal, 2 linux-sandbox.
FAILED: Build did NOT complete successfully
Basically, I have no idea where Bazel is looking for the created binaries (are they even created?). I tried setting out_bin_dir to different values, or not setting it at all, all with the same effect.
I expect Bazel to generate the binary and find it, or at least give me a hint about what it is doing.
And I think I found the solution.
The make rule of rules_foreign_cc expects the make project to have an install target; if there is none, it doesn't find the binaries.
This is how I fixed my minimal example: by adding an install target to the Makefile.
install:
	cp -rpv hello_binary $(PREFIX)
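For reference, a sketch of the complete fixed example. Note that rules_foreign_cc only runs the make targets listed in the targets attribute, so adding "install" to that list is an assumption on top of the original snippet:

# hello/Makefile
all:
	gcc hello.c -o hello_binary

# Copies the binary into $(PREFIX), the install prefix used by the fix above,
# which is where rules_foreign_cc looks for the declared out_binaries
install:
	cp -rpv hello_binary $(PREFIX)

clean:
	rm -f hello_binary

# hello/BUILD.bazel, make rule only (assumed change: "install" added to targets)
make(
    name = "hello_build",
    lib_source = ":hellosrc",
    out_bin_dir = "",
    out_binaries = ["hello_binary"],
    targets = ["all", "install"],
)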
In Hydra I have the following configuration:
├── conf
│ ├── config.yaml
│ ├── callbacks
│ │ ├── callback_01.yaml
│ │ └── callback_02.yaml
│ └── trainer
│ ├── default.yaml
The callbacks have a structure like this:
_target_: callback_to_instantiate
I need to pass both callbacks to trainer/default.yaml through interpolation.
I tried like this:
_target_: pytorch_lightning.Trainer
callbacks:
- ${callbacks.callback_01}
- ${callbacks.callback_02}
With the config.yaml like this:
defaults:
- _self_
- trainer: default
I also tried other things, but it doesn't seem to work. Is there a way to interpolate like that in a YAML file, using two or more YAML files that are in the config group?
If possible, I would like to keep this structure.
Currently the recommended approach is:
1. compose a mapping whose values are the desired callbacks, and then
2. use the oc.dict.values OmegaConf resolver to get a list of values from that dictionary.
# conf/config.yaml
defaults:
- callbacks@_callback_dict.cb1: callback_01
- callbacks@_callback_dict.cb2: callback_02
- trainer: default
- _self_
# conf/trainer/default.yaml
_target_: pytorch_lightning.Trainer
callbacks: ${oc.dict.values:_callback_dict}
# my_app.py
from typing import Any
import hydra
from omegaconf import DictConfig, OmegaConf
@hydra.main(config_path="conf", config_name="config")
def app(cfg: DictConfig) -> Any:
    OmegaConf.resolve(cfg)
    del cfg._callback_dict
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    app()
At the command line:
$ python my_app.py
trainer:
  _target_: pytorch_lightning.Trainer
  callbacks:
  - _target_: callback_to_instantiate_01
  - _target_: callback_to_instantiate_02
For reference, there is an open issue on Hydra's GitHub repo advocating for an improved user experience around this use case.
I am trying to use renv with my project pipeline in R.
My folder structure is
.
|-- data
| |-- file1.rda
| |-- file2.rda
| |-- folder1
| |-- folder2
`-- repository
|-- rep1
| |-- script1.R
| |-- script2.R
| |-- config.json
`-- rep2
/rep1 is the folder of my analysis pipeline and the folder I am running my scripts from. I am keeping track of the packages I am using with renv, which I initialised in /rep1, but I have not created a snapshot yet.
/data contains the file*.rda files, which are produced by script1.R and have a considerable size. I cannot move any of them into my /rep1 folder. In order to use them in script2.R, I load them with
library(renv)
library(jsonlite)
config <- read_json("config.json")
load(file.path(config$data_folder, "file1.rda"))
and they should load the object stored in them.
Whenever I run that, though, I get the following error:
Error: project "~/data/file1.rda" has no activate script and so cannot be activated
Traceback (most recent calls last):
4: load(file.path(config$data_folder, "file1.rda"))
3: renv_load_switch(project)
2: stopf(fmt, renv_path_pretty(project))
1: stop(sprintf(fmt, ...), call. = call.)
Am I missing something? I have the impression something is going wrong when I switch folders, but I am not really sure how to fix this.
Thank you in advance for your help.
The problem here is that renv::load() is masking base::load(). In general, you should not call library(renv) in your scripts -- instead, you should prefix any renv APIs you want to use with renv::.
Alternatively, explicitly call base::load() to ensure the right version of load() is resolved.
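As a minimal sketch, this is what script2.R could look like with the masking removed (same file names as above):

# Do not attach renv here; if renv functions are needed, call them as renv::snapshot(), etc.
library(jsonlite)

config <- read_json("config.json")

# base::load() makes it explicit that we want the base R function,
# not renv::load(), which tries to activate an renv project
base::load(file.path(config$data_folder, "file1.rda"))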
This is a follow-up to my previous question.
The structure of the files is shown below. I have to run the scripts using python -m bokeh_module.bokeh_sub_module from the top directory. The image.png might come from an arbitrary directory later.
.
├── other_module
│ ├── __init__.py
│ └── other_sub_module.py
├── bokeh_module
│ ├── __init__.py
│ ├── image.png # not showing
│ └── bokeh_sub_module.py
└── image.png # not showing either
The bokeh_sub_module.py is using the standalone Bokeh server. However, the image will not show no matter where it is placed. I get this warning: WARNING:tornado.access:404 GET /favicon.ico (::1) 0.50ms. I'm not sure if this is an issue with Bokeh or Tornado. Thank you for any help.
from other_module import other_sub_module
import os

from bokeh.server.server import Server
from bokeh.layouts import column
from bokeh.plotting import figure, show
import tornado.web

def make_document(doc):
    def update():
        pass
        # do something with other_sub_module

    p = figure(match_aspect=True)
    p.image_url(['file://image.png'], 0, 0, 1, 1)
    doc.add_root(column(p, sizing_mode='stretch_both'))
    doc.add_periodic_callback(callback=update, period_milliseconds=1000)

apps = {'/': make_document}
application = tornado.web.Application([(r"/(.*)",
                                        tornado.web.StaticFileHandler,
                                        {"path": os.path.dirname(__file__)}),])

server = Server(apps, tornado_app=application)
server.start()
server.io_loop.add_callback(server.show, "/")
server.io_loop.start()
I tried the argument extra_patterns and it does not work either...
You cannot use the file:// protocol with web servers and browsers. Just use regular http:// or https://. If you specify the correct URL, the StaticFileHandler should properly handle it.
Apart from that, your usage of Server is not correct. It doesn't have the tornado_app argument. Instead, pass the routes directly:
extra_patterns = [(r"/(.*)", tornado.web.StaticFileHandler,
{"path": os.path.dirname(__file__)})]
server = Server(apps, extra_patterns=extra_patterns)
BTW just in case - in general, you shouldn't serve the root of your app. Otherwise, anyone would be able to see your source code.
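Putting both fixes together, a minimal sketch (the /static/ route prefix and the relative image URL are assumptions, not from the original post):

import os

import tornado.web
from bokeh.layouts import column
from bokeh.plotting import figure
from bokeh.server.server import Server

def make_document(doc):
    p = figure(match_aspect=True)
    # The browser resolves this relative to the page URL, so the image is
    # fetched over HTTP from the StaticFileHandler below, not via file://
    p.image_url(["static/image.png"], 0, 0, 1, 1)
    doc.add_root(column(p, sizing_mode="stretch_both"))

# Serve this module's directory under /static/ rather than at the root,
# so the app's source code is not exposed
extra_patterns = [(r"/static/(.*)", tornado.web.StaticFileHandler,
                   {"path": os.path.dirname(__file__)})]

server = Server({"/": make_document}, extra_patterns=extra_patterns)
server.start()
server.io_loop.add_callback(server.show, "/")
server.io_loop.start()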
When doing a multi-project build, you can list the projects you depend upon in dependsOn, and tasks will run on the dependencies first, so you can depend on their results.
There is also an aggregate task that, eh, aggregates subprojects. How do aggregating and depending on subprojects differ, and in what cases should you use aggregate instead of dependsOn?
The key difference is that aggregate does not modify the classpath and does not establish ordering between sub-projects. Consider the following multi-project build consisting of root, core, and util projects:
├── build.sbt
├── core
│ ├── src
│ └── target
├── project
│ ├── build.properties
│ └── target
├── src
│ ├── main
│ └── test
├── target
│ ├── scala-2.13
│ └── streams
└── util
├── src
└── target
where
core/src/main/scala/example/Core.scala:
package example
object Core {
  def foo = "Core.foo"
}
util/src/main/scala/example/Util.scala:
package example
object Util {
  def foo =
    "Util.foo" + Core.foo // note how we depend on source from another project here
}
src/main/scala/example/Hello.scala:
package example
object Hello extends App {
  println(42)
}
Note how Util.foo has a classpath dependency on Core.foo from core project. If we now try to establish "dependency" using aggregate like so
lazy val root = (project in file(".")).aggregate(core, util)
lazy val util = (project in file("util"))
lazy val core = (project in file("core"))
and then execute compile from root project
root/compile
it will indeed attempt to compile all the aggregated projects; however, Util will fail compilation because it is missing a classpath dependency:
sbt:aggregate-vs-dependsOn> root/compile
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/core/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/util/target/scala-2.13/classes ...
[error] /Users/mario/IdeaProjects/aggregate-vs-dependson/util/src/main/scala/example/Util.scala:5:18: not found: value Core
[error] "Util.foo" + Core.foo // note how we depend on source from another project here
[error] ^
[error] one error found
[error] (util / Compile / compileIncremental) Compilation failed
[error] Total time: 1 s, completed 13-Oct-2019 12:35:51
Another way of seeing this is to execute show util/dependencyClasspath, which should show the Core dependency missing from the output.
On the other hand, dependsOn will modify the classpath and establish appropriate ordering between projects:
lazy val root = (project in file(".")).aggregate(core, util)
lazy val util = (project in file("util")).dependsOn(core)
lazy val core = (project in file("core"))
Now root/compile gives
sbt:aggregate-vs-dependsOn> root/compile
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/core/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/util/target/scala-2.13/classes ...
[success] Total time: 1 s, completed 13-Oct-2019 12:40:25
and show util/dependencyClasspath shows Core on the classpath:
sbt:aggregate-vs-dependsOn> show util/dependencyClasspath
[info] * Attributed(/Users/mario/IdeaProjects/aggregate-vs-dependson/core/target/scala-2.13/classes)
Finally, aggregate and dependsOn are not mutually exclusive; in fact, it is common practice to use both at the same time: often aggregating all sub-projects in the root to aid building, whilst using dependsOn to handcraft particular orderings for particular sub-projects.