Why does Bazel's rules_foreign_cc make not find/create the artifacts? - gnu-make

I want to create a make rule from rules_foreign_cc.
But even the minimal example below is causing issues for me.
With the following setup:
.
├── BUILD (empty)
├── hello
│   ├── BUILD.bazel
│   ├── hello.c
│   ├── Makefile
│   └── WORKSPACE (empty)
└── WORKSPACE
WORKSPACE:
workspace(name = "test")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_foreign_cc",
    sha256 = "2a4d07cd64b0719b39a7c12218a3e507672b82a97b98c6a89d38565894cf7c51",
    strip_prefix = "rules_foreign_cc-0.9.0",
    url = "https://github.com/bazelbuild/rules_foreign_cc/archive/refs/tags/0.9.0.tar.gz",
)

load("@rules_foreign_cc//foreign_cc:repositories.bzl", "rules_foreign_cc_dependencies")

# This sets up some common toolchains for building targets. For more details, please see
# https://bazelbuild.github.io/rules_foreign_cc/0.9.0/flatten.html#rules_foreign_cc_dependencies
rules_foreign_cc_dependencies()

local_repository(
    name = "hello",
    path = "hello",
)
hello/BUILD.bazel:
load("@rules_foreign_cc//foreign_cc:defs.bzl", "make")

filegroup(
    name = "hellosrc",
    srcs = glob(["**"]),
)

make(
    name = "hello_build",
    lib_source = ":hellosrc",
    out_bin_dir = "",
    out_binaries = ["hello_binary"],
    targets = ["all"],
)
hello/Makefile:
all:
	gcc hello.c -o hello_binary

clean:
	rm -f hello_binary
hello/hello.c:
#include <stdio.h>

int main() {
    printf("hello\n");
    return 0;
}
and running
bazel build @hello//:hello_build
I'm getting
INFO: Analyzed target @hello//:hello_build (43 packages loaded, 812 targets configured).
INFO: Found 1 target...
ERROR: /home/timotheus/.cache/bazel/_bazel_timotheus/a791a0a19ff4a5d2730aa0c8954985c4/external/hello/BUILD.bazel:10:5: output 'external/hello/hello_build/hello_binary' was not created
ERROR: /home/timotheus/.cache/bazel/_bazel_timotheus/a791a0a19ff4a5d2730aa0c8954985c4/external/hello/BUILD.bazel:10:5: Foreign Cc - Make: Building hello_build failed: not all outputs were created or valid
Target @hello//:hello_build failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 12.426s, Critical Path: 12.22s
INFO: 7 processes: 5 internal, 2 linux-sandbox.
FAILED: Build did NOT complete successfully
Basically, I have no idea where Bazel is looking for the created binaries (are they even created?). I tried setting out_bin_dir to different values, and not setting it at all, always with the same effect.
I expect Bazel to generate the binary and find it - or at least give me a hint what it does.

And I think I found the solution.
The make rule of rules_foreign_cc expects the make project to have an install target.
If there isn't one, it doesn't find the binaries.
This is how I fixed my minimal example - by adding an install target to the Makefile:
install:
	cp -rpv hello_binary $(PREFIX)
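For reference, a minimal sketch of the complete working Makefile (this assumes, as in the snippet above, that rules_foreign_cc invokes the install target with the staging directory passed in as PREFIX; the make rule's targets would also need to include install so that it actually runs):
all:
	gcc hello.c -o hello_binary

install:
	mkdir -p $(PREFIX)
	cp -rpv hello_binary $(PREFIX)

clean:
	rm -f hello_binary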

What is the proper way to sweep over all configs in an external config group?

I want to store a config group in an external package and then sweep across all configs in that group with a hydra multirun call using glob(*), but the glob(*) doesn't seem to find the available configs.
Specifically, suppose I have some .yaml settings at the root of an external python package called hydra_demo.
hydra_demo
├── __init__.py
└── hyper_param_settings
    ├── v1.yaml
    └── v2.yaml
My primary config file adds the package to the searchpath:
hydra:
  searchpath:
    - pkg://hydra_demo
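For context, a minimal experiments/main.py consistent with the output below might look like this (a hypothetical sketch; the question doesn't show the actual script):
import hydra
from omegaconf import DictConfig

@hydra.main(config_path=".", config_name="config")
def main(cfg: DictConfig) -> None:
    # param comes from whichever hyper_param_settings option is selected
    print(f"param={cfg.param}")

if __name__ == "__main__":
    main()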
As expected, I can use a command-line override to use either config in the config group:
$ python experiments/main.py '+hyper_param_settings@=v1'
param=111
$ python experiments/main.py '+hyper_param_settings@=v2'
param=222
Also, as expected, if I don't specify a config from the group, it lists out the two options:
$ python experiments/main.py '+hyper_param_settings@='
In 'config': Could not find 'hyper_param_settings/'
Available options in 'hyper_param_settings':
v1
v2
Config search path:
provider=hydra, path=pkg://hydra.conf
provider=main, path=file:///path/to/experiments
provider=hydra.searchpath in main, path=pkg://hydra_demo
provider=schema, path=structured://
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
However, using glob to run all options doesn't work as I would have expected:
$ python experiments/main.py '+hyper_param_settings@=glob(*)' -m
[2021-12-30 22:42:53,656][HYDRA] Launching 0 jobs locally
But if I copy the hyper_param_settings folder from the package into the folder where the primary config is, I do get the expected behavior (but without achieving my goal of having the config groups in a separate package):
$ python experiments/main.py '+hyper_param_settings@=glob(*)' -m
[2021-12-30 22:43:17,342][HYDRA] Launching 2 jobs locally
[2021-12-30 22:43:17,342][HYDRA] #0 : +hyper_param_settings@=v1
param=111
[2021-12-30 22:43:17,430][HYDRA] #1 : +hyper_param_settings@=v2
param=222
Can anyone spot where I'm misunderstanding Hydra?

Image not showing with a standalone bokeh server using BokehTornado instance configured with StaticFileHandler

This is a follow-up to my previous question.
The structure of the files is shown below. I have to run the scripts using python -m bokeh_module.bokeh_sub_module from the top directory. The image.png might come from an arbitrary directory later.
.
├── other_module
│   ├── __init__.py
│   └── other_sub_module.py
├── bokeh_module
│   ├── __init__.py
│   ├── image.png # not showing
│   └── bokeh_sub_module.py
└── image.png # not showing either
The bokeh_sub_module.py is using the standalone Bokeh server. However, the image will not show no matter where it is placed. I get this warning: WARNING:tornado.access:404 GET /favicon.ico (::1) 0.50ms. I'm not sure if this is an issue with Bokeh or Tornado. Thank you for any help.
from other_module import other_sub_module
import os

from bokeh.server.server import Server
from bokeh.layouts import column
from bokeh.plotting import figure, show
import tornado.web

def make_document(doc):
    def update():
        pass
        # do something with other_sub_module

    p = figure(match_aspect=True)
    p.image_url(['file://image.png'], 0, 0, 1, 1)
    doc.add_root(column(p, sizing_mode='stretch_both'))
    doc.add_periodic_callback(callback=update, period_milliseconds=1000)

apps = {'/': make_document}
application = tornado.web.Application(
    [(r"/(.*)", tornado.web.StaticFileHandler,
      {"path": os.path.dirname(__file__)})])

server = Server(apps, tornado_app=application)
server.start()
server.io_loop.add_callback(server.show, "/")
server.io_loop.start()
I tried the extra_patterns argument and it does not work either...
You cannot use the file:// protocol with web servers and browsers. Just use regular http:// or https://. If you specify the correct URL, the StaticFileHandler should handle it properly.
Apart from that, your usage of Server is not correct. It doesn't have a tornado_app argument. Instead, pass the routes directly:
extra_patterns = [(r"/(.*)", tornado.web.StaticFileHandler,
                   {"path": os.path.dirname(__file__)})]

server = Server(apps, extra_patterns=extra_patterns)
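Putting both fixes together, the script might end up looking like this (a sketch; it assumes the default Bokeh server port 5006, so the image would be reachable at http://localhost:5006/image.png):
import os

import tornado.web
from bokeh.layouts import column
from bokeh.plotting import figure
from bokeh.server.server import Server

def make_document(doc):
    p = figure(match_aspect=True)
    # Load the image over HTTP; the StaticFileHandler below serves it.
    p.image_url(['http://localhost:5006/image.png'], 0, 0, 1, 1)
    doc.add_root(column(p, sizing_mode='stretch_both'))

# Serve files from this module's directory alongside the Bokeh app.
extra_patterns = [(r"/(.*)", tornado.web.StaticFileHandler,
                   {"path": os.path.dirname(__file__)})]

server = Server({'/': make_document}, extra_patterns=extra_patterns)
server.start()
server.io_loop.add_callback(server.show, "/")
server.io_loop.start()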
BTW, just in case: in general, you shouldn't serve the root directory of your app. Otherwise, anyone would be able to see your source code.

When do you need aggregate over dependsOn

When doing a multi-project build, you can list the projects you depend upon in dependsOn, and tasks will run on the dependencies first, so you can depend on their results.
There is also an aggregate task that, eh, aggregates subprojects. How do aggregating and depending on subprojects differ, and in what cases should you use aggregate instead of dependsOn?
The key difference is that aggregate does not modify the classpath and does not establish ordering between sub-projects. Consider the following multi-project build consisting of root, core and util projects:
├── build.sbt
├── core
│   ├── src
│   └── target
├── project
│   ├── build.properties
│   └── target
├── src
│   ├── main
│   └── test
├── target
│   ├── scala-2.13
│   └── streams
└── util
    ├── src
    └── target
where
core/src/main/scala/example/Core.scala:
package example

object Core {
  def foo = "Core.foo"
}

util/src/main/scala/example/Util.scala:
package example

object Util {
  def foo =
    "Util.foo" + Core.foo // note how we depend on source from another project here
}

src/main/scala/example/Hello.scala:
package example

object Hello extends App {
  println(42)
}
Note how Util.foo has a classpath dependency on Core.foo from the core project. If we now try to establish the "dependency" using aggregate, like so
lazy val root = (project in file(".")).aggregate(core, util)
lazy val util = (project in file("util"))
lazy val core = (project in file("core"))
and then execute compile from root project
root/compile
it will indeed attempt to compile all the aggregated projects; however, Util will fail compilation because it is missing a classpath dependency:
sbt:aggregate-vs-dependsOn> root/compile
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/core/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/util/target/scala-2.13/classes ...
[error] /Users/mario/IdeaProjects/aggregate-vs-dependson/util/src/main/scala/example/Util.scala:5:18: not found: value Core
[error] "Util.foo" + Core.foo // note how we depend on source from another project here
[error] ^
[error] one error found
[error] (util / Compile / compileIncremental) Compilation failed
[error] Total time: 1 s, completed 13-Oct-2019 12:35:51
Another way of seeing this is to execute show util/dependencyClasspath, whose output should be missing the Core dependency.
On the other hand, dependsOn will modify the classpath and establish appropriate ordering between projects:
lazy val root = (project in file(".")).aggregate(core, util)
lazy val util = (project in file("util")).dependsOn(core)
lazy val core = (project in file("core"))
Now root/compile gives
sbt:aggregate-vs-dependsOn> root/compile
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/core/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/util/target/scala-2.13/classes ...
[success] Total time: 1 s, completed 13-Oct-2019 12:40:25
and show util/dependencyClasspath shows Core on the classpath:
sbt:aggregate-vs-dependsOn> show util/dependencyClasspath
[info] * Attributed(/Users/mario/IdeaProjects/aggregate-vs-dependson/core/target/scala-2.13/classes)
Finally, aggregate and dependsOn are not mutually exclusive; in fact, it is common practice to use both at the same time, as the second build definition above does: aggregate on the root across all sub-projects to aid building, whilst using dependsOn to handcraft particular orderings for particular sub-projects.

BridgeInner configuration file location

Should the bridge read the path to its certificates from bridge.conf? I think so (as explained in the docs), but when I start it, it looks for certificates in the ./certificates/ folder:
[ERROR] 16:17:53+0200 [main] internal.BridgeStartup.run - Exception during bridge startup
java.nio.file.NoSuchFileException: /opt/corda/bridge/certificates/truststore.jks
Here is the block in bridge.conf:
bridgeMode = BridgeInner
outboundConfig {
    artemisBrokerAddress = "myNodeServer:myNodeServerPort"
}
bridgeInnerConfig {
    floatAddresses = ["floatServer:floatServerPort"]
    expectedCertificateSubject = "CN=Float Local,O=Local Only,L=Paris,C=FR"
    customSSLConfiguration {
        keyStorePassword = "xxx"
        trustStorePassword = "xxx"
        sslKeystore = "./bridgecerts/bridge.jks"
        trustStoreFile = "./bridgecerts/trust.jks"
        crlCheckSoftFail = true
    }
}
networkParametersPath = network-parameters
Below is the tree:
myServerName:/opt/corda/bridge $ tree .
.
├── bridgecerts
│   ├── bridge.jks
│   └── trust.jks
├── bridge.conf
├── corda-bridgeserver-3.1.jar
├── logs
│   └── node-myServerName.log
└── network-parameters
2 directories, 6 files
Did I do something wrong here?
The weird thing is that I don't have this issue with the float on another server, set up the same way...
The bridge has two connections:
One to the float, called the tunnel connection
One to the node, called the Artemis connection
The settings in the bridgeInnerConfig block configure the tunnel connection. The exception you're seeing is about missing certificates for the Artemis connection. See the docs:
In particular the BridgeInner setup needs a certificates folder
containing the sslkeystore.jks and truststore.jks copied from the
node and a copied network-parameters file in the workspace folder.
You need to provide the certificates folder and network-parameters file as described.
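Concretely, based on the quoted docs and the paths in the question, the bridge's workspace should end up looking something like this:
/opt/corda/bridge
├── bridge.conf
├── bridgecerts
│   ├── bridge.jks
│   └── trust.jks
├── certificates
│   ├── sslkeystore.jks
│   └── truststore.jks
├── corda-bridgeserver-3.1.jar
└── network-parameters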
You can also configure the Artemis connection using an outboundConfig block, but it is not recommended.

sbt and scala.js (with Node.js) can't run with a local .js dependency due to "TypeError: undefined is not a function"

I need help with an error when I run a local bit of JavaScript code on Node.js using sbt and Scala.js.
[info] Running net.walend.graph.results.PlotTime
Hello from scala
[error] /Users/dwalend/projects/ScalaGraphMinimizer/toGhPages/target/scala-2.11/toghpages-fastopt.js:1854
[error] $g["hello"]();
[error] ^
[error] TypeError: undefined is not a function
[error] at $c_Lnet_walend_graph_results_PlotTime$.main__V (/Users/dwalend/projects/ScalaGraphMinimizer/toGhPages/target/scala-2.11/toghpages-fastopt.js:1854:14)
[error] at $c_Lnet_walend_graph_results_PlotTime$.$$js$exported$meth$main__O (/Users/dwalend/projects/ScalaGraphMinimizer/toGhPages/target/scala-2.11/toghpages-fastopt.js:1861:8)
[error] at $c_Lnet_walend_graph_results_PlotTime$.main (/Users/dwalend/projects/ScalaGraphMinimizer/toGhPages/target/scala-2.11/toghpages-fastopt.js:1864:15)
[error] at Object.<anonymous> (/Users/dwalend/projects/ScalaGraphMinimizer/toGhPages/target/scala-2.11/toghpages-launcher.js:2:107)
[error] at Module._compile (module.js:460:26)
[error] at Object.Module._extensions..js (module.js:478:10)
[error] at Module.load (module.js:355:32)
[error] at Function.Module._load (module.js:310:12)
[error] at Module.require (module.js:365:17)
[error] at require (module.js:384:17)
org.scalajs.jsenv.ExternalJSEnv$NonZeroExitException: node.js exited with code 1
at org.scalajs.jsenv.ExternalJSEnv$AbstractExtRunner.waitForVM(ExternalJSEnv.scala:96)
at org.scalajs.jsenv.ExternalJSEnv$ExtRunner.run(ExternalJSEnv.scala:143)
at org.scalajs.sbtplugin.ScalaJSPluginInternal$.org$scalajs$sbtplugin$ScalaJSPluginInternal$$jsRun(ScalaJSPluginInternal.scala:479)
at org.scalajs.sbtplugin.ScalaJSPluginInternal$$anonfun$45$$anonfun$apply$27$$anonfun$apply$28.apply(ScalaJSPluginInternal.scala:539)
at org.scalajs.sbtplugin.ScalaJSPluginInternal$$anonfun$45$$anonfun$apply$27$$anonfun$apply$28.apply(ScalaJSPluginInternal.scala:533)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
I'm most suspicious of my build.sbt. (It's in a subproject that has no scala.js in it.) I think I have something out of joint, but don't know what other settings to try.
scalaVersion := "2.11.7"

scalacOptions ++= Seq("-unchecked", "-deprecation", "-feature")

libraryDependencies ++= Seq(
  "org.scala-js" %%% "scalajs-dom" % "0.8.1"
)

// don't need phantomjs: jsDependencies += RuntimeDOM

jsDependencies += "org.webjars" % "d3js" % "3.5.5-1" / "d3.min.js"

jsDependencies += ProvidedJS / "algorithmTime.js"

scalaJSStage in Global := FastOptStage

persistLauncher := true
I can't even get a "hello" out of algorithmTime.js with Node.js.
function hello() {
    console.log("hello from js")
}
The main() in Scala is pretty trim:
object PlotTime extends js.JSApp {
  def main(): Unit = {
    println("Hello from scala")
    global.hello()
    val png = global.dataToPng("benchmark/results/v0.1.2/dijkstra.csv")
    println(png)
  }
}
Before trying Node.js I got a bit further using PhantomJS and Rhino. sbt run gets into my local JavaScript code and stalls inside of d3 with:
[info] Running net.walend.graph.results.PlotTime
Hello from scala
hello from js
org.mozilla.javascript.EcmaError: TypeError: Cannot call method "querySelector" of undefined (/Users/dwalend/.ivy2/cache/org.webjars/d3js/jars/d3js-3.5.5-1.jar#META-INF/resources/webjars/d3js/3.5.5/d3.min.js#3)
at org.mozilla.javascript.ScriptRuntime.constructError(ScriptRuntime.java:3701)
at org.mozilla.javascript.ScriptRuntime.constructError(ScriptRuntime.java:3679)
at org.mozilla.javascript.ScriptRuntime.typeError(ScriptRuntime.java:3707)
at org.mozilla.javascript.ScriptRuntime.typeError2(ScriptRuntime.java:3726)
at org.mozilla.javascript.ScriptRuntime.undefCallError(ScriptRuntime.java:3743)
at org.mozilla.javascript.ScriptRuntime.getPropFunctionAndThisHelper(ScriptRuntime.java:2269)
at org.mozilla.javascript.ScriptRuntime.getPropFunctionAndThis(ScriptRuntime.java:2262)
at org.mozilla.javascript.Interpreter.interpretLoop(Interpreter.java:1317)
at org.mozilla.javascript.Interpreter.interpret(Interpreter.java:815)
at org.mozilla.javascript.InterpretedFunction.call(InterpretedFunction.java:109)
at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:394)
at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3102)
at org.mozilla.javascript.InterpretedFunction.exec(InterpretedFunction.java:120)
at org.mozilla.javascript.Context.evaluateString(Context.java:1078)
at org.scalajs.jsenv.rhino.package$ContextOps$.evaluateFile$extension(package.scala:21)
at org.scalajs.jsenv.rhino.RhinoJSEnv.org$scalajs$jsenv$rhino$RhinoJSEnv$$internalRunJS(RhinoJSEnv.scala:157)
at org.scalajs.jsenv.rhino.RhinoJSEnv$Runner.run(RhinoJSEnv.scala:62)
at org.scalajs.sbtplugin.ScalaJSPluginInternal$.org$scalajs$sbtplugin$ScalaJSPluginInternal$$jsRun(ScalaJSPluginInternal.scala:479)
at org.scalajs.sbtplugin.ScalaJSPluginInternal$$anonfun$45$$anonfun$apply$27$$anonfun$apply$28.apply(ScalaJSPluginInternal.scala:539)
at org.scalajs.sbtplugin.ScalaJSPluginInternal$$anonfun$45$$anonfun$apply$27$$anonfun$apply$28.apply(ScalaJSPluginInternal.scala:533)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
[trace] Stack trace suppressed: run last toGhPages/compile:run for the full output.
java.lang.RuntimeException: Exception while running JS code: TypeError: Cannot call method "querySelector" of undefined (/Users/dwalend/.ivy2/cache/org.webjars/d3js/jars/d3js-3.5.5-1.jar#META-INF/resources/webjars/d3js/3.5.5/d3.min.js#3)
at scala.sys.package$.error(package.scala:27)
at org.scalajs.jsenv.rhino.RhinoJSEnv.org$scalajs$jsenv$rhino$RhinoJSEnv$$internalRunJS(RhinoJSEnv.scala:173)
at org.scalajs.jsenv.rhino.RhinoJSEnv$Runner.run(RhinoJSEnv.scala:62)
at org.scalajs.sbtplugin.ScalaJSPluginInternal$.org$scalajs$sbtplugin$ScalaJSPluginInternal$$jsRun(ScalaJSPluginInternal.scala:479)
at org.scalajs.sbtplugin.ScalaJSPluginInternal$$anonfun$45$$anonfun$apply$27$$anonfun$apply$28.apply(ScalaJSPluginInternal.scala:539)
at org.scalajs.sbtplugin.ScalaJSPluginInternal$$anonfun$45$$anonfun$apply$27$$anonfun$apply$28.apply(ScalaJSPluginInternal.scala:533)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
This error suggests my code is doing what it is supposed to. However, internet wisdom says that chasing "querySelector" in Rhino is a dead end and that Node.js is a better choice.
I suspect I'm missing some sbt switch in the system, but don't know what else to look for.
I also don't see how it is supposed to work. I'm new to JavaScript, and I don't see how any one of these JavaScript files depends on any other in any of the produced files. (The examples in the Scala.js tutorial link everything together using script tags in an index.html page.)
> tree toGhPages/target/scala-2.11/
toGhPages/target/scala-2.11/
├── classes
│   ├── JS_DEPENDENCIES
│   ├── algorithmTime.js
│   └── net
│       └── walend
│           └── graph
│               └── results
│                   ├── PlotTime$.class
│                   ├── PlotTime$.sjsir
│                   └── PlotTime.class
├── toghpages-fastopt.js
├── toghpages-fastopt.js.map
└── toghpages-jsdeps.js
The big picture: I'm attempting to use sbt, Scala.js, and d3 to create performance charts for a Scala graph algorithm library. The first cut of charts looks promising, but GitHub doesn't support JavaScript on README.md pages; for that I'll need a simple image. Wanting to learn more about both Scala.js and d3 is what attracted me to this approach.
Quickfix
In order to make it work in Node.js, do not properly declare the members you want to be visible (i.e. no var and no named function):
hello = function() {
    console.log("hello from js")
};
This is a terrible hack, but will solve the inclusion problems for algorithmTime.js. "Proper" solution at the end.
Background
Composing different JavaScript files is hard in general, since there exists no standardized way of doing so. Traditional HTML include tags just have the semantics of concatenating all the code. This is the semantics we try to emulate in the Scala.js runners.
However, Node.js uses the CommonJS module system. In that system, a library explicitly exports members and the using site puts them into a namespace. This avoids naming collisions.
Example:
// Library (foo.js)
exports.foo = function() { return 1; };

// Using code
var lib = require("foo.js");
lib.foo(); // returns 1
This allows the library to declare local values without leaking them into the caller. (An aside: although we have a function called require here, this is not RequireJS.)
However, in the Scala.js runners, where we are expected to "just include" foo.js, this poses a challenge: what name should we use for the result of the require call? This is what commonJSName is for (see below for an example).
If commonJSName for a given dependency is not set, in the Node.js runner, we will just emit
require(<name.js>);
without assigning it to anything. (Why not just dump the file, you say? Goodbye reasonable stack traces.)
This has a very interesting effect in Node.js. Consider the following file (bar.js):
var a = 1;
b = 2;
Now we do:
require("bar.js")
console.log(a); // undefined
console.log(b); // 2
It seems that the b leaks into the global context whereas a does not. This is why the quickfix works.
Solutions
For a better solution, you have two choices:
Commit to Node.js, write your library specific to its module system
Autodetect the environment you are included in and adapt dynamically (many JS libraries do this)
Solution 1
module.exports = function() {
    console.log("hello from js")
};
Add commonJSName to your dependency:
jsDependencies += ProvidedJS / "algorithmTime.js" commonJSName "hello"
This will fail miserably in anything but Node.js for two reasons:
The JS VM might not support CommonJS style includes
Overriding the full exports namespace like that is not standard CommonJS but specific to Node.js (IIRC).
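For completeness, with Solution 1 the Scala side's existing call keeps working unchanged under Node.js, since the module's export is the function itself (a sketch matching the question's code):
import scala.scalajs.js.Dynamic.global

// With commonJSName "hello", the runner assigns the result of
// require("algorithmTime.js") to hello; since module.exports is the
// function itself, this call resolves to it.
global.hello()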
Solution 2
Autodetect:
var hello = {};

// Scope to prevent leakage
(function(exp) {
    exp.hello = function() {
        console.log("hello from js");
    };
})(typeof exports !== "undefined" ? exports : hello);
You will also need to set commonJSName in this case.
Further, as you might already suspect from the code, this requires an additional indirection, since CommonJS requires the top-level export to be an object (IIRC). Therefore you need to adapt your Scala.js code:
global.hello.hello();
However, if your library exports multiple symbols, this is probably a good idea anyway. Further, this is likely to work in most JS environments (and should work in the three environments we provide with Scala.js).
Epilogue
We (the Scala.js team) are very unhappy about this situation, since we believe that including JS libraries should be just as easy as depending on other Scala and/or Java libraries in JVM land. However, we have not found a better solution, short of supporting every inclusion style, which is a huge design, engineering and maintenance effort (what if a system changes or a new system comes up?).
Related discussions: #457 and #706.
