BridgeInner configuration file location - corda

Should the bridge read the path to its certificates from bridge.conf? I think it should (as explained in the docs), but when I start it, it looks for certificates in the ./certificates/ folder:
[ERROR] 16:17:53+0200 [main] internal.BridgeStartup.run - Exception during bridge startup
java.nio.file.NoSuchFileException: /opt/corda/bridge/certificates/truststore.jks
Here is the block in bridge.conf:
bridgeMode = BridgeInner
outboundConfig {
    artemisBrokerAddress = "myNodeServer:myNodeServerPort"
}
bridgeInnerConfig {
    floatAddresses = ["floatServer:floatServerPort"]
    expectedCertificateSubject = "CN=Float Local,O=Local Only,L=Paris,C=FR"
    customSSLConfiguration {
        keyStorePassword = "xxx"
        trustStorePassword = "xxx"
        sslKeystore = "./bridgecerts/bridge.jks"
        trustStoreFile = "./bridgecerts/trust.jks"
        crlCheckSoftFail = true
    }
}
networkParametersPath = network-parameters
Below is the tree:
myServerName:/opt/corda/bridge $ tree .
.
├── bridgecerts
│   ├── bridge.jks
│   └── trust.jks
├── bridge.conf
├── corda-bridgeserver-3.1.jar
├── logs
│   └── node-myServerName.log
└── network-parameters
2 directories, 6 files
Is there something I did wrong here?
The weird thing is that I don't have this issue with the float on another server, set up the same way...

The bridge has two connections:
One to the float, called the tunnel connection
One to the node, called the Artemis connection
The settings in the bridgeInnerConfig block configure the tunnel connection. The exception you're seeing is for missing certificates for the Artemis connection. See the docs here:
In particular the BridgeInner setup needs a certificates folder
containing the sslkeystore.jks and truststore.jks copied from the
node and a copied network-parameters file in the workspace folder.
You need to provide the certificates folder and network-parameters file as described.
You can also configure the Artemis connection using an outboundConfig block, but it is not recommended.
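For example, a minimal sketch of putting the expected certificates folder in place by copying the keystores from the node (the host name and node paths are illustrative assumptions, not from the original setup):

# Assumption: the node's certificates live under /opt/corda/node/certificates on myNodeServer
mkdir -p /opt/corda/bridge/certificates
scp myNodeServer:/opt/corda/node/certificates/sslkeystore.jks /opt/corda/bridge/certificates/
scp myNodeServer:/opt/corda/node/certificates/truststore.jks /opt/corda/bridge/certificates/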


Why does bazel's rules_foreign_cc make not find/create the artifacts?

I want to create a make rule from rules_foreign_cc.
But even the minimal example below is causing issues for me.
With the following setup:
.
├── BUILD (empty)
├── hello
│   ├── BUILD.bazel
│   ├── hello.c
│   ├── Makefile
│   └── WORKSPACE (empty)
└── WORKSPACE
WORKSPACE:
workspace(name = "test")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_foreign_cc",
    sha256 = "2a4d07cd64b0719b39a7c12218a3e507672b82a97b98c6a89d38565894cf7c51",
    strip_prefix = "rules_foreign_cc-0.9.0",
    url = "https://github.com/bazelbuild/rules_foreign_cc/archive/refs/tags/0.9.0.tar.gz",
)

load("@rules_foreign_cc//foreign_cc:repositories.bzl", "rules_foreign_cc_dependencies")

# This sets up some common toolchains for building targets. For more details, please see
# https://bazelbuild.github.io/rules_foreign_cc/0.9.0/flatten.html#rules_foreign_cc_dependencies
rules_foreign_cc_dependencies()

local_repository(
    name = "hello",
    path = "hello",
)
hello/BUILD.bazel:
load("#rules_foreign_cc//foreign_cc:defs.bzl", "make")
filegroup(
name = "hellosrc",
srcs = glob([
"**",
]),
)
make(
name="hello_build",
lib_source=":hellosrc",
out_bin_dir="",
out_binaries=["hello_binary"],
targets=["all"],
)
hello/Makefile:
all:
	gcc hello.c -o hello_binary

clean:
	rm -f hello_binary
hello/hello.c:
#include <stdio.h>

int main() {
    printf("hello\n");
    return 0;
}
and running
bazel build @hello//:hello_build
I'm getting
INFO: Analyzed target @hello//:hello_build (43 packages loaded, 812 targets configured).
INFO: Found 1 target...
ERROR: /home/timotheus/.cache/bazel/_bazel_timotheus/a791a0a19ff4a5d2730aa0c8954985c4/external/hello/BUILD.bazel:10:5: output 'external/hello/hello_build/hello_binary' was not created
ERROR: /home/timotheus/.cache/bazel/_bazel_timotheus/a791a0a19ff4a5d2730aa0c8954985c4/external/hello/BUILD.bazel:10:5: Foreign Cc - Make: Building hello_build failed: not all outputs were created or valid
Target @hello//:hello_build failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 12.426s, Critical Path: 12.22s
INFO: 7 processes: 5 internal, 2 linux-sandbox.
FAILED: Build did NOT complete successfully
Basically, I have no idea where Bazel is looking for the created binaries (are they even created?). I tried to set out_bin_dir to different values or not set it at all, all with the same effect.
I expect Bazel to generate the binary and find it - or at least give me a hint what it does.
And I think I found the solution: the make rule of rules_foreign_cc expects the make project to have an install target. If there isn't one, it doesn't find the binaries. This is how I fixed my minimal example, by adding an install target to the Makefile:
install:
	cp -rpv hello_binary $(PREFIX)
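For completeness, the whole Makefile after that fix might look like the sketch below (recipe lines are tab-indented; whether the make rule's targets list also needs "install" added, e.g. targets = ["all", "install"], is my assumption rather than something stated above):

all:
	gcc hello.c -o hello_binary

install:
	cp -rpv hello_binary $(PREFIX)

clean:
	rm -f hello_binary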

Overriding Hydra configuration groups from the CLI

I am trying to override a group of parameters from the CLI, and I am not sure how to do it. The structure of my conf is the following:
conf
├── config.yaml
├── optimizer
│   ├── adamw.yaml
│   ├── adam.yaml
│   ├── default.yaml
│   └── sgd.yaml
├── task
│   ├── default.yaml
│   └── nlp
│       ├── default_seq2seq.yaml
│       ├── summarization.yaml
│       └── text_classification.yaml
My task/default.yaml looks like this:
# @package task
defaults:
  - _self_
  - /optimizer/adam@cfg.optimizer

_target_: src.core.task.Task
_recursive_: false
cfg:
  prefix_sep: ${training.prefix_sep}
while optimizer/default.yaml looks like this:
_target_: null
lr: ${training.lr}
weight_decay: 0.001
no_decay:
  - bias
  - LayerNorm.weight
and one specific optimizer, say adam.yaml, looks like this:
defaults:
  - default

_target_: torch.optim.Adam
In the end, the config I'd like to be computed is like this:
task:
  _target_: src.task.nlp.nli_generation.task.NLIGenerationTask
  _recursive_: false
  cfg:
    prefix_sep: ${training.prefix_sep}
    optimizer:
      _target_: torch.optim.Adam
      lr: ${training.lr}
      weight_decay: 0.001
      no_decay:
        - bias
        - LayerNorm.weight
I would like to be able to modify the optimizer via the CLI (say, use sgd), but I am not sure how to achieve this. I tried the following, though I understand why it fails:
python train.py task.cfg.optimizer=sgd # fails
python train.py task.cfg.optimizer=/optimizer/sgd #fails
Any tips on how to achieve this?
Github discussion here.
You can't override default list entries in this form.
See this.
In particular:
CONFIG: A config to use when creating the output config, e.g. db/mysql, db/mysql@backup.
GROUP_DEFAULT: An overridable config, e.g. db: mysql, db@backup: mysql.
To be able to override a default list entry, you need to define it as a GROUP_DEFAULT.
In your case, it might look like
defaults:
  - _self_
  - /optimizer@cfg.optimizer: adam
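With the group default in place, the optimizer can then be swapped from the command line. The exact spelling below follows Hydra's GROUP@PACKAGE override grammar and is my best guess rather than something confirmed in the thread:

python train.py optimizer@task.cfg.optimizer=sgd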

Image not showing with a standalone bokeh server using BokehTornado instance configured with StaticFileHandler

This is a follow up of my previous question.
The structure of the files is shown below. I have to run the scripts using python -m bokeh_module.bokeh_sub_module from the top directory. The image.png might come from an arbitrary directory later.
.
├── other_module
│   ├── __init__.py
│   └── other_sub_module.py
├── bokeh_module
│   ├── __init__.py
│   ├── image.png  # not showing
│   └── bokeh_sub_module.py
└── image.png  # not showing either
The bokeh_sub_module.py uses the standalone bokeh server. However, the image will not show no matter where it is placed. I got this warning: WARNING:tornado.access:404 GET /favicon.ico (::1) 0.50ms. I'm not sure whether this is an issue with bokeh or tornado. Thank you for any help.
from other_module import other_sub_module
import os

from bokeh.server.server import Server
from bokeh.layouts import column
from bokeh.plotting import figure, show
import tornado.web


def make_document(doc):
    def update():
        pass
        # do something with other_sub_module

    p = figure(match_aspect=True)
    p.image_url(['file://image.png'], 0, 0, 1, 1)
    doc.add_root(column(p, sizing_mode='stretch_both'))
    doc.add_periodic_callback(callback=update, period_milliseconds=1000)


apps = {'/': make_document}
application = tornado.web.Application([(r"/(.*)",
                                        tornado.web.StaticFileHandler,
                                        {"path": os.path.dirname(__file__)})])
server = Server(apps, tornado_app=application)
server.start()
server.io_loop.add_callback(server.show, "/")
server.io_loop.start()
I tried the argument extra_patterns and it does not work either...
You cannot use the file:// protocol with web servers and browsers. Just use regular http:// or https://. If you specify the correct URL, the StaticFileHandler should properly handle it.
Apart from that, your usage of Server is not correct. It doesn't have the tornado_app argument. Instead, pass the routes directly:
extra_patterns = [(r"/(.*)", tornado.web.StaticFileHandler,
                   {"path": os.path.dirname(__file__)})]
server = Server(apps, extra_patterns=extra_patterns)
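With the handler registered, the plot should then reference the image by the path it is served under instead of file:// (a sketch; the root-relative URL assumes the r"/(.*)" pattern above):

# the StaticFileHandler above maps the module directory onto /, so image.png is at /image.png
p.image_url(url=['/image.png'], x=0, y=0, w=1, h=1)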
BTW, just in case: in general, you shouldn't serve the root of your app; otherwise, anyone would be able to see your source code.
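For instance, a sketch of that idea (the static subdirectory name is my choice, not from the original code) would serve only a dedicated assets folder:

# serve only ./static under /static/, keeping the Python sources private
static_dir = os.path.join(os.path.dirname(__file__), "static")
extra_patterns = [(r"/static/(.*)", tornado.web.StaticFileHandler,
                   {"path": static_dir})]
server = Server(apps, extra_patterns=extra_patterns)

The image URL in the plot would then be /static/image.png.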

When do you need aggregate over dependsOn

When doing a multi-project build, you can list the projects you depend upon in dependsOn, and tasks will run on the dependencies first, so you can depend on their results.
There is also an aggregate task that, well, aggregates subprojects. How do aggregating and depending on subprojects differ, and in what cases should you use aggregate instead of dependsOn?
The key difference is that aggregate does not modify the classpath and does not establish ordering between sub-projects. Consider the following multi-project build consisting of root, core, and util projects:
├── build.sbt
├── core
│   ├── src
│   └── target
├── project
│   ├── build.properties
│   └── target
├── src
│   ├── main
│   └── test
├── target
│   ├── scala-2.13
│   └── streams
└── util
    ├── src
    └── target
where
core/src/main/scala/example/Core.scala:
package example

object Core {
  def foo = "Core.foo"
}
util/src/main/scala/example/Util.scala:
package example

object Util {
  def foo =
    "Util.foo" + Core.foo // note how we depend on source from another project here
}
src/main/scala/example/Hello.scala:
package example

object Hello extends App {
  println(42)
}
Note how Util.foo has a classpath dependency on Core.foo from the core project. If we now try to establish the "dependency" using aggregate, like so
lazy val root = (project in file(".")).aggregate(core, util)
lazy val util = (project in file("util"))
lazy val core = (project in file("core"))
and then execute compile from root project
root/compile
it will indeed attempt to compile all the aggregated projects; however, Util will fail to compile because it is missing a classpath dependency:
sbt:aggregate-vs-dependsOn> root/compile
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/core/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/util/target/scala-2.13/classes ...
[error] /Users/mario/IdeaProjects/aggregate-vs-dependson/util/src/main/scala/example/Util.scala:5:18: not found: value Core
[error] "Util.foo" + Core.foo // note how we depend on source from another project here
[error] ^
[error] one error found
[error] (util / Compile / compileIncremental) Compilation failed
[error] Total time: 1 s, completed 13-Oct-2019 12:35:51
Another way of seeing this is to execute show util/dependencyClasspath, where the Core dependency should be missing from the output.
On the other hand, dependsOn will modify the classpath and establish the appropriate ordering between projects:
lazy val root = (project in file(".")).aggregate(core, util)
lazy val util = (project in file("util")).dependsOn(core)
lazy val core = (project in file("core"))
Now root/compile gives
sbt:aggregate-vs-dependsOn> root/compile
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/core/target/scala-2.13/classes ...
[info] Compiling 1 Scala source to /Users/mario/IdeaProjects/aggregate-vs-dependson/util/target/scala-2.13/classes ...
[success] Total time: 1 s, completed 13-Oct-2019 12:40:25
and show util/dependencyClasspath shows Core on the classpath:
sbt:aggregate-vs-dependsOn> show util/dependencyClasspath
[info] * Attributed(/Users/mario/IdeaProjects/aggregate-vs-dependson/core/target/scala-2.13/classes)
Finally, aggregate and dependsOn are not mutually exclusive; in fact, it is common practice to use both at the same time: aggregate all sub-projects in the root to aid building, while using dependsOn to handcraft particular orderings for particular sub-projects.

Nginx routing config for a multi-language site

I have a static website served by nginx, and I ran into some issues implementing multi-language support.
The final output I want is:
mywebsite.dev -> get en-US/index.html
mywebsite.dev/about -> get en-US/about.html
mywebsite.dev/en-US/index.html -> get en-US/index.html
mywebsite.dev/en-US/about.html -> get en-US/about.html
mywebsite.dev/js-JP/index.html -> get js-JP/index.html
Instead, I got this (which is understandable, but I don't have any clue how to solve it):
mywebsite.dev -> get en-US/index.html
mywebsite.dev/about -> get 404 not found
mywebsite.dev/en-US/index.html -> get en-US/index.html
mywebsite.dev/en-US/about.html -> get en-US/about.html
mywebsite.dev/js-JP/index.html -> get js-JP/index.html
Folder tree:
output folder
├── img/
├── js/
├── css/
├── en-US
│   ├── about.html
│   └── index.html
└── js-JP
    ├── about.html
    └── index.html
nginx config:
server {
    listen 80;
    server_name mywebsite.dev www.mywebsite.dev;

    access_log /tmp/mywebsite.access.log;
    error_log /tmp/mywebsite.error.log;

    root /Users/path/to/output/folder;
    index en-US/index.html;
}
Is it possible to achieve this just by modifying the nginx config?
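One possible approach (a minimal sketch, assuming en-US should be the fallback for bare paths like /about) is a try_files chain:

server {
    listen 80;
    server_name mywebsite.dev www.mywebsite.dev;
    root /Users/path/to/output/folder;

    location / {
        # Try the URI as-is (covers /en-US/..., /js-JP/... and static assets),
        # then as an .html file, then the en-US equivalents,
        # and finally fall back to the en-US index page.
        try_files $uri $uri.html /en-US$uri /en-US$uri.html /en-US/index.html;
    }
}

With that chain, mywebsite.dev/about resolves to en-US/about.html, while explicit /en-US/... and /js-JP/... URLs are served as-is.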
