Failing to create a Juju controller on OpenStack/MicroStack bare metal

I am following these steps (https://microstack.run/docs/using-juju) and I cannot seem to create a controller. I get the error message:
20:23:24 ERROR juju.cmd.juju.commands bootstrap.go:884 failed to bootstrap model: cannot start bootstrap instance: no metadata for "bionic" images in microstack with arch amd64
I installed the OpenStack single-node installation and followed all the instructions in the guides.
I have tried logging into Horizon as admin and adding the series and arch metadata to the image I uploaded via the CLI, but it does not work.
Full log:
# juju bootstrap --bootstrap-series=$OS_SERIES --metadata-source=~/simplestreams --model-default network=test --model-default external-network=external --model-default use-floating-ip=true microstack microstack --debug
20:23:22 INFO juju.cmd supercommand.go:56 running juju [2.9.31 0f2ce8e528a67fa3f735dff39a1a68c44540bb97 gc go1.18.2]
20:23:22 DEBUG juju.cmd supercommand.go:57 args: []string{"/snap/juju/19414/bin/juju", "bootstrap", "--bootstrap-series=bionic", "--metadata-source=~/simplestreams", "--model-default", "network=test", "--model-default", "external-network=external", "--model-default", "use-floating-ip=true", "microstack", "microstack", "--debug"}
20:23:22 DEBUG juju.cmd.juju.commands bootstrap.go:1307 authenticating with region "" and credential "admin" ()
20:23:22 DEBUG juju.cmd.juju.commands bootstrap.go:1455 provider attrs: map[external-network:external network:test policy-target-group: use-default-secgroup:false use-floating-ip:true use-openstack-gbp:false]
20:23:23 INFO cmd authkeys.go:114 Adding contents of "/root/.local/share/juju/ssh/juju_id_rsa.pub" to authorized-keys
20:23:23 DEBUG juju.cmd.juju.commands bootstrap.go:1530 preparing controller with config: map[agent-metadata-url: agent-stream:released apt-ftp-proxy: apt-http-proxy: apt-https-proxy: apt-mirror: apt-no-proxy: authorized-keys:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDnkIQi6rQURr822ggum4NZtdY68kCJRgS0l9YrjGvnYhWLwuj0h2/nA81MVSDOLqy4aRmQCqIlGAcWVj+0cagwz0iIvkZK6U9CIpCmNjsd6bvV+nVvDaAnFK5/AwkYixflOV867q2ybCuLkUNVIYgUzHY/FMhvFMcgl9Ldxs+2KDKaRam78x2xsp/zPs05bKpPVJoCAkABcSFBBvfMsNlqu07He7Y4tsZJaFvti/mV+rO1N41UMerWbPcH1VkqvVVGqZNTPR3fSVZsVmWFs1L0w5P4+AaKrvHW74WFg/wLDmUEwj5rybDK5H1aEoJPggtP6+QhSpPFAYIPRTdASeJx juju-client-key
automatically-retry-hooks:true backup-dir: charmhub-url:https://api.charmhub.io cloudinit-userdata: container-image-metadata-url: container-image-stream:released container-inherit-properties: container-networking-method: default-series:focal default-space: development:false disable-network-management:false disable-telemetry:false egress-subnets: enable-os-refresh-update:true enable-os-upgrade:true external-network:external fan-config: firewall-mode:instance ftp-proxy: http-proxy: https-proxy: ignore-machine-addresses:false image-metadata-url: image-stream:released juju-ftp-proxy: juju-http-proxy: juju-https-proxy: juju-no-proxy:127.0.0.1,localhost,::1 logforward-enabled:false logging-config: logging-output: lxd-snap-channel:latest/stable max-action-results-age:336h max-action-results-size:5G max-status-history-age:336h max-status-history-size:5G name:controller net-bond-reconfigure-delay:17 network:test no-proxy:127.0.0.1,localhost,::1 num-container-provision-workers:4 num-provision-workers:16 policy-target-group: provisioner-harvest-mode:destroyed proxy-ssh:false resource-tags: snap-http-proxy: snap-https-proxy: snap-store-assertions: snap-store-proxy: snap-store-proxy-url: ssl-hostname-verification:true test-mode:false transmit-vendor-metrics:true type:openstack update-status-hook-interval:5m use-default-secgroup:false use-floating-ip:true use-openstack-gbp:false uuid:414283ed-20ff-4226-8bf5-762dc6414ca4]
20:23:23 INFO juju.provider.openstack provider.go:169 opening model "controller"
20:23:23 WARN juju.provider.openstack config.go:181 Config attribute "use-floating-ip" is deprecated.
You can instead use the constraint "allocate-public-ip".
20:23:23 DEBUG juju.provider.openstack provider.go:979 authURL: https://10.255.52.101:5000/v3
20:23:23 DEBUG juju.provider.openstack provider.go:979 authURL: https://10.255.52.101:5000/v3
20:23:23 DEBUG goose logger.go:44 DEBUG: auth details: &{Token:gAAAAABis3o7i7xAk0aKxVqGM032h27eTaab39ody8-PDV4hNAgylnlhvdmFV1AaK6Sn5IkYj43T5jCB97JvdvDYSjG5Yk1koYz3NO7PDmASw24jnuXidKC8-yJC8pp9emNaA0GEprkUhP7TOKs7K5VIzwrUO6BOqBNDvpWmXrFP2aZUMvmwCds TenantId:75a496aa41f944a4bd867557bcc7489d TenantName:admin UserId:47980441e7bb47a69473b734091e19d5 Domain: RegionServiceURLs:map[microstack:map[compute:https://10.255.52.101:8774/v2.1 identity:https://10.255.52.101:5000/v3/ image:https://10.255.52.101:9292 network:https://10.255.52.101:9696 placement:https://10.255.52.101:8778 volumev2:https://10.255.52.101:8776/v2/75a496aa41f944a4bd867557bcc7489d volumev3:https://10.255.52.101:8776/v3/75a496aa41f944a4bd867557bcc7489d]]}
20:23:23 INFO cmd bootstrap.go:855 Creating Juju controller "microstack" on microstack/microstack
20:23:23 DEBUG goose logger.go:44 TRACE: api version will be inserted between "https://10.255.52.101:8774/" and "/"
20:23:23 DEBUG goose logger.go:44 DEBUG: discovered API versions: [{Version:{Major:2 Minor:0} Links:[{Href:http://127.0.0.1:8764/v2/ Rel:self}] Status:SUPPORTED} {Version:{Major:2 Minor:1} Links:[{Href:http://127.0.0.1:8764/v2.1/ Rel:self}] Status:CURRENT}]
20:23:23 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:8774/v2.1/flavors/detail
20:23:24 INFO juju.cmd.juju.commands bootstrap.go:921 combined bootstrap constraints:
20:23:24 DEBUG juju.environs.bootstrap bootstrap.go:320 model "controller" supports application/machine networks: true
20:23:24 DEBUG juju.environs.bootstrap bootstrap.go:322 network management by juju enabled: true
20:23:24 DEBUG juju.environs.bootstrap bootstrap.go:1065 no agent directory found, using default agent metadata source: https://streams.canonical.com/juju/tools
20:23:24 DEBUG juju.environs.bootstrap bootstrap.go:1090 setting default image metadata source: /root/simplestreams/images
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:423 searching for signed metadata in datasource "bootstrap metadata"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:458 looking for data index using path streams/v1/index2.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:470 looking for data index using URL file:///root/simplestreams/images/streams/v1/index2.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:473 streams/v1/index2.sjson not accessed, actual error: [{/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/environs/simplestreams/datasource.go:192: "file:///root/simplestreams/images/streams/v1/index2.sjson" not found}]
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:474 streams/v1/index2.sjson not accessed, trying legacy index path: streams/v1/index.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:489 cannot load index "file:///root/simplestreams/images/streams/v1/index.sjson": "file:///root/simplestreams/images/streams/v1/index.sjson" not found
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:427 falling back to search for unsigned metadata in datasource "bootstrap metadata"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:458 looking for data index using path streams/v1/index2.json
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:470 looking for data index using URL file:///root/simplestreams/images/streams/v1/index2.json
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:473 streams/v1/index2.json not accessed, actual error: [{/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/environs/simplestreams/datasource.go:192: "file:///root/simplestreams/images/streams/v1/index2.json" not found}]
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:474 streams/v1/index2.json not accessed, trying legacy index path: streams/v1/index.json
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:493 read metadata index at "file:///root/simplestreams/images/streams/v1/index.json"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:1025 finding products at path "streams/v1/com.ubuntu.cloud-released-imagemetadata.json"
20:23:24 DEBUG juju.environs imagemetadata.go:45 new user image datasource registered: bootstrap metadata
20:23:24 INFO juju.environs.bootstrap bootstrap.go:1127 custom image metadata added to search path
20:23:24 INFO cmd bootstrap.go:397 Loading image metadata
20:23:24 DEBUG juju.environs imagemetadata.go:119 obtained image datasource "bootstrap metadata"
20:23:24 DEBUG juju.environs imagemetadata.go:119 obtained image datasource "default ubuntu cloud images"
20:23:24 DEBUG juju.environs.bootstrap bootstrap.go:959 constraints for image metadata lookup &{{{microstack https://10.255.52.101:5000/v3} [disco xenial win2012hvr2 centos8 opensuseleap win2012hv win2016nano eoan yakkety quantal trusty saucy raring genericlinux win2008r2 bionic vivid utopic win7 win8 win81 kubernetes focal zesty centos7 win2016hv wily win10 win2012r2 precise impish cosmic artful win2012 centos9 win2016 win2019 jammy hirsute groovy] [amd64 i386 armhf arm64 ppc64el s390x] released}}
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:423 searching for signed metadata in datasource "bootstrap metadata"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:458 looking for data index using path streams/v1/index2.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:470 looking for data index using URL file:///root/simplestreams/images/streams/v1/index2.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:473 streams/v1/index2.sjson not accessed, actual error: [{/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/environs/simplestreams/datasource.go:192: "file:///root/simplestreams/images/streams/v1/index2.sjson" not found}]
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:474 streams/v1/index2.sjson not accessed, trying legacy index path: streams/v1/index.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:489 cannot load index "file:///root/simplestreams/images/streams/v1/index.sjson": "file:///root/simplestreams/images/streams/v1/index.sjson" not found
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:427 falling back to search for unsigned metadata in datasource "bootstrap metadata"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:458 looking for data index using path streams/v1/index2.json
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:470 looking for data index using URL file:///root/simplestreams/images/streams/v1/index2.json
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:473 streams/v1/index2.json not accessed, actual error: [{/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/environs/simplestreams/datasource.go:192: "file:///root/simplestreams/images/streams/v1/index2.json" not found}]
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:474 streams/v1/index2.json not accessed, trying legacy index path: streams/v1/index.json
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:493 read metadata index at "file:///root/simplestreams/images/streams/v1/index.json"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:497 skipping index "file:///root/simplestreams/images/streams/v1/index.json" because of missing information: index file has no data for cloud {microstack https://10.255.52.101:5000/v3} not found
20:23:24 DEBUG juju.environs.bootstrap bootstrap.go:967 ignoring image metadata in bootstrap metadata: index file has no data for cloud {microstack https://10.255.52.101:5000/v3} not found
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:423 searching for signed metadata in datasource "default ubuntu cloud images"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:458 looking for data index using path streams/v1/index2.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:470 looking for data index using URL http://cloud-images.ubuntu.com/releases/streams/v1/index2.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:473 streams/v1/index2.sjson not accessed, actual error: [{/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/environs/simplestreams/datasource.go:192: "http://cloud-images.ubuntu.com/releases/streams/v1/index2.sjson" not found}]
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:474 streams/v1/index2.sjson not accessed, trying legacy index path: streams/v1/index.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:493 read metadata index at "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:497 skipping index "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson" because of missing information: index file has no data for cloud {microstack https://10.255.52.101:5000/v3} not found
20:23:24 DEBUG juju.environs.bootstrap bootstrap.go:967 ignoring image metadata in default ubuntu cloud images: index file has no data for cloud {microstack https://10.255.52.101:5000/v3} not found
20:23:24 DEBUG juju.environs.bootstrap bootstrap.go:975 found 0 image metadata from all image data sources
20:23:24 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:8774/v2.1/flavors/detail
20:23:24 INFO cmd bootstrap.go:470 Looking for packaged Juju agent version 2.9.31 for amd64
20:23:24 INFO juju.environs.bootstrap tools.go:82 looking for bootstrap agent binaries: version=2.9.31
20:23:24 DEBUG juju.environs.tools tools.go:87 finding agent binaries in stream: "released"
20:23:24 DEBUG juju.environs.tools tools.go:89 reading agent binaries with major.minor version 2.9
20:23:24 DEBUG juju.environs.tools tools.go:98 filtering agent binaries by version: 2.9.31
20:23:24 DEBUG juju.environs.tools tools.go:101 filtering agent binaries by os type: ubuntu
20:23:24 DEBUG juju.environs.tools tools.go:104 filtering agent binaries by architecture: amd64
20:23:24 DEBUG juju.environs.tools urls.go:133 trying datasource "keystone catalog"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:423 searching for signed metadata in datasource "default simplestreams"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:458 looking for data index using path streams/v1/index2.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:754 using default candidate for content id "com.ubuntu.juju:released:agents" are {20210329 mirrors:1.0 content-download streams/v1/cpc-mirrors-agents.sjson []}
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:470 looking for data index using URL https://streams.canonical.com/juju/tools/streams/v1/index2.sjson
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:493 read metadata index at "https://streams.canonical.com/juju/tools/streams/v1/index2.sjson"
20:23:24 DEBUG juju.environs.simplestreams simplestreams.go:1025 finding products at path "streams/v1/com.ubuntu.juju-released-agents.sjson"
20:23:24 INFO juju.environs.bootstrap tools.go:84 found 1 packaged agent binaries
20:23:24 INFO cmd bootstrap.go:483 Located Juju agent version 2.9.31-ubuntu-amd64 at https://streams.canonical.com/juju/tools/agent/2.9.31/juju-2.9.31-linux-amd64.tgz
20:23:24 WARN juju.provider.openstack config.go:181 Config attribute "use-floating-ip" is deprecated.
You can instead use the constraint "allocate-public-ip".
20:23:24 INFO cmd bootstrap.go:578 Starting new instance for initial controller
20:23:24 INFO cmd bootstrap.go:167 Launching controller instance(s) on microstack/microstack...
20:23:24 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:8774/v2.1/os-availability-zone
20:23:24 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:8774/v2.1/os-availability-zone
20:23:24 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:8774/v2.1/flavors/detail
20:23:24 DEBUG juju.environs.instances image.go:66 instance constraints {region: microstack, series: bionic, arch: amd64, constraints: mem=3584M, storage: []}
20:23:24 ERROR juju.cmd.juju.commands bootstrap.go:884 failed to bootstrap model: cannot start bootstrap instance: no metadata for "bionic" images in microstack with arch amd64
20:23:24 DEBUG juju.cmd.juju.commands bootstrap.go:885 (error details: [{/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/cmd/juju/commands/bootstrap.go:984: failed to bootstrap model} {/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/environs/bootstrap/bootstrap.go:706: } {/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/environs/bootstrap/bootstrap.go:582: } {/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/provider/common/bootstrap.go:60: } {/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/provider/common/bootstrap.go:277: cannot start bootstrap instance} {/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/provider/openstack/provider.go:1078: } {/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/provider/openstack/provider.go:1099: } {/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/provider/openstack/image.go:87: } {/build/snapcraft-juju-25888574271dd1b08771e6ebeeab8ad6/parts/juju/src/environs/instances/image.go:68: no metadata for "bionic" images in microstack with arch amd64}])
20:23:24 DEBUG juju.cmd.juju.commands bootstrap.go:1641 cleaning up after failed bootstrap
20:23:24 INFO juju.provider.common destroy.go:21 destroying model "controller"
20:23:24 INFO juju.provider.common destroy.go:32 destroying instances
20:23:24 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:8774/v2.1/servers/detail
20:23:24 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:8774/v2.1/servers/detail
20:23:24 DEBUG juju.provider.openstack provider.go:1691 terminating instances []
20:23:25 DEBUG goose logger.go:44 DEBUG: auth details: &{Token:gAAAAABis3o9I_BvbB1iv2OXxsPzXJnBHDxVCHI3KDQl2W0e4ROMa4-544z7MBbEHTd5Sp-NZPt57ur11VfMO5YziG-duxRDS9vQXVyYsnVpgRmgZCYh8idZjhJH9rCCZnDyc9KOJ3v3BXCEEksZs21T5wZKUbozLUdWwtr0z2TWhoB7A3rLwxg TenantId:75a496aa41f944a4bd867557bcc7489d TenantName:admin UserId:47980441e7bb47a69473b734091e19d5 Domain: RegionServiceURLs:map[microstack:map[compute:https://10.255.52.101:8774/v2.1 identity:https://10.255.52.101:5000/v3/ image:https://10.255.52.101:9292 network:https://10.255.52.101:9696 placement:https://10.255.52.101:8778 volumev2:https://10.255.52.101:8776/v2/75a496aa41f944a4bd867557bcc7489d volumev3:https://10.255.52.101:8776/v3/75a496aa41f944a4bd867557bcc7489d]]}
20:23:25 DEBUG goose logger.go:44 TRACE: api version will be inserted between "https://10.255.52.101:9696/" and "/"
20:23:25 DEBUG goose logger.go:44 DEBUG: discovered API versions: [{Version:{Major:2 Minor:0} Links:[{Href:http://127.0.0.1:9686/v2.0/ Rel:self}] Status:CURRENT}]
20:23:25 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:9696/v2.0/security-groups
20:23:25 INFO juju.provider.common destroy.go:56 destroying storage
20:23:25 DEBUG juju.provider.openstack cinder.go:119 volume URL: https://10.255.52.101:8776/v3/75a496aa41f944a4bd867557bcc7489d
20:23:25 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:9696/v2.0/security-groups
20:23:25 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:8774/v2.1/servers/detail
20:23:25 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:8774/v2.1/servers/detail
20:23:26 DEBUG goose logger.go:44 TRACE: MakeServiceURL: https://10.255.52.101:9696/v2.0/security-groups
20:23:26 INFO cmd supercommand.go:544 command finished
My YAML file. I had to change the endpoint to the public address and add ca-certificates due to bug https://bugs.launchpad.net/microstack/+bug/1955133:
# cat microstack.yaml
clouds:
  microstack:
    type: openstack
    auth-types: [access-key, userpass]
    regions:
      microstack:
        endpoint: https://10.255.52.101:5000/v3
    ca-certificates:
      - |
        -----BEGIN CERTIFICATE-----
        MIIC3zCCAcegAwIBAgIUWXI8I2kpUvUTQabdoZdiSqEhMg4wDQYJKoZIhvcNAQEL
        BQAwEjEQMA4GA1UEAwwHc3RhdGxlcjAeFw0yMjA2MjEyMTUzMzNaFw0zMjA2MjEy
        MTUzMzNaMBIxEDAOBgNVBAMMB3N0YXRsZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IB
        DwAwggEKAoIBAQDagPjjLmG+MmOK1mppRzEobPMQALgU1u1yF0lhwbJ7us+13PDp
        dob9VjMxJdjh088ViOXl+g/4gpRD+qumzpCT+LRYvQjlZ8EPUW/eg/tOhTm2uVR7
        N8opF24VEed6YcfA1+Zp8vocqZ7ULWlF3FEjInPAgPmM4rWtihq4FsXQA5KdZF9v
        0oktdFL4CIaWTElxNMbBSxm5tCePLbFjhRE4hRRnLlgPCE7/XsO5IzR6B8MqgcGX
        RoBE5IBaHTfXBXDnJCrQQkHyXK2a2eRYR+5YGf9odOKUqVQ6+evy6JXr1aBGnVkX
        rpw32zNW7p8XFgk+oYfveJK3Cl11v6LoVFOFAgMBAAGjLTArMA8GA1UdEwQIMAYB
        Af8CAQAwGAYDVR0RBBEwD4IHc3RhdGxlcocECv80ZTANBgkqhkiG9w0BAQsFAAOC
        AQEAVC9Q+MsLdMtTe/5zTfwzzml22qYELdUDgEIOg+d2kugA1UR/q9y+ub7XkFok
        9jVPO9sDq06PmYHsQZo50Rn6qbqN4z4bBIFy653ONoLf0hQkCGEnBoh1T1lqboEC
        7hpc4ZIg6qs4XlN4RtytTJev0h2vl8AvIqw/wesrbIB6PCe/ADlgDVBEcjiK11K0
        sBnZmfF+uH3asCDUfWcR1v1ubkaLQlVYPjcoqVCzjLAEjwAgu7187MWPyOl4KcHH
        1FwqRyCQx8/pkc0tZWd3j/Z3tD5Gd4GLFDMoNl36SnSUGNq37h/URkI1kQ5e6J+g
        q58wfsWLoHVaI9sLKFLl3IO+WA==
        -----END CERTIFICATE-----
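Editor's note: the debug log shows why the local metadata is ignored: "index file has no data for cloud {microstack https://10.255.52.101:5000/v3} not found". Juju only uses a simplestreams index entry whose recorded region and endpoint both match the cloud it is bootstrapping. This is a rough sketch (not Juju's actual code, and the example endpoints are hypothetical) of that matching, illustrating how metadata generated against a different endpoint (e.g. the internal address, before the public-address workaround above) would never match:

```python
# Sketch of simplestreams index matching, inferred from the log line
# "index file has no data for cloud {microstack https://10.255.52.101:5000/v3}".
# An index entry only applies when BOTH region and endpoint match the cloud.

def index_has_cloud(index: dict, region: str, endpoint: str) -> bool:
    for entry in index.get("index", {}).values():
        for cloud in entry.get("clouds", []):
            if cloud.get("region") == region and cloud.get("endpoint") == endpoint:
                return True
    return False

# Hypothetical index generated against an internal Keystone URL:
index = {
    "index": {
        "com.ubuntu.cloud:custom": {
            "clouds": [{"region": "microstack",
                        "endpoint": "https://192.168.1.1:5000/v3"}],
        }
    }
}

# Bootstrap authenticates against the public endpoint, so the lookup fails:
print(index_has_cloud(index, "microstack", "https://10.255.52.101:5000/v3"))  # False
print(index_has_cloud(index, "microstack", "https://192.168.1.1:5000/v3"))    # True
```

If that is the cause here, regenerating the metadata so its endpoint matches the one in clouds.yaml (something like `juju metadata generate-image -d ~/simplestreams -i <image-id> -s bionic -r microstack -u https://10.255.52.101:5000/v3 -a amd64`; check `juju metadata generate-image --help` for the exact flags in your Juju version) should make the index match.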

Related

Cannot run the website deployed by Visual Studio - GET http://localhost:8080/_configuration/MyApp 404?

I created a new ASP.NET Core web application in Visual Studio 2019 (.NET Core 3.1, Angular, with authentication set to "Individual User Accounts / Store user accounts in-app").
Then I published the application to the folder ...\bin\Release\netcoreapp3.1\publish\.
Then I tried to test the deployed website:
Run MyApp.exe in ...\bin\Release\netcoreapp3.1\publish\:
PS C:\Users\.....\bin\Release\netcoreapp3.1\publish> .\MyApp.exe
info: IdentityServer4.Startup[0]
Starting IdentityServer4 version 3.0.0.0
info: IdentityServer4.Startup[0]
Using explicitly configured authentication scheme Identity.Application for IdentityServer
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: C:\Users\wfaaf\source\repos\MyApp2\MyApp\bin\Release\netcoreapp3.1\publish
Copy and host \bin\Release\netcoreapp3.1\publish\ClientApp\dist\ in IIS.
The web page shows in the browser. However, the console shows errors:
polyfills-es2015.0ef207fb7b4761464817.js:1 GET http://localhost:8080/_configuration/MyApp 404 (Not Found)
polyfills-es2015.0ef207fb7b4761464817.js:1 GET http://localhost:8080/_configuration/MyApp 404 (Not Found)
main-es2015.faf629d7395bba656bc6.js:1 ERROR Error: Uncaught (in promise): Error: Could not load settings for 'MyApp'
main-es2015.faf629d7395bba656bc6.js:1 ERROR Error: Could not load settings for 'MyApp'
main-es2015.faf629d7395bba656bc6.js:1 ERROR Error: Could not load settings for 'MyApp'
The host of http://localhost:8080/_configuration/MyApp should be https://localhost:5001. How do I set it?
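Editor's note: the 404 follows from how the SPA resolves that URL. The Angular client requests its configuration relative to the origin the static files were served from, not from the API's origin, so hosting only ClientApp/dist in IIS on port 8080 sends the request to a server with no `_configuration` endpoint. A minimal sketch of the resolution (the ports are the ones from the question):

```python
# Why the request goes to :8080 - the config URL is resolved against
# the page's own origin, not the ASP.NET Core backend's origin.
from urllib.parse import urljoin

page_origin_iis = "http://localhost:8080/"       # only ClientApp/dist hosted here
page_origin_kestrel = "https://localhost:5001/"  # full app served by MyApp.exe

print(urljoin(page_origin_iis, "_configuration/MyApp"))
# http://localhost:8080/_configuration/MyApp  -> 404, no backend at this origin
print(urljoin(page_origin_kestrel, "_configuration/MyApp"))
# https://localhost:5001/_configuration/MyApp -> handled by the ASP.NET app
```

In this template the published app serves both the SPA and the `_configuration` endpoint, so browsing to https://localhost:5001 directly should work; hosting in IIS would require deploying the whole ASP.NET Core app behind the ASP.NET Core module rather than just the dist folder.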

Gradle build failed for the Hello World example on the Corda website

I followed the IOU example on the Corda website. I have Java 1.8 update 121.
I am not sure why this problem is happening.
I am running the application in IntelliJ on a Windows box.
I copied the Gradle file content from the IOU example; alternatively, I used the same Gradle file that was in the downloaded zip itself.
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    nodeDefaults {
        projectCordapp {
            deploy = false
        }
        cordapp project(':contracts')
        cordapp project(':workflows')
    }
    node {
        name "O=Notary,L=London,C=GB"
        notary = [validating : false]
        p2pPort 10002
        rpcSettings {
            address("localhost:10003")
            adminAddress("localhost:10043")
        }
    }
    node {
        name "O=PartyA,L=London,C=GB"
        p2pPort 10005
        rpcSettings {
            address("localhost:10006")
            adminAddress("localhost:10046")
        }
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
    }
    node {
        name "O=PartyD,L=New York,C=US"
        p2pPort 10008
        rpcSettings {
            address("0.0.0.0:10009")
            adminAddress("0.0.0.0:10010")
        }
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
    }
}
task installQuasar(type: Copy) {
    destinationDir rootProject.file("lib")
    from(configurations.quasar) {
        rename 'quasar-core(.*).jar', 'quasar.jar'
    }
}
Below is the complete error
D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java>gradlew clean deployNodes
Picked up _JAVA_OPTIONS: -Xmx512M
Starting a Gradle Daemon (subsequent builds will be faster)
> Task :deployNodes
Running Cordform task
Deleting D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes
Bootstrapping local test network in D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes
Generating node directory for Notary
Generating node directory for PartyA
Generating node directory for PartyD
Waiting for all nodes to generate their node-info files...
#### Error while generating node info file D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyA\logs ####
CAPSULE EXCEPTION: Could not parse version line: Picked up _JAVA_OPTIONS: -Xmx512M (for stack trace, run with -Dcapsule.log=verbose)
USAGE: java <options> -jar corda.jar
Actions:
capsule.version - Prints the capsule and application versions.
capsule.modes - Prints all available capsule modes.
capsule.jvms - Prints a list of all JVM installations found.
capsule.help - Prints this help message.
Options:
capsule.mode=<value> - Picks the capsule mode to run.
capsule.reset - Resets the capsule cache before launching. The capsule to be re-extracted (if applicable), and other possibly cached files will be recreated.
capsule.log=<value> (default: quiet) - Picks a log level. Must be one of none, quiet, verbose, or debug.
capsule.java.home=<value> - Sets the location of the Java home (JVM installation directory) to use; If 'current' forces the use of the JVM that launched the capsule.
capsule.java.cmd=<value> - Sets the path to the Java executable to use.
capsule.jvm.args=<value> - Sets additional JVM arguments to use when running the application.
Picked up _JAVA_OPTIONS: -Xmx512M
#### Error while generating node info file D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyD\logs ####
#### Error while generating node info file D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\Notary\logs ####
CAPSULE EXCEPTION: Could not parse version line: Picked up _JAVA_OPTIONS: -Xmx512M (for stack trace, run with -Dcapsule.log=verbose)
USAGE: java <options> -jar corda.jar
Actions:
capsule.version - Prints the capsule and application versions.
capsule.modes - Prints all available capsule modes.
capsule.jvms - Prints a list of all JVM installations found.
capsule.help - Prints this help message.
Options:
capsule.mode=<value> - Picks the capsule mode to run.
capsule.reset - Resets the capsule cache before launching. The capsule to be re-extracted (if applicable), and other possibly cached files will be recreated.
capsule.log=<value> (default: quiet) - Picks a log level. Must be one of none, quiet, verbose, or debug.
capsule.java.home=<value> - Sets the location of the Java home (JVM installation directory) to use; If 'current' forces the use of the JVM that launched the capsule.
capsule.java.cmd=<value> - Sets the path to the Java executable to use.
capsule.jvm.args=<value> - Sets additional JVM arguments to use when running the application.
Picked up _JAVA_OPTIONS: -Xmx512M
CAPSULE EXCEPTION: Could not parse version line: Picked up _JAVA_OPTIONS: -Xmx512M (for stack trace, run with -Dcapsule.log=verbose)
USAGE: java <options> -jar corda.jar
Actions:
capsule.version - Prints the capsule and application versions.
capsule.modes - Prints all available capsule modes.
capsule.jvms - Prints a list of all JVM installations found.
capsule.help - Prints this help message.
Options:
capsule.mode=<value> - Picks the capsule mode to run.
capsule.reset - Resets the capsule cache before launching. The capsule to be re-extracted (if applicable), and other possibly cached files will be recreated.
capsule.log=<value> (default: quiet) - Picks a log level. Must be one of none, quiet, verbose, or debug.
capsule.java.home=<value> - Sets the location of the Java home (JVM installation directory) to use; If 'current' forces the use of the JVM that launched the capsule.
capsule.java.cmd=<value> - Sets the path to the Java executable to use.
capsule.jvm.args=<value> - Sets additional JVM arguments to use when running the application.
Picked up _JAVA_OPTIONS: -Xmx512M
> Task :deployNodes FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':deployNodes'.
> Error while generating node info file. Please check the logs in D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyA\logs.
> Error while generating node info file. Please check the logs in D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyA\logs.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 41s
15 actionable tasks: 14 executed, 1 up-to-date
D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java>gradlew clean deployNodes --stacktrace
Picked up _JAVA_OPTIONS: -Xmx512M
> Task :deployNodes
Running Cordform task
Deleting D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes
Bootstrapping local test network in D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes
Generating node directory for Notary
Generating node directory for PartyA
Generating node directory for PartyD
Waiting for all nodes to generate their node-info files...
#### Error while generating node info file D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyA\logs ####
CAPSULE EXCEPTION: Could not parse version line: Picked up _JAVA_OPTIONS: -Xmx512M (for stack trace, run with -Dcapsule.log=verbose)
USAGE: java <options> -jar corda.jar
Actions:
capsule.version - Prints the capsule and application versions.
capsule.modes - Prints all available capsule modes.
capsule.jvms - Prints a list of all JVM installations found.
capsule.help - Prints this help message.
Options:
capsule.mode=<value> - Picks the capsule mode to run.
capsule.reset - Resets the capsule cache before launching. The capsule to be re-extracted (if applicable), and other possibly cached files will be recreated.
capsule.log=<value> (default: quiet) - Picks a log level. Must be one of none, quiet, verbose, or debug.
capsule.java.home=<value> - Sets the location of the Java home (JVM installation directory) to use; If 'current' forces the use of the JVM that launched the capsule.
capsule.java.cmd=<value> - Sets the path to the Java executable to use.
capsule.jvm.args=<value> - Sets additional JVM arguments to use when running the application.
Picked up _JAVA_OPTIONS: -Xmx512M
#### Error while generating node info file D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\Notary\logs ####
CAPSULE EXCEPTION: Could not parse version line: Picked up _JAVA_OPTIONS: -Xmx512M (for stack trace, run with -Dcapsule.log=verbose)
USAGE: java <options> -jar corda.jar
Actions:
capsule.version - Prints the capsule and application versions.
capsule.modes - Prints all available capsule modes.
capsule.jvms - Prints a list of all JVM installations found.
capsule.help - Prints this help message.
Options:
capsule.mode=<value> - Picks the capsule mode to run.
capsule.reset - Resets the capsule cache before launching. The capsule to be re-extracted (if applicable), and other possibly cached files will be recreated.
capsule.log=<value> (default: quiet) - Picks a log level. Must be one of none, quiet, verbose, or debug.
capsule.java.home=<value> - Sets the location of the Java home (JVM installation directory) to use; If 'current' forces the use of the JVM that launched the capsule.
capsule.java.cmd=<value> - Sets the path to the Java executable to use.
capsule.jvm.args=<value> - Sets additional JVM arguments to use when running the application.
Picked up _JAVA_OPTIONS: -Xmx512M
#### Error while generating node info file D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyD\logs ####
CAPSULE EXCEPTION: Could not parse version line: Picked up _JAVA_OPTIONS: -Xmx512M (for stack trace, run with -Dcapsule.log=verbose)
USAGE: java <options> -jar corda.jar
Actions:
capsule.version - Prints the capsule and application versions.
capsule.modes - Prints all available capsule modes.
capsule.jvms - Prints a list of all JVM installations found.
capsule.help - Prints this help message.
Options:
capsule.mode=<value> - Picks the capsule mode to run.
capsule.reset - Resets the capsule cache before launching. The capsule to be re-extracted (if applicable), and other possibly cached files will be recreated.
capsule.log=<value> (default: quiet) - Picks a log level. Must be one of none, quiet, verbose, or debug.
capsule.java.home=<value> - Sets the location of the Java home (JVM installation directory) to use; If 'current' forces the use of the JVM that launched the capsule.
capsule.java.cmd=<value> - Sets the path to the Java executable to use.
capsule.jvm.args=<value> - Sets additional JVM arguments to use when running the application.
Picked up _JAVA_OPTIONS: -Xmx512M
> Task :deployNodes FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':deployNodes'.
> Error while generating node info file. Please check the logs in D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyA\logs.
> Error while generating node info file. Please check the logs in D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyA\logs.
* Try:
Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':deployNodes'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:110)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.OutputDirectoryCreatingTaskExecuter.execute(OutputDirectoryCreatingTaskExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:59)
at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:59)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:101)
at org.gradle.api.internal.tasks.execution.FinalizeInputFilePropertiesTaskExecuter.execute(FinalizeInputFilePropertiesTaskExecuter.java:44)
at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:91)
at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:62)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:59)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.run(EventFiringTaskExecuter.java:51)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:301)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:293)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:175)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91)
at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:46)
at org.gradle.execution.taskgraph.LocalTaskInfoExecutor.execute(LocalTaskInfoExecutor.java:42)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareWorkItemExecutor.execute(DefaultTaskExecutionGraph.java:277)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareWorkItemExecutor.execute(DefaultTaskExecutionGraph.java:262)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$ExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:135)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$ExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:130)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$ExecutorWorker.execute(DefaultTaskPlanExecutor.java:200)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$ExecutorWorker.executeWithWork(DefaultTaskPlanExecutor.java:191)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$ExecutorWorker.run(DefaultTaskPlanExecutor.java:130)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
Caused by: org.gradle.api.InvalidUserCodeException: Error while generating node info file. Please check the logs in D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyA\logs.
at net.corda.plugins.Baseform.bootstrapNetwork(Baseform.kt:244)
at net.corda.plugins.Cordform.build(Cordform.kt:70)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.doExecute(StandardTaskAction.java:46)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:39)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:26)
at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:801)
at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:768)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:131)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:301)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:293)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:175)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91)
at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:120)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:99)
... 31 more
Caused by: java.lang.IllegalStateException: Error while generating node info file. Please check the logs in D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyA\logs.
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion.printNodeInfoGenLogToConsole(NetworkBootstrapper.kt:135)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion.generateNodeInfo(NetworkBootstrapper.kt:114)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion.access$generateNodeInfo(NetworkBootstrapper.kt:67)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion$generateNodeInfos$1$1.invoke(NetworkBootstrapper.kt:93)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion$generateNodeInfos$1$1.invoke(NetworkBootstrapper.kt:67)
at net.corda.core.internal.concurrent.ValueOrException$DefaultImpls.capture(CordaFutureImpl.kt:130)
at net.corda.core.internal.concurrent.OpenFuture$DefaultImpls.capture(CordaFutureImpl.kt)
at net.corda.core.internal.concurrent.CordaFutureImpl.capture(CordaFutureImpl.kt:142)
at net.corda.core.internal.concurrent.CordaFutureImplKt$fork$$inlined$also$lambda$1.run(CordaFutureImpl.kt:22)
Suppressed: java.lang.IllegalStateException: Error while generating node info file. Please check the logs in D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\Notary\logs.
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion.printNodeInfoGenLogToConsole(NetworkBootstrapper.kt:135)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion.generateNodeInfo(NetworkBootstrapper.kt:114)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion.access$generateNodeInfo(NetworkBootstrapper.kt:67)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion$generateNodeInfos$1$1.invoke(NetworkBootstrapper.kt:93)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion$generateNodeInfos$1$1.invoke(NetworkBootstrapper.kt:67)
at net.corda.core.internal.concurrent.ValueOrException$DefaultImpls.capture(CordaFutureImpl.kt:130)
at net.corda.core.internal.concurrent.OpenFuture$DefaultImpls.capture(CordaFutureImpl.kt)
at net.corda.core.internal.concurrent.CordaFutureImpl.capture(CordaFutureImpl.kt:142)
at net.corda.core.internal.concurrent.CordaFutureImplKt$fork$$inlined$also$lambda$1.run(CordaFutureImpl.kt:22)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Suppressed: java.lang.IllegalStateException: Error while generating node info file. Please check the logs in D:\Arvinth\BlockChain\Corda\corda-java\cordapp-template-java\build\nodes\PartyD\logs.
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion.printNodeInfoGenLogToConsole(NetworkBootstrapper.kt:135)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion.generateNodeInfo(NetworkBootstrapper.kt:114)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion.access$generateNodeInfo(NetworkBootstrapper.kt:67)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion$generateNodeInfos$1$1.invoke(NetworkBootstrapper.kt:93)
at net.corda.nodeapi.internal.network.NetworkBootstrapper$Companion$generateNodeInfos$1$1.invoke(NetworkBootstrapper.kt:67)
at net.corda.core.internal.concurrent.ValueOrException$DefaultImpls.capture(CordaFutureImpl.kt:130)
at net.corda.core.internal.concurrent.OpenFuture$DefaultImpls.capture(CordaFutureImpl.kt)
at net.corda.core.internal.concurrent.CordaFutureImpl.capture(CordaFutureImpl.kt:142)
at net.corda.core.internal.concurrent.CordaFutureImplKt$fork$$inlined$also$lambda$1.run(CordaFutureImpl.kt:22)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
* Get more help at https://help.gradle.org
BUILD FAILED in 20s
15 actionable tasks: 14 executed, 1 up-to-date
The problem was due to Java: I had multiple Java versions installed. I uninstalled all of them, restarted, and did a fresh install of Java 1.8.0_221, which resolved the error.
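The capsule error above ("Could not parse version line: Picked up _JAVA_OPTIONS: -Xmx512M") also points at the `_JAVA_OPTIONS` environment variable: the JVM prints that banner before its version line, which the capsule launcher then fails to parse. Before reinstalling Java wholesale, a quicker check is worth trying (a sketch; on Windows cmd use `set _JAVA_OPTIONS=` instead of `unset`):

```shell
# The Corda capsule parses the first line of `java -version` output; the
# "Picked up _JAVA_OPTIONS: -Xmx512M" banner printed before it breaks the parse.

# 1. See whether the variable is set in the current environment:
echo "_JAVA_OPTIONS=${_JAVA_OPTIONS:-<not set>}"

# 2. Clear it for this session (on Windows: `set _JAVA_OPTIONS=` in cmd, or
#    remove it under System Properties > Environment Variables):
unset _JAVA_OPTIONS

# 3. Re-check that the banner is gone before re-running deployNodes:
command -v java >/dev/null && java -version 2>&1 | head -n 1 || echo "java not on PATH"
```

If the banner disappears, `./gradlew deployNodes` should get past the node-info generation step without a full Java reinstall.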

Unable to request the metadata service Service-Id after upgrading to Artifactory 6.9.1 from 6.0.2

I have upgraded a few test instances of our Artifactory 6.0.2 installation to version 6.9.1 using the instructions at:
https://www.jfrog.com/confluence/display/RTF/Upgrading+Artifactory
I have tried both the yum upgrade and RPM installation paths, and would like to finish the upgrade with no error messages in the log files, to minimize any potential issues.
After installation and many manual remediation steps I have reached an error "Unable to request the Metadata Service Service-Id" that I cannot find Google results for:
2019-04-09 16:22:13,409 [art-init] [ERROR] (o.a.m.s.s.ArtifactoryMetadataClientConfigStore:111) - Unable to request the Metadata Service Service-Id
2019-04-09 16:22:13,409 [art-init] [ERROR] (o.a.m.s.MetadataEventServiceImpl:188) - Unable to init the Metadata client. The Metadata Event pipeline will be disabled.
After the upgrades, I noticed errors in the Catalina and Artifactory log files and followed Google search results for those error messages (logs added below my question for posterity):
(1) Created:
/var/opt/jfrog/artifactory/access/etc/bootstrap.creds
Containing:
access-admin@127.0.0.1=NEW_PASSWORD
(2) Removed access folder:
rm -rf /opt/jfrog/artifactory/tomcat/webapps/access
(3) Changed ownership of the Artifactory directories:
cd /var/opt/jfrog/artifactory/
chown -R artifactory:artifactory .
(4) Edited the artifactory.system.properties file, adding:
artifactory.pathChecksum.migration.job.enabled=true
(5) Enabled SHA-256 migration by adding this to the same file:
##SHA2 Migration block
artifactory.sha2.migration.job.enabled=true
artifactory.sha2.migration.job.queue.workers=5
(6) Finally, rebooted the instance.
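For reference, steps (1), (4) and (5) above can be consolidated into one script. This is only a sketch: `ART_HOME` defaults to the standard RPM-install data directory, `NEW_PASSWORD` is a placeholder, the `@` in the credentials line follows JFrog's documented `access-admin@<ip>=<password>` format, and the `chmod 600` reflects the recommendation that bootstrap.creds be readable only by the service user.

```shell
# Sketch of remediation steps (1), (4) and (5). ART_HOME is the Artifactory
# data directory (default location for an RPM install); NEW_PASSWORD is a
# placeholder to replace before running.
ART_HOME="${ART_HOME:-/var/opt/jfrog/artifactory}"

mkdir -p "$ART_HOME/access/etc" "$ART_HOME/etc"

# (1) Bootstrap credentials consumed by the Access service on startup
printf 'access-admin@127.0.0.1=NEW_PASSWORD\n' > "$ART_HOME/access/etc/bootstrap.creds"
chmod 600 "$ART_HOME/access/etc/bootstrap.creds"

# (4)+(5) Migration flags appended to artifactory.system.properties
cat >> "$ART_HOME/etc/artifactory.system.properties" <<'EOF'
artifactory.pathChecksum.migration.job.enabled=true
##SHA2 Migration block
artifactory.sha2.migration.job.enabled=true
artifactory.sha2.migration.job.queue.workers=5
EOF
```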
Yet the errors including The Metadata Event pipeline will be disabled persist.
I expect that the final state of the Artifactory server will be such that there are no error messages in the Artifactory nor Catalina log files.
Any assistance on remediating this error so that I can deploy the latest Artifactory build would be highly appreciated.
Thank you in advance.
======================
Here are some of the error logs that prompted the changes shown above:
(1) 2019-03-27 05:03:22,872 [art-init] [WARN ] (o.j.a.c.AccessClientHttpException:41) - Unrecognized ErrorsModel by Access. Original message: Failed on executing /api/v1/system/ping, with response: Not Found
2019-03-27 05:03:22,872 [art-init] [ERROR] (o.a.s.a.AccessServiceImpl:364) - Could not ping access server: {}
org.jfrog.access.client.AccessClientHttpException: HTTP response status 404:Failed on executing /api/v1/system/ping, with response: Not Found.
(2)2019-03-27 05:06:53,235 [art-exec-3] [INFO ]
(o.a.s.j.m.s.Sha256MigrationJobDelegate:216) - SHA256 migration job (for existing artifacts) is disabled and will not run, there are 52496 artifacts without SHA256 values in the database.  Future versions of Artifactory may enforce this migration as a prerequisite for upgrades.
(3) 2019-04-04 16:20:10,951 [localhost-startStop-1] [JFrog-Access] [WARN ] (o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext:550) - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.springframework.boot.autoconfigure.jersey.JerseyAutoConfiguration': Unsatisfied dependency expressed through constructor parameter 1; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jerseyConfig' defined in URL [jar:file:/opt/jfrog/artifactory/tomcat/webapps/access/WEB-INF/lib/access-application-4.2.0.jar!/org/jfrog/access/rest/config/JerseyConfig.class]: Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.jfrog.access.rest.config.JerseyConfig]: Constructor threw exception; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'systemResource' defined in URL [jar:file:/opt/jfrog/artifactory/tomcat/webapps/access/WEB-INF/lib/access-server-rest-4.2.0.jar!/org/jfrog/access/server/rest/resource/system/SystemResource.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'backupSubResource' defined in URL [jar:file:/opt/jfrog/artifactory/tomcat/webapps/access/WEB-INF/lib/access-server-rest-4.2.0.jar!/org/jfrog/access/server/rest/resource/system/backup/BackupSubResource.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'backupServiceImpl': Unsatisfied dependency 
expressed through field 'importerExporter'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'accessImporterExporterImpl': Unsatisfied dependency expressed through method 'setServerBootstrap' parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'accessServerBootstrapImpl': Invocation of init method failed; nested exception is java.lang.IllegalStateException: Failed to bootstrap initial access credentials.
Thank you for the feedback @error404.
We ended up opening a ticket with Jfrog support (https://www.jfrog.com/jira/browse/RTFACT-18939) who was able to replicate the issue and subsequently marked the ticket as "Fix Version/s: 6.10.0"
Although not explicitly stated, I assume this Fix Version means the described issue will be remediated in the 6.10.0 release.

org.alfresco.error.AlfrescoRuntimeException: 09050000 GetModelsDiff return status is 404 while running the Alfresco

I have imported a Maven project from GitHub and followed the instructions in its README to run Alfresco. While testing the application I opened http://localhost:8080/share/ and successfully got the Alfresco login page, but logging in with the default username and password fails with the error "Your authentication details have not been recognized or Alfresco may not be available at this time." When I checked the console and the Alfresco log file, I found an org.springframework.beans.factory.BeanCreationException followed by org.alfresco.error.AlfrescoRuntimeException: 09050000 GetModelsDiff return status is 404.
Installed the following:
Apache Tomcat 7.0
PostgreSQL 9.4
Also installed a few dependencies needed for the project (Elasticsearch 6.4 and ActiveMQ 5.0).
Working with Java 8.
GitHub repository of the imported project: Open-MBEE/mms (Model Management System)
Below are the exceptions observed in the console:
INFO: Initializing Spring root WebApplicationContext
2018-10-05 13:25:28,063 INFO [alfresco.repo.admin] [localhost-startStop-1] Using database URL 'jdbc:h2:C:\Users\alien147\git\mms_modified\mms-ent/alf_data_dev/h2_data/alf_dev;AUTO_SERVER=TRUE;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=10000;MVCC=FALSE;LOCK_MODE=0' with user 'alfresco'.
2018-10-05 13:25:28,065 INFO [alfresco.repo.admin] [localhost-startStop-1] Connected to database H2 version 1.4.190 (2015-10-11)
2018-10-05 13:25:32,648 INFO [domain.schema.SchemaBootstrap] [localhost-startStop-1] Ignoring script patch (post-Hibernate): patch.db-V4.2-metadata-query-indexes
2018-10-05 13:25:32,648 INFO [domain.schema.SchemaBootstrap] [localhost-startStop-1] Ignoring script patch (post-Hibernate): patch.db-V5.1-metadata-query-indexes
2018-10-05 13:25:38,538 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'Authentication' subsystem, ID: [Authentication, managed, alfrescoNtlm1]
2018-10-05 13:25:38,715 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Startup of 'Authentication' subsystem, ID: [Authentication, managed, alfrescoNtlm1] complete
2018-10-05 13:25:40,942 WARN [context.support.XmlWebApplicationContext] [localhost-startStop-1] Exception encountered during context initialization - cancelling refresh attempt
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'emsConfig' defined in class path resource [alfresco/module/mms-amp/context/mms-init-service-context.xml]: Invocation of init method failed; nested exception is java.lang.NullPointerException
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1514)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:521)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:191)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:618)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:934)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:479)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:410)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:306)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:112)
at org.alfresco.web.app.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:70)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4939)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5434)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1559)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1549)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at java.util.Properties$LineReader.readLine(Properties.java:434)
at java.util.Properties.load0(Properties.java:353)
at java.util.Properties.load(Properties.java:341)
at gov.nasa.jpl.view_repo.util.EmsConfig.setProperties(EmsConfig.java:17)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:269)
at org.springframework.beans.factory.config.MethodInvokingFactoryBean.doInvoke(MethodInvokingFactoryBean.java:162)
at org.springframework.beans.factory.config.MethodInvokingFactoryBean.afterPropertiesSet(MethodInvokingFactoryBean.java:152)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1573)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1511)
... 22 more
org.alfresco.error.AlfrescoRuntimeException: 09050000 GetModelsDiff return status is 404
at org.alfresco.solr.client.SOLRAPIClient.getModelsDiff(SOLRAPIClient.java:1157)
at org.alfresco.solr.tracker.ModelTracker.trackModelsImpl(ModelTracker.java:249)
at org.alfresco.solr.tracker.ModelTracker.trackModels(ModelTracker.java:207)
at org.alfresco.solr.tracker.ModelTracker.ensureFirstModelSync(ModelTracker.java:229)
at org.alfresco.solr.tracker.CoreWatcherJob.registerForCore(CoreWatcherJob.java:131)
at org.alfresco.solr.tracker.CoreWatcherJob.execute(CoreWatcherJob.java:74)
at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:563)
Can anyone please help me out in solving the problem?
Thanks in advance.
The alfresco WAR and share WAR are completely separate. It is quite common for the share WAR to start up and show the login page while the back-end it talks to (the alfresco WAR) has failed to start.
That's what is happening in this case. It appears that the emsConfig bean, defined in https://github.com/Open-MBEE/mms/blob/develop/mms-ent/repo-amp/src/main/amp/config/alfresco/module/mms-amp/context/mms-init-service-context.xml, is hitting a NullPointerException, probably because it cannot find its properties file.
The installation instructions for this project say:
"Create and edit the mms.properties file in the $TOMCAT_HOME/shared/classes directory (You can copy mms-ent/mms.properties.example)".
Have you performed this step?
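If that step was missed, it can be done along these lines (a sketch; `TOMCAT_HOME` and the repo checkout path are assumptions for your environment):

```shell
# Copy the example MMS config into Tomcat's shared classpath, where the
# emsConfig bean looks for it. TOMCAT_HOME is an assumed location; adjust it,
# and run from the root of the mms checkout.
TOMCAT_HOME="${TOMCAT_HOME:-/opt/tomcat}"

mkdir -p "$TOMCAT_HOME/shared/classes"
if [ -f mms-ent/mms.properties.example ]; then
    cp mms-ent/mms.properties.example "$TOMCAT_HOME/shared/classes/mms.properties"
fi
# Then edit mms.properties to match your local Elasticsearch/ActiveMQ/PostgreSQL
# setup, and restart Tomcat.
```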

Disable open office in Alfresco

When trying to start Alfresco 4 Community Edition I am getting the following error.
11:23:02,512 INFO [org.alfresco.repo.management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Startup of 'OOoDirect' subsystem, ID: [OOoDirect, default] complete
11:23:02,604 INFO [org.alfresco.repo.management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'OOoJodconverter' subsystem, ID: [OOoJodconverter, default]
11:23:02,645 ERROR [org.alfresco.enterprise.repo.content.JodConverterSharedInstance] [localhost-startStop-1] Unexpected error in configuring or starting the JodConverter library.The following error is shown for informational purposes only.
java.lang.IllegalArgumentException: officeHome must exist and be a directory
at org.artofsolving.jodconverter.office.DefaultOfficeManagerConfiguration.checkArgument(DefaultOfficeManagerConfiguration.java:238)
at org.artofsolving.jodconverter.office.DefaultOfficeManagerConfiguration.setOfficeHome(DefaultOfficeManagerConfiguration.java:59)
We are not using OpenOffice, so we did not select it during installation. How do we configure Alfresco to skip this check so the startup log is clean?
The following settings in tomcat\shared\classes\alfresco-global.properties should work:
jodconverter.enabled=false
jodconverter.officeHome=null
jodconverter.portNumbers=8100
