Oracle PGX on Yarn - 404 on WebService - bigdata

I'm running Yarn on Oracle BDA X7-2, specs:
Cloudera Enterprise 5.14.3
Java 1.8.0_171
PGX 2.7.1
I'm trying to run PGX on Yarn following this manual:
https://docs.oracle.com/cd/E56133_01/2.5.0/tutorials/yarn.html
I managed to run the installation script and completed the config file it provided with the following:
{
  "pgx_yarn_jar_hdfs_path": "hdfs:/user/pgx/pgx-yarn-2.7.1.jar",
  "pgx_war_hdfs_path": "hdfs:/user/pgx/pgx-webapp-2.7.1.war",
  "pgx_conf_hdfs_path": "hdfs:/user/pgx/pgx.conf",
  "pgx_log4j_conf_hdfs_path": "hdfs:/user/pgx/log4j2.xml",
  "pgx_dist_log4j_conf_hdfs_path": "hdfs:/user/pgx/dist_log4j.xml",
  "pgx_cluster_host_hdfs_path": "hdfs:/user/pgx/cluster-host.tgz",
  "zookeeper_connect_string": "bda1node05,bda1node06,bda1node07",
  "standard_library_path": "/usr/lib64/gcc/4.8.2",
  "min_heap_size": "512m",
  "max_heap_size": "12g",
  "container_cores": 9,
  "container_memory": 0,
  "container_priority": 0,
  "num_machines": 1
}
Yarn shows a pgx-service application in RUNNING state, there are no errors in stderr, and the log tells me the service is running at the address:
http://bda1node06:7007
And the Linux Java process is running with the following command:
/usr/java/default/bin/java -Xms512m -Xmx12g oracle.pgx.yarn.PgxService bda1node06 /u11/hadoop/yarn/nm/usercache/root/appcache/application_1539869144089_2070/container_e22_1539869144089_2070_01_000002/pgx-server.war 7007 bda1node05,bda1node06,bda1node07 /pgx-8eef44e2-1657-403a-8193-0102f5266680
And after running the PGX client for testing purposes:
$PGX_HOME/bin/pgx --base_url http://bda1node06:7007
I get:
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: cannot connect to server; requested http://bda1node06:7007/version?extendedInfo=true and expected status 200, got 404 instead; response body = ""
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at oracle.pgx.api.PgxFuture.get(PgxFuture.java:99)
at oracle.pgx.api.ServerInstance.createSession(ServerInstance.java:559)
at oracle.pgx.shell.Console.initSession(Console.java:280)
at oracle.pgx.shell.Console.(Console.java:153)
at oracle.pgx.shell.Console.main(Console.java:296)
Caused by: java.lang.IllegalStateException: cannot connect to server; requested http://bda1node06:7007/version?extendedInfo=true and expected status 200, got 404 instead; response body = ""
at oracle.pgx.api.ClientApiProvider.lambda$versionCheck$2(ClientApiProvider.java:189)
at oracle.pgx.client.RemoteUtils.lambda$asyncRequest$5(RemoteUtils.java:278)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I have no idea how to debug this or check whether an extra path is needed in the connection URL.
How should I proceed with debugging?
Thanks in advance!

By default, PGX has a base path of /pgx, which means you should connect as follows:
$PGX_HOME/bin/pgx --base_url http://bda1node06:7007/pgx
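For completeness, the same check can be done from the Java client API instead of the shell. This is only a rough sketch assuming the PGX 2.7 client classes (oracle.pgx.api) and the host/port from the question; the class and session names are made up:
import oracle.pgx.api.Pgx;
import oracle.pgx.api.PgxSession;
import oracle.pgx.api.ServerInstance;

public class PgxConnectCheck {
    public static void main(String[] args) throws Exception {
        // Note the trailing /pgx context path on the base URL.
        ServerInstance instance = Pgx.getInstance("http://bda1node06:7007/pgx");
        // createSession triggers the same /version check that failed with 404 in the shell.
        PgxSession session = instance.createSession("connectivity-check");
        System.out.println("Connected: " + session);
        session.close();
    }
}
If the version check under that base path succeeds here, the shell should connect with the same --base_url as well.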

I'll do a little follow-up here.
We've managed to start a PGX server and manipulate an HBase graph! :D
PGX "Hello World"
We wrote a small piece of code to insert vertices and edges, instantiate PGX, and run a simple example; this is it:
cfg = GraphConfigBuilder.forPropertyGraphHbase().setName('sinapse').setZkQuorum('bda1node05').build()
opg = OraclePropertyGraph.getInstance(cfg)

a = opg.addVertex()
a.setProperty('nome', 'Felipe')

b = opg.addVertex()
b.setProperty('nome', 'Rhenan')

c = opg.addVertex()
c.setProperty('nome', 'Hugo')

opg.addEdge(a, b, 'Pai de')
opg.addEdge(b, c, 'Pai de')
opg.addEdge(a, c, 'Avo de')
opg.commit()

session = Pgx.createSession('sinapsepgx')
analyst = session.createAnalyst()
pgxGraph = session.readGraphWithProperties(opg.getConfig(), true)
analyst.countTriangles(pgxGraph, true)
And that worked just fine!
Client-server architecture
As the next step, we moved to a client/server mode, starting the start-server script.
We managed to do that just fine too!
These are our config files:
server.conf
{
  "port": 7007,
  "enable_tls": false,
  "enable_client_authentication": false
}
pgx.conf
{
  "allow_idle_timeout_overwrite": true,
  "allow_local_filesystem": false,
  "allow_task_timeout_overwrite": true,
  "enable_gm_compiler": true,
  "enterprise_scheduler_config": {
    "analysis_task_config": {
      "priority": "MEDIUM",
      "weight": 12,
      "max_threads": 12
    },
    "fast_analysis_task_config": {
      "priority": "HIGH",
      "weight": 1,
      "max_threads": 12
    },
    "num_io_threads_per_task": 12
  },
  "preload_graphs": [
    {
      "path": "graphs/sinapse_conf.json",
      "name": "sinapse"
    }
  ],
  "max_active_sessions": 1024,
  "max_queue_size_per_session": -1,
  "max_snapshot_count": 0,
  "memory_cleanup_interval": 600,
  "path_to_gm_compiler": null,
  "release_memory_threshold": 0.85,
  "session_idle_timeout_secs": 0,
  "session_task_timeout_secs": 0,
  "strict_mode": true,
  "tmp_dir": "/tmp"
}
sinapse_conf.json
{
  "edge_props": [
    {
      "name": "relacao",
      "type": "string"
    }
  ],
  "db_engine": "HBASE",
  "vertex_props": [
    {
      "name": "nome",
      "type": "string"
    },
    {
      "name": "cpf",
      "type": "string"
    }
  ],
  "format": "pg",
  "name": "sinapse",
  "error_handling": {},
  "vertex_id_type": "long",
  "attributes": {},
  "loading": {},
  "zk_quorum": "bda1node05,bda1node06,bda1node07"
}
The start-server script ran just fine with that, preloaded our HBase graph, and works like a charm.
We connected to the server using the PGX client:
./bin/pgx -b http://localhost:7007
And managed to do the same things we did in the Groovy shell.
That's awesome.
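Just as a reference for anyone driving this from code instead of the shell, here is a rough Java sketch of the same remote flow. It assumes the PGX 2.7 client API (oracle.pgx.api) and the preloaded graph name from pgx.conf; the class and session names are made up:
import oracle.pgx.api.Analyst;
import oracle.pgx.api.Pgx;
import oracle.pgx.api.PgxGraph;
import oracle.pgx.api.PgxSession;

public class RemoteTriangleCount {
    public static void main(String[] args) throws Exception {
        // Connect to the standalone server started by start-server.
        PgxSession session = Pgx.createSession("http://localhost:7007", "sinapse-remote");
        // "sinapse" is available because it is listed under preload_graphs in pgx.conf.
        PgxGraph graph = session.getGraph("sinapse");
        Analyst analyst = session.createAnalyst();
        System.out.println("Triangles: " + analyst.countTriangles(graph, true));
        session.close();
    }
}
The base URL has no /pgx suffix here because the standalone start-server deployment answered at the root path, exactly as in the pgx client command above.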
PGX on Yarn
Well, now we are back to our challenge: running and managing PGX on Yarn.
We've copied our pgx.conf file to HDFS, like this:
hdfs://user/pgx/pgx.conf
{
  "allow_idle_timeout_overwrite": true,
  "allow_local_filesystem": false,
  "allow_task_timeout_overwrite": true,
  "enable_gm_compiler": true,
  "enterprise_scheduler_config": {
    "analysis_task_config": {
      "priority": "MEDIUM",
      "weight": 12,
      "max_threads": 12
    },
    "fast_analysis_task_config": {
      "priority": "HIGH",
      "weight": 1,
      "max_threads": 12
    },
    "num_io_threads_per_task": 12
  },
  "preload_graphs": [
    {
      "path": "graphs/sinapse_conf.json",
      "name": "sinapse"
    }
  ],
  "max_active_sessions": 1024,
  "max_queue_size_per_session": -1,
  "max_snapshot_count": 0,
  "memory_cleanup_interval": 600,
  "path_to_gm_compiler": null,
  "release_memory_threshold": 0.85,
  "session_idle_timeout_secs": 0,
  "session_task_timeout_secs": 0,
  "strict_mode": true,
  "tmp_dir": "/tmp"
}
/opt/oracle/oracle-spatial-graph/property_graph/pgx/yarn/conf/yarn.conf
{
  "pgx_yarn_jar_hdfs_path": "hdfs://mpmapas-ns/user/pgx/pgx-yarn-2.7.1.jar",
  "pgx_war_hdfs_path": "hdfs://mpmapas-ns/user/pgx/pgx-webapp-2.7.1.war",
  "pgx_conf_hdfs_path": "hdfs://mpmapas-ns/user/pgx/pgx.conf",
  "pgx_log4j_conf_hdfs_path": "hdfs://mpmapas-ns/user/pgx/log4j2.xml",
  "pgx_dist_log4j_conf_hdfs_path": "hdfs://mpmapas-ns/user/pgx/dist_log4j.xml",
  "pgx_cluster_host_hdfs_path": "hdfs://mpmapas-ns/user/pgx/cluster-host.tgz",
  "zookeeper_connect_string": "bda1node05.pgj.rj.gov.br,bda1node06.pgj.rj.gov.br,bda1node07.pgj.rj.gov.br",
  "standard_library_path": "/usr/lib64/gcc/4.8.2",
  "min_heap_size": "512m",
  "max_heap_size": "12g",
  "container_cores": 9,
  "container_memory": 0,
  "container_priority": 0,
  "num_machines": 1
}
Also, #albert recommended that we remove the log4j2.xml from the server/shared-mem/pgx-webapp-2.7.1.war file so we can handle Log4j logging using only the file placed in our HDFS folder.
So we unpacked the WAR file, removed the log4j2.xml, repacked it, and edited the log4j2.xml file on HDFS like this:
hdfs://user/pgx/log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss,SSS} %p %C{1} - %m%n"/>
    </Console>
    <File name="LogFile" fileName="file:/tmp/pg_trace.log">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    </File>
  </Appenders>
  <Loggers>
    <Root level="debug">
      <AppenderRef ref="LogFile"/>
    </Root>
    <Logger name="oracle.pgx.engine.admin.Ctrl" level="debug">
      <AppenderRef ref="LogFile"/>
    </Logger>
    <Logger name="pgx.dist.cluster_host" level="debug">
      <AppenderRef ref="LogFile"/>
    </Logger>
  </Loggers>
</Configuration>
And finally we ran the command that starts the server on YARN, like this:
yarn jar yarn/pgx-yarn-2.7.1.jar yarn/conf/yarn.conf
And the bottom of the log file looks really nice:
18/12/11 16:25:03 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
18/12/11 16:25:03 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
18/12/11 16:25:03 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
18/12/11 16:25:03 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
18/12/11 16:25:03 INFO zookeeper.ZooKeeper: Client environment:os.version=4.1.12-124.14.1.el7uek.x86_64
18/12/11 16:25:03 INFO zookeeper.ZooKeeper: Client environment:user.name=root
18/12/11 16:25:03 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
18/12/11 16:25:03 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/oracle/oracle-spatial-graph/property_graph/pgx
18/12/11 16:25:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=bda1node05.pgj.rj.gov.br,bda1node06.pgj.rj.gov.br,bda1node07.pgj.rj.gov.br sessionTimeout=10000 watcher=oracle.pgx.yarn.ClientZkClient#32da97fd
18/12/11 16:25:03 INFO zookeeper.ClientCnxn: Opening socket connection to server bda1node07.pgj.rj.gov.br/192.168.8.7:2181. Will not attempt to authenticate using SASL (unknown error)
18/12/11 16:25:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.8.5:33299, server: bda1node07.pgj.rj.gov.br/192.168.8.7:2181
18/12/11 16:25:03 INFO zookeeper.ClientCnxn: Session establishment complete on server bda1node07.pgj.rj.gov.br/192.168.8.7:2181, sessionid = 0x3668759ae4553df, negotiated timeout = 10000
18/12/11 16:25:05 INFO yarn.StartService: waiting for PGX service (yarn appId == 'application_1539869144089_2555') to come up ...
18/12/11 16:25:10 INFO yarn.StartService: retrieved PGX host: http://bda1node07.pgj.rj.gov.br:7007
18/12/11 16:25:10 INFO yarn.StartService: to connect a remote shell to this host, run '$PGX_HOME/bin/pgx --base_url http://bda1node07.pgj.rj.gov.br:7007'
18/12/11 16:25:10 INFO yarn.StartService: to shut the PGX service down, run 'yarn application -kill application_1539869144089_2555'
18/12/11 16:25:10 INFO zookeeper.ZooKeeper: Session: 0x3668759ae4553df closed
18/12/11 16:25:10 INFO zookeeper.ClientCnxn: EventThread shut down
But connecting to it still returns a 404 ;(
The last piece of information I can give you is the YARN stderr log, which also indicates that we are not using Log4j correctly:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u09/hadoop/yarn/nm/filecache/890/pgx-yarn-2.7.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging.
18/12/11 16:25:06 INFO yarn.AppMaster: register app
18/12/11 16:25:06 INFO yarn.AppMaster: RM response = [queue=root.users.root,maxCap=<memory:65536, vCores:9>]
18/12/11 16:25:06 INFO yarn.AppMaster: max capability of cluster: <memory:65536, vCores:9>
18/12/11 16:25:06 INFO yarn.AppMaster: attempting to allocate 1 containers
18/12/11 16:25:06 INFO yarn.AppMaster: attempt 1: got 0 containers. Available: <memory:194560, vCores:180>
18/12/11 16:25:06 INFO yarn.AppMaster: attempt 2: got 0 containers. Available: <memory:194560, vCores:180>
18/12/11 16:25:06 INFO yarn.AppMaster: attempt 3: got 1 containers. Available: <memory:129024, vCores:171>
18/12/11 16:25:06 INFO yarn.AppMaster: copy hdfs://mpmapas-ns/user/pgx/pgx-yarn-2.7.1.jar into pgx-yarn.jar
18/12/11 16:25:06 INFO yarn.AppMaster: copy hdfs://mpmapas-ns/user/pgx/pgx-webapp-2.7.1.war into pgx-server.war
18/12/11 16:25:06 INFO yarn.AppMaster: copy hdfs://mpmapas-ns/user/pgx/pgx.conf into conf/pgx.conf
18/12/11 16:25:06 INFO yarn.AppMaster: copy hdfs://mpmapas-ns/user/pgx/log4j2.xml into conf/log4j2.xml
18/12/11 16:25:07 INFO yarn.AppMaster: server env = {CLASSPATH=conf:pgx-server/WEB-INF/lib/*:pgx-yarn.jar:$HADOOP_CONF_DIR}
18/12/11 16:25:07 INFO yarn.AppMaster: server command = $JAVA_HOME/bin/java -Xms512m -Xmx12g oracle.pgx.yarn.PgxService bda1node07.pgj.rj.gov.br $PWD/pgx-server.war 7007 bda1node05.pgj.rj.gov.br,bda1node06.pgj.rj.gov.br,bda1node07.pgj.rj.gov.br /pgx-37a121ce-e028-432c-8761-104027126c3b 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr;
18/12/11 16:25:07 INFO yarn.AppMaster: check for completion
18/12/11 16:25:08 INFO yarn.AppMaster: check for completion
18/12/11 16:25:08 INFO yarn.AppMaster: check for completion
18/12/11 16:25:09 INFO yarn.AppMaster: check for completion
18/12/11 16:25:09 INFO yarn.AppMaster: check for completion
18/12/11 16:25:10 INFO yarn.AppMaster: check for completion
18/12/11 16:25:10 INFO yarn.AppMaster: check for completion
18/12/11 16:25:11 INFO yarn.AppMaster: check for completion
18/12/11 16:25:11 INFO yarn.AppMaster: check for completion
18/12/11 16:25:12 INFO yarn.AppMaster: check for completion
.
.
.
This is the farthest we've managed to go.
We can start our work now! That's really exciting.
Now I know how to properly start a service, preload, insert, and manage data, and we will import our existing graph database into it and do some experimentation.
It would be lovely to have this running on Yarn at a production level.
Thank you all for the extreme dedication and attention.

Related

Send the trace data of a website using Jaeger and Opentelemetry to Opensearch

I'm working on the observability part of OpenSearch, so I'm trying to collect the trace data of a WordPress website and send it to OpenSearch.
I'm collecting the trace data using the WordPress plugin DecaLog, which sends the data to the Jaeger agent; from Jaeger I send the data to the OpenTelemetry Collector, then to Data Prepper, and lastly to OpenSearch.
Jaeger agent service in docker-compose:
jaeger-agent:
  container_name: jaeger-agent
  image: jaegertracing/jaeger-agent:latest
  command: [ "--reporter.grpc.host-port=otel-collector:14250" ]
  ports:
    - "5775:5775/udp"
    - "6831:6831/udp"
    - "6832:6832/udp"
    - "5778:5778/tcp"
  networks:
    - our-network
The "command" ligne got me this error : Err: connection error: desc = "transport: Error while dialing dial tcp: lookup otel-collector on 127.0.0.11:53: server misbehaving"","system":"grpc","grpc_log":true
So I changed otel-collector to the IP of the otel-collector container.
Otel collector and data prepper are installed using docker-compose.
data-prepper:
  restart: unless-stopped
  container_name: data-prepper
  image: opensearchproject/data-prepper:latest
  volumes:
    - ./data-prepper/examples/trace_analytics_no_ssl.yml:/usr/share/data-prepper/pipelines.yaml
    - ./data-prepper/examples/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml
    - ./data-prepper/examples/demo/root-ca.pem:/usr/share/data-prepper/root-ca.pem
  ports:
    - "21890:21890"
  networks:
    - our-network
  depends_on:
    - "opensearch"

otel-collector:
  container_name: otel-collector
  image: otel/opentelemetry-collector:0.54.0
  command: [ "--config=/etc/otel-collector-config.yml" ]
  working_dir: "/project"
  volumes:
    - ${PWD}/:/project
    - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    - ./data-prepper/examples/demo/demo-data-prepper.crt:/etc/demo-data-prepper.crt
  ports:
    - "4317:4317"
  depends_on:
    - data-prepper
  networks:
    - our-network
The configuration of otel.yaml (to send data from OpenTelemetry to OpenSearch):
receivers:
  jaeger:
    protocols:
      grpc:

exporters:
  otlp/2:
    endpoint: data-prepper:21890
    tls:
      insecure: true
      insecure_skip_verify: true
  logging:

service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [logging, otlp/2]
The configuration for the Data Prepper pipeline:
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:
  sink:
    - opensearch:
        hosts: [ "http://localhost:9200" ]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_raw: true
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["http://localhost:9200"]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_service_map: true
As of now I'm getting the following errors:
Jaeger agent:
Err: connection error: desc = \"transport: Error while dialing dial tcp otel-collector-container-IP:14250: i/o timeout\"","system":"grpc","grpc_log":true}
OpenTelemetry Collector:
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:86 Starting processors...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:98 Starting receivers...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:102 Exporter is starting... {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info static/strategy_store.go:203 No sampling strategies provided or URL is unavailable, using defaults {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info pipelines/pipelines.go:106 Exporter started. {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info service/collector.go:220 Starting otelcol... {"Version": "0.54.0", "NumCPU": 2}
2022-08-04T15:31:32.683Z info service/collector.go:128 Everything is ready. Begin running and processing data.
2022-08-04T15:31:32.684Z warn zapgrpc/zapgrpc.go:191 [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "data-prepper:21890",
"ServerName": "data-prepper:21890",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp data-prepper-container-ip:21890: connect: connection refused" {"grpc_log": true}
Data Prepper:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.amazon.dataprepper.DataPrepper]: Constructor threw exception; nested exception is java.lang.RuntimeException: No valid pipeline is available for execution, exiting
Followed by this at the end:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-08-04T15:23:22,803 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperAppConfiguration - Command line args: /usr/share/data-prepper/pipelines.yaml,/usr/share/data-prepper/data-prepper-config.yaml
2022-08-04T15:23:22,806 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperArgs - Using /usr/share/data-prepper/pipelines.yaml configuration file
OpenSearch needs a separate tool to support ingestion of OpenTelemetry data. It is called Data Prepper and is part of the OpenSearch project. There is a nice getting-started guide on how to set up trace analytics in OpenSearch.
Data Prepper works similarly to Fluentd or the OpenTelemetry Collector, but has proper support for OpenSearch as a data sink. It pre-processes trace data adequately for the OpenSearch Dashboards tracing plugin. Data Prepper also supports the OpenTelemetry metrics format.
Are you still having issues running Data Prepper? The configuration used in this example has been updated since the latest release, and should now be up to date and working (https://github.com/opensearch-project/data-prepper/blob/main/examples/trace_analytics_no_ssl.yml)

ERROR - DataEndpointConnectionWorker Error while trying to connect to the endpoint. Cannot borrow client for ssl://localhost:7712

There were two similar questions on Stack Overflow, but they couldn't help me solve the problem.
I want to use load balancing for two nodes of WSO2 API Manager 3.2.0 using Nginx. I configured API Manager as follows; could you please guide me?
[transport.https.properties]
proxyPort = 443
[server]
hostname = "api.am.wso2.com"
node_ip = "127.0.0.1"
#offset=0
mode = "single" #single or ha
[super_admin]
username = "admin"
password = "admin"
create_admin_account = true
[user_store]
type = "database_unique_id"
[apim.throttling]
event_duplicate_url = ["tcp://172.24.64.115:5673"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://172.24.64.114:9611"]
traffic_manager_auth_urls = ["ssl://172.24.64.114:9711"]
type = "loadbalance"
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://172.24.64.115:9612"]
traffic_manager_auth_urls = ["ssl://172.24.64.115:9712"]
type = "loadbalance"
[apim.analytics]
enable = true
store_api_url = "https://localhost:7444"
#username = "$ref{super_admin.username}"
#password = "$ref{super_admin.password}"
event_publisher_type = "default"
event_publisher_impl = "org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher"
publish_response_size = true
[[apim.analytics.url_group]]
analytics_url =["tcp://172.25.129.69:7611","tcp://172.25.129.70:7611"]
analytics_auth_url =["ssl://172.25.129.69:7711","ssl://172.25.129.70:7711"]
type = "loadbalance"
#[[apim.analytics.url_group]]
#analytics_url =["tcp://analytics1:7612","tcp://analytics2:7612"]
#analytics_auth_url =["ssl://analytics1:7712","ssl://analytics2:7712"]
#type = "failover"
.
While running wso2server.sh I faced the following exception:
StandardEngine[Catalina].StandardHost[localhost].StandardContext[/keymanager-operations].File[/opt/v1/wso2am-3.2.0/repository/deployment/server/webapps/keymanager-operations.war]
[2021-07-28 16:29:32,729] WARN - DataEndpointGroup No receiver is reachable at reconnection, will try to reconnect every 30 sec
[2021-07-28 16:29:32,732] ERROR - DataEndpointConnectionWorker Error while trying to connect to the endpoint. Cannot borrow client for ssl://localhost:7712
org.wso2.carbon.databridge.agent.exception.DataEndpointAuthenticationException: Cannot borrow client for ssl://localhost:7712
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:147) ~[org.wso2.carbon.databridge.agent_5.2.26.jar:?]
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.run(DataEndpointConnectionWorker.java:59) [org.wso2.carbon.databridge.agent_5.2.26.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_291]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_291]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_291]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_291]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_291]
Caused by: org.wso2.carbon.databridge.agent.exception.DataEndpointSecurityException: Error while trying to connect to ssl://localhost:7712
at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:81) ~[org.wso2.carbon.databridge.agent_5.2.26.jar:?]
at org.wso2.carbon.databridge.agent.client.AbstractClientPoolFactory.makeObject(AbstractClientPoolFactory.java:39) ~[org.wso2.carbon.databridge.agent_5.2.26.jar:?]
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212) ~[commons-pool_1.5.6.wso2v1.jar:?]
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:137) ~[org.wso2.carbon.databridge.agent_5.2.26.jar:?]
... 6 more
Caused by: org.apache.thrift.transport.TTransportException: Could not connect to localhost on port 7712
at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:273) ~[libthrift_0.12.0.wso2v1.jar:?]
at org.apache.thrift.transport.TSSLTransportFactory.getClientSocket(TSSLTransportFactory.java:173) ~[libthrift_0.12.0.wso2v1.jar:?]
at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:64) ~[org.wso2.carbon.databridge.agent_5.2.26.jar:?]
at org.wso2.carbon.databridge.agent.client.AbstractClientPoolFactory.makeObject(AbstractClientPoolFactory.java:39) ~[org.wso2.carbon.databridge.agent_5.2.26.jar:?]
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212) ~[commons-pool_1.5.6.wso2v1.jar:?]
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:137) ~[org.wso2.carbon.databridge.agent_5.2.26.jar:?]
... 6 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_291]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476) ~[?:1.8.0_291]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218) ~[?:1.8.0_291]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200) ~[?:1.8.0_291]
Servers (69) and (70) are configured as follows:
# Carbon Configuration Parameters
wso2.carbon:
  type: wso2-apim-analytics
  # value to uniquely identify a server
  id: wso2-am-analytics
  # server name
  name: WSO2 API Manager Analytics Server
  # ports used by this server
  ports:
    # port offset
    offset: 1

wso2.transport.http:
  transportProperties:
    -
      name: "server.bootstrap.socket.timeout"
      value: 60
    -
      name: "client.bootstrap.socket.timeout"
      value: 60
    -
      name: "latency.metrics.enabled"
      value: true

-
  # Data receiver configuration
  dataReceiver:
    # Data receiver type
    # THIS IS A MANDATORY FIELD
    type: Thrift
    # Data receiver properties
    properties:
      tcpPort: '7611'
      sslPort: '7711'
-
  # Data receiver configuration
  dataReceiver:
    # Data receiver type
    # THIS IS A MANDATORY FIELD
    type: Binary
    # Data receiver properties
    properties:
      tcpPort: '9611'
      sslPort: '9711'
      tcpReceiverThreadPoolSize: '100'
      sslReceiverThreadPoolSize: '100'
      hostName: 0.0.0.0
Could you please help me?
If you have configured the Analytics node with an offset other than the default, you need to configure both analytics_url and analytics_auth_url under the apim.analytics.url_group configuration so that they point to the correct hostname and ports of the Analytics server.
If you fail to do so, you will observe the above-mentioned Connection Refused error on the API Manager side, because it tries to communicate with Analytics on ports that nothing is listening on. Therefore, please make sure that you have configured analytics_url and analytics_auth_url to point to the Analytics node in both of your API Manager nodes. You can refer to the documentation for instructions.

How to connect to Neptune using Version 4 Signing dependency

I have an EC2 instance that can connect to Gremlin using the Gremlin Console, or by pulling in this repository and running the Maven command.
However, when I use the recommended Version 4 signing dependency:
dependencies {
    compile(
        ...
        // neptune sigv4
        [group: "com.amazonaws", name: "aws-java-sdk-core", version: "1.11.307"],
        [group: "com.amazonaws", name: "amazon-neptune-sigv4-signer", version: "1.0"],
        [group: "com.amazonaws", name: "amazon-neptune-gremlin-java-sigv4", version: "1.0"],
        ...
    )
}
On a very similar hello world program:
package com.test.neptune;

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;
import org.apache.tinkerpop.gremlin.driver.ResultSet;
import org.apache.tinkerpop.gremlin.driver.SigV4WebSocketChannelizer;

public class NeptuneExampleCopy {

    private static final String NEPTUNE_ENDPOINT = "my.endpoint.url";
    private static final int NEPTUNE_PORT = 0;

    public static void main(String[] args) {
        // connect to the neptune cluster
        final Cluster cluster = Cluster.build()
                .addContactPoint(NEPTUNE_ENDPOINT)
                .port(NEPTUNE_PORT)
                .channelizer(SigV4WebSocketChannelizer.class)
                .create();

        // run a traversal, print the results
        final Client client = cluster.connect();
        final ResultSet rs = client.submit("g.V().count()");
        for (Result r : rs) {
            System.out.println(r);
        }

        // close the cluster
        cluster.close();
    }
}
Gradle throws the following exception:
Apr 25, 2019 5:24:21 PM io.netty.channel.ChannelInitializer exceptionCaught
WARNING: Failed to initialize a channel. Closing: [id: 0xd894eb28]
com.amazon.neptune.gremlin.driver.exception.SigV4PropertiesNotFoundException: Unable to load SigV4 properties from any of the providers
at com.amazon.neptune.gremlin.driver.sigv4.ChainedSigV4PropertiesProvider.getSigV4Properties(ChainedSigV4PropertiesProvider.java:74)
at com.amazon.neptune.gremlin.driver.sigv4.AwsSigV4ClientHandshaker.loadProperties(AwsSigV4ClientHandshaker.java:102)
at com.amazon.neptune.gremlin.driver.sigv4.AwsSigV4ClientHandshaker.<init>(AwsSigV4ClientHandshaker.java:64)
at org.apache.tinkerpop.gremlin.driver.SigV4WebSocketChannelizer.createHandler(SigV4WebSocketChannelizer.java:210)
at org.apache.tinkerpop.gremlin.driver.SigV4WebSocketChannelizer.configure(SigV4WebSocketChannelizer.java:176)
at org.apache.tinkerpop.gremlin.driver.Channelizer$AbstractChannelizer.initChannel(Channelizer.java:140)
at org.apache.tinkerpop.gremlin.driver.Channelizer$AbstractChannelizer.initChannel(Channelizer.java:92)
at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:113)
at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:105)
at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:617)
at io.netty.channel.DefaultChannelPipeline.access$000(DefaultChannelPipeline.java:46)
at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1467)
at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1141)
at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:666)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:510)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:423)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:482)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
at org.apache.tinkerpop.gremlin.driver.Client.submit(Client.java:214)
at org.apache.tinkerpop.gremlin.driver.Client.submit(Client.java:198)
at com.test.neptune.NeptuneExampleCopy.main(NeptuneExampleCopy.java:25)
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
at org.apache.tinkerpop.gremlin.driver.Client.submitAsync(Client.java:310)
at org.apache.tinkerpop.gremlin.driver.Client.submitAsync(Client.java:242)
at org.apache.tinkerpop.gremlin.driver.Client.submit(Client.java:212)
... 2 more
Caused by: java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
at org.apache.tinkerpop.gremlin.driver.Client$ClusteredClient.chooseConnection(Client.java:499)
at org.apache.tinkerpop.gremlin.driver.Client.submitAsync(Client.java:305)
... 4 more
Apr 25, 2019 5:24:22 PM io.netty.channel.ChannelInitializer exceptionCaught
WARNING: Failed to initialize a channel. Closing: [id: 0xc3ff34e0]
com.amazon.neptune.gremlin.driver.exception.SigV4PropertiesNotFoundException: Unable to load SigV4 properties from any of the providers
at com.amazon.neptune.gremlin.driver.sigv4.ChainedSigV4PropertiesProvider.getSigV4Properties(ChainedSigV4PropertiesProvider.java:74)
at com.amazon.neptune.gremlin.driver.sigv4.AwsSigV4ClientHandshaker.loadProperties(AwsSigV4ClientHandshaker.java:102)
at com.amazon.neptune.gremlin.driver.sigv4.AwsSigV4ClientHandshaker.<init>(AwsSigV4ClientHandshaker.java:64)
at org.apache.tinkerpop.gremlin.driver.SigV4WebSocketChannelizer.createHandler(SigV4WebSocketChannelizer.java:210)
at org.apache.tinkerpop.gremlin.driver.SigV4WebSocketChannelizer.configure(SigV4WebSocketChannelizer.java:176)
at org.apache.tinkerpop.gremlin.driver.Channelizer$AbstractChannelizer.initChannel(Channelizer.java:140)
at org.apache.tinkerpop.gremlin.driver.Channelizer$AbstractChannelizer.initChannel(Channelizer.java:92)
at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:113)
How could this code be fixed? Is there a better Version 4 signing dependency?
The SigV4 handler tries to fetch your AWS credentials through multiple credential providers. If no credential provider can be initialized, you are bound to see this exception. How have you initialized your AWS credentials? You can use any of the standard sources, such as environment variables, JVM system properties, or the like. See the documentation below for more details:
https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-connecting-gremlin-java.html
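As a quick sanity check that the default provider chain can actually resolve credentials on the EC2 instance, something along these lines can be run before opening the Gremlin connection. It only uses the aws-java-sdk-core classes already in your dependency list; the class name is just for illustration:
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

public class CredentialsCheck {
    public static void main(String[] args) {
        // Looks up credentials from environment variables, JVM system properties,
        // the shared credentials file, and the EC2 instance profile, in that order.
        AWSCredentials credentials = new DefaultAWSCredentialsProviderChain().getCredentials();
        System.out.println("Resolved access key id: " + credentials.getAWSAccessKeyId());
    }
}
If this throws because no credentials can be found, the SigV4 channelizer will fail in the same way.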
Update: Do make sure you are using the latest versions of all the packages and dependencies.
For example:
// neptune sigv4
[group: "com.amazonaws", name: "aws-java-sdk-core", version: "1.11.542"],
[group: "com.amazonaws", name: "amazon-neptune-sigv4-signer", version: "1.0.4"],
[group: "com.amazonaws", name: "amazon-neptune-gremlin-java-sigv4", version: "1.0.5"],

// for neptune
[group: "org.apache.tinkerpop", name: "gremlin-driver", version: "3.4.1"]

NotaryFlow$Client is not registered Error in Corda Obligation Code

I am doing some tests in a Linux server environment with Corda Enterprise 3.1 and the Obligation sample code from R3. When I try to execute a flow, I get an exception from the notary in the log of the node initiating the flow. I have pasted the log entries below. All the nodes are being recognized as valid, but the message seems to say that the client for the notary flow is not registered. Where would I register it? What would I register?
The code being executed is:
val flowHandle = service.proxy.startFlowDynamic(
    IssueObligation.Initiator::class.java,
    issueAmount,
    lenderIdentity,
    true
)
The logs show this stack trace:
[INFO ] 2018-09-27T23:07:51,522Z [nioEventLoopGroup-2-2] netty.AMQPChannelHandler.invoke - Handshake completed with subject: O=Notary, L=London, C=GB {allowedRemoteLegalNames=O=Notary, L=London, C=GB, localCert=O=PartyB, L=New York, C=US, remoteAddress=xxxxx:10102, remoteCert=O=Notary, L=London, C=GB, serverMode=false}
[INFO ] 2018-09-27T23:07:51,524Z [nioEventLoopGroup-2-2] bridging.AMQPBridgeManager$AMQPBridge.invoke - Bridge Connected {bridgeName=internal.peers.DLCYA2tcXLrUnF9bkTMwouBuHooVn416Dc8Gk8JBaze4Gk -> xxxxx:10102, legalNames=O=Notary, L=London, C=GB, maxMessageSize=10485760, queueName=internal.peers.DLCYA2tcXLrUnF9bkTMwouBuHooVn416Dc8Gk8JBaze4Gk, target=xxxxx:10102}
[INFO ] 2018-09-27T23:07:51,530Z [nioEventLoopGroup-2-2] engine.ConnectionStateMachine.invoke - Connection local open org.apache.qpid.proton.engine.impl.ConnectionImpl#2e08c638 {localLegalName=O=PartyB, L=New York, C=US, remoteLegalName=O=Notary, L=London, C=GB, serverMode=false}
[INFO ] 2018-09-27T23:07:56,988Z [flow-worker] corda.flow.run - Flow threw exception... sending to flow hospital {actor_id=user1, actor_owningIdentity=O=PartyB, L=New York, C=US, actor_store_id=NODE_CONFIG, fiber-id=10396171, flow-id=af75a3d3-402c-45e2-89af-48dce5b11998, invocation_id=1399fb77-c4ed-4a09-972b-76c6ffb09bdc, invocation_timestamp=2018-09-27T23:07:49.588Z, session_id=d3c04d4a-f722-4cbd-95ec-dd2f744a36cd, session_timestamp=2018-09-27T23:05:30.595Z, thread-id=201, tx_id=2D94EABC8EBE902AC24A64A2562C293A1FB26747D0A04821A497FD333944F220}
net.corda.core.flows.UnexpectedFlowEndException: class net.corda.core.flows.NotaryFlow$Client is not registered
at net.corda.node.services.statemachine.FlowStateMachineImpl.processEventsUntilFlowIsResumed(FlowStateMachineImpl.kt:166) ~[corda-node-3.1.jar:?]
at net.corda.node.services.statemachine.FlowStateMachineImpl.suspend(FlowStateMachineImpl.kt:396) ~[corda-node-3.1.jar:?]
at net.corda.core.flows.FlowLogic.sendAndReceiveWithRetry$core(FlowLogic.kt:245) ~[corda-core-3.1.jar:?]
at net.corda.core.flows.NotaryFlow$Client.sendAndReceiveNonValidating(NotaryFlow.kt:140) ~[corda-core-3.1.jar:?]
at net.corda.core.flows.NotaryFlow$Client.notarise(NotaryFlow.kt:94) ~[corda-core-3.1.jar:?]
at net.corda.core.flows.NotaryFlow$Client.call(NotaryFlow.kt:65) ~[corda-core-3.1.jar:?]
at net.corda.core.flows.NotaryFlow$Client.call(NotaryFlow.kt:45) ~[corda-core-3.1.jar:?]
This was due to the notary not being configured as such. As a result, it did not have a responder flow for NotaryFlow$Client installed.
The follow-up error you received, net.corda.core.transactions.FilteredTransaction cannot be cast to net.corda.core.transactions.SignedTransaction., indicates that the node requesting notarisation incorrectly identified the notary as non-validating rather than validating. As a result, it sent the wrong transaction type.

Exception 504 when registering the consumer

I've been working with Symfony 2.7 and the RabbitMQBundle to handle some long processes asynchronously.
After facing the issue where the MySQL connection dies after a few minutes, I discovered rabbitmq-cli-consumer, a small app in Go that takes care of consuming the queue, and gives its content to a command.
In my case, I use it with this command: ./rabbitmq-cli-consumer -c configuration-stock.conf --include -V -e 'php app/console amqp:consume:stock --env=prod -vvv', with this configuration file:
[rabbitmq]
host = HOST
username = USERNAME
password = PASSWORD
vhost=/VHOST
port=PORT
queue=stock
compression=Off
[exchange]
name=exports
type=direct
durable=On
[queuesettings]
routingkey=stock
messagettl=10000
deadLetterExchange=exports.dl
deadLetterroutingkey=stock
priority=10
To handle errors, I intend to use RabbitMQ's x-dead-letter-exchange and x-dead-letter-routing-key configuration, to be able to retry the message later (in case something went temporarily wrong).
My issue is that, when I define my queues in RabbitMQBundle's configuration, rabbitmq-cli-consumer is unable to consume the queue, throwing this error:
2018/04/23 11:35:54 Connecting RabbitMQ...
2018/04/23 11:35:54 Connected.
2018/04/23 11:35:54 Opening channel...
2018/04/23 11:35:54 Done.
2018/04/23 11:35:54 Setting QoS...
2018/04/23 11:35:54 Succeeded setting QoS.
2018/04/23 11:35:54 Declaring queue "stock"...
2018/04/23 11:35:54 Registering consumer...
2018/04/23 11:35:54 failed to register a consumer: Exception (504) Reason: "channel/connection is not open"
Here is the configuration I use for RabbitMQBundle:
old_sound_rabbit_mq:
    producers:
        exports:
            connection: default
            exchange_options:
                name: 'exports'
                type: direct
        exports_dl:
            connection: default
            exchange_options:
                name: 'exports.dl'
                type: direct
    consumers:
        stock_dead_letter:
            connection: default
            exchange_options:
                name: exports.dl
                type: direct
            queue_options:
                name: stock.dl
                routing_keys:
                    - stock
                arguments:
                    x-dead-letter-exchange: ['S', 'exports']
                    x-dead-letter-routing-key: ['S', 'stock']
                    x-message-ttl: ['I', 60000]
            callback: amqp.consumers.exports.stock
    multiple_consumers:
        exports:
            connection: default
            exchange_options:
                name: 'exports'
                type: direct
            queues:
                stock:
                    name: stock
                    callback: amqp.consumers.exports.stock
                    routing_keys:
                        - stock
                    arguments:
                        x-dead-letter-exchange: ['S', 'exports.dl']
                        x-dead-letter-routing-key: ['S', 'stock']
Has anyone ever encountered something similar? And how did you solve it?
