Serialization issue with Corda node as Docker container - corda

I am trying to run a Corda node as a Docker container.
Docker command:
docker run -ti \
--memory=2048m \
--cpus=2 \
-v /home/dlt-acc-admin/corda/docker-images/node-notary/Notary/config:/etc/corda \
-v /home/dlt-acc-admin/corda/docker-images/node-notary/Notary/certificates:/opt/corda/certificates \
-v /home/dlt-acc-admin/corda/docker-images/node-notary/Notary:/opt/corda/persistence \
-v /home/dlt-acc-admin/corda/docker-images/node-notary/Notary/logs:/opt/corda/logs \
-v /home/dlt-acc-admin/corda/docker-images/node-notary/Notary/cordapps:/opt/corda/cordapps \
-v /home/dlt-acc-admin/corda/docker-images/node-notary/Notary/node-infos:/opt/corda/additional-node-infos \
-v /home/dlt-acc-admin/corda/docker-images/node-notary/Notary/network-parameters:/opt/corda/network-parameters \
-v /home/dlt-acc-admin/corda/docker-images/node-notary/Notary/node.conf:/etc/corda/node.conf \
-p 10002:10002 \
-p 10004:10004 \
-p 10045:10045 \
-p 9002:9002 \
corda/corda-zulu-4.0
When I try to create an RPC connection I get the error below:
a_2==$shiro1$SHA-256$500000$4WdM0Gi63LSSHqiM543f4Q==$NOYlBrHAQBwdeWtEpYRcznRUR02o2jor/OhRvn9/tnc=
[http-nio-8080-exec-1] INFO com.dlt.accelerator.serviceImpl.UserLoginServiceImpl - UserLoginServiceImpl >> createCordaRPCConnection params ==>>172.16.239.103==10008
[Thread-0 (ActiveMQ-client-global-threads)] WARN net.corda.nodeapi.internal.serialization.SerializationFactoryImpl - Cannot find serialization scheme for: ([636F726461010000], RPCClient), registeredSchemes are: [net.corda.client.rpc.internal.KryoClientSerializationScheme#22a6a670, net.corda.nodeapi.internal.serialization.amqp.AMQPClientSerializationScheme#3a06fb64]
E 05:28:57 56 client.run - AMQ214000: Failed to call onMessage
java.lang.UnsupportedOperationException: Serialization scheme not supported.
at net.corda.nodeapi.internal.serialization.NotSupportedSerializationScheme.doThrow(SerializationScheme.kt:19) ~[corda-node-api-3.3-corda.jar:?]
at net.corda.nodeapi.internal.serialization.NotSupportedSerializationScheme.deserialize(SerializationScheme.kt:23) ~[corda-node-api-3.3-corda.jar:?]
at net.corda.nodeapi.internal.serialization.SerializationFactoryImpl$deserialize$1$1.invoke(SerializationScheme.kt:111) ~[corda-node-api-3.3-corda.jar:?]
at net.corda.core.serialization.SerializationFactory.withCurrentContext(SerializationAPI.kt:66) ~[corda-core-3.3-corda.jar:?]
at net.corda.nodeapi.internal.serialization.SerializationFactoryImpl$deserialize$1.invoke(SerializationScheme.kt:111) ~[corda-node-api-3.3-corda.jar:?]
at net.corda.nodeapi.internal.serialization.SerializationFactoryImpl$deserialize$1.invoke(SerializationScheme.kt:86) ~[corda-node-api-3.3-corda.jar:?]
I am using Corda 3.3, but there is no corda-zulu image except for Corda 4.0.
I had thought Corda 4.0 was backward compatible with 3.x. Is it not?
Upgrading is not a current requirement, so can you please help me here?

You will need to rebuild your client against Corda 4.0.
The serialization mechanism was changed when Corda was upgraded to 4.0. Here are the docs on Corda 4.0 serialization: https://docs.corda.net/serialization-index.html
The Corda 4.0 serialization engine is not backward compatible with 3.3.
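As a hedged sketch of what "rebuild your client against Corda 4.0" means in practice: assuming a Gradle-based RPC client that pins the Corda release in a single build property (the corda_release_version name below comes from the typical CorDapp template, not from the question), the change is a one-line version bump followed by a rebuild:
# Assumption: the client build pins the Corda release in a corda_release_version
# property, as the standard CorDapp templates do. Bumping it makes the client pull
# net.corda:corda-rpc:4.0 and its serialization scheme; then rebuild and redeploy.
sed -i "s/corda_release_version *= *'3.3-corda'/corda_release_version = '4.0'/" build.gradle
./gradlew clean build
Running a client built against the 3.3 libraries against a 4.0 node is exactly what produces the "Cannot find serialization scheme" warning shown in the log above.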

Related

TensorFlow Serving (Docker): how to get the gRPC request log sent by the client?

I've started TensorFlow Serving using Docker:
docker run -d -p 8500:8500 -p 8501:8501 -p 8503:8503 \
-v /tfserving/models:/models \
-v /tfserving/config:/tfconfig \
tensorflow/serving:1.14.0 \
--model_config_file=/tfconfig/models.config \
--monitoring_config_file=/tfconfig/monitor.config \
--enable_batching=true \
--batching_parameters_file=/tfconfig/batching.config \
--model_config_file_poll_wait_seconds=30 \
--file_system_poll_wait_seconds=30 \
--rest_api_port=8503 \
--allow_version_labels_for_unavailable_models=true
I can get the log with docker logs containerid, but it only shows the basic log:
2019-11-22 07:03:48.912930: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:103]
2019-11-22 07:03:48.938568: I tensorflow_serving/model_servers/server.cc:324] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2019-11-22 07:03:48.947993: I tensorflow_serving/model_servers/server.cc:344] Exporting HTTP/REST API at:localhost:8503 ...
[evhttp_server.cc : 239] RAW: Entering the event loop ...
When I send a gRPC request from the client, there is no additional log output.
I've also tried using -v=1 when starting, but it didn't work.
What I want is to get the gRPC request log from the server for troubleshooting. How can I do that? Thanks!
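One hedged thing to try (a sketch, not a confirmed fix): TensorFlow Serving is built on TensorFlow's C++ logging and on gRPC core, both of which read verbosity settings from environment variables, so raising them may surface per-request activity in docker logs. The variable names below are standard TensorFlow/gRPC knobs; how much request detail they actually print depends on the serving build.
# Hedged sketch: pass higher log verbosity into the container, then watch
# `docker logs -f <container>`. TF_CPP_MIN_VLOG_LEVEL raises TensorFlow's VLOG
# level; GRPC_VERBOSITY / GRPC_TRACE enable gRPC core tracing of incoming calls.
docker run -d -p 8500:8500 -p 8501:8501 -p 8503:8503 \
-e TF_CPP_MIN_VLOG_LEVEL=1 \
-e GRPC_VERBOSITY=DEBUG \
-e GRPC_TRACE=api,call_error \
-v /tfserving/models:/models \
-v /tfserving/config:/tfconfig \
tensorflow/serving:1.14.0 \
--model_config_file=/tfconfig/models.config \
--rest_api_port=8503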

Compiling a .NET Core console app with Npgsql and CoreRT

I'm trying to compile a .NET Core console application into a native executable (linux-x64) in an Ubuntu 18.04 Docker container, using both CoreRT and Npgsql. I'm currently using docker-compose to set up the DB and application containers.
docker-compose.yml
version: '3'
services:
  database:
    image: postgres:10
    environment:
      - POSTGRES_USER=dbuser
      - POSTGRES_PASSWORD=dbpassword
      - POSTGRES_DB=dbsample
    ports:
      - 5432:5432
    tmpfs:
      - /var/lib/postgresql/data:rw,noexec,nosuid,size=400m
    volumes:
      - ./db-init:/docker-entrypoint-initdb.d
  prototype:
    build: .
    depends_on:
      - database
    links:
      - database:database
Dockerfile
FROM ubuntu:18.04
RUN apt-get update \
&& apt-get install -y \
apt-transport-https \
build-essential \
clang \
cmake \
curl \
git-core \
gpg \
libbz2-dev \
libkrb5-dev \
libncurses5-dev \
libncursesw5-dev \
libreadline-dev \
libsqlite3-dev \
libssl-dev \
llvm \
make \
parallel \
wget \
zlib1g-dev
RUN wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.asc.gpg \
&& mv microsoft.asc.gpg /etc/apt/trusted.gpg.d/ \
&& wget -q https://packages.microsoft.com/config/ubuntu/18.04/prod.list \
&& mv prod.list /etc/apt/sources.list.d/microsoft-prod.list \
&& chown root:root /etc/apt/trusted.gpg.d/microsoft.asc.gpg \
&& chown root:root /etc/apt/sources.list.d/microsoft-prod.list \
&& apt-get update \
&& apt-get install -y dotnet-sdk-2.2
ENV CppCompilerAndLinker=clang-6.0
ENV DOTNET_CLI_TELEMETRY_OPTOUT=true
WORKDIR /home/app
COPY ./HelloWorld.fsproj /home/app
COPY ./nuget.config /home/app
RUN dotnet restore
COPY ./ /home/app
RUN dotnet publish -r linux-x64 -c Release -v detailed -o outside
CMD ./outside/HelloWorld
When it gets to the compile step (dotnet publish -r linux-x64 -c Release -v detailed -o outside), it enters what looks like an infinite loop, consuming all the memory available to the container, until it shows this error:
Task "Exec"
"/root/.nuget/packages/runtime.linux-x64.microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/tools/ilc" #"obj/Release/netcoreapp2.2/linux-x64/native/HelloWorld.ilc.rsp"
Killed
1:7>/root/.nuget/packages/microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/build/Microsoft.NETCore.Native.targets(249,5): error MSB3073: The command ""/root/.nuget/packages/runtime.linux-x64.microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/tools/ilc" #"obj/Release/netcoreapp2.2/linux-x64/native/HelloWorld.ilc.rsp"" exited with code 137. [/home/app/HelloWorld.fsproj]
Done executing task "Exec" -- FAILED.
1:7>Done building target "IlcCompile" in project "HelloWorld.fsproj" -- FAILED.
1:7>Done Building Project "/home/app/HelloWorld.fsproj" (Publish target(s)) -- FAILED.
It seems to be somehow related to the use of generics and reflection in F#. I've looked in both the Npgsql and CoreRT repos and couldn't find anyone who has come close to getting the two working together. Has anyone faced this problem, or managed to use Npgsql with CoreRT?
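Not from the thread, but a hedged note on the failure mode: exit code 137 is 128 + 9 (SIGKILL), which during a build almost always means the ilc process was killed for running out of memory rather than actually looping forever. A quick way to check on the Docker host:
# Exit code 137 = 128 + SIGKILL; on Linux that usually points at the kernel
# OOM killer rather than a genuine infinite loop. Look for OOM events:
dmesg | grep -i -E 'out of memory|killed process'
# If the build runs in a memory-capped environment (Docker Desktop VM, a CI
# runner, or a container started with --memory), raising that limit is the
# usual first step; CoreRT's ilc can need several GB of RAM when large
# dependencies such as Npgsql are compiled in.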

Create Airflow connections on Cloud Composer using gcloud CLI

I am trying to create Airflow connections on Cloud Composer using the gcloud CLI.
I followed the documentation below and ran the following command.
https://cloud.google.com/composer/docs/how-to/managing/connections#creating_new_airflow_connections
gcloud composer environments run my-env \
--project my-project \
--location asia-northeast1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn_extra '{"extra\__google\_cloud\_platform\__project": "my-project", \
"extra\__google\_cloud\_platform\__key_path": "/home/airflow/gcs/data/keys/my-key.json", \
"extra\__google\_cloud\_platform\__scope": "https://www.googleapis.com/auth/cloud-platform"}'
kubeconfig entry generated for asia-northeast1-my-env-44718514-gke.
Executing within the following kubectl namespace: default
W0525 22:51:11.244104 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
W0525 22:51:11.244246 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
W0525 22:51:11.244256 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
[2019-05-25 14:51:13,215] {settings.py:176} INFO - setting.configure_orm():
Using pool settings. pool_size=5, pool_recycle=1800
[2019-05-25 14:51:13,598] {default_celery.py:80} WARNING - You have configured a
result_backend of redis://airflow-redis-service:6379/0, it is highly recommended to use an
alternative result_backend (i.e. a database).
[2019-05-25 14:51:13,600] {__init__.py:51} INFO - Using executor CeleryExecutor
[2019-05-25 14:51:13,680] {app.py:51} WARNING - Using default Composer Environment Variables.
Overrides have not been applied.
[2019-05-25 14:51:13,688] {configuration.py:516} INFO - Reading the config from
/etc/airflow/airflow.cfg
[2019-05-25 14:51:13,698] {configuration.py:516} INFO - Reading the config from
/etc/airflow/airflow.cfg
The connection is created successfully, but the project ID, key file path and scope are empty, so the connection is invalid.
When I create it manually, those properties are not empty. Am I missing something?
Composer image: composer-1.5.0-airflow-1.10.1
I am not able to replicate it. When I run the following command, the connection does get added with the extras field:
gcloud composer environments run my-env \
--project my-project \
--location europe-west1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn_extra='{"extra__google_cloud_platform__project": "my-project", "extra__google_cloud_platform__key_path":"/tesf"}
I found a syntax error in the quote escaping. This works fine:
$ CONNECTION_CREATE_COMMAND="gcloud composer environments run $COMPOSER_ENVIRONMENT \
--project $COMPOSER_PROJECT \
--location ${COMPOSER_LOCATION} \
connections -- --add \
--conn_id=${CONN_ID_BASE}_${app}_${c} \
--conn_type=google_cloud_platform \
--conn_extra '{\"extra__google_cloud_platform__project\": \"${BQ_PROJECT}\", \
\"extra__google_cloud_platform__key_path\": \"${KEY_JSON_FILE_PATH}\", \
\"extra__google_cloud_platform__scope\": \"https://www.googleapis.com/auth/cloud-platform\"}'"
$ eval $CONNECTION_CREATE_COMMAND
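As a hedged verification step (assuming the Airflow 1.10 connections CLI, which accepts --list), you can list the connections from the same environment; the stored extras themselves are easiest to inspect in the Airflow web UI under Admin > Connections:
# List the connections known to the environment (Airflow 1.10.x CLI syntax).
# The list output confirms the conn_id and type; check the extras in the
# Airflow web UI (Admin -> Connections) if you need to see their values.
gcloud composer environments run my-env \
--project my-project \
--location asia-northeast1 \
connections -- --list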

Getting "No FileSystem for scheme: wasb" - HDInsight MapReduce

I am running a simple MapReduce job in Azure HDInsight; below is the command that we are running:
java -jar WordCount201.jar wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa/CustData.csv wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa
I am getting the error below:
java.io.IOException: No FileSystem for scheme: wasb
For Java I use JDK 1.8, and the POM pulls in the following dependencies: org.apache.hadoop:hadoop-mapreduce-examples:2.7.3 (provided), org.apache.hadoop:hadoop-mapreduce-client-common:2.7.3 (provided), jdk.tools:jdk.tools, and org.apache.hadoop:hadoop-common:2.7.3 (provided).
WASB is a wrapper around the HDFS file system. I am not sure you can use it from a plain Java program. Do you have a reference/link that you followed?
You can try using the https equivalent of the CustData.csv file. Below is an example of a Spark job I am able to submit on an HDInsight cluster using WASB:
spark-submit \
--class com.nileshgule.movielens.MovieRatingAnalysis \
--master yarn \
--deploy-mode cluster \
--executor-memory 1g \
--name MoviesCsvReader \
--conf "spark.app.id=MovieRatingAnalysis" \
wasb://hd-spark-cluster-2019@hdsparkclusterstorage.blob.core.windows.net/learning-spark-1.0.jar \
wasb://hd-spark-cluster-2019@hdsparkclusterstorage.blob.core.windows.net/ml-latest/ratings.csv \
wasb://hd-spark-cluster-2019@hdsparkclusterstorage.blob.core.windows.net/ml-latest/movies.csv
And here is an example of passing the same files using their equivalent https URIs:
spark-submit \
--class com.nileshgule.movielens.MovieRatingAnalysis \
--master yarn \
--deploy-mode cluster \
--executor-memory 1g \
--name MoviesCsvReader \
--conf "spark.app.id=MovieRatingAnalysis" \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/learning-spark-1.0.jar \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/ml-latest/ratings.csv \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/ml-latest/movies.csv
For a Hadoop job, kindly run the jar as the root user. Once you log in to HDInsight, run the command sudo su -. Then create a folder, place the jar in that folder, and run the jar.
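One more hedged avenue, separate from the answers above: "No FileSystem for scheme: wasb" is usually a classpath/configuration symptom, because plain java -jar does not load the cluster's core-site.xml or the hadoop-azure classes that register the wasb:// scheme. Launching through the hadoop wrapper on a cluster head node (a sketch, assuming the jar's manifest already names the main class, as it must for java -jar) picks those up:
# Hedged alternative: let the hadoop launcher put the HDInsight client config
# (core-site.xml) and the hadoop-azure jars that implement wasb:// on the classpath.
hadoop jar WordCount201.jar \
wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa/CustData.csv \
wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa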

Kubernetes services networking

I've been trying to get Spark working on Kubernetes on my local machine.
However, I'm having an issue trying to understand how the networking of services works.
I'm running Kubernetes in containers on my laptop:
Etcd 2.0.5.1
Kubelet 1.1.2
Proxy 1.1.2
SkyDns 2015-03-11-001
Kube2sky 1.11
Then I'm launching Spark, which is located in the examples of the Kubernetes GitHub repo.
kubectl create -f kubernetes/examples/spark/spark-master-controller.yaml
kubectl create -f kubernetes/examples/spark/spark-master-service.yaml
kubectl create -f kubernetes/examples/spark/spark-webui.yaml
kubectl create -f kubernetes/examples/spark/spark-worker-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-service.yaml
My local network: 10.7.64.0/24
My docker network: 172.17.0.1/16
What works:
The Spark master launches and I can connect to the web UI.
The Spark worker does a DNS query for spark-master successfully (it returns the correct service IP of the master).
What does not work:
The Spark worker cannot connect to the service IP. There is no route to this host in that container, nor on the local machine (laptop). Also, I see nothing happening in iptables. It tries to connect to somewhere in the 10.0.0.0/8 network, to which I don't have any routing. Can someone shed some light on this?
Details:
How I start the containers:
sudo docker run \
--net=host \
-d kubernetes/etcd:2.0.5.1 \
/usr/local/bin/etcd \
--addr=$(hostname -i):4001 \
--bind-addr=0.0.0.0:4001 \
--data-dir=/var/etcd/data
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v1.2.0 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests --cluster-dns=10.7.64.184 --cluster-domain=kubernetes.local
sudo docker run -d --net=host --privileged gcr.io/google-containers/hyperkube:v1.2.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 --cluster-dns=10.7.64.184 --cluster-domain=kubernetes.local --cloud-provider=""
sudo docker run -d --net=host --restart=always \
gcr.io/google_containers/kube2sky:1.11 \
-v=10 -logtostderr=true -domain=kubernetes.local \
-etcd-server="http://127.0.0.1:4001"
sudo docker run -d --net=host --restart=always \
-e ETCD_MACHINES="http://127.0.0.1:4001" \
-e SKYDNS_DOMAIN="kubernetes.local" \
-e SKYDNS_ADDR="10.7.64.184:53" \
-e SKYDNS_NAMESERVERS="8.8.8.8:53,8.8.4.4:53" \
gcr.io/google_containers/skydns:2015-03-11-001
Thanks!
I found what the issue was: the proxy was not running because --cluster-dns and --cluster-domain are not parameters of the proxy. Now the iptables rules are created and the Spark workers are able to connect to the service IP of the spark-master.
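For reference, a minimal sketch of the corrected proxy invocation implied by that finding (the same container as above with the two unsupported flags dropped; the unrelated --cloud-provider flag is left out as well, since only --master and --v are needed here):
# Same hyperkube proxy container as before, minus the --cluster-dns and
# --cluster-domain flags that kube-proxy does not accept, so the proxy starts
# and programs the iptables rules for service IPs:
sudo docker run -d --net=host --privileged \
gcr.io/google-containers/hyperkube:v1.2.0 \
/hyperkube proxy --master=http://127.0.0.1:8080 --v=2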
