gRPC: sending a repeated field efficiently to the server

I need to send batches of logs to a server, and a batch can be huge, for example 10k rows. I'm currently using a repeated field, but I've realized that repeated is taxing on memory on the client side. Is there any way I could send a stream of logs together with my other parameters?
I'm currently coding in Dart and can't figure out any way to stream the logs from the client side to the server. Is there any way I could use a stream as a field in a gRPC request?
This is my sample .proto file:
service Logger {
  // Obtains the Features available within the given Rectangle. Results are
  // streamed rather than returned at once (e.g. in a response message with a
  // repeated field), as the rectangle may cover a large area and contain a
  // huge number of features.
  rpc saveLogs(SaveLogsRequest) returns (DummyResult) {}
}

message DummyResult {
  bool success = 1;
  string error = 2;
}

message SaveLogsRequest {
  string country = 2;
  string app = 3;
  repeated Log logs = 4;
}

See: https://github.com/DazWilkin/dart-grpc-logger
My Dart skills are almost non-existent. This is a very basic, working example.
The code is derived from Google's route-guide sample and is here:
https://github.com/DazWilkin/dart-grpc-logger/blob/master/dart/client.dart
The server (and a second client) are written in Golang.
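The essential change from the question's unary definition is to declare saveLogs as a client-streaming RPC, so the client sends many small SaveLogsRequest batches on one call and the server answers once at EOF. A sketch of what the service definition becomes:
service Logger {
  // Client-streaming: the client sends many SaveLogsRequest messages
  // (each one a small batch); the server replies once when the stream ends.
  rpc saveLogs(stream SaveLogsRequest) returns (DummyResult) {}
}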
In one terminal, run the server:
docker run \
--interactive --tty \
--publish=50051:50051 \
dazwilkin/dart-grpc-logger-server:84a06d9cc166692ddf00c941856c96e853594695
NOTE: While the server can be reconfigured to run on any available port (--grpc_endpoint=...), the Dart client requires localhost:50051 because my Dart skills are limited.
The server starts, then waits for a client:
2020/06/05 20:49:36 [main] Starting gRPC Logger Server
2020/06/05 20:49:36 [main] Creating a gRPC Server
2020/06/05 20:49:36 Starting gRPC Listener [:50051]
In another terminal, run the client:
docker run \
--interactive --tty \
--net=host \
dazwilkin/dart-grpc-logger-client:84a06d9cc166692ddf00c941856c96e853594695
NOTE: The client must use --net=host to be able to access the containerized server.
The client connects to the server and streams 10 randomly-sized batches of logs:
[Client:main] Entered
[Client:main] Configuring channel
[Client:main] Creating gRPC Logger Client
[Client:runSaveLogs] Entered
[Client:generateLogs] Count=10
[Client:generateLogs] Batch=1
[Client:generateLogs] Batch=4
[Client:generateLogs] Batch=4
[Client:generateLogs] Batch=4
[Client:generateLogs] Batch=7
[Client:generateLogs] Batch=8
[Client:generateLogs] Batch=7
[Client:generateLogs] Batch=7
[Client:generateLogs] Batch=8
[Client:generateLogs] Batch=9
[Client:runSaveLogs] Finished
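For reference, the client side in grpc-dart looks roughly like this; a minimal sketch, not the repo's exact code (the Log field name, import path, and values are assumptions). The generated stub for a client-streaming RPC accepts a Stream of requests, so batches are produced lazily and only one batch needs to be in memory at a time:
import 'dart:math';

import 'package:grpc/grpc.dart';

import 'logger.pbgrpc.dart'; // generated from the .proto (path is an assumption)

Stream<SaveLogsRequest> generateLogs(int count) async* {
  final rand = Random();
  for (var i = 0; i < count; i++) {
    yield SaveLogsRequest()
      ..country = 'US'
      ..app = 'demo'
      // `entry` is an assumed field name on the Log message.
      ..logs.addAll(List.generate(1 + rand.nextInt(9), (_) => Log()..entry = 'test'));
  }
}

Future<void> main() async {
  final channel = ClientChannel(
    'localhost',
    port: 50051,
    options: const ChannelOptions(credentials: ChannelCredentials.insecure()),
  );
  final stub = LoggerClient(channel);
  final result = await stub.saveLogs(generateLogs(10)); // client-streaming call
  print('success=${result.success}');
  await channel.shutdown();
}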
The server receives the stream of batches of logs:
2020/06/05 20:49:47 [Server:SaveLogs] 2020-06-05T20:49:47Z Entered
2020/06/05 20:49:48 [Server:SaveLogs] Received 1 logs
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] Received 4 logs
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] Received 4 logs
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] Received 4 logs
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:48 [Server:SaveLogs] 2020-06-05T20:49:48Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] Received 7 logs
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] Received 8 logs
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] Received 7 logs
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] Received 7 logs
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:49 [Server:SaveLogs] 2020-06-05T20:49:49Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] Received 8 logs
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] Received 9 logs
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z Log: test
2020/06/05 20:49:50 [Server:SaveLogs] 2020-06-05T20:49:50Z EOF
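The matching Go server handler would look roughly like this (a sketch, not the repo's exact code; the pb import path is an assumption). It loops on Recv until EOF, then sends the single response, which is exactly the "Received N logs" / EOF pattern in the log above:
import (
    "io"
    "log"

    pb "github.com/DazWilkin/dart-grpc-logger/protos" // assumed import path
)

func (s *server) SaveLogs(stream pb.Logger_SaveLogsServer) error {
    for {
        req, err := stream.Recv()
        if err == io.EOF {
            // Client finished streaming; reply once and close.
            return stream.SendAndClose(&pb.DummyResult{Success: true})
        }
        if err != nil {
            return err
        }
        log.Printf("[Server:SaveLogs] Received %d logs", len(req.Logs))
    }
}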

Istio Ingress Gateway for gRPC with SIMPLE TLS: Remote Reset Error

We have been trying to Secure Gateways with SIMPLE TLS for our gRPC backend, which for now is deployed in Minikube (minikube version: v1.25.2), by following this link.
We were able to successfully access the gRPC service (a gRPC server with .NET 6) over plaintext through the Istio Ingress Gateway using the grpcurl client.
But when we tried to use SIMPLE TLS, we have been experiencing the following.
ERROR:
Code: Unavailable
Message: upstream connect error or disconnect/reset before headers. reset reason: remote reset
Here are the steps:
Created a certificate and a private key for sc-imcps-bootstrap-lb.example.com (a sample domain for the gRPC server on Minikube):
$ openssl req -out sc-imcps-bootstrap-lb.example.com.csr -newkey rsa:2048 -nodes -keyout sc-imcps-bootstrap-lb.example.com.key -config sc-imcps-bootstrap-lb.cnf
sc-imcps-bootstrap-lb.cnf
[req]
distinguished_name = req_distinguished_name
prompt = no
[req_distinguished_name]
O = sc-imcps organization
OU = R&D
CN = sc-imcps-bootstrap-lb.example.com
$ openssl x509 -req -sha256 -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in sc-imcps-bootstrap-lb.example.com.csr -out sc-imcps-bootstrap-lb.example.com.crt -extfile v3.ext
v3.ext:
subjectAltName = @alt_names
[alt_names]
IP.1 = 10.97.36.53
DNS.1 = sc-imcps-bootstrap-lb.example.com
Created the Kubernetes secret with the following command:
$ kubectl create -n istio-system secret tls sc-imcps-bootstrap-lb-credential --key=sc-imcps-bootstrap-lb.example.com.key --cert=sc-imcps-bootstrap-lb.example.com.crt
Created the Gateway manifest (kubectl apply -n foo -f gateway.yaml) [gateway.yaml is attached below].
Configured the gateway's traffic routes by creating a VirtualService definition [virtualservice.yaml is attached below].
Added a host entry to the C:\Windows\System32\drivers\etc\hosts file:
10.97.36.53 sc-imcps-bootstrap-lb.example.com
Client execution from the host:
$ grpcurl -v -H Host:sc-imcps-bootstrap-lb.example.com -d '{"AppName": "SC", "AppVersion": 1, "PID": 8132, "ContainerID": "asd-2", "CloudInternal": true}' -cacert example.com.crt -proto imcps.proto sc-imcps-bootstrap-lb.example.com:443 imcps.IMCPS/Init
RESULT:
Resolved method descriptor:
// Sends a greeting
rpc Init ( .imcps.ClientInfo ) returns ( .imcps.InitOutput );
Request metadata to send:
(empty)
Response headers received:
(empty)
Response trailers received:
content-type: application/grpc
date: Tue, 18 Oct 2022 10:32:07 GMT
server: istio-envoy
x-envoy-upstream-service-time: 46
Sent 1 request and received 0 responses
ERROR:
Code: Unavailable
Message: upstream connect error or disconnect/reset before headers. reset reason: remote reset
NOTE:
$ istioctl version
client version: 1.15.0
control plane version: 1.15.0
data plane version: 1.15.0 (5 proxies)
Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sc-imcps-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: sc-imcps-bootstrap-lb-credential
    hosts:
    - sc-imcps-bootstrap-lb.example.com
Virtual Service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sc-imcps-bootstrap-route
spec:
  hosts:
  - sc-imcps-bootstrap-lb.example.com
  gateways:
  - sc-imcps-gateway
  http:
  - match:
    - uri:
        prefix: /imcps.IMCPS/Init
    route:
    - destination:
        host: sc-imcps-bootstrap-svc
        port:
          number: 17080
Here are the logs from the istio-proxy container of the gRPC backend server pod:
2022-10-18T10:04:29.412448Z debug envoy http [C190] new stream
2022-10-18T10:04:29.412530Z debug envoy http [C190][S8764333332205046325] request headers complete (end_stream=false):
':method', 'POST'
':scheme', 'https'
':path', '/imcps.IMCPS/Init'
':authority', 'sc-imcps-bootstrap-lb.example.com:443'
'content-type', 'application/grpc'
'user-agent', 'grpcurl/v1.8.6 grpc-go/1.44.1-dev'
'te', 'trailers'
'x-forwarded-for', '10.88.0.1'
'x-forwarded-proto', 'https'
'x-envoy-internal', 'true'
'x-request-id', '0d9b8e43-da2e-4f99-bbd8-a5c0c56f799f'
'x-envoy-decorator-operation', 'sc-imcps-bootstrap-svc.foo.svc.cluster.local:17080/imcps.IMCPS/Init*'
'x-envoy-peer-metadata', 'ChQKDkFQUF9DT05UQUlORVJTEgIaAAoaCgpDTFVTVEVSX0lEEgwaCkt1YmVybmV0ZXMKHAoMSU5TVEFOQ0VfSVBTEgwaCjEwLjg4LjAuNTMKGQoNSVNUSU9fVkVSU0lPThIIGgYxLjE1LjAKvwMKBkxBQkVMUxK0AyqxAwodCgNhcHASFhoUaXN0aW8taW5ncmVzc2dhdGV3YXkKEwoFY2hhcnQSChoIZ2F0ZXdheXMKFAoIaGVyaXRhZ2USCBoGVGlsbGVyCjYKKWluc3RhbGwub3BlcmF0b3IuaXN0aW8uaW8vb3duaW5nLXJlc291cmNlEgkaB3Vua25vd24KGQoFaXN0aW8SEBoOaW5ncmVzc2dhdGV3YXkKGQoMaXN0aW8uaW8vcmV2EgkaB2RlZmF1bHQKMAobb3BlcmF0b3IuaXN0aW8uaW8vY29tcG9uZW50EhEaD0luZ3Jlc3NHYXRld2F5cwohChFwb2QtdGVtcGxhdGUtaGFzaBIMGgo1ODVkNjQ1ODU1ChIKB3JlbGVhc2USBxoFaXN0aW8KOQofc2VydmljZS5pc3Rpby5pby9jYW5vbmljYWwtbmFtZRIWGhRpc3Rpby1pbmdyZXNzZ2F0ZXdheQovCiNzZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1yZXZpc2lvbhIIGgZsYXRlc3QKIgoXc2lkZWNhci5pc3Rpby5pby9pbmplY3QSBxoFZmFsc2UKGgoHTUVTSF9JRBIPGg1jbHVzdGVyLmxvY2FsCi8KBE5BTUUSJxolaXN0aW8taW5ncmVzc2dhdGV3YXktNTg1ZDY0NTg1NS1icmt4NAobCglOQU1FU1BBQ0USDhoMaXN0aW8tc3lzdGVtCl0KBU9XTkVSElQaUmt1YmVybmV0ZXM6Ly9hcGlzL2FwcHMvdjEvbmFtZXNwYWNlcy9pc3Rpby1zeXN0ZW0vZGVwbG95bWVudHMvaXN0aW8taW5ncmVzc2dhdGV3YXkKFwoRUExBVEZPUk1fTUVUQURBVEESAioACicKDVdPUktMT0FEX05BTUUSFhoUaXN0aW8taW5ncmVzc2dhdGV3YXk='
'x-envoy-peer-metadata-id', 'router~10.88.0.53~istio-ingressgateway-585d645855-brkx4.istio-system~istio-system.svc.cluster.local'
'x-envoy-attempt-count', '1'
'x-b3-traceid', '17b50b6247fe2fcbbc2b2057ef4db96d'
'x-b3-spanid', 'bc2b2057ef4db96d'
'x-b3-sampled', '0'
2022-10-18T10:04:29.412567Z debug envoy connection [C190] current connecting state: false
2022-10-18T10:04:29.412674Z debug envoy router [C190][S8764333332205046325] cluster 'inbound|17080||' match for URL '/imcps.IMCPS/Init'
2022-10-18T10:04:29.412692Z debug envoy upstream transport socket match, socket default selected for host with address 10.244.120.108:17080
2022-10-18T10:04:29.412696Z debug envoy upstream Created host 10.244.120.108:17080.
2022-10-18T10:04:29.412729Z debug envoy upstream addHost() adding 10.244.120.108:17080
2022-10-18T10:04:29.412784Z debug envoy upstream membership update for TLS cluster inbound|17080|| added 1 removed 0
2022-10-18T10:04:29.412789Z debug envoy upstream re-creating local LB for TLS cluster inbound|17080||
2022-10-18T10:04:29.412742Z debug envoy router [C190][S8764333332205046325] router decoding headers:
':method', 'POST'
':scheme', 'https'
':path', '/imcps.IMCPS/Init'
':authority', 'sc-imcps-bootstrap-lb.example.com:443'
'content-type', 'application/grpc'
'user-agent', 'grpcurl/v1.8.6 grpc-go/1.44.1-dev'
'te', 'trailers'
'x-forwarded-for', '10.88.0.1'
'x-forwarded-proto', 'https'
'x-request-id', '0d9b8e43-da2e-4f99-bbd8-a5c0c56f799f'
'x-envoy-attempt-count', '1'
'x-b3-traceid', '17b50b6247fe2fcbbc2b2057ef4db96d'
'x-b3-spanid', 'bc2b2057ef4db96d'
'x-b3-sampled', '0'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/default;Hash=dda6034f03e05bbb9d0183b80583ee9b5842670599dd86827c8f8b6a74060fa0;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account'
2022-10-18T10:04:29.412802Z debug envoy upstream membership update for TLS cluster inbound|17080|| added 1 removed 0
2022-10-18T10:04:29.412804Z debug envoy upstream re-creating local LB for TLS cluster inbound|17080||
2022-10-18T10:04:29.412809Z debug envoy pool queueing stream due to no available connections (ready=0 busy=0 connecting=0)
2022-10-18T10:04:29.412813Z debug envoy pool trying to create new connection
2022-10-18T10:04:29.412816Z debug envoy pool creating a new connection (connecting=0)
2022-10-18T10:04:29.412869Z debug envoy http2 [C320] updating connection-level initial window size to 268435456
2022-10-18T10:04:29.412873Z debug envoy connection [C320] current connecting state: true
2022-10-18T10:04:29.412875Z debug envoy client [C320] connecting
2022-10-18T10:04:29.412877Z debug envoy connection [C320] connecting to 10.244.120.108:17080
2022-10-18T10:04:29.412928Z debug envoy connection [C320] connection in progress
2022-10-18T10:04:29.412939Z debug envoy http [C190][S8764333332205046325] request end stream
2022-10-18T10:04:29.412960Z debug envoy upstream membership update for TLS cluster inbound|17080|| added 1 removed 0
2022-10-18T10:04:29.412965Z debug envoy upstream re-creating local LB for TLS cluster inbound|17080||
2022-10-18T10:04:29.412972Z debug envoy connection [C320] connected
2022-10-18T10:04:29.412975Z debug envoy client [C320] connected
2022-10-18T10:04:29.412979Z debug envoy pool [C320] attaching to next stream
2022-10-18T10:04:29.412981Z debug envoy pool [C320] creating stream
2022-10-18T10:04:29.412988Z debug envoy router [C190][S8764333332205046325] pool ready
2022-10-18T10:04:29.517255Z debug envoy http2 [C320] stream 1 closed: 1
2022-10-18T10:04:29.517291Z debug envoy client [C320] request reset
2022-10-18T10:04:29.517301Z debug envoy pool [C320] destroying stream: 0 remaining
2022-10-18T10:04:29.517318Z debug envoy router [C190][S8764333332205046325] upstream reset: reset reason: remote reset, transport failure reason:
2022-10-18T10:04:29.517366Z debug envoy http [C190][S8764333332205046325] Sending local reply with details upstream_reset_before_response_started{remote_reset}
2022-10-18T10:04:29.517607Z debug envoy http [C190][S8764333332205046325] encoding headers via codec (end_stream=true):
':status', '200'
'content-type', 'application/grpc'
'grpc-status', '14'
'grpc-message', 'upstream connect error or disconnect/reset before headers. reset reason: remote reset'
'x-envoy-peer-metadata', 'ChwKDkFQUF9DT05UQUlORVJTEgoaCHNjLWltY3BzChoKCkNMVVNURVJfSUQSDBoKS3ViZXJuZXRlcwogCgxJTlNUQU5DRV9JUFMSEBoOMTAuMjQ0LjEyMC4xMDgKGQoNSVNUSU9fVkVSU0lPThIIGgYxLjE1LjAKjgIKBkxBQkVMUxKDAiqAAgoRCgNhcHASChoIc2MtaW1jcHMKMQoYY29udHJvbGxlci1yZXZpc2lvbi1oYXNoEhUaE3NjLWltY3BzLTU5Njg0YzY3ODgKJAoZc2VjdXJpdHkuaXN0aW8uaW8vdGxzTW9kZRIHGgVpc3RpbwotCh9zZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1uYW1lEgoaCHNjLWltY3BzCi8KI3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLXJldmlzaW9uEggaBmxhdGVzdAoyCiJzdGF0ZWZ1bHNldC5rdWJlcm5ldGVzLmlvL3BvZC1uYW1lEgwaCnNjLWltY3BzLTAKGgoHTUVTSF9JRBIPGg1jbHVzdGVyLmxvY2FsChQKBE5BTUUSDBoKc2MtaW1jcHMtMAoSCglOQU1FU1BBQ0USBRoDZm9vCkkKBU9XTkVSEkAaPmt1YmVybmV0ZXM6Ly9hcGlzL2FwcHMvdjEvbmFtZXNwYWNlcy9mb28vc3RhdGVmdWxzZXRzL3NjLWltY3BzChcKEVBMQVRGT1JNX01FVEFEQVRBEgIqAAobCg1XT1JLTE9BRF9OQU1FEgoaCHNjLWltY3Bz'
'x-envoy-peer-metadata-id', 'sidecar~10.244.120.108~sc-imcps-0.foo~foo.svc.cluster.local'
'date', 'Tue, 18 Oct 2022 10:04:29 GMT'
'server', 'istio-envoy'
2022-10-18T10:04:29.517689Z debug envoy http2 [C190] stream 3 closed: 0
2022-10-18T10:04:29.517832Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:664]::report() metricKey cache miss istio_response_messages_total , stat=12, recurrent=1
2022-10-18T10:04:29.517843Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:664]::report() metricKey cache miss istio_request_messages_total , stat=16, recurrent=1
2022-10-18T10:04:29.520398Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:664]::report() metricKey cache miss istio_requests_total , stat=24, recurrent=0
2022-10-18T10:04:29.522737Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:664]::report() metricKey cache miss istio_response_bytes , stat=18, recurrent=0
2022-10-18T10:04:29.526875Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:664]::report() metricKey cache miss istio_request_duration_milliseconds , stat=22, recurrent=0
2022-10-18T10:04:29.530799Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:664]::report() metricKey cache miss istio_request_bytes , stat=26, recurrent=0
2022-10-18T10:04:29.553171Z debug envoy http [C190] new stream
2022-10-18T10:04:29.553272Z debug envoy http [C190][S417038132095363947] request headers complete (end_stream=false):
':method', 'POST'
':scheme', 'https'
':path', '/imcps.IMCPS/Init'
':authority', 'sc-imcps-bootstrap-lb.example.com:443'
'content-type', 'application/grpc'
'user-agent', 'grpcurl/v1.8.6 grpc-go/1.44.1-dev'
'te', 'trailers'
'x-forwarded-for', '10.88.0.1'
'x-forwarded-proto', 'https'
'x-envoy-internal', 'true'
'x-request-id', '0d9b8e43-da2e-4f99-bbd8-a5c0c56f799f'
'x-envoy-decorator-operation', 'sc-imcps-bootstrap-svc.foo.svc.cluster.local:17080/imcps.IMCPS/Init*'
'x-envoy-peer-metadata', 'ChQKDkFQUF9DT05UQUlORVJTEgIaAAoaCgpDTFVTVEVSX0lEEgwaCkt1YmVybmV0ZXMKHAoMSU5TVEFOQ0VfSVBTEgwaCjEwLjg4LjAuNTMKGQoNSVNUSU9fVkVSU0lPThIIGgYxLjE1LjAKvwMKBkxBQkVMUxK0AyqxAwodCgNhcHASFhoUaXN0aW8taW5ncmVzc2dhdGV3YXkKEwoFY2hhcnQSChoIZ2F0ZXdheXMKFAoIaGVyaXRhZ2USCBoGVGlsbGVyCjYKKWluc3RhbGwub3BlcmF0b3IuaXN0aW8uaW8vb3duaW5nLXJlc291cmNlEgkaB3Vua25vd24KGQoFaXN0aW8SEBoOaW5ncmVzc2dhdGV3YXkKGQoMaXN0aW8uaW8vcmV2EgkaB2RlZmF1bHQKMAobb3BlcmF0b3IuaXN0aW8uaW8vY29tcG9uZW50EhEaD0luZ3Jlc3NHYXRld2F5cwohChFwb2QtdGVtcGxhdGUtaGFzaBIMGgo1ODVkNjQ1ODU1ChIKB3JlbGVhc2USBxoFaXN0aW8KOQofc2VydmljZS5pc3Rpby5pby9jYW5vbmljYWwtbmFtZRIWGhRpc3Rpby1pbmdyZXNzZ2F0ZXdheQovCiNzZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1yZXZpc2lvbhIIGgZsYXRlc3QKIgoXc2lkZWNhci5pc3Rpby5pby9pbmplY3QSBxoFZmFsc2UKGgoHTUVTSF9JRBIPGg1jbHVzdGVyLmxvY2FsCi8KBE5BTUUSJxolaXN0aW8taW5ncmVzc2dhdGV3YXktNTg1ZDY0NTg1NS1icmt4NAobCglOQU1FU1BBQ0USDhoMaXN0aW8tc3lzdGVtCl0KBU9XTkVSElQaUmt1YmVybmV0ZXM6Ly9hcGlzL2FwcHMvdjEvbmFtZXNwYWNlcy9pc3Rpby1zeXN0ZW0vZGVwbG95bWVudHMvaXN0aW8taW5ncmVzc2dhdGV3YXkKFwoRUExBVEZPUk1fTUVUQURBVEESAioACicKDVdPUktMT0FEX05BTUUSFhoUaXN0aW8taW5ncmVzc2dhdGV3YXk='
'x-envoy-peer-metadata-id', 'router~10.88.0.53~istio-ingressgateway-585d645855-brkx4.istio-system~istio-system.svc.cluster.local'
'x-envoy-attempt-count', '2'
'x-b3-traceid', '17b50b6247fe2fcbbc2b2057ef4db96d'
'x-b3-spanid', 'bc2b2057ef4db96d'
'x-b3-sampled', '0'
2022-10-18T10:04:29.553290Z debug envoy connection [C190] current connecting state: false
2022-10-18T10:04:29.553412Z debug envoy router [C190][S417038132095363947] cluster 'inbound|17080||' match for URL '/imcps.IMCPS/Init'
2022-10-18T10:04:29.553445Z debug envoy upstream Using existing host 10.244.120.108:17080.
2022-10-18T10:04:29.553462Z debug envoy router [C190][S417038132095363947] router decoding headers:
':method', 'POST'
':scheme', 'https'
':path', '/imcps.IMCPS/Init'
':authority', 'sc-imcps-bootstrap-lb.example.com:443'
'content-type', 'application/grpc'
'user-agent', 'grpcurl/v1.8.6 grpc-go/1.44.1-dev'
'te', 'trailers'
'x-forwarded-for', '10.88.0.1'
'x-forwarded-proto', 'https'
'x-request-id', '0d9b8e43-da2e-4f99-bbd8-a5c0c56f799f'
'x-envoy-attempt-count', '2'
'x-b3-traceid', '17b50b6247fe2fcbbc2b2057ef4db96d'
'x-b3-spanid', 'bc2b2057ef4db96d'
'x-b3-sampled', '0'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/default;Hash=dda6034f03e05bbb9d0183b80583ee9b5842670599dd86827c8f8b6a74060fa0;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account'
2022-10-18T10:04:29.553473Z debug envoy pool [C320] using existing fully connected connection
2022-10-18T10:04:29.553477Z debug envoy pool [C320] creating stream
2022-10-18T10:04:29.553487Z debug envoy router [C190][S417038132095363947] pool ready
2022-10-18T10:04:29.553519Z debug envoy http [C190][S417038132095363947] request end stream
2022-10-18T10:04:29.554585Z debug envoy http2 [C320] stream 3 closed: 1
2022-10-18T10:04:29.554607Z debug envoy client [C320] request reset
2022-10-18T10:04:29.554616Z debug envoy pool [C320] destroying stream: 0 remaining
2022-10-18T10:04:29.554631Z debug envoy router [C190][S417038132095363947] upstream reset: reset reason: remote reset, transport failure reason:
2022-10-18T10:04:29.554671Z debug envoy http [C190][S417038132095363947] Sending local reply with details upstream_reset_before_response_started{remote_reset}
2022-10-18T10:04:29.554756Z debug envoy http [C190][S417038132095363947] encoding headers via codec (end_stream=true):
':status', '200'
'content-type', 'application/grpc'
'grpc-status', '14'
'grpc-message', 'upstream connect error or disconnect/reset before headers. reset reason: remote reset'
'x-envoy-peer-metadata', 'ChwKDkFQUF9DT05UQUlORVJTEgoaCHNjLWltY3BzChoKCkNMVVNURVJfSUQSDBoKS3ViZXJuZXRlcwogCgxJTlNUQU5DRV9JUFMSEBoOMTAuMjQ0LjEyMC4xMDgKGQoNSVNUSU9fVkVSU0lPThIIGgYxLjE1LjAKjgIKBkxBQkVMUxKDAiqAAgoRCgNhcHASChoIc2MtaW1jcHMKMQoYY29udHJvbGxlci1yZXZpc2lvbi1oYXNoEhUaE3NjLWltY3BzLTU5Njg0YzY3ODgKJAoZc2VjdXJpdHkuaXN0aW8uaW8vdGxzTW9kZRIHGgVpc3RpbwotCh9zZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1uYW1lEgoaCHNjLWltY3BzCi8KI3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLXJldmlzaW9uEggaBmxhdGVzdAoyCiJzdGF0ZWZ1bHNldC5rdWJlcm5ldGVzLmlvL3BvZC1uYW1lEgwaCnNjLWltY3BzLTAKGgoHTUVTSF9JRBIPGg1jbHVzdGVyLmxvY2FsChQKBE5BTUUSDBoKc2MtaW1jcHMtMAoSCglOQU1FU1BBQ0USBRoDZm9vCkkKBU9XTkVSEkAaPmt1YmVybmV0ZXM6Ly9hcGlzL2FwcHMvdjEvbmFtZXNwYWNlcy9mb28vc3RhdGVmdWxzZXRzL3NjLWltY3BzChcKEVBMQVRGT1JNX01FVEFEQVRBEgIqAAobCg1XT1JLTE9BRF9OQU1FEgoaCHNjLWltY3Bz'
'x-envoy-peer-metadata-id', 'sidecar~10.244.120.108~sc-imcps-0.foo~foo.svc.cluster.local'
'date', 'Tue, 18 Oct 2022 10:04:29 GMT'
'server', 'istio-envoy'
2022-10-18T10:04:29.554788Z debug envoy http2 [C190] stream 5 closed: 0
2022-10-18T10:04:29.554893Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=12
2022-10-18T10:04:29.554903Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=16
2022-10-18T10:04:29.554905Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=24
2022-10-18T10:04:29.554914Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=18
2022-10-18T10:04:29.554917Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=22
2022-10-18T10:04:29.554919Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=26
2022-10-18T10:04:29.561521Z debug envoy http [C190] new stream
2022-10-18T10:04:29.561614Z debug envoy http [C190][S7465002415732961759] request headers complete (end_stream=false):
':method', 'POST'
':scheme', 'https'
':path', '/imcps.IMCPS/Init'
':authority', 'sc-imcps-bootstrap-lb.example.com:443'
'content-type', 'application/grpc'
'user-agent', 'grpcurl/v1.8.6 grpc-go/1.44.1-dev'
'te', 'trailers'
'x-forwarded-for', '10.88.0.1'
'x-forwarded-proto', 'https'
'x-envoy-internal', 'true'
'x-request-id', '0d9b8e43-da2e-4f99-bbd8-a5c0c56f799f'
'x-envoy-decorator-operation', 'sc-imcps-bootstrap-svc.foo.svc.cluster.local:17080/imcps.IMCPS/Init*'
'x-envoy-peer-metadata', 'ChQKDkFQUF9DT05UQUlORVJTEgIaAAoaCgpDTFVTVEVSX0lEEgwaCkt1YmVybmV0ZXMKHAoMSU5TVEFOQ0VfSVBTEgwaCjEwLjg4LjAuNTMKGQoNSVNUSU9fVkVSU0lPThIIGgYxLjE1LjAKvwMKBkxBQkVMUxK0AyqxAwodCgNhcHASFhoUaXN0aW8taW5ncmVzc2dhdGV3YXkKEwoFY2hhcnQSChoIZ2F0ZXdheXMKFAoIaGVyaXRhZ2USCBoGVGlsbGVyCjYKKWluc3RhbGwub3BlcmF0b3IuaXN0aW8uaW8vb3duaW5nLXJlc291cmNlEgkaB3Vua25vd24KGQoFaXN0aW8SEBoOaW5ncmVzc2dhdGV3YXkKGQoMaXN0aW8uaW8vcmV2EgkaB2RlZmF1bHQKMAobb3BlcmF0b3IuaXN0aW8uaW8vY29tcG9uZW50EhEaD0luZ3Jlc3NHYXRld2F5cwohChFwb2QtdGVtcGxhdGUtaGFzaBIMGgo1ODVkNjQ1ODU1ChIKB3JlbGVhc2USBxoFaXN0aW8KOQofc2VydmljZS5pc3Rpby5pby9jYW5vbmljYWwtbmFtZRIWGhRpc3Rpby1pbmdyZXNzZ2F0ZXdheQovCiNzZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1yZXZpc2lvbhIIGgZsYXRlc3QKIgoXc2lkZWNhci5pc3Rpby5pby9pbmplY3QSBxoFZmFsc2UKGgoHTUVTSF9JRBIPGg1jbHVzdGVyLmxvY2FsCi8KBE5BTUUSJxolaXN0aW8taW5ncmVzc2dhdGV3YXktNTg1ZDY0NTg1NS1icmt4NAobCglOQU1FU1BBQ0USDhoMaXN0aW8tc3lzdGVtCl0KBU9XTkVSElQaUmt1YmVybmV0ZXM6Ly9hcGlzL2FwcHMvdjEvbmFtZXNwYWNlcy9pc3Rpby1zeXN0ZW0vZGVwbG95bWVudHMvaXN0aW8taW5ncmVzc2dhdGV3YXkKFwoRUExBVEZPUk1fTUVUQURBVEESAioACicKDVdPUktMT0FEX05BTUUSFhoUaXN0aW8taW5ncmVzc2dhdGV3YXk='
'x-envoy-peer-metadata-id', 'router~10.88.0.53~istio-ingressgateway-585d645855-brkx4.istio-system~istio-system.svc.cluster.local'
'x-envoy-attempt-count', '3'
'x-b3-traceid', '17b50b6247fe2fcbbc2b2057ef4db96d'
'x-b3-spanid', 'bc2b2057ef4db96d'
'x-b3-sampled', '0'
2022-10-18T10:04:29.561647Z debug envoy connection [C190] current connecting state: false
2022-10-18T10:04:29.561750Z debug envoy router [C190][S7465002415732961759] cluster 'inbound|17080||' match for URL '/imcps.IMCPS/Init'
2022-10-18T10:04:29.561796Z debug envoy upstream Using existing host 10.244.120.108:17080.
2022-10-18T10:04:29.561825Z debug envoy router [C190][S7465002415732961759] router decoding headers:
':method', 'POST'
':scheme', 'https'
':path', '/imcps.IMCPS/Init'
':authority', 'sc-imcps-bootstrap-lb.example.com:443'
'content-type', 'application/grpc'
'user-agent', 'grpcurl/v1.8.6 grpc-go/1.44.1-dev'
'te', 'trailers'
'x-forwarded-for', '10.88.0.1'
'x-forwarded-proto', 'https'
'x-request-id', '0d9b8e43-da2e-4f99-bbd8-a5c0c56f799f'
'x-envoy-attempt-count', '3'
'x-b3-traceid', '17b50b6247fe2fcbbc2b2057ef4db96d'
'x-b3-spanid', 'bc2b2057ef4db96d'
'x-b3-sampled', '0'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/default;Hash=dda6034f03e05bbb9d0183b80583ee9b5842670599dd86827c8f8b6a74060fa0;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account'
2022-10-18T10:04:29.561841Z debug envoy pool [C320] using existing fully connected connection
2022-10-18T10:04:29.561844Z debug envoy pool [C320] creating stream
2022-10-18T10:04:29.561850Z debug envoy router [C190][S7465002415732961759] pool ready
2022-10-18T10:04:29.561877Z debug envoy http [C190][S7465002415732961759] request end stream
2022-10-18T10:04:29.616003Z debug envoy http2 [C320] stream 5 closed: 1
2022-10-18T10:04:29.616037Z debug envoy client [C320] request reset
2022-10-18T10:04:29.616045Z debug envoy pool [C320] destroying stream: 0 remaining
2022-10-18T10:04:29.616057Z debug envoy router [C190][S7465002415732961759] upstream reset: reset reason: remote reset, transport failure reason:
2022-10-18T10:04:29.616083Z debug envoy http [C190][S7465002415732961759] Sending local reply with details upstream_reset_before_response_started{remote_reset}
2022-10-18T10:04:29.616133Z debug envoy http [C190][S7465002415732961759] encoding headers via codec (end_stream=true):
':status', '200'
'content-type', 'application/grpc'
'grpc-status', '14'
'grpc-message', 'upstream connect error or disconnect/reset before headers. reset reason: remote reset'
'x-envoy-peer-metadata', 'ChwKDkFQUF9DT05UQUlORVJTEgoaCHNjLWltY3BzChoKCkNMVVNURVJfSUQSDBoKS3ViZXJuZXRlcwogCgxJTlNUQU5DRV9JUFMSEBoOMTAuMjQ0LjEyMC4xMDgKGQoNSVNUSU9fVkVSU0lPThIIGgYxLjE1LjAKjgIKBkxBQkVMUxKDAiqAAgoRCgNhcHASChoIc2MtaW1jcHMKMQoYY29udHJvbGxlci1yZXZpc2lvbi1oYXNoEhUaE3NjLWltY3BzLTU5Njg0YzY3ODgKJAoZc2VjdXJpdHkuaXN0aW8uaW8vdGxzTW9kZRIHGgVpc3RpbwotCh9zZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1uYW1lEgoaCHNjLWltY3BzCi8KI3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLXJldmlzaW9uEggaBmxhdGVzdAoyCiJzdGF0ZWZ1bHNldC5rdWJlcm5ldGVzLmlvL3BvZC1uYW1lEgwaCnNjLWltY3BzLTAKGgoHTUVTSF9JRBIPGg1jbHVzdGVyLmxvY2FsChQKBE5BTUUSDBoKc2MtaW1jcHMtMAoSCglOQU1FU1BBQ0USBRoDZm9vCkkKBU9XTkVSEkAaPmt1YmVybmV0ZXM6Ly9hcGlzL2FwcHMvdjEvbmFtZXNwYWNlcy9mb28vc3RhdGVmdWxzZXRzL3NjLWltY3BzChcKEVBMQVRGT1JNX01FVEFEQVRBEgIqAAobCg1XT1JLTE9BRF9OQU1FEgoaCHNjLWltY3Bz'
'x-envoy-peer-metadata-id', 'sidecar~10.244.120.108~sc-imcps-0.foo~foo.svc.cluster.local'
'date', 'Tue, 18 Oct 2022 10:04:29 GMT'
'server', 'istio-envoy'
2022-10-18T10:04:29.616158Z debug envoy http2 [C190] stream 7 closed: 0
2022-10-18T10:04:29.616256Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=12
2022-10-18T10:04:29.616265Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=16
2022-10-18T10:04:29.616267Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=24
2022-10-18T10:04:29.616270Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=18
2022-10-18T10:04:29.616272Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=22
2022-10-18T10:04:29.616274Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:645]::report() metricKey cache hit , stat=26
2022-10-18T10:04:29.664070Z debug envoy conn_handler [C321] new connection from 192.168.1.13:40686
PS: We have successfully implemented SIMPLE and MUTUAL TLS for REST services.
Any help will be very much appreciated; I am stuck here! Eventually, after this, we will need to set up mTLS.
Thanks in advance.
We have been using a gRPC server with .NET 6. The Kestrel .NET 6 gRPC server is running in k8s over plain HTTP transport; the Minikube load balancer terminates SSL and sends the request to the app with the :scheme pseudo-header set to "https". Because the actual transport is "http", this mismatch results in the error. Here is the issue; also find the discussions here: thread-1 and thread-2.
For my case, the solution is to add the following Kestrel configuration:
webBuilder.UseKestrel(opts =>
{
    // Accept a :scheme pseudo-header that differs from the transport scheme.
    opts.AllowAlternateSchemes = true;
});
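With .NET 6 minimal hosting, the equivalent would be something like this (a sketch assuming the default WebApplication builder):
var builder = WebApplication.CreateBuilder(args);

// Accept requests whose :scheme pseudo-header ("https") differs from the
// actual transport ("http"), as happens behind a TLS-terminating load balancer.
builder.WebHost.ConfigureKestrel(opts =>
{
    opts.AllowAlternateSchemes = true;
});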

Error when migrating to the latest version of the Firebase/Firestore dependency

My app works perfectly fine when using the dependency versions mentioned below, but when migrating to the latest version of the Firestore dependency the app is unable to add or delete data in Firestore. I know the latest dependency doesn't need the firebase-core dependency; still, after following all the Firebase requirements mentioned in the official documentation, my app doesn't work with the latest dependency, while it works perfectly fine with the dependencies mentioned below.
I want to use FirestoreUI; that's why I'm migrating my project to the latest version. I also tried the transitive-dependency criteria mentioned in the release notes on this website:
https://github.com/firebase/FirebaseUI-Android/releases
But it still shows this error. Please help me; I've tried all possible things from my side, and I hope our Stack Overflow family will help a newcomer developer.
Thank you for your valuable time... happy coding :)
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'androidx.appcompat:appcompat:1.1.0'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
implementation 'com.google.firebase:firebase-core:16.0.4'
implementation 'com.google.firebase:firebase-firestore:17.1.2'
implementation 'com.google.firebase:firebase-auth:16.0.5'
implementation 'com.firebaseui:firebase-ui-auth:6.2.0'
implementation 'com.google.android.material:material:1.2.0-alpha06'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test:runner:1.2.0'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
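The block above is the old, working dependency set. For the migration itself, a sketch of what the updated block might look like (firebase-firestore 21.4.1 is taken from the logs below; the firebase-auth version and the firebase-ui-firestore module are assumptions based on the FirebaseUI 6.2.0 release already in this build):
// Hypothetical migrated dependencies -- versions other than 21.4.1 (from
// the logs) and 6.2.0 (already in this build) are assumptions.
implementation 'com.google.firebase:firebase-firestore:21.4.1'
implementation 'com.google.firebase:firebase-auth:19.3.0'       // assumed compatible version
implementation 'com.firebaseui:firebase-ui-auth:6.2.0'
implementation 'com.firebaseui:firebase-ui-firestore:6.2.0'     // FirestoreUI
// firebase-core is no longer required with recent Firebase SDKs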
The error below shows up repeatedly when adding data to Firestore with the latest version (the Firestore logs show SDK 21.4.1), whereas everything works with com.google.firebase:firebase-firestore:17.1.2 as listed above:
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: CONNECTING
I/Firestore: (21.4.1) [GrpcCallProvider]: Setting the connectivityAttemptTimer
I/Firestore: (21.4.1) [GrpcCallProvider]: connectivityAttemptTimer elapsed. Resetting the channel.
I/Firestore: (21.4.1) [GrpcCallProvider]: Clearing the connectivityAttemptTimer
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: SHUTDOWN
I/Firestore: (21.4.1) [WriteStream]: stream callback skipped by CloseGuardedRunner.
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream closed with status: Status{code=UNAVAILABLE, description=Channel shutdownNow invoked, cause=null}.
W/DynamiteModule: Local module descriptor class for providerinstaller not found.
I/DynamiteModule: Considering local module providerinstaller:0 and remote module providerinstaller:0
W/ProviderInstaller: Failed to load providerinstaller module: No acceptable module found. Local version is 0 and remote version is 0.
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: IDLE
I/Firestore: (21.4.1) [GrpcCallProvider]: Channel successfully reset.
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream is open
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream sending: # com.google.firestore.v1.WriteRequest#1f41a322
database: "projects/notes-2e3bb/databases/(default)"
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: CONNECTING
I/Firestore: (21.4.1) [GrpcCallProvider]: Setting the connectivityAttemptTimer
I/Firestore: (21.4.1) [GrpcCallProvider]: connectivityAttemptTimer elapsed. Resetting the channel.
I/Firestore: (21.4.1) [GrpcCallProvider]: Clearing the connectivityAttemptTimer
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: SHUTDOWN
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream closed with status: Status{code=UNAVAILABLE, description=Channel shutdownNow invoked, cause=null}.
W/DynamiteModule: Local module descriptor class for providerinstaller not found.
I/Firestore: (21.4.1) [ExponentialBackoff]: Backing off for 0 ms (base delay: 1000 ms, delay with jitter: 509 ms, last attempt: 15181 ms ago)
I/DynamiteModule: Considering local module providerinstaller:0 and remote module providerinstaller:0
W/ProviderInstaller: Failed to load providerinstaller module: No acceptable module found. Local version is 0 and remote version is 0.
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: IDLE
I/Firestore: (21.4.1) [GrpcCallProvider]: Channel successfully reset.
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream is open
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream sending: # com.google.firestore.v1.WriteRequest#1f41a322
database: "projects/notes-2e3bb/databases/(default)"
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: CONNECTING
I/Firestore: (21.4.1) [GrpcCallProvider]: Setting the connectivityAttemptTimer
I/Firestore: (21.4.1) [GrpcCallProvider]: connectivityAttemptTimer elapsed. Resetting the channel.
I/Firestore: (21.4.1) [GrpcCallProvider]: Clearing the connectivityAttemptTimer
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: SHUTDOWN
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream closed with status: Status{code=UNAVAILABLE, description=Channel shutdownNow invoked, cause=null}.
W/DynamiteModule: Local module descriptor class for providerinstaller not found.
I/Firestore: (21.4.1) [ExponentialBackoff]: Backing off for 0 ms (base delay: 1000 ms, delay with jitter: 509 ms, last attempt: 15181 ms ago)
I/DynamiteModule: Considering local module providerinstaller:0 and remote module providerinstaller:0
W/ProviderInstaller: Failed to load providerinstaller module: No acceptable module found. Local version is 0 and remote version is 0.
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: IDLE
I/Firestore: (21.4.1) [GrpcCallProvider]: Channel successfully reset.
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream is open
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream sending: # com.google.firestore.v1.WriteRequest#1f41a322
database: "projects/notes-2e3bb/databases/(default)"
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: CONNECTING
I/Firestore: (21.4.1) [GrpcCallProvider]: Setting the connectivityAttemptTimer
I/Firestore: (21.4.1) [GrpcCallProvider]: connectivityAttemptTimer elapsed. Resetting the channel.
I/Firestore: (21.4.1) [GrpcCallProvider]: Clearing the connectivityAttemptTimer
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: SHUTDOWN
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream closed with status: Status{code=UNAVAILABLE, description=Channel shutdownNow invoked, cause=null}.
W/DynamiteModule: Local module descriptor class for providerinstaller not found.
I/Firestore: (21.4.1) [ExponentialBackoff]: Backing off for 0 ms (base delay: 1500 ms, delay with jitter: 1741 ms, last attempt: 15170 ms ago)
I/DynamiteModule: Considering local module providerinstaller:0 and remote module providerinstaller:0
W/ProviderInstaller: Failed to load providerinstaller module: No acceptable module found. Local version is 0 and remote version is 0.
I/Firestore: (21.4.1) [GrpcCallProvider]: Current gRPC connectivity state: IDLE
I/Firestore: (21.4.1) [GrpcCallProvider]: Channel successfully reset.
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream is open
I/Firestore: (21.4.1) [WriteStream]: (2b0984f) Stream sending: # com.google.firestore.v1.WriteRequest#1f41a322

WSO2 EI 6.5.0: HTTP 404 error when sending a simple message to a service

I am referring to https://docs.wso2.com/display/EI650/Quick+Start+Guide
I am trying to do "Routing requests based on message content".
I am using WSO2 EI 6.5.0 as part of my preparation for the WSO2 EI developer exam.
Can anybody tell me why I am getting the following error when I follow exactly what is instructed on that page? WSO2 EI 6.5.0 is properly installed as suggested, and both MSF4J and the WSO2 ESB server are running perfectly.
(base) user@user-Lenovo-G400:~/wso2/WSO2 EI$ curl -v -X POST --data @request.json http://localhost:8280/healthcare/categories/surgery/reserve --header 'Content-Type:application/json'
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 127.0.0.1:8280...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8280 (#0)
> POST /healthcare/categories/surgery/reserve HTTP/1.1
> Host: localhost:8280
> User-Agent: curl/7.65.3
> Accept: */*
> Content-Type:application/json
> Content-Length: 285
>
* upload completely sent off: 285 out of 285 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Date: Sat, 02 Nov 2019 13:34:53 GMT
< Transfer-Encoding: chunked
<
* Connection #0 to host localhost left intact
WSO2 EI 6.5.0 Wirelogs:
(base) user@user-Lenovo-G400:~$ sudo wso2ei-6.5.0-integrator
/usr/lib/wso2/wso2ei/6.5.0/bin/integrator.sh: line 135: warning: command substitution: ignored null byte in input
JAVA_HOME environment variable is set to /usr/lib/jvm/java-8-oracle
CARBON_HOME environment variable is set to /usr/lib/wso2/wso2ei/6.5.0
Using Java memory options: -Xms256m -Xmx1024m
[2019-11-03 18:30:00,288] [EI-Core] INFO - CarbonCoreActivator Starting WSO2 Carbon...
[2019-11-03 18:30:00,330] [EI-Core] INFO - CarbonCoreActivator Operating System : Linux 4.15.0-66-generic, amd64
[2019-11-03 18:30:00,330] [EI-Core] INFO - CarbonCoreActivator Java Home : /usr/lib/jvm/java-8-oracle/jre
[2019-11-03 18:30:00,330] [EI-Core] INFO - CarbonCoreActivator Java Version : 1.8.0_201
[2019-11-03 18:30:00,330] [EI-Core] INFO - CarbonCoreActivator Java VM : Java HotSpot(TM) 64-Bit Server VM 25.201-b09,Oracle Corporation
[2019-11-03 18:30:00,330] [EI-Core] INFO - CarbonCoreActivator Carbon Home : /usr/lib/wso2/wso2ei/6.5.0
[2019-11-03 18:30:00,330] [EI-Core] INFO - CarbonCoreActivator Java Temp Dir : /usr/lib/wso2/wso2ei/6.5.0/wso2/tmp
[2019-11-03 18:30:00,331] [EI-Core] INFO - CarbonCoreActivator User : wso2, en-US, Asia/Kolkata
[2019-11-03 18:30:00,526] [EI-Core] INFO - DefaultCryptoProviderComponent 'CryptoService.Secret' property has not been set. 'org.wso2.carbon.crypto.provider.SymmetricKeyInternalCryptoProvider' won't be registered as an internal crypto provider. Please set the secret if the provider needs to be registered.
[2019-11-03 18:30:00,577] [EI-Core] INFO - GoogleTokenGenDSComponent Activating GoogleTokengen DS component
[2019-11-03 18:30:00,755] [EI-Core] INFO - KafkaEventAdapterServiceDS Successfully deployed the Kafka output event adaptor service
[2019-11-03 18:30:06,379] [EI-Core] INFO - EmbeddedRegistryService Configured Registry in 107ms
[2019-11-03 18:30:06,563] [EI-Core] INFO - RegistryCoreServiceComponent Registry Mode : READ-WRITE
[2019-11-03 18:30:13,341] [EI-Core] INFO - SolrClient Default Embedded Solr Server Initialized
[2019-11-03 18:30:13,837] [EI-Core] INFO - UserStoreMgtDSComponent Carbon UserStoreMgtDSComponent activated successfully.
[2019-11-03 18:30:34,003] [EI-Core] INFO - TaglibUriRule TLD skipped. URI: http://tiles.apache.org/tags-tiles is already defined
[2019-11-03 18:30:35,279] [EI-Core] INFO - ClusterBuilder Clustering has been disabled
[2019-11-03 18:30:35,604] [EI-Core] INFO - UserStoreConfigurationDeployer User Store Configuration Deployer initiated.
[2019-11-03 18:30:35,604] [EI-Core] INFO - UserStoreConfigurationDeployer User Store Configuration Deployer initiated.
[2019-11-03 18:30:36,341] [EI-Core] INFO - VFSTransportSender VFS Sender started
[2019-11-03 18:30:36,388] [EI-Core] INFO - PassThroughHttpSender Initializing Pass-through HTTP/S Sender...
[2019-11-03 18:30:36,451] [EI-Core] INFO - PassThroughHttpSender Pass-through HTTP Sender started...
[2019-11-03 18:30:36,451] [EI-Core] INFO - PassThroughHttpSSLSender Initializing Pass-through HTTP/S Sender...
[2019-11-03 18:30:36,461] [EI-Core] INFO - PassThroughHttpSSLSender Pass-through HTTPS Sender started...
[2019-11-03 18:30:36,482] [EI-Core] INFO - PassThroughHttpListener Initializing Pass-through HTTP/S Listener...
[2019-11-03 18:30:36,517] [EI-Core] INFO - PassThroughHttpSSLListener Initializing Pass-through HTTP/S Listener...
[2019-11-03 18:30:36,663] [EI-Core] INFO - ModuleDeployer Deploying module: addressing-1.6.1-wso2v35 - file:/usr/lib/wso2/wso2ei/6.5.0/repository/deployment/client/modules/addressing-1.6.1-wso2v35.mar
[2019-11-03 18:30:36,667] [EI-Core] INFO - ModuleDeployer Deploying module: rampart-1.6.1-wso2v34 - file:/usr/lib/wso2/wso2ei/6.5.0/repository/deployment/client/modules/rampart-1.6.1-wso2v34.mar
[2019-11-03 18:30:37,246] [EI-Core] INFO - DeploymentEngine Deploying Web service: org.wso2.carbon.business.messaging.hl7.store-4.6.150 -
[2019-11-03 18:30:38,110] [EI-Core] INFO - DeploymentEngine Deploying Web service: org.wso2.carbon.message.processor-4.6.150 -
[2019-11-03 18:30:38,122] [EI-Core] INFO - DeploymentEngine Deploying Web service: org.wso2.carbon.message.store-4.6.150 -
[2019-11-03 18:30:38,729] [EI-Core] INFO - DeploymentInterceptor Deploying Axis2 service: wso2carbon-sts {super-tenant}
[2019-11-03 18:30:38,755] [EI-Core] INFO - DeploymentEngine Deploying Web service: org.wso2.carbon.sts-5.2.19 -
[2019-11-03 18:30:38,880] [EI-Core] INFO - DeploymentEngine Deploying Web service: org.wso2.carbon.tryit-4.6.65 -
[2019-11-03 18:30:39,145] [EI-Core] INFO - CarbonServerManager Repository : /usr/lib/wso2/wso2ei/6.5.0/repository/deployment/server/
[2019-11-03 18:30:39,224] [EI-Core] INFO - TenantLoadingConfig Using tenant lazy loading policy...
[2019-11-03 18:30:39,236] [EI-Core] INFO - PermissionUpdater Permission cache updated for tenant -1234
[2019-11-03 18:30:39,280] [EI-Core] INFO - RuleEngineConfigDS Successfully registered the Rule Config service
[2019-11-03 18:30:39,719] [EI-Core] INFO - ServiceBusInitializer Starting ESB...
[2019-11-03 18:30:39,735] [EI-Core] INFO - ServiceBusInitializer Initializing Apache Synapse...
[2019-11-03 18:30:39,743] [EI-Core] INFO - SynapseControllerFactory Using Synapse home : /usr/lib/wso2/wso2ei/6.5.0/.
[2019-11-03 18:30:39,743] [EI-Core] INFO - SynapseControllerFactory Using synapse.xml location : /usr/lib/wso2/wso2ei/6.5.0/././repository/deployment/server/synapse-configs/default
[2019-11-03 18:30:39,743] [EI-Core] INFO - SynapseControllerFactory Using server name : localhost
[2019-11-03 18:30:39,753] [EI-Core] INFO - SynapseControllerFactory The timeout handler will run every : 15s
[2019-11-03 18:30:39,763] [EI-Core] INFO - Axis2SynapseController Initializing Synapse at : Sun Nov 03 18:30:39 IST 2019
[2019-11-03 18:30:39,773] [EI-Core] INFO - CarbonSynapseController Loading the mediation configuration from the file system
[2019-11-03 18:30:39,776] [EI-Core] INFO - MultiXMLConfigurationBuilder Building synapse configuration from the synapse artifact repository at : ././repository/deployment/server/synapse-configs/default
[2019-11-03 18:30:39,794] [EI-Core] INFO - XMLConfigurationBuilder Generating the Synapse configuration model by parsing the XML configuration
[2019-11-03 18:30:39,872] [EI-Core] INFO - DependencyTracker Sequence : fault was added to the Synapse configuration successfully
[2019-11-03 18:30:39,876] [EI-Core] INFO - DependencyTracker Sequence : main was added to the Synapse configuration successfully
[2019-11-03 18:30:39,876] [EI-Core] INFO - SynapseConfigurationBuilder Loaded Synapse configuration from the artifact repository at : ././repository/deployment/server/synapse-configs/default
[2019-11-03 18:30:39,877] [EI-Core] INFO - DependencyTracker Local entry : SERVER_HOST was added to the Synapse configuration successfully
[2019-11-03 18:30:39,877] [EI-Core] INFO - DependencyTracker Local entry : SERVER_IP was added to the Synapse configuration successfully
[2019-11-03 18:30:39,880] [EI-Core] INFO - Axis2SynapseController Loading mediator extensions...
[2019-11-03 18:30:39,886] [EI-Core] INFO - DeploymentInterceptor Deploying Axis2 service: echo {super-tenant}
[2019-11-03 18:30:39,886] [EI-Core] INFO - DeploymentEngine Deploying Web service: Echo.aar - file:/usr/lib/wso2/wso2ei/6.5.0/repository/deployment/server/axis2services/Echo.aar
[2019-11-03 18:30:39,894] [EI-Core] INFO - DeploymentInterceptor Deploying Axis2 service: Version {super-tenant}
[2019-11-03 18:30:39,895] [EI-Core] INFO - DeploymentEngine Deploying Web service: Version.aar - file:/usr/lib/wso2/wso2ei/6.5.0/repository/deployment/server/axis2services/Version.aar
[2019-11-03 18:30:39,911] [EI-Core] INFO - EventPublisherDeployer Event Publisher deployment held back and in inactive state :MessageFlowConfigurationPublisher.xml, Stream validation exception : Stream org.wso2.esb.analytics.stream.ConfigEntry:1.0.0 does not exist
[2019-11-03 18:30:39,913] [EI-Core] INFO - EventPublisherDeployer Event Publisher deployment held back and in inactive state :MessageFlowStatisticsPublisher.xml, Stream validation exception : Stream org.wso2.esb.analytics.stream.FlowEntry:1.0.0 does not exist
[2019-11-03 18:30:39,999] [EI-Core] INFO - EventPublisherDeployer Event Publisher undeployed successfully : MessageFlowConfigurationPublisher.xml
[2019-11-03 18:30:40,447] [EI-Core] INFO - EventJunction WSO2EventConsumer added to the junction. Stream:org.wso2.esb.analytics.stream.ConfigEntry:1.0.0
[2019-11-03 18:30:40,450] [EI-Core] INFO - EventPublisherDeployer Event Publisher configuration successfully deployed and in active state : MessageFlowConfigurationPublisher
[2019-11-03 18:30:40,450] [EI-Core] INFO - EventStreamDeployer Stream definition is deployed successfully : org.wso2.esb.analytics.stream.ConfigEntry:1.0.0
[2019-11-03 18:30:40,469] [EI-Core] INFO - EventPublisherDeployer Event Publisher undeployed successfully : MessageFlowStatisticsPublisher.xml
[2019-11-03 18:30:40,487] [EI-Core] INFO - EventJunction WSO2EventConsumer added to the junction. Stream:org.wso2.esb.analytics.stream.FlowEntry:1.0.0
[2019-11-03 18:30:40,488] [EI-Core] INFO - EventPublisherDeployer Event Publisher configuration successfully deployed and in active state : MessageFlowStatisticsPublisher
[2019-11-03 18:30:40,488] [EI-Core] INFO - EventStreamDeployer Stream definition is deployed successfully : org.wso2.esb.analytics.stream.FlowEntry:1.0.0
[2019-11-03 18:30:40,858] [EI-Core] INFO - TomcatGenericWebappsDeployer Deployed webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/odata].File[/usr/lib/wso2/wso2ei/6.5.0/repository/deployment/server/webapps/odata.war]
[2019-11-03 18:30:43,200] [EI-Core] INFO - DeploymentInterceptor Deploying Axis2 service: DOCTORS_DataService {super-tenant}
[2019-11-03 18:30:43,200] [EI-Core] INFO - DeploymentEngine Deploying Web service: DOCTORS_DataService.dbs - file:/usr/lib/wso2/wso2ei/6.5.0/repository/deployment/server/dataservices/DOCTORS_DataService.dbs
[2019-11-03 18:30:43,200] [EI-Core] INFO - Axis2SynapseController Deploying the Synapse service...
[2019-11-03 18:30:43,201] [EI-Core] INFO - Axis2SynapseController Deploying Proxy services...
[2019-11-03 18:30:43,202] [EI-Core] INFO - Axis2SynapseController Deploying EventSources...
[2019-11-03 18:30:43,219] [EI-Core] INFO - ServerManager Server ready for processing...
[2019-11-03 18:30:43,261] [EI-Core] INFO - MediationStatisticsComponent Global Message-Flow Statistic Reporting is Disabled
[2019-11-03 18:30:44,976] [EI-Core] INFO - ApplicationManager Deploying Carbon Application : SampleServicesCompositeApplication_1.0.0.car...
[2019-11-03 18:30:46,633] [EI-Core] INFO - DependencyTracker Endpoint : QueryDoctorEP was added to the Synapse configuration successfully - [ Deployed From Artifact Container: SampleServicesCompositeApplication ]
[2019-11-03 18:30:46,634] [EI-Core] INFO - EndpointDeployer Endpoint named 'QueryDoctorEP' has been deployed from file : /usr/lib/wso2/wso2ei/6.5.0/wso2/tmp/carbonapps/-1234/1572786044978SampleServicesCompositeApplication_1.0.0.car/QueryDoctorEP_1.0.0/QueryDoctorEP-1.0.0.xml
[2019-11-03 18:30:46,651] [EI-Core] INFO - API Initializing API: HealthcareAPI
[2019-11-03 18:30:46,654] [EI-Core] INFO - DependencyTracker API : HealthcareAPI was added to the Synapse configuration successfully - [ Deployed From Artifact Container: SampleServicesCompositeApplication ]
[2019-11-03 18:30:46,656] [EI-Core] INFO - APIDeployer API named 'HealthcareAPI' has been deployed from file : /usr/lib/wso2/wso2ei/6.5.0/wso2/tmp/carbonapps/-1234/1572786044978SampleServicesCompositeApplication_1.0.0.car/HealthcareAPI_1.0.0/HealthcareAPI-1.0.0.xml
[2019-11-03 18:30:46,656] [EI-Core] INFO - ApplicationManager Successfully Deployed Carbon Application : SampleServicesCompositeApplication_1.0.0 {super-tenant}
[2019-11-03 18:30:46,664] [EI-Core] INFO - VFSTransportListener VFS listener started
[2019-11-03 18:30:46,666] [EI-Core] INFO - PassThroughHttpListener Starting Pass-through HTTP Listener...
[2019-11-03 18:30:46,695] [EI-Core] INFO - PassThroughListeningIOReactorManager Pass-through HTTP Listener started on 0.0.0.0:8280
[2019-11-03 18:30:46,696] [EI-Core] INFO - PassThroughHttpSSLListener Starting Pass-through HTTPS Listener...
[2019-11-03 18:30:46,700] [EI-Core] INFO - PassThroughListeningIOReactorManager Pass-through HTTPS Listener started on 0.0.0.0:8243
[2019-11-03 18:30:46,711] [EI-Core] INFO - NioSelectorPool Using a shared selector for servlet write/read
[2019-11-03 18:30:46,783] [EI-Core] INFO - NioSelectorPool Using a shared selector for servlet write/read
[2019-11-03 18:30:46,993] [EI-Core] INFO - TaskServiceImpl Task service starting in STANDALONE mode...
[2019-11-03 18:30:47,028] [EI-Core] INFO - NTaskTaskManager Initialized task manager. Tenant [-1234]
[2019-11-03 18:30:47,148] [EI-Core] INFO - JMXServerManager JMX Service URL : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi
[2019-11-03 18:30:47,150] [EI-Core] INFO - StartupFinalizerServiceComponent Server : WSO2 Enterprise Integrator-6.5.0
[2019-11-03 18:30:47,150] [EI-Core] INFO - StartupFinalizerServiceComponent WSO2 Carbon started in 56 sec
[2019-11-03 18:30:47,452] [EI-Core] INFO - CarbonUIServiceComponent Mgt Console URL : https://192.168.0.6:9443/carbon/
[2019-11-03 18:32:42,612] [EI-Core] DEBUG - wire HTTP-Listener I/O dispatcher-1 >> "GET /healthcare/querydoctor/surgery HTTP/1.1[\r][\n]"
[2019-11-03 18:32:42,613] [EI-Core] DEBUG - wire HTTP-Listener I/O dispatcher-1 >> "Host: localhost:8280[\r][\n]"
[2019-11-03 18:32:42,613] [EI-Core] DEBUG - wire HTTP-Listener I/O dispatcher-1 >> "User-Agent: curl/7.65.3[\r][\n]"
[2019-11-03 18:32:42,613] [EI-Core] DEBUG - wire HTTP-Listener I/O dispatcher-1 >> "Accept: */*[\r][\n]"
[2019-11-03 18:32:42,613] [EI-Core] DEBUG - wire HTTP-Listener I/O dispatcher-1 >> "[\r][\n]"
[2019-11-03 18:32:43,056] [EI-Core] INFO - LogMediator message = "Welcome to HealthcareService"
[2019-11-03 18:32:43,070] [EI-Core] INFO - TimeoutHandler This engine will expire all callbacks after GLOBAL_TIMEOUT: 120 seconds, irrespective of the timeout action, after the specified or optional timeout
[2019-11-03 18:32:43,115] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "GET /healthcare/surgery HTTP/1.1[\r][\n]"
[2019-11-03 18:32:43,116] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "activityid: 989ebf9c-dff6-40ec-bf90-478f8f5b4ce1[\r][\n]"
[2019-11-03 18:32:43,116] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Accept: */*[\r][\n]"
[2019-11-03 18:32:43,116] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Host: localhost:9090[\r][\n]"
[2019-11-03 18:32:43,116] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Connection: Keep-Alive[\r][\n]"
[2019-11-03 18:32:43,117] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2019-11-03 18:32:43,117] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "[\r][\n]"
[2019-11-03 18:32:43,313] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "HTTP/1.1 200 OK[\r][\n]"
[2019-11-03 18:32:43,313] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Connection: keep-alive[\r][\n]"
[2019-11-03 18:32:43,313] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Content-Length: 412[\r][\n]"
[2019-11-03 18:32:43,313] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Content-Type: application/json[\r][\n]"
[2019-11-03 18:32:43,314] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "[\r][\n]"
[2019-11-03 18:32:43,321] [EI-Core] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> **"[{"name":"thomas collins","hospital":"grand oak community hospital","category":"surgery","availability":"9.00 a.m - 11.00 a.m","fee":7000.0},{"name":"anne clement","hospital":"clemency medical center","category":"surgery","availability":"8.00 a.m - 10.00 a.m","fee":12000.0},{"name":"seth mears","hospital":"pine valley community hospital","category":"surgery","availability":"3.00 p.m - 5.00 p.m","fee":8000.0}]"**
[2019-11-03 18:35:42,864] [EI-Core] INFO - SourceHandler Writer null when calling informWriterError
[2019-11-03 18:35:42,865] [EI-Core] WARN - SourceHandler Connection time out after request is read: http-incoming-1 Socket Timeout : 180000 Remote Address : /127.0.0.1:57096

Cannot see logs from Spring Cache and Ehcache

I'm using Spring MVC 4.3.9.RELEASE with Ehcache 2.10.3.
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager getEhCacheManager() {
        // Wrap the native Ehcache CacheManager in Spring's cache abstraction
        return new EhCacheCacheManager(getEhCacheFactory().getObject());
    }

    @Bean
    public EhCacheManagerFactoryBean getEhCacheFactory() {
        // Load the Ehcache configuration from the classpath
        EhCacheManagerFactoryBean factoryBean = new EhCacheManagerFactoryBean();
        factoryBean.setConfigLocation(new ClassPathResource("ehcache.xml"));
        factoryBean.setShared(true);
        return factoryBean;
    }
}
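The cached method itself isn't shown in the question; for reference, a typical @Cacheable consumer of the books cache would look something like the sketch below (BookService and findBookTitle are illustrative names, not from the original code):
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Illustrative only -- the original question does not show the cached method.
@Service
public class BookService {

    // The first call for a given id runs the method body; subsequent calls with
    // the same id are answered from the "books" Ehcache region configured above.
    @Cacheable("books")
    public String findBookTitle(String id) {
        return expensiveLookup(id); // e.g. a database query
    }

    private String expensiveLookup(String id) {
        return "title-for-" + id;
    }
}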
ehcache.xml
<?xml version="1.0" encoding="UTF-8"?>
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
         updateCheck="true" monitoring="autodetect" dynamicConfig="true">
    <cache name="books"
           maxEntriesLocalHeap="5000"
           maxEntriesLocalDisk="1000"
           eternal="false"
           diskSpoolBufferSizeMB="20"
           timeToIdleSeconds="120"
           timeToLiveSeconds="180"
           memoryStoreEvictionPolicy="LFU"
           transactionalMode="off">
    </cache>
</ehcache>
log4j2.xml, with trace-level config for net.sf.ehcache and org.springframework.cache:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="DEBUG">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="Console"/>
        </Root>
        <Logger name="org.springframework.cache" level="trace" additivity="false">
            <AppenderRef ref="Console"/>
        </Logger>
        <Logger name="net.sf.ehcache" level="trace" additivity="false">
            <AppenderRef ref="Console"/>
        </Logger>
    </Loggers>
</Configuration>
My problem:
I can verify the cache is working, since the method annotated with @Cacheable doesn't run twice. I see some logs from net.sf.ehcache and org.springframework.cache when I start the project, but those are all the logs the two packages generate. Why can't I see detailed logs such as cache hits/misses, cache keys, and cache values? (A possible workaround is sketched after the startup log below.)
Initial log output when I start my app:
29-Aug-2017 12:40:19.754 INFO [RMI TCP Connection(3)-127.0.0.1] org.springframework.cache.ehcache.EhCacheManagerFactoryBean.afterPropertiesSet Initializing EhCache CacheManager
12:40:19.759 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.config.ConfigurationFactory - Configuring ehcache from InputStream
12:40:19.775 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.config.BeanHandler - Ignoring ehcache attribute xmlns:xsi
12:40:19.775 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.config.BeanHandler - Ignoring ehcache attribute xsi:noNamespaceSchemaLocation
12:40:19.797 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.CacheManager - Creating new CacheManager with Configuration Object
12:40:19.799 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.util.PropertyUtil - propertiesString is null.
12:40:19.813 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.config.ConfigurationHelper - No CacheManagerEventListenerFactory class specified. Skipping...
12:40:19.870 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.Cache - No BootstrapCacheLoaderFactory class specified. Skipping...
12:40:19.870 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.Cache - CacheWriter factory not configured. Skipping...
12:40:19.870 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.config.ConfigurationHelper - No CacheExceptionHandlerFactory class specified. Skipping...
12:40:19.893 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.store.MemoryStore - Initialized net.sf.ehcache.store.MemoryStore for books
12:40:19.923 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Pass-Through Statistic: LOCAL_OFFHEAP_SIZE
12:40:19.923 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Pass-Through Statistic: LOCAL_OFFHEAP_SIZE_BYTES
12:40:19.924 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Pass-Through Statistic: LOCAL_DISK_SIZE
12:40:19.924 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Pass-Through Statistic: LOCAL_DISK_SIZE_BYTES
12:40:19.924 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Pass-Through Statistic: WRITER_QUEUE_LENGTH
12:40:19.924 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Pass-Through Statistic: REMOTE_SIZE
12:40:19.924 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Pass-Through Statistic: LAST_REJOIN_TIMESTAMP
12:40:19.937 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: OFFHEAP_GET
12:40:19.937 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: OFFHEAP_PUT
12:40:19.938 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: OFFHEAP_REMOVE
12:40:19.938 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: DISK_GET
12:40:19.938 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: DISK_PUT
12:40:19.938 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: DISK_REMOVE
12:40:19.939 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: XA_COMMIT
12:40:19.939 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: XA_ROLLBACK
12:40:19.939 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: XA_RECOVERY
12:40:19.939 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: CLUSTER_EVENT
12:40:19.939 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.statistics.extended.ExtendedStatisticsImpl - Mocking Operation Statistic: NONSTOP
12:40:19.944 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.Cache - Initialised cache: books
12:40:19.945 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.config.ConfigurationHelper - CacheDecoratorFactory not configured. Skipping for 'books'.
12:40:19.945 [RMI TCP Connection(3)-127.0.0.1] DEBUG net.sf.ehcache.config.ConfigurationHelper - CacheDecoratorFactory not configured for defaultCache. Skipping for 'books'.
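As far as I can tell, the wiring messages above are essentially everything Ehcache 2.x emits at DEBUG/TRACE; it does not log individual gets and puts. If hit/miss numbers are what you're after, one workaround is to poll Ehcache's statistics API and log them yourself. A minimal sketch, assuming the native net.sf.ehcache.CacheManager (obtainable from EhCacheCacheManager.getCacheManager()) and the books cache from ehcache.xml; CacheStatsLogger is an illustrative name:
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.statistics.StatisticsGateway;

// Illustrative workaround: Ehcache 2.x keeps hit/miss counters internally even
// though it does not log them per access; poll them and print the numbers yourself.
public class CacheStatsLogger {

    public static void logStats(CacheManager nativeCacheManager) {
        Cache books = nativeCacheManager.getCache("books");
        StatisticsGateway stats = books.getStatistics();
        System.out.printf("cache=books hits=%d misses=%d size=%d%n",
                stats.cacheHitCount(), stats.cacheMissCount(), stats.getSize());
    }
}
Calling logStats on a schedule (e.g. from a Spring @Scheduled method) gives a running view of hit/miss behaviour without touching the cached code paths.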

Nginx keeps reopening logs

Here is some information about the nginx instance running on the cluster:
nginx version: nginx/1.6.0
built by gcc 3.4.5 20051201 (Red Hat 3.4.5-2)
TLS SNI support disabled
configure arguments: --prefix=/home/work/local/nginx --with-http_ssl_module --with-http_realip_module --with-pcre=/home/work/download/pcre-8.35 --with-pcre-jit
The question is: why does Nginx keep reopening its logs about every five minutes? I have checked all the cron tasks; strangely, there is no cron job that sends a reopen signal to Nginx.
Here is some output from tailing Nginx's error.log:
2015/08/18 15:42:20 [notice] 17496#0: signal 10 (SIGUSR1) received, reopening logs
2015/08/18 15:42:20 [notice] 17496#0: reopening logs
2015/08/18 15:42:20 [notice] 17497#0: reopening logs
2015/08/18 15:42:20 [notice] 17498#0: reopening logs
2015/08/18 15:42:20 [notice] 17500#0: reopening logs
2015/08/18 15:42:20 [notice] 17503#0: reopening logs
2015/08/18 15:42:20 [notice] 17501#0: reopening logs
2015/08/18 15:42:20 [notice] 17505#0: reopening logs
2015/08/18 15:42:20 [notice] 17504#0: reopening logs
2015/08/18 15:42:20 [notice] 17512#0: reopening logs
2015/08/18 15:42:20 [notice] 17515#0: reopening logs
2015/08/18 15:42:20 [notice] 17509#0: reopening logs
2015/08/18 15:42:20 [notice] 17506#0: reopening logs
2015/08/18 15:42:20 [notice] 17517#0: reopening logs
2015/08/18 15:42:20 [notice] 17507#0: reopening logs
2015/08/18 15:42:20 [notice] 17521#0: reopening logs
2015/08/18 15:42:20 [notice] 17519#0: reopening logs
2015/08/18 15:42:20 [notice] 17511#0: reopening logs
2015/08/18 15:42:20 [notice] 17518#0: reopening logs
2015/08/18 15:42:20 [notice] 17513#0: reopening logs
2015/08/18 15:42:20 [notice] 17510#0: reopening logs
2015/08/18 15:42:20 [notice] 17520#0: reopening logs
2015/08/18 15:47:21 [notice] 17496#0: signal 10 (SIGUSR1) received, reopening logs
2015/08/18 15:47:21 [notice] 17496#0: reopening logs
2015/08/18 15:47:21 [notice] 17498#0: reopening logs
2015/08/18 15:47:21 [notice] 17497#0: reopening logs
2015/08/18 15:47:21 [notice] 17504#0: reopening logs
2015/08/18 15:47:21 [notice] 17503#0: reopening logs
2015/08/18 15:47:21 [notice] 17501#0: reopening logs
2015/08/18 15:47:21 [notice] 17500#0: reopening logs
2015/08/18 15:47:21 [notice] 17518#0: reopening logs
2015/08/18 15:47:21 [notice] 17505#0: reopening logs
2015/08/18 15:47:21 [notice] 17521#0: reopening logs
2015/08/18 15:47:21 [notice] 17520#0: reopening logs
2015/08/18 15:47:21 [notice] 17519#0: reopening logs
2015/08/18 15:47:21 [notice] 17507#0: reopening logs
2015/08/18 15:47:21 [notice] 17509#0: reopening logs
2015/08/18 15:47:21 [notice] 17517#0: reopening logs
2015/08/18 15:47:21 [notice] 17506#0: reopening logs
2015/08/18 15:47:21 [notice] 17515#0: reopening logs
2015/08/18 15:47:21 [notice] 17513#0: reopening logs
2015/08/18 15:47:21 [notice] 17511#0: reopening logs
2015/08/18 15:47:21 [notice] 17512#0: reopening logs
2015/08/18 15:47:21 [notice] 17510#0: reopening logs
Grateful for any answers!
signal 10 (SIGUSR1) received (see http://article.gmane.org/gmane.comp.web.nginx.english/181) means something is triggering log rotation; Nginx does not initiate that itself. Check whether logrotate is running on the machine, and if so, verify its configuration.
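For illustration, a logrotate rule shaped like the one below would produce exactly these notices each time it fires; the paths are assumptions based on the --prefix shown above, not taken from the actual cluster:
/home/work/local/nginx/logs/*.log {
    rotate 7
    missingok
    notifempty
    sharedscripts
    postrotate
        # tell nginx to reopen its log files after the rotation
        kill -USR1 `cat /home/work/local/nginx/logs/nginx.pid`
    endscript
}
Since logrotate is normally driven by cron (typically daily or hourly), a five-minute cadence may instead point at a custom script or monitoring agent sending SIGUSR1; grepping for USR1 under /etc/logrotate.d, /etc/cron*, and any deployment scripts should locate the sender.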
