Can't get notifications after reconnecting to Kaa server - kaa

We have a Kaa (version 0.9) cluster with 3 nodes.
We found that sometimes an endpoint reconnects to a Kaa node and then can't receive any notifications.
The endpoint has subscribed to all topics, and we also confirmed this in the admin UI.
After it connects to Kaa, the console just shows the messages below:
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - KaaSync message (zipped=false, encrypted=true) received for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.DefaultOperationDataProcessor - Received Sync response: {"requestId": 2, "status": "SUCCESS", "bootstrapSyncResponse": null, "profileSyncResponse": null, "configurationSyncResponse": null, "notificationSyncResponse": {"responseStatus": "NO_DELTA", "notifications": [], "availableTopics": null}, "userSyncResponse": {"userAttachResponse": null, "userAttachNotification": null, "userDetachNotification": null, "endpointAttachResponses": [], "endpointDetachResponses": []}, "eventSyncResponse": {"eventSequenceNumberResponse": null, "eventListenersResponses": [], "events": null}, "redirectSyncResponse": null, "logSyncResponse": null, "extensionSyncResponses": null}
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.transports.DefaultNotificationTransport - Processed notification response.
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.transports.DefaultUserTransport - Processed user response
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Channel [default_operation_tcp_channel] is reading data from stream using [1024] byte buffer
[pool-6-thread-2] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Executing ping task for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - PingResponse message received for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Channel [default_operation_tcp_channel] is reading data from stream using [1024] byte buffer
[pool-6-thread-2] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Executing ping task for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - PingResponse message received for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Channel [default_operation_tcp_channel] is reading data from stream using [1024] byte buffer
[pool-6-thread-2] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Executing ping task for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - PingResponse message received for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Channel [default_operation_tcp_channel] is reading data from stream using [1024] byte buffer
[pool-6-thread-2] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Executing ping task for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - PingResponse message received for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Channel [default_operation_tcp_channel] is reading data from stream using [1024] byte buffer
[pool-6-thread-2] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Executing ping task for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - PingResponse message received for channel [default_operation_tcp_channel]
[pool-6-thread-1] INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Channel [default_operation_tcp_channel] is reading data from stream using [1024] byte buffer
We then stopped the endpoint and reconnected it to the Kaa server.
After it connected to Kaa, the old notifications were shown.
But when we sent a new notification to this endpoint, it still did not receive it.
We also found that if we disconnect from the Kaa server and then reconnect, we can receive notifications again.
However, the "endpoint can't get notifications" condition still seems to happen on other endpoints, and we are not sure whether this issue is related to our cluster.
Sometimes we restart the kaa-node service, and the endpoints then connect to the other two servers.

We seem to have found the root cause. From the logs, we saw that the notifications were actually sent to the endpoint successfully, but performance was very poor and the endpoint needed a long time to receive them. In the end, the root cause appears to be the swap space on our servers. Our servers had swap configured, and it seems to hurt the performance of notification delivery. After we disabled swap and sent a notification, the endpoint received it within a reasonable time.
We had enabled swap because the memory usage of kaa-node exceeded 80% of the server's memory within a day. Currently, we are trying a different Java GC instead of the original G1 GC when starting kaa-node, and memory usage seems better than with G1. We will continue to monitor it.
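For reference, a minimal sketch of the host-level changes described above, assuming a Linux host and that kaa-node reads its JVM options from an environment file; the variable name, file location, and heap sizes are placeholders and depend on how the service is installed:

# list active swap devices, then disable swap for the current boot
swapon -s
sudo swapoff -a
# comment out the swap entry in /etc/fstab to keep it disabled after reboot

# hypothetical kaa-node JVM options, replacing G1 with CMS and pinning the heap
# so the JVM is less likely to be pushed into swap
JAVA_OPTIONS="-Xms2g -Xmx2g -XX:+UseConcMarkSweepGC"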

Related

Offline Firestore: Stream closed with status: Status{code=UNAVAILABLE

I am getting these error messages when I am using my app offline (it works when I am online):
W/Firestore(23675): (24.0.1) [WatchStream]: (da69cba) Stream closed with status: Status{code=UNAVAILABLE, description=End of stream or IOException, cause=null}.
W/Firestore(23675): (24.0.1) [WriteStream]: (9780b0d) Stream closed with status: Status{code=UNAVAILABLE, description=End of stream or IOException, cause=null}.
W/Firestore(23675): (24.0.1) [WriteStream]: (9780b0d) Stream closed with status: Status{code=UNAVAILABLE, description=Unable to resolve host firestore.googleapis.com, cause=java.lang.RuntimeException: java.net.UnknownHostException: Unable to resolve host "firestore.googleapis.com": No address associated with hostname
Here is my main.dart:
await Firebase.initializeApp();
FirebaseFirestore.instance.settings = Settings(cacheSizeBytes: Settings.CACHE_SIZE_UNLIMITED);
// Note: this second assignment replaces the previous Settings object entirely, so
// cacheSizeBytes falls back to its default; combine both options into one Settings if both are wanted.
FirebaseFirestore.instance.settings = Settings(persistenceEnabled: true);
What did I miss? (I have the latest versions of Flutter and the cloud_firestore package.)
Is it because I am in debug mode using the Android Emulator?
It could be a server error, or it could be related to this issue on GitHub.
As you can see, the issue on GitHub is still open and under investigation; if you can, update to the latest version of Flutter.
For UNAVAILABLE, the Firebase error code documentation says:
UNAVAILABLE (HTTP error code = 503) The server is overloaded. The server couldn't process the request in time. Retry the same request, but you must:
- Honor the Retry-After header if it is included in the response from the FCM Connection Server.
- Implement exponential back-off in your retry mechanism (e.g., if you waited one second before the first retry, wait at least two seconds before the next one, then four seconds, and so on). If you're sending multiple messages, delay each one independently by an additional random amount to avoid issuing a new request for all messages at the same time. Senders that cause problems risk being denylisted.
Check here for more about error codes.
Also, basic but worth a test: make sure that the device's connection is stable with no drops.
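To make the back-off advice above concrete, here is a minimal, hedged sketch in Java; sendMessage is a hypothetical placeholder for whatever request returned UNAVAILABLE / 503, and real code should prefer a Retry-After header over the computed delay when one is present:

import java.util.Random;

public class BackoffRetry {
    private static final Random RANDOM = new Random();

    // Retry sendMessage, doubling the delay after each failed attempt and adding
    // random jitter so that many queued messages are not all retried at once.
    static boolean sendWithBackoff(Runnable sendMessage, int maxAttempts) throws InterruptedException {
        long delayMs = 1_000;                        // first retry after roughly one second
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                sendMessage.run();                   // the actual request
                return true;                         // success, stop retrying
            } catch (RuntimeException e) {           // e.g. UNAVAILABLE / HTTP 503
                // if the response carries a Retry-After header, prefer that value over delayMs
                Thread.sleep(delayMs + RANDOM.nextInt(500)); // back-off plus jitter
                delayMs *= 2;                        // 1 s, 2 s, 4 s, ...
            }
        }
        return false;                                // give up after maxAttempts
    }
}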

gRPC client can't connect to server Failed parsing HTTP/2, only on my computer

I'm trying to connect to a server from my client using gRPC, but the connection always fails, and only on my PC (a MacBook Pro). My teammate tried the exact same code, and it works perfectly fine. Below are the error messages from the client and the server. We are using protobuf 3 and Python 3.9. Can anyone give me a hint? Thank you.
Client Error Message
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Failed parsing HTTP/2"
debug_error_string = "{"created":"@1626678822.089372000","description":"Error received from peer ipv4:10.113.66.145:9390", "file":"src/core/lib/surface/call.cc", "file_line":1067,"grpc_message":"Failed parsing HTTP/2","grpc_status":14}"
Server Error Message
[07/19 01:29:32 cctv_service]: Session Connected
E0719 01:29:32.744434791 14615 parsing.cc:302] Unknown frame type 71
I0719 01:29:32.744519637 14615 chttp2_transport.cc:812] W:0x7f3364002ae0 SERVER [ipv4:10.25.211.173:50662] state IDLE -> WRITING [CLOSE_FROM_API]
I0719 01:29:32.744554230 14615 chttp2_transport.cc:812] W:0x7f3364002ae0 SERVER [ipv4:10.25.211.173:50662] state WRITING -> WRITING [begin write in current thread]
Updated macOS from 11.4 to 11.5, and the problem never appeared again.

How to keep grpc-js client connection open (alive) during inactive times?

I have a gRPC server-streaming RPC that communicates with the client. The client throws an error when it does not receive communication from the server. The error is:
Error: 13 INTERNAL: Received RST_STREAM with code 2
at Object.callErrorFromStatus (/app/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:328:49)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:304:181)
at Http2CallStream.outputStatus (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:116:74)
at Http2CallStream.maybeOutputStatus (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:155:22)
at Http2CallStream.endCall (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:141:18)
at ClientHttp2Stream.stream.on (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:410:22)
at ClientHttp2Stream.emit (events.js:198:13)
at emitCloseNT (internal/streams/destroy.js:68:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
Emitted 'error' event at:
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:328:28)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:304:181)
[... lines matching original stack trace ...]
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at process._tickCallback (internal/process/next_tick.js:63:19)
I have tried searching for solutions, and the only "solution" that kept the connection open was to manually ping the client every 10 seconds using setInterval. I read in this article that gRPC is supposed to keep connections open for you and even reconnect if the connection is lost:
As mentioned above, KeepAlive provides a valuable benefit: periodically checking the health of the connection by sending an HTTP/2 PING to determine whether the connection is still alive.
The way I set up the client is shown below:
var grpc = require('@grpc/grpc-js');
require('dotenv').config({path: '/app/.env'});
// packageDefinition is assumed to be loaded elsewhere via @grpc/proto-loader (not shown here)
var responderProto = grpc.loadPackageDefinition(packageDefinition).responder;
var client = new responderProto.ResponderService(process.env.GRPC_HOST_AND_PORT,
    grpc.credentials.createInsecure(),
    {
        "grpc.http2.max_pings_without_data": 0,   // do not limit pings sent without data
        "grpc.keepalive_time_ms": 10000,          // send a keepalive ping every 10 s
        "grpc.keepalive_permit_without_calls": 1  // keep pinging even with no active calls
    });
My understanding was that "grpc.keepalive_time_ms" is supposed to make the client ping the server every x ms.
Am I doing something wrong, or am I missing or misunderstanding something essential?

Producer clientId=producer-2 Connection to node -1 could not be established. Broker may not be available

I use Spring Boot as a Kafka consumer, and the program receives messages from Kafka, but the log contains the following information:
[kafka-producer-network-thread | producer-2] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-2] Connection to node -1 could not be established. Broker may not be available.
Please help me solve this problem.

Apache Camel Netty Socket

I want to use the Apache Camel Netty connection in client mode, and this client should not be in synchronous mode. I provided the following configuration to achieve this, but Camel created two connections to the server: one for receiving messages and one for replying to them. How can we use the Netty connector in this mode?
from("netty4:tcp://localhost:7000?sync=false&allowDefaultCodec=false&encoder=#stringEncoder&decoder=#stringDecoder&clientMode=true&reconnect=true&reconnectInterval=1000")
.process(new Processor() {
public void process(Exchange exchange) throws Exception {
exchange.getOut().setBody("Hello " + exchange.getIn().getBody());
}
})
.to("netty4:tcp://localhost:7000?sync=false&allowDefaultCodec=false&encoder=#stringEncoder&decoder=#stringDecoder&clientMode=true");
and in the Hercules utility I see two connections for this request processing:
11:00:51 AM: 127.0.0.1 Client connected
11:00:51 AM: 127.0.0.1 Client connected
So this is what you want, right?
"After receiving a request from the server, I want to push it onto an MQ and wait on another MQ for the processed response. When the packet has been processed and is available in the MQ, I want to use the same connection to transmit the response to the socket."
So the first thing is probably to agree on some requirements. If you need to send a response back, i.e. a client is waiting to hear back about the request it sent, then it should be synchronous communication, not asynchronous.
So you can then simply write:
from("netty4:tcp://localhost:7000?sync=true&allowDefaultCodec=false&encoder=#stringEncoder&decoder=#stringDecoder&clientMode=true&reconnect=true&reconnectInterval=1000")
.process(new Processor() {
public void process(Exchange exchange) throws Exception {
exchange.getOut().setBody("Hello " + exchange.getIn().getBody());
}
})
.to("ACTIVE_MQ");
Of course, in the ActiveMQ part you need to set the reply-to destination and a timeout, so that if you don't get a response in time, the exchange times out and you can notify the client with a good error message.
What will happen is that the message is received and sent to an ActiveMQ queue with the appropriate reply-to properties. When the reply is received, the response is sent back over the same connection to the client.
I would advise you to read up on JMS request/reply in Camel, as it will help you set up the ActiveMQ part.
http://camel.apache.org/jms.html
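For illustration, a minimal sketch of what that route could look like with camel-jms request/reply, assuming an ActiveMQ component registered as "activemq" and hypothetical queue names "requests" and "responses"; replyTo and requestTimeout are standard camel-jms producer options:

import org.apache.camel.builder.RouteBuilder;

public class SocketToMqRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("netty4:tcp://localhost:7000?sync=true&allowDefaultCodec=false&encoder=#stringEncoder&decoder=#stringDecoder&clientMode=true&reconnect=true&reconnectInterval=1000")
            // sync=true makes the exchange InOut, so camel-jms switches to request/reply:
            // it sends the body to "requests", waits for a message on "responses" (or times
            // out after 20 s), and that reply is written back over the same socket connection.
            .to("activemq:queue:requests?replyTo=responses&requestTimeout=20000");
    }
}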

Resources