I asked a question two weeks ago. I deployed a 3-node Kaa cluster following the Kaa cluster setup guide; the databases are MongoDB and MariaDB. The first time I run the C client (I changed the code so that each client logs in to one of the three Kaa servers at random), it can log in to a Kaa server and send or receive events. But after I stop the client and start it a second time, it can no longer send or receive events.
I tried this again and again, and it fails almost every time, so I am now sure it is a bug. Since then I have done a lot of testing: the Java client, the Android client, and the C client all sometimes fail to send and receive events in a Kaa cluster. The databases are MongoDB and MariaDB. The C client log, the Java client log, and the Kaa server log are attached here. I hope this information helps in solving the problem.
I continued to reproduce and analyze the problem. The cause is that when the client logs in again, its cluster endpoint information is not updated on the other nodes. So when the client wants to send an event, its cluster endpoint information still points to the old node and there is no address to send to. This bug is probably caused by the event-related code in the cluster. I hope the Kaa team pays attention to this issue, because it affects Kaa application development as a whole.
I have a way to fix the bug. First I will describe the mechanism for delivering events in a Kaa cluster; I hope my understanding of it is correct. Suppose there is a cluster of three Kaa nodes K1, K2, K3 and a client A. Client A first logs in to K1, after which K1, K2, and K3 all hold cluster endpoint info about client A (K1 holds local cluster endpoint info, while K2 and K3 hold remote cluster endpoint info). Client A then stops and logs in again, this time on K2. K2 sends a message to K1 and K3 asking them to update their cluster endpoint info about client A; K1 and K3 report all of their cluster endpoint info back to K2, and K2 updates its local cluster endpoint info about client A.
The bug occurs when K1 and K3 do not update their cluster endpoint info about client A immediately, which in turn means K2 does not update its cluster endpoint info about client A either.
So my solution is: when client A logs in again on K2 and K2 sends a message to K1 and K3, remove the old cluster endpoint info about client A and add the new one. Alternatively, traverse all cluster endpoint info (local and remote) and update the entries for client A, as in the sketch below. I hope the Kaa team can verify this solution and fix the bug.
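A minimal sketch of what that could look like on each node, assuming a hypothetical route table. None of the names below (ClusterRouteTable, EndpointRouteInfo, onEndpointReattached) are real Kaa server classes; they only illustrate the two variants of the proposed fix:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only: the names here are illustrative, not the
// actual Kaa server API. It shows the two variants of the proposed fix.
class EndpointRouteInfo {
    final String nodeId;   // cluster node currently serving the endpoint
    final long updatedAt;  // timestamp of the latest registration

    EndpointRouteInfo(String nodeId, long updatedAt) {
        this.nodeId = nodeId;
        this.updatedAt = updatedAt;
    }
}

class ClusterRouteTable {
    // endpointId -> route info; every node keeps its own copy
    private final Map<String, EndpointRouteInfo> routes = new ConcurrentHashMap<>();

    // Variant 1: when K2 announces that client A re-attached to it,
    // drop the stale entry first, then add the new one.
    void onEndpointReattached(String endpointId, String newNodeId, long timestamp) {
        routes.remove(endpointId);
        routes.put(endpointId, new EndpointRouteInfo(newNodeId, timestamp));
    }

    // Variant 2: traverse/merge, keeping only the newest registration so
    // an out-of-date report from K1 or K3 cannot win over the fresh one.
    void refresh(String endpointId, String newNodeId, long timestamp) {
        routes.merge(endpointId,
                new EndpointRouteInfo(newNodeId, timestamp),
                (old, fresh) -> old.updatedAt < fresh.updatedAt ? fresh : old);
    }
}
```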
The latest news is in the issue Event can not send and receive in Kaa cluster, which I created.
As far as I can tell, Firestore uses protocol buffers when making a connection from an Android/iOS app. Out of curiosity I want to see what network traffic is going up and down, but I can't seem to make Charles Proxy show any real decoded info. I can see the open connection, but I'd like to see what's going over the wire.
Firestore's SDKs appear to be open source, so it should be possible to use them to help decode the output: https://github.com/firebase/firebase-js-sdk/tree/master/packages/firestore/src/protos
A few Google services (like AdMob: https://developers.google.com/admob/android/charles) have documentation on how to read network traffic with Charles Proxy, but I think your question is whether this is possible with Cloud Firestore, given that Charles has support for protobufs.
The answer is: it is not possible right now. The Firestore requests can be seen, but none of the data being sent can actually be read, since it uses protocol buffers. There is no documentation on how to use Charles with Firestore requests; there is an open issue (feature request) on this with the product team, which has no ETA. In the meantime, you can try the Protocol Buffers Viewer.
Alternatives for viewing Firestore network traffic could be:
From the Firestore documentation:
For all app types, Performance Monitoring automatically collects a trace for each network request issued by your app, called an HTTP/S network request trace. These traces collect metrics for the time between when your app issues a request to a service endpoint and when the response from that endpoint is complete. For any endpoint to which your app makes a request, Performance Monitoring captures several metrics:
Response time — Time between when the request is made and when the response is fully received
Response payload size — Byte size of the network payload downloaded by the app
Request payload size — Byte size of the network payload uploaded by the app
Success rate — Percentage of successful responses compared to total responses (to measure network or server failures)
You can view data from these traces in the Network requests subtab of the traces table, which is at the bottom of the Performance dashboard (learn more about using the console later on this page). This out-of-the-box monitoring includes most network requests for your app. However, some requests might not be reported or you might use a different library to make network requests. In these cases, you can use the Performance Monitoring API to manually instrument custom network request traces. Firebase displays URL patterns and their aggregated data in the Network tab in the Performance dashboard of the Firebase console.
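If you take the manual instrumentation route on Android, a custom network request trace looks roughly like this. It is only a sketch against the Firebase Performance Monitoring Android API as I understand it, and the URL is a placeholder:

```java
import com.google.firebase.perf.FirebasePerformance;
import com.google.firebase.perf.metrics.HttpMetric;

// Sketch: manually instrument one network request so its timing and
// payload sizes show up in the Performance dashboard. The URL is a
// placeholder; wrap your real HTTP call between start() and stop().
public class FirestoreRequestTrace {
    public static void traceRequest() {
        HttpMetric metric = FirebasePerformance.getInstance()
                .newHttpMetric("https://firestore.googleapis.com/v1/example",
                        FirebasePerformance.HttpMethod.POST);
        metric.start();
        // ... perform the request with your HTTP client ...
        metric.setHttpResponseCode(200);      // report the status you actually received
        metric.setResponsePayloadSize(512);   // and the real payload size in bytes
        metric.stop();
    }
}
```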
From a Stack Overflow thread:
The wire protocol for Cloud Firestore is based on gRPC, which is indeed a lot harder to troubleshoot than the websockets that the Realtime Database uses. One way is to enable debug logging with:
firebase.firestore.setLogLevel('debug');
Once you do that, the debug output will start getting logged.
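Since the question is about an Android/iOS app rather than the web SDK, it may be worth noting that the Android SDK has an equivalent switch. A minimal sketch (the wrapper class is just an example):

```java
import com.google.firebase.firestore.FirebaseFirestore;

// Minimal sketch: turn on verbose Firestore logging on Android so the
// traffic can be inspected in logcat instead of through a proxy.
public class FirestoreDebug {
    public static void enableLogging() {
        FirebaseFirestore.setLoggingEnabled(true);
    }
}
```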
Firestore uses gRPC for its API, and Charles does not support gRPC at the moment.
In this case you can use Mediator. Mediator is a cross-platform GUI gRPC debugging proxy, like Charles but designed for gRPC.
You can dump all gRPC requests without any configuration.
To decode the gRPC/TLS traffic, you need to download and install the Mediator root certificate on your device, following the documentation.
To decode the request/response messages, you need to download the proto files mentioned in your description, then configure the proto root in Mediator, following the documentation.
I have a single-page Azure web app that uses SignalR (Microsoft.AspNetCore.SignalR, version 1.0.0-alpha1-final) to broadcast events (login, logout, department creation, etc.) to connected clients.
I also scale my application to several instances at peak times, and I use a Redis cache backplane (Microsoft.AspNetCore.SignalR.Redis, version 1.0.0-alpha2-final) to distribute event broadcast messages to all connected clients.
I use an Angular front end (@aspnet/signalr-client: ^1.0.0-alpha2-final).
On Azure, I enabled diagnostic logging to capture information and error messages.
The above works fine, but when I scale up the application it is difficult to trace information or error messages, as I have to look through up to 10 instance application logs to find them.
My questions: How do I ensure the Redis cache logs error messages or information on all available instances rather than only on the instance where a client is connected? How do I know if a client has missed an event broadcast message? How do I ensure the SignalR server/hub logs all messages in all application instance logs?
Thank you in advance.
The best way to do this might be to use the redis-cli, run the MONITOR command and then pipe that to a file (or somewhere that can store the logs).
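If you prefer to capture that stream from code rather than a terminal session, something along these lines should work with the Jedis client; the host, port, and output path are placeholders, and you would add auth for a password-protected Azure Redis cache:

```java
import java.io.IOException;
import java.io.PrintWriter;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisMonitor;

// Sketch: stream the backplane's MONITOR output into one log file so you
// don't have to chase messages across individual app-service instances.
// Note that MONITOR has a measurable performance cost on the Redis server.
public class BackplaneMonitor {
    public static void main(String[] args) throws IOException {
        try (Jedis jedis = new Jedis("my-redis-host", 6379);
             PrintWriter out = new PrintWriter("backplane-monitor.log")) {
            // jedis.auth("your-access-key");  // uncomment for a protected cache
            jedis.monitor(new JedisMonitor() {
                @Override
                public void onCommand(String command) {
                    out.println(command);  // one line per command Redis sees
                    out.flush();
                }
            });  // blocks for as long as the monitor runs
        }
    }
}
```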
We understand that there is port tear-down during transactions, and that different ports may be used when sending messages over to the counterparties. When a node goes down, the messages are still sent but are queued in the MQ. Is there a recommended way to monitor these transactions/messages?
Unfortunately, you can't currently monitor these messages.
This is because Artemis does not store its queued messages in a human-readable/queryable format. Instead, the queued messages are stored in the form of a high-performance journal that contains a lot of information that is required in case the message queue's state needs to be restored from a hard bounce.
I approached this by finding the documentation here: https://docs.corda.net/node-administration.html#monitoring-your-node
which illustrates Corda flow metrics visualized using hawtio.
I just needed to download and start up hawtio, connect it to the net.corda.node.Corda process (or the specified node PID), and by going to the JMX tab I could see the messages in the queue.
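For completeness, the same data hawtio displays can be pulled programmatically over JMX. This is a rough sketch assuming the node exposes a standard JMX/RMI connector (the port and URL are placeholders); Artemis registers its queue MBeans under the org.apache.activemq.artemis domain:

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: connect to the node's JMX endpoint (placeholder URL) and list
// the Artemis MBeans, which is roughly what the hawtio JMX tab shows.
public class QueueInspector {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7005/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            Set<ObjectName> beans = mbs.queryNames(
                    new ObjectName("org.apache.activemq.artemis:*"), null);
            for (ObjectName bean : beans) {
                System.out.println(bean);  // drill into attributes such as MessageCount
            }
        } finally {
            connector.close();
        }
    }
}
```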
How long does the kaa-client store an event on the device that it has to send to other clients, in case it is not able to deliver it successfully after several attempts during a kaa-node server outage?
Is there a way I can set a timeout in the kaa-client that controls how long it retries sending an event after failed attempts?
Thanks
-Rizwan
The behaviour of the Kaa SDK on the Kaa client side in case of Operations server outage or inability to communicate depends on the failover model used on the client side. The default failover mechanism depends on the Kaa SDK type and platform.
See Kaa Documentation for more information.
I have implemented a notification service based on RabbitMQ before.
Recently I became interested in the OpenStack notification service, Marconi.
But I am not sure how a client can listen to a queue.
I mean, a client should be notified when a message is pushed into the queue.
Is there any example or tutorial that goes through the publisher/subscriber pattern?
Thanks.
The Marconi project (API v1) does not currently support Push technology, including long-polling. Depending on how your subscriber processes messages that appear in the queue, you will need to poll the service at an appropriate interval using either the List Messages or Claim Messages requests.
Keep in mind that polling requests may count towards the rate limits for the service, even when no messages are in the queue. Rate limits are provider-specific.
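As a rough illustration of that polling loop, here is a sketch using plain java.net; the endpoint, token, queue name, and interval are placeholders, and the paths and headers follow my reading of the Marconi v1 API, so check them against the spec for your deployment:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.UUID;

// Sketch: poll a Marconi v1 queue for new messages at a fixed interval.
// BASE, the auth token, and the queue name are placeholders.
public class QueuePoller {
    private static final String BASE = "http://marconi.example.com:8888/v1";
    private static final String CLIENT_ID = UUID.randomUUID().toString();

    public static void main(String[] args) throws Exception {
        while (true) {
            URL url = new URL(BASE + "/queues/my-queue/messages?echo=false&limit=10");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Client-ID", CLIENT_ID);           // required header
            conn.setRequestProperty("X-Auth-Token", "KEYSTONE_TOKEN"); // placeholder

            if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);  // JSON listing of queued messages
                    }
                }
            }
            // A 204 response means the queue is currently empty.
            conn.disconnect();
            Thread.sleep(5000);  // choose an interval that respects the rate limits
        }
    }
}
```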