Hyperledger 1.4.4: Private data is not purged - collections

I have a test network with a solo orderer and one private data collection. BlockToLive is set to 3. I started to add new entries with putPrivateData and putState. Initially I observed that the older private entries were purged. However, after a while this is no longer the case: old entries stay in the private data collection. With this command
peer channel fetch newest -o orderer:7050 -c exchange-channel last.block
I am able to see that the block number keeps increasing, so the blockToLive condition should actually be fulfilled.
What could be the reason? Which logs should be checked?
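For reference, the two kinds of writes mentioned above look roughly like the sketch below. It uses Java chaincode and a made-up collection name purely for illustration (the question does not state the chaincode language), and blockToLive: 3 itself lives in the collection definition JSON supplied when the chaincode is instantiated, not in the chaincode code.

import java.nio.charset.StandardCharsets;
import org.hyperledger.fabric.shim.ChaincodeStub;

public class ExchangeWrites {

    // Hypothetical collection name; the real one comes from the collection
    // definition JSON where blockToLive: 3 is configured.
    private static final String COLLECTION = "exchangeCollection";

    public void recordEntry(ChaincodeStub stub, String key, String value) {
        // Private write: stored in the collection's side database on member peers
        // and subject to purging once blockToLive blocks have passed.
        stub.putPrivateData(COLLECTION, key, value.getBytes(StandardCharsets.UTF_8));

        // Public write: stored in the channel's world state and never purged
        // by blockToLive.
        stub.putState(key, value.getBytes(StandardCharsets.UTF_8));
    }
}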

Related

Does firebase guarantee all events will be received, or just that state will be consistent? [duplicate]

Do event listeners guarantee that all data ever written to a path will be delivered to the client eventually?
For instance if I have a game client that pushes moves to the same path one after the other will the listening client receive all updates?
What would happen in this situation: client A pushes move 1 to game1/user1/move_data which client B is listening on; client A then immediately pushes another move updating the value at game1/user1/move_data.
Will the listening client be guaranteed to receive all moves pushed?
Currently I have a system that creates a new path per move, and then I call single-value listeners on each move as each client reaches that move in its state. This doesn't seem efficient, because once client A receives the most recent move that client B has made, client A begins listening on a path that doesn't exist yet.
The below quotes are from this link: https://firebase.google.com/docs/database/admin/retrieve-data
"The value event is used to read a static snapshot of the contents at a given database path, as they existed at the time of the read event. It is triggered once with the initial data and again every time the data changes. The event callback is passed a snapshot containing all data at that location, including child data. In the code example above, value returned all of the blog posts in your app. Everytime a new blog post is added, the callback function will return all of the posts."
The part about "as they existed at the time of the read event" causes me to think that if a listener is on a path, then the client will eventually receive every value ever written to that path.
There is also this line from the guarantees section which I am struggling to decipher:
"Value events are always triggered last and are guaranteed to contain updates from any other events which occurred before that snapshot was taken."
I am working with a language that does not have a Google-based SDK, and I am asking this question so I can further assess Firebase's suitability for my use case.
Firebase Realtime Database performs state synchronization. If a client is listening to data in a location, it will receive the state of that data. If there are changes in the data, it will receive the latest state of that data.
...if I have a game client that pushes moves to the same path one after the other will the listening client receive all updates?
If there are multiple updates before the Firebase server has a chance to send the state to a listener, it may skip some intermediate values. So there is no guarantee that your client will see every state change, there is just a guarantee that it will eventually see the latest state.
If you want to ensure that all clients (can) see all state changes, you should store the state changes themselves in the database.
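To make that last point concrete, here is a minimal Android/Java sketch of storing each move as its own child and listening with a ChildEventListener; the path and log tag are made up, and this is only one way to model it:

DatabaseReference movesRef = FirebaseDatabase.getInstance()
        .getReference("games/game1/user1/moves"); // hypothetical path

// Writer: push() creates a new, chronologically ordered key per move,
// so every move is kept instead of overwriting the previous one.
movesRef.push().setValue("move-1");

// Reader: onChildAdded fires once for each existing move and again for every
// new one, so no move is skipped the way coalesced value events can be.
movesRef.addChildEventListener(new ChildEventListener() {
    @Override
    public void onChildAdded(DataSnapshot snapshot, String previousChildName) {
        Log.d("Moves", "Received move: " + snapshot.getValue(String.class));
    }
    @Override
    public void onChildChanged(DataSnapshot snapshot, String previousChildName) { }
    @Override
    public void onChildMoved(DataSnapshot snapshot, String previousChildName) { }
    @Override
    public void onChildRemoved(DataSnapshot snapshot) { }
    @Override
    public void onCancelled(DatabaseError error) {
        Log.e("Moves", "Listener cancelled", error.toException());
    }
});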
Try this code to get the updated value from the Firebase database:
mFirebaseInstance = FirebaseDatabase.getInstance();
mFirebaseDatabase = mFirebaseInstance.getReference();
mFirebaseDatabase.child("new_title").setValue("Realtime Database");
mFirebaseDatabase.child("new_title").addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        String appTitle = dataSnapshot.getValue().toString();
        Log.e("Hey", appTitle);
        title.setText(appTitle);
    }

    @Override
    public void onCancelled(DatabaseError error) {
        // Failed to read value
        Log.e("Hey", "Failed to read app title value.", error.toException());
    }
});

Kafka Listeners stop reading from topics after a few hours

An app I have been working on has started causing issues in our staging and production environments that seem to be due to the Kafka listeners no longer reading anything from their assigned topics a few hours after the app starts.
The app is running in a Cloud Foundry environment and has 13 @KafkaListeners, each reading from multiple topics that match its given pattern. The number of topics is the same for every listener (each user of the app creates its own topic for each of the 13 listeners, following the pattern). Topics have 3 partitions. Auto-scaling is also used, with a minimum of 2 instances of the app running at the same time. One of the topics is under heavier load than the others, receiving between 1 and 200 messages per second. The processing time for each message is short, as we receive batches and the processing only writes the batch to a DB.
The current issue is, as stated, that the app works for a while after starting and then suddenly the listeners are no longer picking up messages, with no apparent error or warning in the logs. A temporary endpoint was created that uses KafkaListenerEndpointRegistry to look at the listener containers, and all of them seem to be running and have the proper partitions assigned. Doing a .stop() and .start() on the containers leads to one additional batch of messages being processed, and then nothing else.
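For reference, such a temporary endpoint might look roughly like the following sketch; the controller name, URL paths, and method names are made up, and only the registry calls reflect what is described above.

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ListenerDiagnosticsController {

    private final KafkaListenerEndpointRegistry registry;

    public ListenerDiagnosticsController(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    // Reports whether each container is running and which partitions it holds.
    @GetMapping("/listeners")
    public Map<String, String> listenerStates() {
        Map<String, String> states = new HashMap<>();
        for (MessageListenerContainer container : registry.getListenerContainers()) {
            states.put(container.getListenerId(),
                    "running=" + container.isRunning()
                            + ", partitions=" + container.getAssignedPartitions());
        }
        return states;
    }

    // Stops and restarts every container, as described in the question.
    @PostMapping("/listeners/restart")
    public void restartAll() {
        Collection<MessageListenerContainer> containers = registry.getListenerContainers();
        containers.forEach(MessageListenerContainer::stop);
        containers.forEach(MessageListenerContainer::start);
    }
}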
The following are the configs used:
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(kafkaConfig.getConfiguration());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(true);
    factory.setConcurrency(3);
    factory.getContainerProperties().setPollTimeout(5000);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}
The kafkaConfig sets the following settings:
PARTITION_ASSIGNMENT_STRATEGY_CONFIG: RoundRobinAssignor
MAX_POLL_INTERVAL_MS_CONFIG: 60000
MAX_POLL_RECORDS_CONFIG: 10
MAX_PARTITION_FETCH_BYTES_CONFIG: Integer.MAX_VALUE
ENABLE_AUTO_COMMIT_CONFIG: false
METADATA_MAX_AGE_CONFIG: 15000
REQUEST_TIMEOUT_MS_CONFIG: 30000
HEARTBEAT_INTERVAL_MS_CONFIG: 15000
SESSION_TIMEOUT_MS_CONFIG: 60000
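For completeness, a hedged sketch of what kafkaConfig.getConfiguration() might return; the bootstrap servers and deserializers are assumptions, and only the values listed above come from the actual setup.

public Map<String, Object> getConfiguration() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // placeholder
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, RoundRobinAssignor.class.getName());
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 60000);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
    props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, Integer.MAX_VALUE);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, 15000);
    props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 15000);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60000);
    return props;
}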
Additionally, each listener is in its own class and has the listen method as follows:
@KafkaListener(id = "<patternName>-container", topicPattern = "<patternName>.*", groupId = "<patternName>Group")
public void listen(@Payload List<String> payloads,
                   @Header(KafkaHeaders.RECEIVED_TOPIC) String topics,
                   Acknowledgment acknowledgment) {
    // processPayload...
    acknowledgment.acknowledge();
}
The spring-kafka version is 2.7.4.
Is there an issue with this config that could explain the behaviour? I have recently tried multiple changes with no success: changing these config settings around, moving the @KafkaListener annotation to class level, restarting the listener containers when they stop reading, and even doing all the message processing asynchronously and acknowledging the messages the moment they are picked up by the listener method. There were no errors or warning logs, and I wasn't able to see anything helpful in debug logging because of the number of messages sent each second. We also have another app running with the same settings in the same environments, but with only 3 listeners (different topic patterns), where this issue does not occur. It is under a similar load, as the messages received by those 3 listeners are output to the topic that causes the large load on the app with the problem.
I would very much appreciate any help or pointers to what else I can do, since this issue is blocking us heavily in our production. Let me know if I missed something that could help.
Thank you.
Most problems like this are due to the listener thread being stuck in user code someplace; take a thread dump when this happens to see what the threads are doing.
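If the platform makes running jstack against the container awkward, one option (my assumption, not part of the answer above) is to capture an in-process dump with the JDK's ThreadMXBean, for example:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpUtil {

    // Builds a plain-text thread dump that can be logged or returned from a
    // temporary diagnostics endpoint. Note that ThreadInfo.toString() truncates
    // very deep stacks; jstack or kill -3 gives the full trace when available.
    public static String dump() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) { // include lock info
            sb.append(info.toString());
        }
        return sb.toString();
    }
}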

Keep messages when I use the Messenger component with Doctrine

I am using the Messenger component configured with Doctrine to store the messages in a database table.
I see that in the default configuration the table "messenger_messages" is automatically created. In it there is a "delivered_at" field.
I want to keep all the messages that are dispatched but when the messages are consumed the corresponding records are automatically deleted.
When I run the process via php bin/console messenger:consume async -vv I see that a timestamp is written to the "delivered_at" field but then the entire record is deleted.
Is there a way to keep the records from being deleted, so that the date and time each message was sent is preserved?
Giving an answer to the original question and kind of ignoring the following clarifications:
Register a new transport that should hold the sent messages (named 'sent' here):
# config/packages/messenger.yaml
framework:
    messenger:
        transports:
            sent: 'doctrine://default?queue_name=sent'
Then create a new EventSubscriber that forwards the sent messages to the 'sent' transport:
<?php

namespace App\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\Messenger\Event\WorkerMessageHandledEvent;
use Symfony\Component\Messenger\Transport\Sender\SenderInterface;

class MessengerMessageConsumedSubscriber implements EventSubscriberInterface
{
    private SenderInterface $sentSender;

    public function __construct(SenderInterface $sentSender)
    {
        $this->sentSender = $sentSender;
    }

    public static function getSubscribedEvents(): array
    {
        return [
            WorkerMessageHandledEvent::class => [
                ['onWorkerMessageHandled', 10],
            ],
        ];
    }

    public function onWorkerMessageHandled(WorkerMessageHandledEvent $event)
    {
        $this->sentSender->send($event->getEnvelope());
    }
}
Hint the constructor to pick the appropriate sender for the 'sent' transport:
# config/services.yaml
services:
    App\EventSubscriber\MessengerMessageConsumedSubscriber:
        arguments: ['@messenger.transport.sent']
This way the messages (well, actually copies of your messages) will be kept and messenger_messages.created_at will hold the time of sending.
I agree with yivi though in that you should probably NOT keep the messages in messenger_messages but instead log the data somewhere else...
(Verified on symfony/messenger v5.4.7)
I may not have expressed the problem correctly. My application dispatches emails using the Messenger component.
Every email dispatched by the application is recorded in an audit file, so I can tell how much mail the application sends in a given period of time.
However, the audited number is not accurate. The application counts everything that is dispatched, not only the emails that actually reach their destination.
Messenger processes the queue and does not know whether the mail server actually sends the mail; it just dispatches.
What happens? The email addresses come from an HTML form, so addresses with malformed domains are counted by the application and by Messenger as sent emails.
What I want to obtain is an indicator of how many emails the Mail Server has successfully processed.
I suppose that the solution to my problem is not through the application or the Messenger component, but rather by obtaining some kind of audit from the Mail Server itself.
I tried what Jakumi suggested, but it captures all the messages that reach the queue, even ones with malformed domains like foo@hotmai or bar@aaa.com. The count in this table matches my audit file that records sent emails.
My problem is counting the emails that were actually sent.
Thank you very much for the comments and suggestions.
PS: I apologize for my English; I have used Google's translation services. I hope you understand.

.set / .update collection or document, but only locally?

I came across a unique use case where the following feature would come in very useful.
In essence, I have components that listen for specific changes in a document. My security rules allow reads, but all writes are disabled because database updates are only possible through a Cloud Function. Hence I was researching the docs to find out whether it is possible to do something like this.
/**
* This update should only happen locally / offline in order to update state of components
* that are listening to changes in this document
*/
myDocumentRef.update({ name: 'newname' });
/**
 * Then a cloud function is called that performs validation and, if it passes,
 * updates the document with the new data (the same data we set offline above).
 * Once that is done, listening components will receive the new data. If it is
 * unsuccessful, we would set the document back to the old data (again offline)
 * and show an error.
 */
await myCloudFunction();
Hence my question: is it possible to perform that update (and only that update) locally / offline?
This is basically an "optimistic update" flow, utilising Firebase as a global store of sorts.
Firestore does not provide a concept of "local only" updates. All writes will be synchronized with the server at the earliest convenience. If a write eventually fails, the promise returned by the API call (if still in memory) will be rejected. If the sync happens after the app restarts, the promise is lost, and you have no way of knowing if the write succeeded. Local writes that eventually fail will be reverted in the local cache.
What you will have to do instead is write your data in such a way that the local listener can tell the stage of the write, possibly by writing metadata into the document to indicate that stage, cooperating with the Cloud Function on keeping that up to date.
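As a purely illustrative sketch of that "stage in the document" idea (the field names, status values, and document path are all hypothetical, and the Android Java SDK is used here only because the surrounding examples are Java; the web API mirrors it):

DocumentReference docRef = FirebaseFirestore.getInstance()
        .collection("profiles").document("user1"); // made-up path

docRef.addSnapshotListener((snapshot, error) -> {
    if (error != null || snapshot == null || !snapshot.exists()) {
        return;
    }
    // The Cloud Function is assumed to maintain a "status" field next to the data.
    String status = snapshot.getString("status");
    if ("pending".equals(status)) {
        // Show the optimistic value but mark it as unconfirmed in the UI.
    } else if ("confirmed".equals(status)) {
        // The Cloud Function validated and committed the change.
    } else if ("rejected".equals(status)) {
        // Revert the UI to the last confirmed value and show an error.
    }
});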

