Here's my scenario:
A modal fires that sends a message to NServiceBus. This modal can fire x times, but I only need to send the latest message. I could do this using multiple sagas (one per message), but for cleanliness I want to do it in one saga.
Here's my Bus.Send
busService.Send(new PendingMentorEmailCommand()
{
    PendingMentorEmailCommandId = mentorshipData.CandidateMentorMenteeMatchID,
    MentorshipData = mentorshipData,
    JobBoardCode = Config.JobBoardCode
});
Command Handler:
public void Handle(PendingMentorEmailCommand message)
{
    Data.PendingMentorEmailCommandId = message.PendingMentorEmailCommandId;
    Data.MentorshipData = message.MentorshipData;
    Data.JobBoardCode = message.JobBoardCode;
    RequestTimeout<PendingMentorEmailTimeout>(TimeSpan.FromSeconds(PendingMentorEmailTimeoutValue));
}
Timeout:
public void Timeout(PendingMentorEmailTimeout state)
{
    Bus.Send(new PendingMentorEmailMessage
    {
        PendingMentorEmailCommandId = Data.PendingMentorEmailCommandId,
        MentorshipData = Data.MentorshipData,
        JobBoardCode = Data.JobBoardCode
    });
}
Message handler:
public void Handle(PendingMentorEmailMessage message)
{
    ResendPendingNotification(message);
}
Inside my Resend method, I need to send an email based on a check...
// is there another (newer) message in the queue?
if (currentMentorShipData.DateMentorContacted == message.MentorshipData.DateMentorContacted)
currentMentorShipData is a database pull that gets the values as they are at the time the message is handled.
So I run message one at 10:22 and expect it to fire at 10:25 if I do nothing. However, when I send a second message at 10:24, I only want one message to fire, at 10:27 (the updated one), and nothing to fire at 10:25, because my if condition should fail at 10:25. I think what's happening is that the saga data object is getting overwritten by the second message, so both messages fire with DateMentorContacted = 10:24. My question is: how can I persist each message's data individually?
Let me know if I can explain anything else; I'm new to NServiceBus and have tried to provide as much detail as possible.
Hearing the statement "I only need to send the latest message", I assume that this holds per application-specific ID (maybe CandidateMentorMenteeMatchID in your case).
I would use that ID as a correlation ID in your saga so that you end up with one saga instance per ID.
Next, I'd have the saga itself filter out the unnecessary message sending.
This can be done with a kind of sequence number that you store on the saga and pass back in the timeout data. In your timeout handler, compare the sequence number currently on the saga against the one that came in with the timeout data; a mismatch tells you that another message arrived during the timeout. Only if the sequence numbers match do you send the message that ultimately causes the email to be sent.
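The sequence-number idea above can be sketched framework-free (the class names and methods are hypothetical stand-ins, not the NServiceBus API): each command bumps a counter stored on the saga, the timeout carries a snapshot of that counter, and only a timeout whose snapshot still matches is allowed to send.

```javascript
// Hypothetical, framework-free sketch of the sequence-number pattern.
// PendingEmailSaga stands in for the saga; handleCommand/handleTimeout
// stand in for the command handler and the timeout handler.
class PendingEmailSaga {
  constructor() {
    this.sequence = 0; // stored on the saga data
    this.sent = [];    // stands in for Bus.Send
  }

  // analogue of Handle(PendingMentorEmailCommand): bump the counter and
  // hand a snapshot of it to the timeout state
  handleCommand(mentorshipData) {
    this.sequence += 1;
    this.mentorshipData = mentorshipData;
    return { sequence: this.sequence }; // the timeout state
  }

  // analogue of Timeout(PendingMentorEmailTimeout): only the timeout
  // carrying the latest sequence number may send
  handleTimeout(state) {
    if (state.sequence !== this.sequence) {
      return; // a newer command arrived during the timeout: stay quiet
    }
    this.sent.push(this.mentorshipData);
  }
}

// Two commands arrive before the first timeout fires: only the second
// timeout sends, matching the "10:25 stays silent, 10:27 fires" goal.
const saga = new PendingEmailSaga();
const t1 = saga.handleCommand({ dateMentorContacted: '10:22' });
const t2 = saga.handleCommand({ dateMentorContacted: '10:24' });
saga.handleTimeout(t1); // superseded, nothing sent
saga.handleTimeout(t2); // sends the latest data
console.log(saga.sent.length); // 1
```

This also removes the need for the database check in ResendPendingNotification: the saga itself knows whether it was superseded.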
Let's assume the following RecordInterceptor, which simply returns a copy of the received consumer record.
class CustomRecordInterceptor : RecordInterceptor<Any, Any> {
    override fun intercept(record: ConsumerRecord<Any, Any>): ConsumerRecord<Any, Any>? {
        return with(record) {
            ConsumerRecord(
                topic(),
                partition(),
                offset(),
                timestamp(),
                timestampType(),
                checksum(),
                serializedKeySize(),
                serializedValueSize(),
                key(),
                value(),
                headers(),
                leaderEpoch())
        }
    }
}
With such an interceptor in place, we experience lost records with the following Kafka listener.
Note: record is the result returned by the interceptor.
@KafkaListener(topics = ["topic"])
fun listenToEvents(
    record: ConsumerRecord<SpecificRecordBase, SpecificRecordBase?>,
    ack: Acknowledgment
) {
    if (shouldNegativelyAcknowledge()) {
        ack.nack(2_000L)
        return
    }
    processRecord(record)
    ack.acknowledge()
}
Whenever shouldNegativelyAcknowledge() is true, we would expect that record to be reprocessed by the listener after > 2 seconds. We are using ackMode = MANUAL.
What we see, however, is that the skipped record is not reprocessed by the listener: processRecord is never invoked for that record. After a while, the consumer group has a lag of 0.
While debugging, we found this code block in KafkaMessageListenerContainer.ListenerConsumer#handleNack:
if (next.equals(record) || list.size() > 0) {
    list.add(next);
}
next is the record after the interceptor treatment (so it's the copy of the original record)
record is the record before the interceptor treatment
Note that next and record can never be equal because ConsumerRecord does not override equals.
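That identity-only comparison can be sketched outside Java as well: without a custom equality method, a field-for-field copy never compares equal to the original, just as Java's default Object.equals compares references. (The object below is a hypothetical stand-in, not the real ConsumerRecord.)

```javascript
// Hypothetical stand-in for the record: plain objects compare by
// reference identity, mirroring Java's default Object.equals behavior.
function copyRecord(r) {
  return { ...r }; // field-for-field copy, but a new identity
}

const record = { topic: 'topic', partition: 0, offset: 42, value: 'payload' };
const next = copyRecord(record);

console.log(next === record);               // false: different identity
console.log(next.offset === record.offset); // true: same contents
```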
Could this be the cause for unexpectedly skipped records, maybe a bug even?
Or is it a misuse of the record interceptor to return a different ConsumerRecord object, not equal to the original?
It's a bug and it does explain why the remaining records are not sent to the listener - please open an issue on GitHub
https://github.com/spring-projects/spring-kafka/issues
Is there any way to pause firestore listener without removing it?
I have multiple Firebase listeners, some dependent on others, that change or start other listeners on data change. Let's say my first listener starts a second listener in its onSnapshot; the first listener is started in useEffect. Under a certain condition I may not want to change the second listener, so I need to discard the data-change update from the first listener.
If the condition is met (a button click), I discard data changes on the first listener for a few moments. Currently I'm doing this using a boolean with useRef. My React app works fine with dependent listeners like this. I could remove the listener, but I do not want to remove and recreate it.
I was wondering if there is a pause mechanism or method available for a listener. I think it would save a tiny read cost, because I'm not using the data sent to onSnapshot while paused.
Code example:
useEffect(() => {
  let firstListener, secondListener;

  function ListenerFunc(p) {
    // onSnapshot returns its unsubscribe function synchronously, so no await
    secondListener = firestore
      .collection("test")
      .doc(p)
      .onSnapshot((doc) => {
        // Need to discard unwanted change here.
        // Set on button click for 2 seconds, then back to: pauser.current = false.
        if (pauser.current) {
          console.log("paused for a moment.");
          return;
        } else {
          // update.
        }
      });
  }

  firstListener = firestore
    .collection("test")
    .doc("tab")
    .onSnapshot((doc) => {
      var p = doc.data().p; // get variable p
      ListenerFunc(p);
    });
  // cleanup.
}
Unfortunately this is not possible. If you need to stop listening for changes, even temporarily, you have to detach your listener and attach a new one when you want to start listening again; there is no pause mechanism for listeners.
You could open a feature request in Google's Issue Tracker so that the product team can consider it, but given that this has already been proposed in this GitHub feature request for the iOS SDK and was rejected, I don't see this changing anytime soon.
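The detach/re-attach workaround can be wrapped into a small helper so the call sites still read like pause/resume. This is a hypothetical sketch, not a Firestore API: it only assumes that the subscribe function starts the listener and returns its unsubscribe function, which is exactly the shape onSnapshot has.

```javascript
// Hypothetical wrapper: "pause" really means unsubscribe, and "resume"
// means attaching a brand new listener via the same subscribe function.
function pausableListener(subscribe) {
  let unsubscribe = subscribe();
  return {
    pause() {
      if (unsubscribe) {
        unsubscribe();       // detach: no more callbacks, no more reads
        unsubscribe = null;
      }
    },
    resume() {
      if (!unsubscribe) {
        unsubscribe = subscribe(); // re-attach a fresh listener
      }
    },
  };
}
```

With Firestore this would be used as `pausableListener(() => firestore.collection("test").doc("tab").onSnapshot(handler))`. Note that resuming delivers a fresh initial snapshot, so the read cost of that first snapshot comes back each time you resume.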
I have an observer that tracks questions and answers of a command line interface. What I would like to do is inject an error into the observer on a certain event in my code, in order to terminate the observer and its downstream subscription. When this happens is unknown ahead of time.
I've tried throwing errors from a merge of a subject and the observable, but I cannot seem to get anything out of it.
Here is the relevant code:
this.errorInjector$ = new Subject<[discord.Message, MessageWrapper.Response]>();
....
this.nextQa$ = merge(
    nextQa$,
    this.errorInjector$.pipe(
        tap((): void => {
            throw new Error('Stop Conversation called');
        }),
    ),
);
// start conversation
Utils.logger.trace(`Starting a new conversation id '${this.uuid}' with '${this.opMessage.author.username}'`);
}
getNextQa$(): Observable<[discord.Message, MessageWrapper.Response]> {
    return this.nextQa$;
}

stopConversation(): void {
    this.errorInjector$.next(
        null as any
    );
}
this.nextQa$ is merged from the local nextQa$ and the errorInjector$. I can confirm that stopConversation is being called and that downstream is receiving this.nextQa$, but I am not seeing any error propagate downstream when I try to inject the error. I have also tried this.errorInjector$.error() and the map() operator instead of tap(). For whatever reason I cannot get the two streams to merge and throw my error. Note: this.nextQa$ does propagate errors downstream.
I feel like I am missing something about how merge or subjects work so any help or explanation would be appreciated.
EDIT
Well, I just figured out that I need a BehaviorSubject instead of a regular Subject. I guess my question now is: why do I need a BehaviorSubject instead of a regular Subject just to throw an error?
EDIT 2
BehaviorSubject ALWAYS throws this error, which is not what I want. It's due to the nature of its initial emission, but I still don't understand why I can't do anything with a regular Subject in this code.
First of all, if you want the subject to work, you have to subscribe before the error (or anything else) is emitted, so there is a subscription-ordering problem in your code. If you subscribe immediately after this.nextQa$ is created, you shouldn't miss the error.
this.nextQa$ = merge(
    nextQa$,
    this.errorInjector$.pipe(
        tap((): void => {
            throw new Error('Stop Conversation called');
        }),
    ),
);
this.nextQa$.subscribe(console.log, console.error);
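The ordering problem can be made visible with a framework-free miniature of a plain Subject (a hypothetical stand-in, not the RxJS class): a plain Subject has no memory, so anything pushed before subscription is simply lost, which is also why a BehaviorSubject, which replays its latest value to new subscribers, appeared to "fix" it.

```javascript
// Minimal stand-in for a plain RxJS Subject: forwards values and errors
// to current subscribers only, with no replay of earlier emissions.
class MiniSubject {
  constructor() { this.observers = []; }
  subscribe(next, error) { this.observers.push({ next, error }); }
  next(value) { this.observers.slice().forEach(o => o.next(value)); }
  error(err) { this.observers.slice().forEach(o => o.error(err)); }
}

const subject = new MiniSubject();
const seen = [];

subject.error(new Error('too early'));  // nobody is listening yet: lost
subject.subscribe(
  v => seen.push(v),
  e => seen.push('err:' + e.message)
);
subject.error(new Error('Stop Conversation called')); // now it arrives

console.log(seen); // only the second error was observed
```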
The problem is getting the object that has stopConversation() from the dictionary object I have. The this object is defined and shows errorInjector$ is defined, but the debugger tells me errorInjector$ has become undefined when I hover over the value. At least that's the problem; I'll probably need to ask another question on that.
I have a method that checks for all unread messages belonging to a user. When the app loads, this number appears next to the "Messages" drop-down. In Meteor, how would I update this count when a new message comes in or when the user reads an unread message? Pretty much I need the method to send down the new count anytime a message's status changes, without refreshing the app itself.
I'm familiar with the Tracker.autorun functionality but I don't think it'll help with this situation. What's the best practice for approaching this?
Use publish/subscribe. It is always reactive. If you do not want all unread messages sent to the client straight away and counted there, you create a custom collection that just counts the number of unread messages and publishes that count. Look at the example a bit down the linked page that starts with
// server: publish the current size of a collection
This is exactly your use case.
I have exactly this setup for new messages. In my header I have:
<li>Messages <span class="counter">{{Messages.count}}</span></li>
And then I have a helper that returns the cursor:
Template.header.helpers({
    Messages: function () { return Messages.find(); }
});
In the old days, before David Weldon set me straight, I used to have a helper return the count; now I just refer to the count directly in the Blaze HTML template.
Now, in this approach I'm subscribing to the Messages collection so that new messages are transmitted to the client and can then be counted locally. This is on the assumption that they are going to be read soon. If you want to avoid this step then you should probably publish a Stats collection or include a stats key in the user object so that just the count itself can be synced via pub-sub.
You can just have a field like read and update it like this:
Method for marking one message as read:
markRead: function (messageId) {
    Messages.update(messageId, {
        $set: {
            read: true // this needs to be set to false when it's inserted
        }
    });
}
Bulk update method (assuming all messages have receiverId saved):
markAllRead: function () {
    Messages.update({ receiver: Meteor.userId(), read: false }, {
        $set: {
            read: true
        }
    }, { multi: true });
}
You can count the read: false ones to retrieve the count, and you don't have to write anything else.
Helper:
count: function () {
    // Even if your publish/subscribe is correct, the count we want is from
    // messages that are not read and whose receiver is the current user.
    return Messages.find({ receiver: Meteor.userId(), read: false }).count();
}
Event:
'click .elementClass': function () {
    // Both users see the messages and both can click. We want to update the
    // right message for the right user; otherwise the other user could mark
    // the message as read when the receiver is someone else, which they
    // shouldn't be able to do. Do a simple check on the client side, and
    // another check in the method if necessary.
    if (this.receiver === Meteor.userId()) {
        Meteor.call('markRead', this._id);
    }
}
Let me know if it solves your problem/answers all your questions.
I have a fairly simple Meteor application.
I tried to send a newsletter to about 3000 users on my list and things went wrong: a random set of users got multiple emails (between 1 and 41).
I shut the server down as soon as I noticed this behavior; around 1300 emails had been sent to 210 users. I am trying to figure out what happened and why.
Here is the code flow:
sendNow (client code) --> sendNow (server method) --> populateQue (server function) --> processQue (server function) --> sendEmails (server method)
Client side code :
'click .sendNow': function () {
    /* code that forms data object */
    Meteor.call('sendNow', data);
}
Server code : server/method.js
Meteor.methods({
    'sendNow': function (data) {
        if (userWithPermission()) {
            var done = populateQue(data);
            if (done)
                processQue();
            return { 'method': 'sendNow', 'status': 'ok' };
        }
    },
    'sendEmails': function (data) {
        this.unblock();
        var result = Mandrill.messages('send', data); // using external library
        SentEmails.insert(data); // save sent emails in a collection
    }
});
Function on server : server/utils.js
populateQue = function (data) {
    /* code to get all users in to array */
    MessageQue.remove(); // remove all documents from the que
    for (var i = 0; i < users.length; i++) {
        MessageQue.insert({ userId: users[i]._id });
    }
    return true;
}

processQue = function () {
    var messageQue = MessageQue.find({}).fetch();
    for (i = 0; i < messageQue.length; i++) {
        Meteor.call('sendEmails', data);
        MessageQue.remove({ _id: messageQue[i]._id }); // remove sent emails from the que
    }
}
My first hunch was that MessageQue got messed up because I remove items while processQue is using it, but I was wrong. I am unable to reproduce this behavior after a few tests:
Test 1: replaced Mandrill.messages('send', data) with Meteor._sleepForMs(1000); only one email per person was seen in the SentEmails collection.
Test 2: put Mandrill in test mode (had to use a different API key) and re-ran the code with a couple of log statements; only one email per person was seen in SentEmails and also in Mandrill's interface.
It's definitely not the external library; it's somewhere in my code or in the way I understood Meteor to work.
The only other thing I noticed is an error that occurred while accessing the SentEmails collection through another view. I have a view that displays SentEmails on the client with a date filter.
Here is the error :
Exception from sub sentEmailDocs id 9LTq6mMD4xNcre4YX Error:
Exception while polling query
{
"collectionName":"sent_emails",
"selector":{"date":{"$gt":"2015-07-09T05:00:00.000Z","$lt":"2015-07-11T05:00:00.000Z"}},
"options":{"transform":null,"sort":{"date":-1}}
}:
Runner error: Overflow sort stage buffered data usage of 33565660 bytes exceeds internal limit of 33554432 bytes
Is this the smoking gun? Would this have caused the random behavior?
I have put a couple of checks in place to prevent this from happening, but I am puzzled about what might have caused it and why. I will be happy to provide more information. Thanks in advance to whoever is willing to spend a few minutes on this.
Shot in the dark here, but the remove method takes a selector object, otherwise it doesn't do anything. MessageQue.remove() probably didn't clear the queue; you need MessageQue.remove({}). Test the theory by checking if (MessageQue.find().count() > 0) ... after the remove.
If you're set on having a separate collection for the queue, and I'm not saying that's a bad thing, I'd set the _id to be the userId. That way you can't possibly send someone the same message twice.
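Both fixes can be sketched framework-free (a hypothetical in-memory stand-in, not the Meteor/Mongo API): clearing requires an explicit "match everything" call, and keying queue entries by userId makes a duplicate insert fail loudly instead of silently queueing a second send, which mirrors Mongo's duplicate-key error on _id.

```javascript
// Hypothetical stand-in for the queue collection, keyed by _id.
class MessageQueue {
  constructor() { this.docs = new Map(); }           // _id -> doc
  insert(doc) {
    if (this.docs.has(doc._id)) {
      throw new Error('duplicate _id: ' + doc._id);  // can't queue twice
    }
    this.docs.set(doc._id, doc);
  }
  removeAll() { this.docs.clear(); }                 // remove({}), not remove()
  count() { return this.docs.size; }
}

const queue = new MessageQueue();
queue.insert({ _id: 'user1' });                      // _id is the userId

let duplicateRejected = false;
try {
  queue.insert({ _id: 'user1' });                    // second send attempt
} catch (e) {
  duplicateRejected = true;                          // rejected, not queued
}
console.log(duplicateRejected, queue.count()); // true 1
queue.removeAll();
console.log(queue.count()); // 0
```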