How to send a signal to a particular process instance along with details? - alfresco

I have two processes, P and Q, where I throw a signal (globally) from process P and catch it in process Q. When there are multiple instances, for example process instances P1 and P2 of process P and Q1 and Q2 of process Q, a signal thrown from one process instance is caught by multiple instances of the other process.
For example, if I throw the signal from Q1, it is caught by all instances of P (P1 and P2).
I tried the following approaches:
1] RuntimeService.signalEventReceived(String signalName, String executionId);
With this method I can target a particular instance, but I was not able to pass the details (data) along.
2]
POST runtime/signals
{
  "signalName": "My Signal",
  "tenantId": "execute",
  "async": true,
  "variables": [
    {"name": "testVar", "value": "This is a string"}
  ]
}
With this API I was able to pass the details (data) as variables, but I was not able to target a particular process instance.
Is there any way to send a signal to a particular instance along with details?
Thanks & Regards,
Shilpa V Kulkarni

You can use this overload of the same method, which also accepts process variables:
void signalEventReceived(String signalName, String executionId, Map<String, Object> processVariables);
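A minimal sketch of how the two pieces could be combined, assuming the Activiti/APS Java API is available; the process instance id used to look up the waiting execution is a hypothetical example value:
import java.util.HashMap;
import java.util.Map;

import org.activiti.engine.RuntimeService;
import org.activiti.engine.runtime.Execution;

public class SignalSender {

    private final RuntimeService runtimeService;

    public SignalSender(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    // Signal only the execution that belongs to one process instance, and pass data with it.
    public void signalInstance(String processInstanceId) {
        // Find the execution in the target instance that is subscribed to the signal.
        Execution execution = runtimeService.createExecutionQuery()
                .processInstanceId(processInstanceId)
                .signalEventSubscriptionName("My Signal")
                .singleResult();

        Map<String, Object> variables = new HashMap<>();
        variables.put("testVar", "This is a string");

        // Deliver the signal to that execution only, together with the variables.
        runtimeService.signalEventReceived("My Signal", execution.getId(), variables);
    }
}
The variables are set on the signalled execution, so the catching process instance can read testVar after the signal is received.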

Related

Examples of Spring Kafka batch processing with filter strategy and manual commit

I am planning to do batch processing using the Spring Kafka batch listener, and I am looking for samples for these two scenarios.
How do we implement a record filter strategy with batch processing? UPDATE: From the documentation, "In addition, a FilteringBatchMessageListenerAdapter is provided, for when you use a batch message listener." This is not clear to me. I did not see any container factory method for setting this FilteringBatchMessageListenerAdapter object or a filter implementation.
Here is my code for the batch listener filter strategy:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.setBatchListener(true);
    factory.setAckDiscarded(true);
    factory.setRecordFilterStrategy(new RecordFilterStrategy<Object, Object>() {
        @Override
        public boolean filter(ConsumerRecord<Object, Object> consumerRecord) {
            // log.info("Retrieved the record {} from the partition {} with offset {}",
            //         consumerRecord.value(), consumerRecord.partition(), consumerRecord.offset());
            return true; // returning true discards the record
        }
    });
    return factory;
}
How can we do a manual offset commit once we have retrieved a batch of messages in the consumer and all of them have been processed? If any failure occurs during batch processing, I just want to push that message to an error topic, but in the end I would like to commit the entire batch at once.
The other question that came to mind is how the above scenario works with a single consumer versus multiple consumers.
Case 1: single consumer
Let's say we have a topic with 5 partitions. When we subscribe to that topic, assume we get 100 messages, 20 from each partition. If we want to commit these offsets, does the acknowledgment object hold, for each partition, the offset of the last message?
Case 2: multiple consumers
With the same input as case 1, if we run as many consumers as there are partitions, does the ack object still hold each partition and the offset of its last message?
Can you please help me with this?
See FilteringBatchMessageListenerAdapter: https://docs.spring.io/spring-kafka/docs/current/reference/html/#filtering-messages
The simplest way to handle exceptions with a batch is to use a RecoveringBatchErrorHandler together with a DeadLetterPublishingRecoverer. Throw a BatchListenerFailedException to indicate which record in the batch failed; the offsets of the successful records are committed, and the remaining records (including the failed one) are redelivered until retries (if configured) are exhausted, at which point the failed record goes to the dead-letter topic and the rest are redelivered.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#recovering-batch-eh
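As a rough sketch of how that might be wired up (assuming spring-kafka 2.5+; the topic name, the process(...) helper, and the bean names are placeholders, not taken from the answer above):
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.BatchListenerFailedException;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.RecoveringBatchErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public RecoveringBatchErrorHandler batchErrorHandler(KafkaTemplate<Object, Object> template) {
    // Failed records are published to "<originalTopic>.DLT" by default.
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
    // Retry the failed record twice (1s apart) before sending it to the dead-letter topic.
    return new RecoveringBatchErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
}

// In the container factory: factory.setBatchErrorHandler(batchErrorHandler);

@KafkaListener(topics = "myTopic")
public void listen(List<ConsumerRecord<Object, Object>> records) {
    for (ConsumerRecord<Object, Object> record : records) {
        try {
            process(record); // hypothetical per-record business logic
        } catch (Exception e) {
            // Tells the error handler which record failed; offsets before it are committed.
            throw new BatchListenerFailedException("Processing failed", e, record);
        }
    }
}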
Yes, when the batch is acknowledged, the latest offset (+1) for each partition in the batch is committed.
If you have multiple consumers, the partitions are distributed across those consumers.
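For the manual-commit part of the question, a sketch of a batch listener with manual acknowledgment (assuming AckMode.MANUAL is set on the container properties; the topic name and processing logic are placeholders):
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;

// On the factory: factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);

@KafkaListener(topics = "myTopic")
public void listen(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
    records.forEach(this::process); // hypothetical per-record processing
    // A single acknowledge() commits, for every partition present in the batch,
    // the offset of that partition's last record (+1), regardless of whether one
    // consumer owns all partitions or they are spread across several consumers.
    ack.acknowledge();
}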

How do I throw an error asynchronously in RxJs?

I have an observer that tracks the questions and answers of a command-line interface. What I would like to do is inject an error into the observable, given a certain event in my code, in order to terminate the observable and its subscriptions downstream. It is unknown at what time this happens.
I've tried throwing errors from a merge of a Subject and the observable, but I cannot seem to get anything out of it.
Here is the relevant code:
this.errorInjector$ = new Subject<[discord.Message, MessageWrapper.Response]>();
....
this.nextQa$ = merge(
    nextQa$,
    this.errorInjector$.pipe(
        tap((): void => {
            throw (new Error('Stop Conversation called'));
        }),
    ),
);
// start conversation
Utils.logger.trace(`Starting a new conversation id '${this.uuid}' with '${this.opMessage.author.username}'`);
}

getNextQa$(): Observable<[discord.Message, MessageWrapper.Response]> {
    return this.nextQa$;
}

stopConversation(): void {
    this.errorInjector$.next(
        null as any
    );
}
this.nextQa$ is merged from the local nextQa$ and errorInjector$. I can confirm that stopConversation is being called and that downstream receives this.nextQa$, but I do not see any error propagate downstream when I try to inject the error. I have also tried this.errorInjector$.error() and the map() operator instead of tap(). For whatever reason I cannot get the two streams to merge and throw my error. Note: this.nextQa$ does propagate its own errors downstream.
I feel like I am missing something about how merge or Subjects work, so any help or explanation would be appreciated.
EDIT
Well, I just figured out that I need a BehaviorSubject instead of a regular Subject. I guess my question now is: why do I need a BehaviorSubject instead of a regular Subject just to throw an error?
EDIT 2
A BehaviorSubject ALWAYS throws this error, which is not what I want. It is due to the nature of its initial emission, but I still don't understand why I can't do anything with a regular Subject in this code.
First of all, if you want the Subject to work, you have to subscribe to it before the error (or anything else) is emitted, so there is a subscription-ordering problem in your code. If you subscribe immediately after this.nextQa$ is created, you shouldn't miss the error.
this.nextQa$ = merge(
    nextQa$,
    this.errorInjector$.pipe(
        tap((): void => {
            throw (new Error('Stop Conversation called'));
        }),
    ),
);
this.nextQa$.subscribe(console.log, console.error);
The problem is getting the object with stopConversation(): void from the dictionary object I have. The this object is defined and shows errorInjector$ as defined, but the debugger tells me that errorInjector$ has become undefined when I hover over the value. At least that is the problem; I'll probably need to ask another question about it.

How to run contract verification on a filtered transaction?

Assume a node is given a filtered transaction containing a few states, while some have been filtered out. How can the node run the smart contract's verify function on the states that are included in the transaction? I am trying to achieve something similar to ledgerTransaction.verify().
As of Corda 3, you cannot run the remaining states' contract verify methods, since verify requires a LedgerTransaction.
Instead, you have to retrieve the states from the FilteredTransaction and provide your own checking logic. For example:
val inputStateRefs = filteredTransaction.inputs
val inputStateAndRefs = inputStateRefs.map { inputStateRef ->
    serviceHub.toStateAndRef<TemplateState>(inputStateRef)
}
inputStateAndRefs.forEach { inputStateAndRef ->
    val state = inputStateAndRef.state
    // TODO: Checking...
}

How to make multiple API requests with RxJava and combine them?

I have to make N REST API calls and combine the results of all of them, or fail if at least one of the calls fails (returns an error or times out).
I want to use RxJava, and I have some requirements:
Be able to configure a retry for each individual API call under some circumstances. I mean, if I have retry = 2 and I make 3 requests, each one has to be retried at most 2 times, with at most 6 requests in total.
Fail fast! If one API call has failed N times (where N is the configured number of retries), it doesn't matter whether the remaining requests have finished; I want to return an error.
If I want to make all the requests on a single thread, I would need an async HTTP client, wouldn't I?
Thanks.
You could use the zip operator to zip all the requests together once they end and check there whether all of them succeeded.
private Scheduler scheduler;
private Scheduler scheduler1;
private Scheduler scheduler2;

/**
 * Since every observable in the zip observes on a different thread, all of them run in parallel.
 * By default Rx is not async; it only is if you explicitly use a Scheduler.
 */
@Test
public void testAsyncZip() {
    scheduler = Schedulers.newThread();
    scheduler1 = Schedulers.newThread();
    scheduler2 = Schedulers.newThread();
    long start = System.currentTimeMillis();
    Observable.zip(obAsyncString(), obAsyncString1(), obAsyncString2(),
                   (s, s2, s3) -> s.concat(s2).concat(s3))
              .subscribe(result -> showResult("Async in:", start, result));
}

private Observable<String> obAsyncString() {
    return Observable.just("Request1")
                     .observeOn(scheduler)
                     .doOnNext(val -> System.out.println("Thread " + Thread.currentThread().getName()))
                     .map(val -> "Hello");
}

private Observable<String> obAsyncString1() {
    return Observable.just("Request2")
                     .observeOn(scheduler1)
                     .doOnNext(val -> System.out.println("Thread " + Thread.currentThread().getName()))
                     .map(val -> " World");
}

private Observable<String> obAsyncString2() {
    return Observable.just("Request3")
                     .observeOn(scheduler2)
                     .doOnNext(val -> System.out.println("Thread " + Thread.currentThread().getName()))
                     .map(val -> "!");
}
In this example we just concatenate the results, but instead of doing that you can inspect the results and apply your business logic there.
You can also consider merge or concat.
You can find more examples here: https://github.com/politrons/reactive
I would suggest using an Observable to wrap all the calls.
Let's say you have a function that calls the API:
fun restAPIcall(request: Request): Single<HttpResponse>
And you want to call this n times. I am assuming you want to call it with a list of values:
val valuesToSend: List<Request>

Observable
    .fromIterable(valuesToSend)
    .flatMapSingle { valueToSend: Request ->
        restAPIcall(valueToSend)
    }
    .toList() // This converts: Observable<HttpResponse> -> Single<List<HttpResponse>>
    .map { responses: List<HttpResponse> ->
        // Do something with the responses
    }
With this you can call the REST API for each element of your list and get the results back as a list.
The other problem is the retries. You said you want to retry up to an individual cap per call. This is tricky; I believe there is nothing out of the box in RxJava for exactly this.
You can use retry(n), which retries the source up to n times in total, but that is not what you want.
There is also retryWhen { error -> ... }, where you can react to a given exception, but you would not know which element threw the error (unless you attach the element to the exception, I think).
I have not used the retries much, but it seems they retry the whole observable, which is not ideal.
My first approach would be something like the following, where you keep a count for each element in a dictionary (or something similar) and only retry while no element has exceeded your limit. This means you have to keep a counter and check each time whether any element has exceeded it.
val counter = valuesToSend.toMap()

yourObservable
    .map { value: String ->
        counter[value] = counter[value]?.let { it + 1 } ?: 0 // Update the counter
        value // Return the value again so you can use it later for the API call
    }
    .map { restAPIcall(it) }
    // Find a way to take yourObservable and re-add the element if it doesn't exceed
    // your limit (maybe in an `onErrorResumeNext` or something). Otherwise throw an error.
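A simpler alternative worth considering, sketched below in Java with RxJava 2 under the assumption that each call can be expressed as a Single (restApiCall and the request values are placeholders): apply retry(2) to each inner Single inside flatMapSingle, so the retry budget applies to that one call only, and since flatMap does not delay errors by default, the first call that exhausts its retries fails the whole stream, which gives the fail-fast behaviour asked for.
import java.util.Arrays;
import java.util.List;

import io.reactivex.Observable;
import io.reactivex.Single;

public class CombineCalls {

    // Placeholder for the real async HTTP call; assumed to return a Single per request.
    static Single<String> restApiCall(String request) {
        return Single.fromCallable(() -> "response for " + request);
    }

    static Single<List<String>> callAll(List<String> requests) {
        return Observable.fromIterable(requests)
                // retry(2) resubscribes only this call, for at most 2 extra attempts.
                .flatMapSingle(request -> restApiCall(request).retry(2))
                // flatMap does not delay errors, so the first call that exhausts its
                // retries terminates the stream with onError (fail fast).
                .toList();
    }

    public static void main(String[] args) {
        callAll(Arrays.asList("req1", "req2", "req3")).subscribe(
                responses -> System.out.println("All succeeded: " + responses),
                error -> System.err.println("At least one call failed: " + error));
    }
}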

Sending multiple messages in the same saga

Here's my scenario:
A modal fires that sends a message to NServiceBus. This modal can fire x times, but I only need to send the latest message. I can do this using multiple sagas (one per message), but for cleanliness I want to do it in one saga.
Here's my Bus.Send:
busService.Send(new PendingMentorEmailCommand()
{
    PendingMentorEmailCommandId = mentorshipData.CandidateMentorMenteeMatchID,
    MentorshipData = mentorshipData,
    JobBoardCode = Config.JobBoardCode
});
Command handler:
public void Handle(PendingMentorEmailCommand message)
{
    Data.PendingMentorEmailCommandId = message.PendingMentorEmailCommandId;
    Data.MentorshipData = message.MentorshipData;
    Data.JobBoardCode = message.JobBoardCode;
    RequestTimeout<PendingMentorEmailTimeout>(TimeSpan.FromSeconds(PendingMentorEmailTimeoutValue));
}
Timeout:
public void Timeout(PendingMentorEmailTimeout state)
{
    Bus.Send(new PendingMentorEmailMessage
    {
        PendingMentorEmailCommandId = Data.PendingMentorEmailCommandId,
        MentorshipData = Data.MentorshipData,
        JobBoardCode = Data.JobBoardCode
    });
}
Message handler:
public void Handle(PendingMentorEmailMessage message)
{
    ResendPendingNotification(message);
}
Inside my resend method, I need to send an email based on a check:
// is there another (newer) message in the queue?
if (currentMentorShipData.DateMentorContacted == message.MentorshipData.DateMentorContacted)
currentMentorShipData is a database pull that gets the values at the time of the message.
So if I send message one at 10:22, I expect it to fire at 10:25 when I do nothing. However, when I send a second message at 10:24, I only want one message to fire, at 10:27 (the updated one), and nothing to fire at 10:25, because my if condition should fail at 10:25. I think what is happening is that the saga Data object is being overwritten by the second message, causing both messages to fire with DateMentorContacted = 10:24 on both the first and the second message. So my question is: how can I persist each message's data individually?
Let me know if I can explain anything else; I'm new to NServiceBus and have tried to provide as much detail as possible.
Given the statement "I only need to send the latest message", I assume that is true per application-specific ID (maybe CandidateMentorMenteeMatchID in your case).
I would use that ID as the correlation ID in your saga, so that you end up with one saga instance per ID.
Next, I would have the saga itself filter out the unnecessary message sends.
This can be done by keeping a kind of sequence number that you store on the saga and also pass back in the timeout data. Then, in your timeout handler, compare the sequence number currently on the saga against the one that came back in the timeout data; if they differ, another message was received during the timeout. Only if the sequence numbers match do you send the message that ultimately causes the email to be sent.
