I am working on a WebSocket application that uses a React/Redux stack with redux-saga as the side-effects handler. The saga dispatches actions for incoming WebSocket messages, and specific slice reducers either:
1. Handle the message and update the state
2. Reply to the WebSocket message immediately, based on the current state
3. Raise an error (invalid message)
I would like to keep the business logic in the reducers, so I would like the reducers to raise WebSocket send events or set an error state, rather than splitting these two concerns off into the saga handler. The strategy I am looking at is to have global state slices state.outgoingMessages and state.errors that any slice reducer can set. If an outgoingMessage is set, the saga middleware would, on resumption, write it back to the WebSocket; if an error state is set, an Error component would render it.
I don't think I am violating any reducer/saga rules, but I have the dilemma of passing a common state down to the slice reducers. There is quite a bit of discussion around this topic;
https://redux.js.org/docs/recipes/reducers/BeyondCombineReducers.html#sharing-data-between-slice-reducers
provides a recipe for passing down additional state. I could have a higher-order reducer and pass the additional outgoingMessages and errors state down to the slice reducers. Would this be an anti-pattern, or is there a better way to handle the problem altogether?
Related
I am considering using deferred messages as a delay/timeout alerting mechanism within my Saga. I would do this by sending a deferred message at the same time as sending a message to a handler to do some time-consuming work.
If that handler fails to publish a response within the deferred timespan, the Saga is alerted to the timeout/delay via the arrival of the deferred message. This is different from a handler failure, as the handler is still running, just more slowly than expected.
The issue comes if everything runs as expected: it's possible that the Saga will complete all of its steps, and you'd find many deferred messages waiting to be delivered to a Saga that no longer exists. Is there a way to clean up deferred messages you know are no longer required?
Perhaps there is a nicer way of implementing this functionality in Rebus?
Once sent, deferred messages cannot be cancelled.
But Rebus happens to ignore messages that cannot be correlated with a saga instance when the saga handler does not allow that particular message type to initiate a new saga, so if the saga instance is gone, the message will simply be swallowed.
That's the difference between using IHandleMessages<CanBeIgnored> and IAmInitiatedBy<CannotBeIgnored> on your saga. 🙂
I stumbled upon Handling duplicate messages using the Idempotent consumer pattern:
Similar, but slightly different, is the Transactional Inbox pattern, which acknowledges the Kafka message receipt after a transaction that only INSERTs into a messages table (no business transaction) has concluded successfully, and then uses background polling to detect new messages in this table and subsequently trigger the real business logic (i.e. the message listener).
Now I wonder whether there is some Spring magic that just lets me provide a special DataSource config to track all received messages and discard duplicate message deliveries?
Otherwise, the application itself would need to take care of acknowledging the Kafka message receipt, message state changes, data cleanup of the event table, retrying after failure, and probably a lot of other difficult things that I have not yet thought about.
The framework does not provide this out of the box (there is no general solution that would work for everyone), but you can implement it via a filter, to avoid putting this logic in your listener:
https://docs.spring.io/spring-kafka/docs/2.7.9/reference/html/#filtering-messages
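A minimal sketch of that filtering approach, assuming Spring Boot with spring-kafka, a ConsumerFactory<String, String> bean defined elsewhere, and a hypothetical ProcessedMessageStore backed by the "received messages" table from the question (bean names and the message-id derivation are placeholders): a RecordFilterStrategy decides per record whether it was already processed, and the container factory discards duplicates before they ever reach the @KafkaListener.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.adapter.RecordFilterStrategy;

@Configuration
public class DeduplicationConfig {

    // Hypothetical store that checks (and is later updated with) processed message ids.
    public interface ProcessedMessageStore {
        boolean alreadyProcessed(String messageId);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory,
            ProcessedMessageStore store) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);

        // Returning true discards the record, so duplicates never reach the listener.
        RecordFilterStrategy<String, String> duplicateFilter =
                record -> store.alreadyProcessed(messageId(record));
        factory.setRecordFilterStrategy(duplicateFilter);

        // Acknowledge discarded records so their offsets are still committed.
        factory.setAckDiscarded(true);
        return factory;
    }

    // Assumption: a unique message id can be derived, e.g. from a header or from topic/partition/offset.
    private String messageId(ConsumerRecord<String, String> record) {
        return record.topic() + "-" + record.partition() + "-" + record.offset();
    }
}

Recording an id as processed, ideally in the same transaction as the business logic, remains the application's job, which is part of why there is no general out-of-the-box solution.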
In the scenario below, what would be the behavior of Axon?
The Command Bus receives the command
It creates an event
However, the messaging infrastructure is down (say, Kafka)
Does Axon have a re-queueing capability for the event, or any other alternative to handle this scenario?
If you're using Axon, you know it differentiates between Command, Event and Query messages. I'd suggest being specific in your question about which message type you want to retry.
However, I am going to assume it's about events, as you're mentioning Kafka.
If this is the case, I'd highly recommend reading the reference guide on the matter, as it describes how you can decouple Kafka publication from the actual event storage in Axon.
Simply put, use a TrackingEventProcessor as the means to publish events to Kafka, as this ensures a dedicated thread is used for publication instead of the same thread that stores the event. Additionally, the TrackingEventProcessor can be replayed, and can thus "re-process" events.
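To make that decoupling concrete, here is a hand-rolled illustration only, not the Axon Kafka extension's actual wiring (the reference guide covers that): the event is stored first, and a separate event handling component, registered under its own tracking processor, later forwards it to Kafka on a dedicated thread. OrderPlacedEvent, the topic name and the plain KafkaProducer setup are assumptions made for this sketch.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;

// Assigned to its own processing group; registering this group as a tracking
// processor means Kafka publication runs on the processor's own thread,
// decoupled from the thread that stored the event, and can be replayed.
@ProcessingGroup("kafka-publisher")
public class KafkaForwardingEventHandler {

    private final KafkaProducer<String, String> producer;

    public KafkaForwardingEventHandler() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Hypothetical domain event: forward it to Kafka after it has been stored.
    @EventHandler
    public void on(OrderPlacedEvent event) {
        producer.send(new ProducerRecord<>("order-events", event.getOrderId(), event.toString()));
    }

    public static class OrderPlacedEvent {
        private final String orderId;

        public OrderPlacedEvent(String orderId) {
            this.orderId = orderId;
        }

        public String getOrderId() {
            return orderId;
        }
    }
}

// Registering the group as a tracking processor (Axon 4), roughly:
//   configurer.eventProcessing().registerTrackingEventProcessor("kafka-publisher");

The key point remains the one from the answer above: event storage happens on the command-handling thread, while Kafka publication happens on the tracking processor's own thread and can be replayed if publication failed.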
Redux-saga middleware gives us the feeling that it runs on a separate thread. When it is told to wait for a certain action to be dispatched by the saga (generator function), it suspends the saga until the action of interest is dispatched. The JS runtime is single-threaded, so how does the saga middleware wait for an action to be dispatched without blocking everything else?
Waiting for actions works like this:
For any take() effect, the redux-saga middleware makes an entry in its takers array. The entry contains the pattern and the suspended generator.
On any action dispatch, the middleware checks the action against the takers array. Matching generators are scheduled to run.
This is asynchronous waiting that doesn't involve blocking anything.
As stated in the ReactiveX Introduction - Observables Are Less Opinionated
ReactiveX is not biased toward some particular source of concurrency or asynchronicity. Observables can be implemented using thread-pools, event loops, non-blocking I/O, actors (such as from Akka), or whatever implementation suits your needs, your style, or your expertise. Client code treats all of its interactions with Observables as asynchronous, whether your underlying implementation is blocking or non-blocking and however you choose to implement it.
I am not getting the part - "whether your underlying implementation is blocking or non-blocking".
Can you explain more of it? Or some example code to explain this?
Observable.fromCallable(() -> doSomeReallyLongNetworkRequest())
        .subscribe(data -> {
            showTheDataOnTheUI(data);
        });
Where do you think the doSomeReallyLongNetworkRequest() will run (thread-wise)?
Well, if you run this code on the main thread, the network call will run on the main thread!
"Observables Are Less Opinionated" means that multithreading is abstracted away from the actual work. The Subscriber doesn't know (and doesn't need to know) where the Observable will run; it can run on a thread pool, an event loop, or even in a blocking fashion.
That's why all Observable interaction happens through an async API.
Put that way it may seem like a drawback, but the opposite is true: it means you have greater control over where each part of your code runs, without exposing the operation itself, or the code that reacts to Observable emissions, to that knowledge.
This is done using the Schedulers mechanism in RxJava, together with the subscribeOn()/observeOn() operators:
Observable.fromCallable(() -> doSomeReallyLongNetworkRequest())
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(data -> {
            showTheDataOnTheUI(data);
        });
Now you've told the Observable to run the subscription code (doSomeReallyLongNetworkRequest()) on the IO Scheduler, which will use a dedicated thread for the network request, and on the other side you've told it to notify the Subscriber about emissions (onNext(), here showTheDataOnTheUI(data)) on the main thread (sorry for the Android specifics).
With this approach you have a very powerful mechanism to determine where and how each operation runs and where notifications are fired, and you can very easily ping-pong between different threads. This great power comes from the async API, plus the abstraction of threading into dedicated operators and the Scheduler mechanism.
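For instance, here is a runnable sketch of that thread ping-pong using only RxJava 2 schedulers (Schedulers.single() stands in for the Android main thread, and loadData()/parse() are made-up stand-ins for the real work):

import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;

public class ThreadHopping {

    public static void main(String[] args) throws InterruptedException {
        Observable.fromCallable(ThreadHopping::loadData)    // runs on an io() thread, because of subscribeOn()
                .subscribeOn(Schedulers.io())
                .observeOn(Schedulers.computation())
                .map(ThreadHopping::parse)                  // runs on a computation() thread
                .observeOn(Schedulers.single())             // stand-in for the "main" thread outside Android
                .subscribe(result ->
                        System.out.println(result + ", delivered on " + Thread.currentThread().getName()));

        Thread.sleep(2000); // keep the JVM alive long enough for the async chain to finish
    }

    private static String loadData() {
        return "loaded on " + Thread.currentThread().getName();
    }

    private static String parse(String data) {
        return data + ", parsed on " + Thread.currentThread().getName();
    }
}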
UPDATE: further explanation:
Client code treats all of its interactions with Observables as asynchronous
"Client code" here means any code that interacts with the Observable; the simplest example is the Subscriber, which is the client of the Observable. Because of the composable nature of Observables, you can get an Observable from some API without knowing exactly how it operates, so:
Observable.fromCallable(() -> doSomeReallyLongNetworkRequest())
can be encapsulated behind some service API as Observable<Data>, and when the Subscriber interacts with it, everything happens in an async fashion via the Observable events onNext, onError, onComplete.
whether your underlying implementation is blocking or non-blocking and however you choose to implement it.
"underlying implementation" refers to the operation the Observable do, it can be blocking work like my example of network call, but it can also be a notifications from the UI (clicks events), or update happens at some external module, all of which are non-blocking implementation, and again as of its async API nature the Subscribe shouldn't care, it just need to react to notifications without worrying about where and how the implementation (Observable body) will act.