I'm using redux-thunk and starting to see some limitations.
Suppose I have actions PUSHER_CONNECT, PUSHER_CONNECTED, PUSHER_DISCONNECTED, PUSHER_LISTEN_TO_CHANNEL, PUSHER_MESSAGE_RECEIVED, etc. The state would have something simple that indicates the connection status.
{ connection: 'connected' } // OR
{ connection: 'disconnected' }
How would I be able to truly travel back and forth between these two states, PUSHER_CONNECTED and PUSHER_DISCONNECTED, since the pusher connection is still living somewhere? I was thinking of keeping the pusher object and related objects in the state and, on PUSHER_DISCONNECTED, setting them to null. But there's no guarantee those objects are immutable.
Another thought is that I would add a check for PUSHER_MESSAGE_RECEIVED: if state.connection !== 'connected', don't push the new message, simulating a "real" disconnection. Similarly, add a check for PUSHER_CONNECT: if the pusher object is there and connected, don't reconnect but just change the state to {connection: 'connecting'}.
How would you approach this?
You can subscribe to the store and manage something like a "two-way binding".
In the listener, check the real connection status and, if it contradicts the store state, execute the necessary effects. Be careful not to get into an infinite loop there.
This is the approach https://github.com/reactjs/react-router-redux takes for synchronizing the URL bar on time travel.
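For illustration, a minimal sketch of that idea in TypeScript, assuming the store's connection field is the source of truth; the PusherLike shape (connect, disconnect, connection.state) is an assumption standing in for your real client object, not a guaranteed pusher-js API:

import { createStore, AnyAction } from 'redux';

type ConnectionState = 'connected' | 'disconnected';
interface AppState { connection: ConnectionState; }

const reducer = (
  state: AppState = { connection: 'disconnected' },
  action: AnyAction
): AppState => {
  switch (action.type) {
    case 'PUSHER_CONNECTED': return { connection: 'connected' };
    case 'PUSHER_DISCONNECTED': return { connection: 'disconnected' };
    default: return state;
  }
};

const store = createStore(reducer);

interface PusherLike {
  connect(): void;
  disconnect(): void;
  connection: { state: string };
}

// Keep the real connection in sync with whatever the store says,
// e.g. after time travel flips it between 'connected' and 'disconnected'.
function bindPusherToStore(pusher: PusherLike): void {
  let syncing = false; // guard so our own effect doesn't re-trigger the listener

  store.subscribe(() => {
    if (syncing) return;
    const desired = store.getState().connection;
    const actual: ConnectionState =
      pusher.connection.state === 'connected' ? 'connected' : 'disconnected';
    if (desired === actual) return; // store and real connection already agree

    syncing = true;
    try {
      if (desired === 'connected') pusher.connect();
      else pusher.disconnect();
    } finally {
      syncing = false;
    }
  });
}

The real connect/disconnect calls will eventually dispatch PUSHER_CONNECTED / PUSHER_DISCONNECTED again, which is why the "already agree" check and the syncing guard matter.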
I want to replay my Axon events, not all of them but only a part.
A full replay is up and running, but for a partial replay I need a TrackingToken start position for the resetTokens() method. My problem is how to get this token for the partial replay.
I tried with GapAwareTrackingToken but this does not work.
public void resetTokensWithRestartIndexFor(String trackingEventProcessorName, Long restartIndex) {
    eventProcessingConfiguration
            .eventProcessorByProcessingGroup(trackingEventProcessorName, TrackingEventProcessor.class)
            .filter(trackingEventProcessor -> !trackingEventProcessor.isReplaying())
            .ifPresent(trackingEventProcessor -> {
                // shut down this streaming processor
                trackingEventProcessor.shutDown();
                // reset the tokens to prepare the processor with a start index for the replay
                trackingEventProcessor.resetTokens(
                        GapAwareTrackingToken.newInstance(restartIndex - 1, Collections.emptySortedSet()));
                // start the processor to initiate the replay
                trackingEventProcessor.start();
            });
}
When I use the GapAwareTrackingToken I get the exception:
[] - Resolved [java.lang.IllegalArgumentException: Incompatible token type provided.]
I see that there is also a GlobalSequenceTrackingToken I could use, but I don't see any documentation about when these can/should be used.
The main "challenge" when doing a partial reset, is that you need to be able to tell where to reset to. In Axon, the position in a stream is defined with a TrackingToken.
The source that you read from will provide you with such a token with each event that it provides. However, when you're doing a reset, you probably didn't store the relevant token while you were consuming those events.
You can also create tokens using any StreamableMessageSource. Generally, this is your Event Store, but if you read from other sources, it could be something else, too.
The StreamableMessageSource provides 4 methods to create a token:
createHeadToken - the position at the most recent edge of the stream, where only new events will be read
createTailToken - the position at the very beginning of the stream, allowing you to replay all events.
createTokenAt(Instant) - the most recent position in the stream that will return all events created on or after the given Instant. Note that some events may still have a timestamp earlier than the given one, as the order in which events are created and the order in which they are stored aren't guaranteed to be the same.
createTokenSince(Duration) - similar to createTokenAt, but accepting an amount of time to go back.
So in your case, createTokenAt should do the trick.
I have my reducer with a starting state of an empty array:
folderReducer(state:Array<Folder> = [], action: Action)
I'd like to populate the starting state, so when I do
store.subscribe(s => ..)
The first item I get comes from the database. I assume the way of doing this is with ngrx/effects, but I'm not sure how.
Your store always has the initial state that you define in the reducer function. The initial state's main purpose is to ensure that the application is able to start up without running into any null-pointer exceptions. It also sets up your application to start making the first API calls etc., so you can think of it as a technical initial state.
If you want to fill your store with API data on startup, you would do that the same way you add or modify data during any other action, just that the action of "initially loading data" is not triggered by some user interaction but rather:
either when your root-component loads
or as part of a service in the constructor
In case you want to prevent specific components from showing anything until your API call is done, you would have to adjust the display components to show or hide data based on your state (e.g. by implementing a flag such as initialDataLoaded in your state).
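As a rough sketch with @ngrx/effects (the Folder shape, FolderService.getFolders() and the action names are illustrative assumptions, not taken from your code):

import { Injectable } from '@angular/core';
import { Actions, createEffect, ofType } from '@ngrx/effects';
import { createAction, props } from '@ngrx/store';
import { Observable, of } from 'rxjs';
import { map, switchMap } from 'rxjs/operators';

export interface Folder { id: string; name: string; }

// Hypothetical data service; replace with however you talk to your database/API.
@Injectable({ providedIn: 'root' })
export class FolderService {
  getFolders(): Observable<Folder[]> {
    // stand-in: in a real app this would be an HttpClient call
    return of([]);
  }
}

export const loadFolders = createAction('[Folder] Load');
export const loadFoldersSuccess = createAction(
  '[Folder] Load Success',
  props<{ folders: Folder[] }>()
);

@Injectable()
export class FolderEffects {
  // Dispatch loadFolders once on startup (root component or a startup service);
  // the effect fetches the data and hands it to the reducer via the success action.
  loadFolders$ = createEffect(() =>
    this.actions$.pipe(
      ofType(loadFolders),
      switchMap(() => this.folderService.getFolders()),
      map(folders => loadFoldersSuccess({ folders }))
    )
  );

  constructor(private actions$: Actions, private folderService: FolderService) {}
}

Your folderReducer would then handle loadFoldersSuccess by returning action.folders as the new state, so the initial empty array is only visible until that action arrives.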
A dynamic initial state is now supported, see: https://github.com/ngrx/platform/blob/master/docs/store/api.md#initial-state-and-ahead-of-time-aot-compilation
Also see: https://github.com/ngrx/platform/issues/51
I would only do this if the database is local, otherwise the request will hold up loading of the application.
Is it possible to stop (kill) an asynchronous call?
In my app I have, on the client side, something like:
Meteor.call('doCalculation', function(err, result) {
//do sth with result
});
'doCalculation' may take a long time (this is OK). I don't want the user to start a new call when they already have one running; I want to allow the user to stop the current call and submit a new one. How do I do this correctly?
The only idea I have is to communicate between client and server using Mongo. Somewhere in the 'doCalculation' function I could observe a Mongo document/collection and, based on that, do something in the function (e.g. throw an exception). Do you have any better ideas?
You can use a semaphore for this purpose. When the semaphore is 1, requests are allowed to be sent. When the semaphore is 0, requests are not allowed to be sent. The semaphore should be 1 by default and just before you send the request, you need to set it to 0. When a response is successful, you set the semaphore back to 1.
As for the timeout: you could set a timeout using setTimeout after sending the request, like this:
if (semaphore) {
  var isTimedOut = false;
  var isSuccess = false;
  semaphore = 0; // no need for the var keyword, as this should be declared outside of this scope
  Meteor.call('doCalculation', function(err, result) {
    isSuccess = true;
    semaphore = 1; // allow the next call once this one has answered
    //do sth with result
  });
  setTimeout(function() {
    if (!isSuccess) {
      isTimedOut = true;
      //do something else, to handle the time out state
    }
  }, 10000);
}
This is tricky, because you cannot generally set timeouts from the client's point of view. You don't need to, for a bunch of architectural reasons. The most important thing is that if you lose network connectivity or the server crashes (two cases timeouts are designed to manage), the client is aware immediately because it is disconnected. You can use Meteor.status().connected if this happens often.
It sounds like you're running a long calculation on the server. My suggestion is to return a calculationId immediately, and then update a collection with progress, e.g., CalculationProgresses.update(calculationId, {$set: {progress: currentProgress}}) as you calculate. Your UI can then update the progress reactively, in the most convenient way possible.
Note that when you do run long calculations on the server, you need to occasionally "yield," giving other work the chance to happen. Node, on which Meteor is based, is tricky for long calculations if you don't master this notion of yielding. In Meteor, you can yield easily by updating a collection (e.g., your progress collection). This will solve lots of problems you're probably experiencing as you write your application.
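For illustration, a minimal sketch of that shape; CalculationProgresses, startCalculation, the cancelled flag and doOneSlice are illustrative names introduced here, not Meteor APIs:

import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { Random } from 'meteor/random';

export const CalculationProgresses = new Mongo.Collection('calculationProgresses');

function doOneSlice(step: number): void {
  // placeholder for one chunk of the real calculation
}

Meteor.methods({
  startCalculation() {
    const calculationId = Random.id();
    CalculationProgresses.insert({ _id: calculationId, progress: 0, cancelled: false });

    // Return the id right away; the heavy work continues on the server.
    Meteor.defer(function () {
      for (let step = 1; step <= 100; step++) {
        const doc = CalculationProgresses.findOne(calculationId);
        if (!doc || doc.cancelled) return; // the user asked to stop
        doOneSlice(step);
        // Writing progress also touches the DB, which gives other work a chance to run.
        CalculationProgresses.update(calculationId, { $set: { progress: step } });
      }
    });

    return calculationId;
  },

  cancelCalculation(calculationId: string) {
    CalculationProgresses.update(calculationId, { $set: { cancelled: true } });
  },
});

The client can subscribe to that collection and render progress reactively, and cancelCalculation covers the "stop the current call" part of the question.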
I think you need a server-side solution for this. If you go with a client-side solution, you don't handle 2 cases:
the user reloads their browser
the user uses 2 browsers
I would create these methods:
isCalculationActive() -- this checks if the user already has a calculation active. On the server, you can either keep that fact in memory or write it to the db. On the client, if this returns false, then you can proceed to call doCalculation(). If true, you can give the user a popup or alert to ask if they want to cancel and proceed.
doCalculation() -- this cancels any outstanding calculation by that user and starts a new one.
With these implemented, the user can reload their browser without affecting either the running calculation or correct behavior. And if they try a second browser, everything should still work as expected.
If you want to give the user the option to simply stop the job and not start a new one, then you can simply create:
cancelCalculation() -- this cancels any outstanding calculation by that user.
I have a problem that I can solve reasonably easily with classic imperative programming using state: I'm writing a co-browsing app that shares URLs between several nodes. The program has a module for communication that I call link, and one for browser handling that I call browser. Now, when a URL arrives in link, I use the browser module to tell the actual web browser to start loading the URL.
The actual browser will then trigger the navigation detection that the incoming URL has started to load, and hence it will immediately be presented as a candidate for sending to the other side. That must be avoided, since it would create an infinite loop of link-following to the same URL, along the lines of the following (very conceptualized) pseudo-code (it's JavaScript, but please consider that a somewhat irrelevant implementation detail):
actualWebBrowser.urlListen.gotURL(function(url) {
  // Browser delivered an URL
  browser.process(url);
});

link.receivedAnURL(function(url) {
  actualWebBrowser.loadURL(url); // will eventually trigger above listener
});
What I did first was to store every incoming URL in browser and simply eat the URL immediately when it arrives, then remove it from a 'received' list in browser, along the lines of this:
browser.recents = {}; // <--- mutable state
browser.recentsExpiry = 40000;

browser.doSend = function(url) {
  var now = (new Date).getTime();
  link.sendURL(url); // <-- URL goes out on the network
  // Side-effect, mutating module state, clumsy clean-up mechanism :(
  browser.recents[url] = now;
  setTimeout(function() { delete browser.recents[url]; }, browser.recentsExpiry);
  return true;
};

browser.process = function(url) {
  if (true /* sanity checks on `url` */) {
    var now = (new Date).getTime();
    var duplicate = browser.recents[url]; // timestamp of the last send, if any
    if (!duplicate) return browser.doSend(url);
    if ((now - duplicate) > browser.recentsExpiry) {
      return browser.doSend(url);
    }
    return false;
  }
};
It works but I'm a bit disappointed by my solution because of my habitual use of mutable state in browser. Is there a "Better Way (tm)" using immutable data structures/functional programming or the like for a situation like this?
A more functional approach to handling long-lived state is to use it as a parameter to a recursive function, and have one execution of the function responsible for handling a single "action" of some kind, then calling itself again with the new state.
F#'s MailboxProcessor is one example of this kind of approach. However it does depend on having the processing happen on an independent thread which isn't the same as the event-driven style of your code.
As you identify, the setTimeout in your code complicates the state management. One way you could simplify this is to instead have browser.process filter out any timed-out URLs before it does anything else. That would also eliminate the need for the extra timeout check on the specific URL it is processing.
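A rough sketch of that refactor in TypeScript, treating the recents map as a value that is passed in and returned rather than as module state (sendURL stands in for your existing link.sendURL):

type Recents = ReadonlyMap<string, number>; // url -> time it was last sent

const RECENTS_EXPIRY_MS = 40000;

// Drop anything older than the expiry window up front.
function prune(recents: Recents, now: number): Recents {
  const kept = new Map<string, number>();
  for (const [url, sentAt] of recents) {
    if (now - sentAt <= RECENTS_EXPIRY_MS) kept.set(url, sentAt);
  }
  return kept;
}

// Returns the new recents value instead of mutating module state.
function process(recents: Recents, url: string, now: number,
                 sendURL: (url: string) => void): Recents {
  const live = prune(recents, now);   // timed-out URLs disappear here, no setTimeout needed
  if (live.has(url)) return live;     // recently sent: swallow the echo from the browser
  sendURL(url);                       // URL goes out on the network
  return new Map(live).set(url, now); // record the send without mutating `live`
}

The caller keeps the current Recents value and replaces it with whatever process returns, so the only remaining mutation is a single reassignment at the edge.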
Even if you can't eliminate mutable state from your code entirely, you should think carefully about the scope and lifetime of that state.
For example might you want multiple independent browsers? If so you should think about how the recents set can be encapsulated to just belong to a single browser, so that you don't get collisions. Even if you don't need multiple ones for your actual application, this might help testability.
There are various ways you might keep the state private to a specific browser, depending in part on what features the language has available. For example in a language with objects a natural way would be to make it a private member of a browser object.
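For instance, a small TypeScript sketch of that encapsulation (names are illustrative):

class Browser {
  // Each Browser instance owns its own recents map, so independent
  // browsers (or test instances) cannot collide.
  private recents = new Map<string, number>();

  constructor(private sendURL: (url: string) => void,
              private expiryMs: number = 40000) {}

  process(url: string, now: number = Date.now()): boolean {
    const sentAt = this.recents.get(url);
    if (sentAt !== undefined && now - sentAt <= this.expiryMs) {
      return false; // we sent this recently; it's just the echo coming back
    }
    this.sendURL(url);
    this.recents.set(url, now);
    return true;
  }
}

Two Browser instances then track their recent URLs independently, which also makes the behaviour straightforward to unit test.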
Is there a way to make synchronous calls using RemoteObject in Flex?
All IO in Flex is asynchronous. The typical pattern to deal with this is to use an AsyncResponder. For instance:
var t:AsyncToken = remoteObject.methodCall();
t.addResponder(new AsyncResponder(resultEvent, faultEvent));
Think twice when you want it to be synchronous.
Do you know what synchronous means? It will FREEZE your application until it receives data, unless you are pretty sure that your remote call can return a value immediately (super fast network connection).
If your function calls depend on each other, I would suggest you implement a state machine, e.g.:
after the 1st async call, your state becomes STATE_1, and your next function call checks this state variable to decide the next move (ignore the current call or carry on).
My 2 cents.
If you want synchronous behavior, just add a wait after you make the call.
EDIT: I've added code for the chaining behavior I was talking about. Just replace the result handler each subsequent time you call the remoteObject.
...
remoteObject.function1(...);
...
private function resultHandler1(event:ResultEvent):void
{
    ...
    remoteObject.removeEventListener(ResultEvent.RESULT, resultHandler1);
    remoteObject.addEventListener(ResultEvent.RESULT, resultHandler2);
    remoteObject.function2(...);
}
private function resultHandler2(event:ResultEvent):void
{
    ...
}
I achieved the same thing in two ways: first, as said above, the use of state machines. It may get tricky at times. Second, the use of command queues. I think this is the best way to do it, but the downside is that the UI may not reflect the latest data during that time.
You should perhaps try to make one request with all the data you want to receive, and then make the different classes that need data listen for the correct data for that class.
ex:
// request
remoteobject.GetData();

// on received request
private function receivedData(evt:ResultEvent):void
{
    for each (var resultobject:ResultObjectVO in evt.result)
    {
        var eventModel:Object; // build the payload for this result type from resultobject
        var event:DataEvents = new DataEvents(resultobject.ResultType);
        event.data = eventModel;
        eventdispatcher.dispatchEvent(event);
    }
}
Something like this. Hope it helps.
No. Why would you wish to do that anyway?
Flex makes things asynchronous so that the user isn't forced to sit and wait while data is coming back.
It would be a very poor user experience if, each time an app requested data, the user had to wait for it to come back before anything else could happen.
From the comments:
No, you don't need synchronous behaviour. If you're making, say, 2 calls and call 2 comes in before call 1, but 2 relies on the data inside 1, then you're left with either not firing off event 2 till 1 comes back (this will slow down your app, much like synchronous events) or implementing a way to check in event 2's handler that event 1 has come back (there are many ways you could do this).
If you're firing off many events then why not have a wrapper class of some description that tracks your events and doesn't do anything on the responses until all events are back.
You can use the AsyncToken to keep track of individual requests, so if you are firing off loads at once then you can find out exactly what's come back and what's not.
You are all somehow mistaken, or not using Flex from Adobe: if you send 2 calls to the server, no matter if each has an individual request object, the second one will ONLY be returned after the first one finishes, even if the second one takes 1 millisecond to process. Just try the Fibonacci 1/40 example.
Maybe if you make a synchronous XMLHttpRequest by calling JavaScript from Flex, you can do this.