I'm building a library with Redux that involves using a timer. I have an action creator that dispatches a START_TIMER event and should also call start on a timer object. The code looks like this:
// thunk action creator
const startTimer = () => (dispatch, getState) => {
  if (!getState().timer.isRunning) {
    externalTimerObject.start()
  }
  dispatch({
    type: 'START_TIMER'
  })
}
There are two issues I'm trying to solve:
1. If I want to log my actions to a database or localStorage so that I can replay them to get to a consistent app state, then even if rootState.timer.isRunning is true, my timer object will not be running.
2. The conditional if (!getState().timer.isRunning) requires that I know where in the root state timer is mounted. Since I'm building this as a library, I can't assume that timer is always going to be mounted directly onto the root state.
If I want to log my actions to a database or localStorage so that I can replay them to get to a consistent app state, then even if rootState.timer.isRunning is true, my timer object will not be running.
I think that this is actually correct by design. When you reproduce a recorded log, you want everything to happen exactly as it happened before in terms of the produced actions.
For example, rather than fire off the real AJAX requests from your computer, while replaying actions, you’ll probably want to replay the recorded AJAX responses that were dispatched during that user session in the past.
I think timer falls into the same category: from the Redux point of view, action history describes what happened “as a result” of side effects, and replaying the actions should be enough to get your app into the same state even if those side effects did not actually fire again.
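To make this concrete, here is a minimal sketch of what recording and replaying might look like (illustrative, not from the answer above); it assumes the recorder middleware is applied after redux-thunk, so only plain, serializable actions ever reach it:

const actionLog = [];

// Recording middleware: thunks are intercepted earlier in the chain,
// so only the plain actions they dispatch arrive here.
const recorder = store => next => action => {
  actionLog.push(action);
  return next(action);
};

// Replay: dispatch the recorded actions into a fresh store. Side
// effects like externalTimerObject.start() are NOT re-fired, yet the
// reducers alone rebuild state.timer.isRunning === true.
function replay(store, log) {
  log.forEach(action => store.dispatch(action));
}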
The conditional if (!getState().timer.isRunning) requires that I know where in the root state timer is mounted. Since I'm building this as a library, I can't assume that timer is always going to be mounted directly onto the root state.
If you’re building a library, you also probably shouldn’t depend on the thunk middleware being available. It seems like you depend on it in your action creator. It is hard to say more without understanding your exact use case.
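That said, one common way around the mount-point problem is to let the consuming app inject a selector; a sketch (createStartTimer and the getTimerState argument are illustrative names, not part of any Redux API):

// The library never assumes where its slice lives; the host app
// passes a selector that knows. The default is only a convenience.
const createStartTimer = (getTimerState = state => state.timer) =>
  () => (dispatch, getState) => {
    if (!getTimerState(getState()).isRunning) {
      externalTimerObject.start()
    }
    dispatch({ type: 'START_TIMER' })
  }

// Host app, with the slice mounted at state.myLib.timer:
const startTimer = createStartTimer(state => state.myLib.timer)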
I have an application where the web UI (React) updates the database to trigger a Cloud Function; the Cloud Function then runs and updates a completion value in the database.
How can I show a 'progress' indication and take it down when the Cloud Function has completed?
I initially thought I would do something like this pseudo code with Promises:
return updateDatabaseToTriggerFunctionExec()
  .then(() => listenForFunctionDoneEvent())
  .then(() => Promise.resolve());
However, I'm not sure how to know when the function has finished and updated a value. What is the recommended way to detect when a triggered Cloud Function has completed?
You'll have to implement something like a command-response model, using the database as a relay: you push commands into one location, and the function pushes results out to another location that the client that issued the command listens to. What makes this work is that the locations of the commands and responses are known to both the client and the server, and they share knowledge of the push id that was generated for the client's command.
I go over this architecture a bit during my session at Google I/O 2017 where I build a turn-based game with Firebase.
An alternative is to use an HTTP function instead, which has a more clearly-defined request-response cycle.
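As a sketch of that relay from the client's side, using the Web SDK (the /commands and /responses locations and the UI helpers are my assumptions, not a Firebase convention):

// Issue a command; the generated push id keys both the command and
// the response the Cloud Function will eventually write.
const commandRef = firebase.database().ref('commands').push({ action: 'import' });
const responseRef = firebase.database().ref('responses/' + commandRef.key);

showProgressIndicator();              // assumed UI helper
responseRef.on('value', snapshot => {
  if (snapshot.exists()) {            // the function has written its result
    responseRef.off();                // stop listening
    hideProgressIndicator();          // assumed UI helper
  }
});

The Cloud Function side would trigger on writes under /commands/{pushId}, do its work, and write its result to /responses/{pushId} when done.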
I'm just getting started with Meteor and I have a REST API hooked up with publish / subscribe that can periodically update per client. How do I run this behavior once globally and only refresh as long as a client is connected?
My first use case is periodically refreshing content while clients are active. My second use case is having some kind of global lock to make sure a task is only happening once at a time. I'm trying to use Meteor to make a deployment UI and I only want 1 deployment to happen at once.
publish/subscribe will work automatically only while clients are connected. However, do not put any functionality whose number of executions you need to control inside publish or subscribe functions; they might run an arbitrary number of times.
If you want some command to be executed by any client, use Meteor.methods on the server side, and call it explicitly with Meteor.call from a client template event.
To make sure that only one deployment happens at any given time, the simplest way would be to create another collection, called, for example, CurrentDeployments. Any time the deployment function in Meteor.methods is executed, check with CurrentDeployments.findOne whether there is an ongoing deployment, and only start a new one if none is running.
As a side bonus, subscribe to CurrentDeployments on the client to disable the 'deploy' button in case one is already running.
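A rough sketch of that lock (collection, method, and runDeployment are illustrative names standing in for your actual deployment logic):

CurrentDeployments = new Mongo.Collection('currentDeployments');

Meteor.methods({
  startDeployment(target) {
    check(target, String);
    // Only one deployment at a time: bail out if one is in progress.
    if (CurrentDeployments.findOne({ running: true })) {
      throw new Meteor.Error('already-running', 'A deployment is in progress');
    }
    const id = CurrentDeployments.insert({ target, running: true, startedAt: new Date() });
    this.unblock();                  // don't block this client's other method calls
    try {
      runDeployment(target);         // your actual deployment logic
    } finally {
      CurrentDeployments.update(id, { $set: { running: false } });
    }
  },
});

On the client you would call Meteor.call('startDeployment', 'production', err => { ... }). Note that the findOne/insert pair is not atomic, so for a strict guarantee under concurrent callers you'd want an atomic upsert or a unique index.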
New to firebase and trying to understand how things work. I have an android app and plan to use the offline support and I'm trying to figure out whether or not I need to use callbacks. When I make a call like:
productNode.child("price").setValue(product.price)
Does the call to setValue happen synchronously on the main thread while the sync to the cloud happens asynchronously? Or do both execute asynchronously on a background thread?
The Firebase client immediately updates its local copy of the data with the new value. As part of this it fires any local (value, child_*) events that are needed.
Sending of the data to the database happens on a separate thread. If you want to know when this has completed, you can register a CompletionListener.
If the server somehow cannot complete the write operation (typically because the write violates a security rule), the client will fire any additional events that are needed to get the app back into the correct state. So in the case of a value listener it will then fire a second value event with the previous value.
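For comparison, the same pattern in the JavaScript Web SDK, where set() accepts a completion callback analogous to Android's CompletionListener (the ref path, productId, and newPrice are illustrative):

const priceRef = firebase.database().ref('products/' + productId + '/price');

priceRef.set(newPrice, error => {
  if (error) {
    // The server rejected the write (e.g. a security rule violation);
    // local listeners have already been "rolled back" by a second event.
    console.log('Write failed:', error);
  } else {
    console.log('Write committed on the server');
  }
});
// Local value events fire immediately; this callback only fires after
// the round trip to the database completes.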
So I'm integrating SignalR and HotTowel, although really I think this is a matter of how to integrate with Durandal itself.
The issue is that I obviously have multiple views, and some of these views should respond to SignalR messages. The question is how to do this integration, given that SignalR event handlers have to be registered before I call SignalR's hub start method.
So take the example: I have view1 and view2. I want each to do something when a SignalR message is received, in the context of that view (so, say, update the DOM somehow). It's an SPA, obviously, so calling the SignalR start method for each view seems like a bad idea; starting SignalR once at boot sounds like the right plan, but at that point my views may not have been loaded, and how would I ensure that my event handlers have the right context for the page?
This is based on my understanding that all event handlers for SignalR have to be registered before I call start. Any thoughts, clever people of StackOverflow?
Edit to expand on the problem
Part of the website involves uploading files for parsing and processing to import into a database. I have created a view where the file is selected and uploaded (using FineUploader) to a WebApiController. The controller does the basic steps of checking the uploaded file and then starts an async task to actually do the parsing and processing, while immediately returning the basic "Yep that uploaded fine" message.
This causes the list of 'in progress' files to refresh, and the file appears with an 'Uploaded' status. As the async task runs, the file is parsed, then processed against a rules system, and finally imported into another back-end data store. As each of these status changes occurs, SignalR sends a message to the client to notify it of the change and update the status against the filename. For this to work I must attach a function to the event as it is received in SignalR, and that event handler needs some kind of reference to my view (actually viewmodel) so it can update the correct value.
As SignalR should be started once with a call to hub.Start(), I am trying to do it during the 'boot' phase. However, when my SPA starts, that view has not been loaded, and therefore neither has its viewmodel, so the function responsible for initialising SignalR has no knowledge of the view/viewmodel it must update.
Examples I've seen of using SignalR show it being used in one view, but surely that doesn't work if you need it in multiple views (you can't just keep calling hub.start(), can you)?
Sorry, if this still doesn't make sense I'll post some code or something.
If you use
$.connection.myHub.on("myMethod", function (/* ... */) { /* ... */ });
instead of
$.connection.myHub.client.myMethod = function (/* ... */) { /* ... */ };
you can add client-side hub methods after calling $.connection.hub.start().
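So in a Durandal viewmodel you can attach the handler in the view's lifecycle, after the single hub.start() at boot, and detach it again on the way out; a sketch (the hub name uploadHub and event name statusChanged are illustrative):

define(['jquery'], function ($) {
  var hub = $.connection.uploadHub;            // illustrative hub name

  function onStatusChanged(fileName, status) {
    // update this viewmodel's observables for the matching file
  }

  return {
    activate: function () {
      // Safe even though $.connection.hub.start() already ran at boot.
      hub.on('statusChanged', onStatusChanged);
    },
    deactivate: function () {
      // Detach so a navigated-away view stops reacting to messages.
      hub.off('statusChanged', onStatusChanged);
    }
  };
});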
I am writing a custom Windows Workflow Foundation activity, that starts some process asynchronously, and then should wake up when an async event arrives.
All the samples I've found (e.g. this one by Kirk Evans) involve a custom workflow service that does most of the work and then posts an event to the activity-created queue. The main reason for that seems to be that the only method to post an event [that works from a non-WF thread] is WorkflowInstance.EnqueueItem, and the activities don't have access to workflow instances, so they can't post events (from the non-WF thread where I receive the result of the async operation).
I don't like this design, as this splits functionality into two pieces, and requires adding a service to a host when a new activity type is added. Ugly.
So I wrote the following generic service that I call from the activity's async event handler, and that can be reused by various async activities (error handling omitted):
class WorkflowEnqueuerService : WorkflowRuntimeService
{
    // Posts an item to the given queue of a workflow instance;
    // callable from any thread, including async completion callbacks.
    public void EnqueueItem(Guid workflowInstanceId, IComparable queueId, object item)
    {
        this.Runtime.GetWorkflow(workflowInstanceId).EnqueueItem(queueId, item, null, null);
    }
}
Now in the activity code I can obtain and store a reference to this service, start my async operation, and when it completes, use the service to post an event to my queue. The benefit of this is that I keep all the activity-specific code inside the activity, and I don't have to add a new service for each activity type.
But seeing the official and internet samples doing it with specialized, non-reusable services, I would like to check whether this approach is OK, or whether I'm creating some problems here.
There is a potential problem here with regard to workflow persistence.
If you create long-running workflows that are persisted in a database so that the runtime can restart them, these workflows are not reloaded into memory until there is some external event that reloads them. But here they are responsible for triggering that event themselves, and they cannot do so until they are reloaded. And we have a catch-22 :-(
The proper way to do this is using an external service. And while this might feel like splitting the code into two places, it really isn't. The reason is that the workflow is responsible for the big picture, i.e. what should be done, and the runtime service is responsible for the actual implementation, or how it should be done. That way you can change the how without changing the why and when.
A follow-up: regardless of all the reasons why it "should be done" using a service, this will be directly supported by .NET 4.0, which provides a clean way for an activity to start asynchronous work while suspending the persistence of the activity.
See http://msdn.microsoft.com/en-us/library/system.activities.codeactivitycontext.setupasyncoperationblock(VS.100).aspx for details.