When I want one object out of a FirebaseListObservable using RxJS, should I still use subscribe?

I am kind of confused about which methods go with what and when to use them.
Right now I am using subscribe for basically everything, and it is not working for me when I want a quick static value out of Firebase. Can you explain when I should use subscribe vs. other methods, other than for a strict observable?

When working with async values you have a few options: promises, RxJS, callbacks. Each option has its own drawbacks.
When you want to retrieve a single async value it is tempting to use promises for their simplicity (.then(myVal => {})), but this does not give you access to things like timeout/throttle/retry behaviour. Rx streams, even for single values, do give you these options.
So my recommendation would be, even if you want to have a single value, to use Observables. There is no async option for 'a quick static value out of a remote database'.
If you do not want to use the .subscribe() method there are other options which let you activate your subscription, like .toPromise(), which might be easier for retrieving a single value using Rx.
const getMyObjPromise = $firebase.get(myObjId)
  .timeout(5000)  // fail if no value arrives within 5 seconds
  .retry(3)       // retry up to 3 times on error
  .toPromise();   // activate the stream and expose its single value as a promise

getMyObjPromise.then(obj => console.log('got my object'));

My guess is that you have a subscribe callback that contains a bunch of logic, as if it were a .then, and that you save the result to some local variable.
First: try to avoid any logic inside the subscribe callback; use stream operators before it and subscribe just to retrieve the data.
You are much more flexible that way, and it is much easier to unit-test those single parts of your stream than to test a whole component in itself.
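For example, a minimal sketch using the pipeable-operator style of newer RxJS versions (myObj$ and the displayName transformation are made-up names, not from the question):

import { of } from 'rxjs';
import { filter, map } from 'rxjs/operators';

// stand-in for a remote value; in the question this would come from Firebase
const myObj$ = of({ displayName: 'alice' });

// keep the logic in operators, where each step is easy to unit-test...
const displayName$ = myObj$.pipe(
  filter(obj => obj != null),
  map(obj => obj.displayName.toUpperCase())
);

// ...and keep the subscription itself trivial
displayName$.subscribe(name => console.log(name));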
Second: try to avoid manual subscriptions altogether - in Angular components they are prone to cause memory leaks if not unsubscribed.
Use the async pipe in your template instead and let Angular manage the subscription itself, as in the sketch below.
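A minimal sketch of the async pipe, assuming an Angular component and a hypothetical MyObjService that returns an observable:

import { Component } from '@angular/core';
import { Observable } from 'rxjs';
import { MyObjService } from './my-obj.service'; // hypothetical service

@Component({
  selector: 'my-obj',
  // the async pipe subscribes, re-renders on each value, and unsubscribes on destroy
  template: `<div>{{ (myObj$ | async)?.displayName }}</div>`,
})
export class MyObjComponent {
  myObj$: Observable<any>;
  constructor(service: MyObjService) {
    this.myObj$ = service.getMyObj(); // no manual .subscribe() anywhere
  }
}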

Related

How to start one Flux after completion of another?

How can I start one Flux after completion of another Flux? The second Flux doesn't depend on the result of the first.
I thought the then() operator should do the work, but it just returns a Mono. So what is the correct operator?
Flux.thenMany is the operator that you need
abstract Flux<Integer> flux1();
abstract Flux<String> flux2();

public Flux<String> flux2AfterFlux1() {
    return flux1().thenMany(flux2());
}
My reply is late, so you have probably either figured this out or moved on. I needed to do the same thing, and I seem to have achieved it by using Flux.usingWhen().
I need to obtain IDs from my database that exhibit a certain property. After retrieving these IDs, I need to get some information about them. Normally a join would work, but I am using MongoDB, where joins are horrible, so it is better to do separate, sequential queries.
So let's say I have a reactive Mongo repository with a method called Flux<String> findInterestingIds(). I have another method in the same repository, Flux<String> findInterestingInformation(List<String> interestingIds), that I need to feed the IDs found by the previous method. So this is how I handle it:
Mono<List<String>> interestingIdsMono = myRepo.findInterestingIds()
    .collectList();

Flux.usingWhen(interestingIdsMono,
    interestingIds -> myRepo.findInterestingInformation(interestingIds),
    interestingIds -> Flux.empty().doOnComplete(() -> log.debug("Complete")));
The cleanup function (third parameter) is something I don't quite understand how to use well yet, so I just logged completion; I do not need to emit anything extra when the flux completes.
Project Reactor used to have a compose function, now called transformDeferred, and I had some hopes for that, but I do not quite see how it would apply in this type of situation. At any rate, this is my attempt at it, so feel free to show me a better way!

Chaining Handlers with MediatR

We are using MediatR to implement a "Pipeline" for our dotnet core WebAPI backend, trying to follow the CQRS principle.
I can't decide if I should try to implement an IPipelineBehavior chain, or if it is better to construct a new Request and call MediatR.Send from within my Handler method (for the request).
The scenario is essentially this:
User requests an action to be executed, i.e. Delete something
We have to check if that something is being used by someone else
We have to mark that something as deleted in the database
We have to actually delete the files from the file system.
Option 1 is what we have now: A DeleteRequest which is handled by one class, wherein the Handler checks if it is being used, marks it as deleted, and then sends a new TaskStartRequest with the parameters to Delete.
Option 2 is what I'm considering: A DeleteRequest which implements the marker interfaces IRequireCheck, IStartTask, with a pipeline which runs:
IPipelineBehavior<IRequireCheck> first to check if the something is being used,
IPipelineBehavior<DeleteRequest> to mark the something as deleted in database and
IPipelineBehavior<IStartTask> to start the Task.
I haven't fully figured out what Option 2 would look like, but this is the general idea.
I guess I'm mainly wondering if it is code smell to call MediatR.Send(TRequest2) within a Handler for a TRequest1.
If those are the options you're set on going with, I say Option 2. Sending requests from inside existing MediatR handlers can be seen as a code smell: you're hiding side effects and breaking the Single Responsibility Principle. You're also coupling your requests together, and you should avoid situations where you can't send one type of request before another.
However, I think there might be an alternative. If a delete request can't happen without the validation and marking beforehand, you may be able to leverage a preprocessor (example here) for your TaskStartRequest. That way you have a single request that does everything you need. This even mirrors your pipeline example by simply leveraging the existing MediatR patterns.
Is there any need to break the tasks into multiple handlers? Maybe I am missing the point of MediatR. Wouldn't this suffice?
public async Task<Result<IFailure, ISuccess>> Handle(DeleteRequest request)
{
    var thing = await this.repo.GetById(request.Id);
    if (thing.IsBeingUsed())
    {
        return Failure.BeingUsed();
    }
    var deleted = await this.repo.Delete(request.Id);
    return deleted ? new Success(request.Id) : Failure.DbError();
}

Redux / Flux Pattern for Fetching Data When Store Updates

I have what I believe is a very common scenario: I'm building a dashboard of components that will be driven by some data source. At the top of the view is a series of filters (e.g. a date range). When the date range is updated, the components on the screen need to update their data based on the selected range. This in turn forces the individual components that are tied to that picker to fetch new data (async action/XHR) based on the newly selected range.
There can be many components on the screen, and the user may wish to add/remove available displays, so it is not as simple as always refreshing the data for all components, because they may or may not be present.
One way I thought to handle this was, in the action dispatched when a new date range is selected, to figure out which components are on screen (derived from the store) and dispatch async actions to fetch the data for those components. This seems like a lot of work to put into the DATE_CHANGED action.
Another alternative might be to detect date range changes in store.subscribe() callbacks from each of the components. This decouples the data fetching from the action that caused it. However, I thought it was bad practice (or even an error) to dispatch while dispatching. Sure, I can wrap it in a setTimeout, but that feels wrong too.
A third thing that came to mind was doing fetch calls directly in the components' store.subscribe() callbacks and dispatching when those return, but I thought this breaks the connect model.
This seems like a common pattern (fetching based on state changes), but I don't know where it's best to put that logic. Any good documentation/examples on the above problem?
Don't use store.subscribe for this. When DATE_CHANGED reaches the reducer it's meant for, simply change the application state (I'm assuming the date range is part of the store somehow). So you have something like state.rangeStart and state.rangeEnd.
You didn't mention what view rendering library you're using, so I can only describe how this is typically done with React:
The components know whether they are currently mounted (visible) or not, so Redux doesn't need to be concerned with that. What you need is a way to detect that state.rangeStart or state.rangeEnd changed.
In React there is a lifecycle hook for that (componentWillReceiveProps, or getDerivedStateFromProps in the newest release). In this handler you dispatch async Redux actions that fetch the data the component needs. Your view library will probably have something similar.
The components typically display some kind of "empty" or "loading" state while you're waiting for the new data. So a good practice is to invalidate/clear data from the store in the reducer that handles the DATE_CHANGED action. For example, if state.listOfThings (an array) depends entirely on the date range, you would set it to an empty array as soon as the date changes: return { ...state, listOfThings: [] }. This causes the components to show that the data is being fetched again.
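A minimal reducer sketch of that invalidation (the action shape and state fields are assumptions, not from the question):

const initialState = { rangeStart: null, rangeEnd: null, listOfThings: [] };

function dashboard(state = initialState, action) {
  switch (action.type) {
    case 'DATE_CHANGED':
      return {
        ...state,
        rangeStart: action.rangeStart,
        rangeEnd: action.rangeEnd,
        listOfThings: [], // cleared so components fall back to their loading state
      };
    default:
      return state;
  }
}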
When all the async Redux actions have gone through the REQUEST -> SUCCESS/FAILURE cycle and have populated the store with the data, connected components will automatically render it. This is kind of its own chapter; look into Redux async actions if you need more information.
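For illustration, a typical thunk-style action creator for that cycle (this assumes the redux-thunk middleware; fetchListOfThings and the endpoint are made-up names):

// async action creator following the REQUEST -> SUCCESS/FAILURE cycle
function fetchListOfThings(rangeStart, rangeEnd) {
  return async dispatch => {
    dispatch({ type: 'LIST_OF_THINGS_REQUEST' });
    try {
      const response = await fetch(`/api/things?start=${rangeStart}&end=${rangeEnd}`);
      const listOfThings = await response.json();
      dispatch({ type: 'LIST_OF_THINGS_SUCCESS', listOfThings });
    } catch (error) {
      dispatch({ type: 'LIST_OF_THINGS_FAILURE', error });
    }
  };
}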
The tricky part is the interdependencies between the components and the application state they're rendering. If two different dashboard components both want to fetch and render state.listOfThings for the current date range, you don't want to fetch this data twice. So there needs to be a way to detect that 1) the date range has changed but also 2) a request to fetch listOfThings is already on its way. This is usually done with boolean flags in the state, e.g. state.isFetchingListOfThings. The async actions fetching this data cause the reducer to set this flag to true, and your components need to be aware of it and dispatch actions conditionally: if (props.rangeStart !== nextProps.rangeStart && !nextProps.isFetchingListOfThings) { props.fetchListOfThings(); }.
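Putting the pieces together, a sketch of a connected component (reusing the hypothetical fetchListOfThings from above; the state shape is assumed):

import React from 'react';
import { connect } from 'react-redux';
import { fetchListOfThings } from './actions'; // hypothetical module path

class ThingsList extends React.Component {
  componentWillReceiveProps(nextProps) {
    // fetch only when the range actually changed and no request is in flight
    if (this.props.rangeStart !== nextProps.rangeStart &&
        !nextProps.isFetchingListOfThings) {
      this.props.fetchListOfThings(nextProps.rangeStart, nextProps.rangeEnd);
    }
  }
  render() {
    if (this.props.isFetchingListOfThings) return <div>Loading...</div>;
    return (
      <ul>
        {this.props.listOfThings.map(thing => <li key={thing.id}>{thing.name}</li>)}
      </ul>
    );
  }
}

export default connect(
  state => ({
    rangeStart: state.rangeStart,
    listOfThings: state.listOfThings,
    isFetchingListOfThings: state.isFetchingListOfThings,
  }),
  { fetchListOfThings }
)(ThingsList);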

Dynamic middleware in Redux

I'm using Redux to write a NodeJS app. I'm interested in allowing users to dynamically load middleware by specifying it at runtime.
How do I dynamically update the middleware of a running Redux application to add or remove middleware?
Middleware is not some separate extension, it's part of what your store is. Swapping it at runtime could lead to inconsistencies. How do you reason about your actions if you don't know what middleware they'll be run through? (Keep in mind that middlewares don't have to operate synchronously.)
You could try a naive implementation like the following:
const middlewares = [];

// a middleware that, on every dispatch, threads the action through
// whatever is currently in the `middlewares` array
const middlewareMiddleware = store => next => act => {
  const nextMiddleware = remaining => action => remaining.length
    ? remaining[0](store)(nextMiddleware(remaining.slice(1)))(action)
    : next(action);
  nextMiddleware(middlewares)(act);
};
// ... now add/remove middlewares to/from the array at runtime as you wish
// ... now add/remove middlewares to/from the array at runtime as you wish
but take note of the middleware contract, particularly the next argument. Each middleware receives a "pass to the next middleware" function as part of its construction. Even if you apply middlewares dynamically, you still need to tell them how to pass their result to the next middleware in line. Now you're faced with a lose-lose choice:
the action will go through all of the middleware registered at the time it was dispatched (as shown above), even if some of it was removed or other middleware was added in the meantime, or
each time the action is passed on, it goes to the next currently registered middleware (implementation is a trivial exercise), so it's possible for an action to go through a combination of middlewares that were never registered together at any single point in time.
It might be a good idea to avoid these problems altogether by sticking to static middleware.
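For completeness, a sketch of wiring the naive version up (the logger middleware and reducer are placeholders):

import { createStore, applyMiddleware } from 'redux';

const reducer = (state = {}, action) => state; // placeholder reducer
const store = createStore(reducer, applyMiddleware(middlewareMiddleware));

// middleware added at runtime applies to subsequent dispatches...
const logger = store => next => action => {
  console.log('dispatching', action.type);
  return next(action);
};
middlewares.push(logger);
store.dispatch({ type: 'SOME_ACTION' });

// ...and can be removed again
middlewares.splice(middlewares.indexOf(logger), 1);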
Use redux-dynamic-middlewares
https://github.com/pofigizm/redux-dynamic-middlewares
Attempting to change middleware on-the-fly would violate the principle of 'pure' actions and reducer functions, because it introduces side-effects. The resulting app will be difficult to unit-test.
Off the top of my head: it might be possible to create multiple stores (one for each possible middleware configuration) and use a parent store to switch between them, moving the data between the sub-stores when switching. Caveat: I've not seen this done, and there might be good reasons for not doing it.

Update document in Meteor mini-mongo without updating server collections

In Meteor, I have a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. That's fine using the this.added function in the publish function.
My problem is that I want to treat the bogus doc as if it were a real document. This gets troublesome specifically when I want to update it: for the real docs I run RealDocs.update, but doing that on the bogus doc fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement, and I'm not too fond of modifying the core code.
Right now I'm stuck at creating a BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult, since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server side collection so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the Evented Mind site) it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as you usually would, but on MyCollection._collection instead of just on MyCollection; MyCollection.update() thus becomes MyCollection._collection.update().
So, using a simple wrapper, one can pass the usual arguments to an update call and either update the collection as usual (which will call the server, which in turn will trigger your allow/deny rules) or add 'local' as the last argument to only perform the update in the client collection. Something like this should do it:
DocsUpdateWrapper = function() {
  // `arguments` is not a real array, so copy it first
  var args = Array.prototype.slice.call(arguments);
  var lastIndex = args.length - 1;
  if (args[lastIndex] === 'local') {
    // strip the 'local' flag and update only the client-side minimongo collection
    Docs._collection.update.apply(Docs._collection, args.slice(0, lastIndex));
  } else {
    // normal update: goes to the server and through your allow/deny rules
    Docs.update.apply(Docs, args);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too. I didn't try this function yet, but it should serve well as an example.)
The biggest benefit of this is, imo, that we can use the exact same calls to retrieve documents from the local collection, regardless of whether they are local or also live on the server. By adding a simple boolean to the doc we can keep track of which documents are only local and which are not (an improved DocsWrapper could check for that bool, so we could even omit passing the 'local' argument), so we know how to update them.
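A sketch of that improved wrapper (the isLocal flag name is an assumption):

// route an update to the right place based on a per-document flag
DocsUpdateWrapper2 = function(selector, modifier, options) {
  var doc = Docs.findOne(selector);
  if (doc && doc.isLocal) {
    // client-only document: update minimongo directly, skipping the server
    Docs._collection.update(selector, modifier, options);
  } else {
    Docs.update(selector, modifier, options);
  }
};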
There are some people working on local storage in the browser
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give it the correct update method (real/bogus), then call doc.update() rather than a general one.
You can put the code from this.added into the transform process.
You can also set up a local minimongo collection and insert into it in a method callback:
FoundAgents = new Meteor.Collection(null, transform: Agent.transformData)
FoundAgents.remove({})
Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify(err, null, 2)
  else
    _.each data, (item) ->
      FoundAgents.insert(item)
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at MeteorPad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().
