I'm creating an API method call which takes a Policy as an argument.
However, in my method I'd like to 'add onto' this policy by including my own retry Action(s) so that I can perform intermediate logging and telemetry of my own. Similar in concept to adding Click events to a Windows UI control.
Is there a way to modify a Policy after it's created?
Or, is there a hook mechanism where I can define my own callbacks in the Execute method perhaps?
A Polly Policy is immutable; it cannot be modified after configuration. However, there are several ways in which you can attach extra behaviour to a policy.
There are several possible approaches, depending on what you want to achieve.
Note: all examples in this answer refer to synchronous policies and policy hooks used when executing delegates synchronously, but the same behaviour exists for the async forms of policies.
Option 1: All policy types offer delegate hooks such as onRetry, onBreak, onCacheHit and similar. Extra behaviour (for example logging) can be added in these. The delegates attached to these hooks must be defined at policy configuration time. There are many examples in the Polly readme and the Polly-Samples project; the Polly wiki covers all such delegate hooks in depth.
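For example, a minimal sketch of attaching logging at configuration time via onRetry (the exception type and logging are illustrative):

var retryPolicy = Policy
    .Handle<HttpRequestException>()
    .Retry(3, onRetry: (exception, retryCount) =>
    {
        // extra behaviour attached at configuration time
        Console.WriteLine($"Retry {retryCount} after: {exception.Message}");
    });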
Option 2: If the fact that these delegates (onRetry etc.) must be defined at policy configuration time is a restriction, you can overcome it using Polly.Context. Most of the delegates such as onRetry exist in a form which takes Context as an input parameter. That Context is execution-scoped, can carry arbitrary data, and a Context instance can be passed in to the call to .Execute(...).
So you could set Context["ExtraAction"] = /* some Action */ and pass that Context in to .Execute(...). The onRetry delegate could then extract it with Action extraAction = (Action)context["ExtraAction"] (with some defensive checks) and invoke it with extraAction(). This lets you inject arbitrary behaviour into the onRetry delegate after the policy has been configured.
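A minimal sketch of this approach (the key name "ExtraAction" and the injected delegate are illustrative):

var policy = Policy
    .Handle<Exception>()
    .Retry(3, onRetry: (exception, retryCount, context) =>
    {
        // defensively extract and run the injected behaviour, if present
        if (context.TryGetValue("ExtraAction", out var value) && value is Action extraAction)
            extraAction();
    });

var context = new Context("MyOperation");
context["ExtraAction"] = (Action)(() => Console.WriteLine("Injected after configuration"));
policy.Execute(ctx => DoWork(), context); // DoWork is a placeholder for your own call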
Option 3: Perform your extra logic in the executed delegate. Of course, you could also write your own Execute(...) wrapper method which takes a policy and a delegate to execute, and wraps the execution in extra behaviour:
public TResult MyExecute<TResult>(ISyncPolicy policy, Func<TResult> toExecute)
{
    return policy.Execute(() =>
    {
        /* do my extra stuff */
        return toExecute();
    });
}
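Usage would then look something like this (retryPolicy and CallRemoteApi are illustrative names):

var result = MyExecute(retryPolicy, () => CallRemoteApi());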
The documentation for ChannelReader<T>.ReadAllAsync simply says:
Creates an IAsyncEnumerable that enables reading all of the data
from the channel.
Does the enumerable returned represent a snapshot of the Channel at the time of calling, or is it a 'live' view of the Channel that will behave correctly if items are added/removed by other actors while I am enumerating it?
According to the source code, it enumerates the live channel, not a snapshot.
Of course, derived classes may override this behavior; you need to examine your specific instance.
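A minimal sketch illustrating that live behaviour (assuming a simple console app):

using System;
using System.Threading.Channels;
using System.Threading.Tasks;

var channel = Channel.CreateUnbounded<int>();

// The consumer starts enumerating before anything is written; items written
// later are still received, because ReadAllAsync enumerates the live channel.
var consumer = Task.Run(async () =>
{
    await foreach (var item in channel.Reader.ReadAllAsync())
        Console.WriteLine(item);
});

await channel.Writer.WriteAsync(1);
await channel.Writer.WriteAsync(2);
channel.Writer.Complete(); // enumeration ends once the channel completes
await consumer;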
I have a PASOE Business Entity set up as a web service. I'm trying to determine how to create a custom header that will allow me to pass in a hashed token. Is this something that I need to upgrade to 11.7.4 for, i.e. the DOH (OpenEdge.Web.DataObject.DataObjectHandler)? Or is it something that I simply add into a method defined in the class? Apologies for the lack of code to illustrate my situation, but I'm not sure where to begin.
If you're using a Business Entity with the web transport then you're using the DOH, and the below applies. If you're using the rest transport then you are not using the DOH, and are more limited in your choices.
There is documentation available on the DOH at https://documentation.progress.com/output/oe117sp/index.html#page/gssp4/openedge-data-object-handler.html - it's for 11.7.4 but largely applies to all versions (that is, 11.6.3+). It describes the JSON mapping file, for which you'll need to create an override of the default, generated one.
If you want to use the header's value for all operations, then you may want to use one of the DOH's events. There's an example of event handlers at https://github.com/PeterJudge-PSC/http_samples/blob/master/web_handler/data_object_handler/DOHEventHandler.cls ; you will need to start that handler in a session startup procedure using new DOHEventHandler() (as written, that code makes itself a singleton).
You can now add handling code for the Invoking event which fires before the business logic is run.
If you want to pass the header value into the business logic you will need to
Copy the generated mapping file <service>.gen to <service>.map in the same folder. The .gen files are generated and will be overwritten by the tooling.
In the .map file, add a new arg entry. This must be in the same order as the parameters to the BE's method.
The JSON should look something like the below; this will read the value of the header and pass it as an input parameter into the method.
{ "ablName": "<parameter_name>",
"ablType": "CHARACTER",
"ioMode": "INPUT",
"msgElem": {"type": "HEADER", "name": "<http-header-name>"}
}
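For illustration only (the method and parameter names are hypothetical), the corresponding Business Entity method would then take a matching CHARACTER input parameter, which the DOH fills from the header:

/* the first parameter receives the HTTP header value via the mapping above */
method public void ReadCustomers(
    input  pcAuthToken as character,
    output dataset dsCustomer):

    /* validate pcAuthToken here, then fill the dataset */

end method.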
Namely, what are the advantages and disadvantages of the following approaches to building a server-side database API in Meteor?
Method-based
import Db from 'Db';
Meteor.methods({ insert(data) { return Db.insert(data); } });
Subclass-based
import {Mongo} from "meteor/mongo";
class MyCollection extends Mongo.Collection {
  insert(data, callback) {
    // custom logic goes here
    return super.insert(data, callback);
  }
}
This problem is addressed in the answer below; there is a similar question for further reading: Meteor method vs. deny/allow rules
This is mainly a matter of ease vs control. Subclassing may be easier for simple things, and methods are more powerful.
This can also be affected by your state of mind (or affect it): CRUD vs. action-based mutation.
insert/update/remove go well with a CRUD state of mind, while methods map naturally to action-centric RPC mutators.
Eventually, this is a matter of personal preference, so I will try to give a short factual description and let readers decide based on their taste.
Subclassing
By default, Meteor automatically generates mutation methods (insert, update, remove) when a collection is instantiated.
Those methods are called behind the scenes when MyCollection.insert(mutator, cb) is called on the client side (outside client-side method code). When the call arrives at the server, the data is first passed through the allow/deny rules and then the mutation is executed.
When subclassing, you override those methods and get a 'hook' into the process.
Using methods
When defining a Meteor method you get full control of the process.
You set the parameters and the name of the method and you can perform the validation and authorization as you wish.
You can also create a method stub for client-side use, which provides an optimistic UI update until the result of the server-side method execution is received.
You can use something like a ValidatedMethod (the mdg:validated-method package) to add extra validation logic and modularity to your methods.
You can also prevent the creation of the default mutation methods by setting a false value for the defineMutationMethods option when instantiating the collection. You can also forbid direct mutation from the client by supplying the appropriate deny rules.
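For example (collection and option names are illustrative):

// opt out of the auto-generated insert/update/remove methods...
const MyCollection = new Mongo.Collection('things', { defineMutationMethods: false });

// ...or keep them but deny all direct client-side mutation
MyCollection.deny({
  insert: () => true,
  update: () => true,
  remove: () => true,
});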
While subclassing allows you to keep using MyCollection.insert(...), etc. on the client, with methods you need to call the method by name, with the arguments that you defined, in order to mutate data.
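Client-side usage of the two approaches would then look something like this (names illustrative):

// subclass-based: the familiar collection API keeps working
MyCollection.insert({ title: 'hello' });

// method-based: mutate via a call to the named method
Meteor.call('insert', { title: 'hello' }, (err) => {
  if (err) console.error(err);
});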
I need to list expired objects in specific folders or collections, including for anonymous users.
As you know, portal_catalog returns only brains for content that is not expired. It's a useful behavior, but not in this case...
To force the catalog to also return expired content, we have to pass a specific parameter: show_inactive.
Browsing the folder_listing (and related) view code, I noticed that it's possible to pass optional parameters (contentFilter) via the request to the query/getFolderContents. It's a nice feature for customizing the query while avoiding the creation of very similar listing templates.
I suppose it's necessary to create a marker interface to mark the contexts (folders or collections) where I want to also list expired content, e.g. IListExpired.
I can imagine two ways:
1) make a subscriber that intercepts before_traverse and, in the handler, test whether the context implements IListExpired. If it does, set:
request.set('folderListing', {'show_inactive':True})
2) make a viewlet for IListExpired that, when called, sets:
request.set('folderListing', {'show_inactive':True})
What's the best way? I suppose the first one could introduce unnecessary overhead.
Vito
AFAIK, these are two separate things: folderListing uses a method available to all CMF-based folderish content types, while show_inactive is an option of the Plone catalog, so you're not going to make it work the way you're planning.
I think you should override these views and rewrite the listing using a catalog call.
You'd better use a browser layer for your package to do so, or a marker interface as you're planning.
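A minimal sketch of such an overridden listing view (names are illustrative; note that show_inactive is only honoured for users with the 'Access inactive portal content' permission, so for anonymous users you may need unrestrictedSearchResults, which bypasses security checks entirely):

from Products.CMFCore.utils import getToolByName
from Products.Five.browser import BrowserView

class ExpiredAwareListing(BrowserView):
    """Folder listing that also returns expired content."""

    def contents(self):
        catalog = getToolByName(self.context, 'portal_catalog')
        path = '/'.join(self.context.getPhysicalPath())
        # unrestrictedSearchResults skips the expiry (and security!) filtering
        # that searchResults applies for anonymous users
        return catalog.unrestrictedSearchResults(
            path={'query': path, 'depth': 1},
            sort_on='getObjPositionInParent',
        )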
I have a tree-like structure with a couple of entities: a process is composed of steps and a step may have sub-processes. Let's say I have 2 failure modes: abort and re-do. I have tree traversal logic implemented that cascades the fail signal up and down the tree. In the case of abort, all is well; abort cascades correctly up and down, notifying its parent and its children. In the case of re-do, the same happens, EXCEPT a new process is created to replace the one that failed. Because I'm using the DataMapper pattern, the new object can't save itself, nor is there a way to pass the new object to the EntityManager for persistence, given that entities have no knowledge of persistence or even services in general.
So, if I don't pass the EntityManager to the domain layer, how can I pick up on the creation of new objects before they go out of scope?
Would this be a good case for implementing AOP, such as with the JMSAopBundle? This is something I've read about, but haven't really found a valid use case for.
If I understand your problem correctly (your description seems to be written a bit in a hurry), I would do the following:
Mark your failed nodes and your new nodes with some kind of flag (e.g. a dirty flag)
Have your tree iterator count the number of failed and new nodes
Repeat the tree iteration / re-do process as often as needed, until no failed or new nodes remain that need to be handled (see the sketch below)
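As a rough sketch of that loop (all names hypothetical):

do {
    $pending = 0;
    foreach ($tree->iterate() as $node) {
        if ($node->isFailed() || $node->isNew()) {
            $pending++;
            $handler->process($node); // re-do or persist, then clear the flag
        }
    }
} while ($pending > 0);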
I just found a contribution from Benjamin Eberlei, regarding business logic changes in the domain layer on a more abstract level: Doctrine and Domain Events
Brief quote and summary from the blog post:
The Domain Event Pattern allows to attach events to entities and
dispatch them to event listeners only when the transaction of the
entity was successfully executed. This has several benefits over
traditional event dispatching approaches:
Puts focus on the behavior in the domain and what changes the domain triggers.
Promotes decoupling in a very simple way
No reference to the event dispatcher and all the listeners required except in the Doctrine UnitOfWork.
No need to use unexplicit Doctrine Lifecycle events that are triggered on all update operations.
Each method requiring action should:
Call a "raise" method with the event name and properties.
The "raise" method should create a new DomainEvent object and set it into an events array stored in the entity in memory.
An event listener should listen to Doctrine lifecycle events (e.g. postInsert), keeping entities in memory that (a) implement events, and (b) have events to process.
This event listener should dispatch a new (custom) event in the preFlush/postFlush callback containing the entity of interest and any relevant information.
A second event listener should listen for these custom events and trigger the logic necessary (e.g. onNewEntityAddedToTree)
I have not implemented this yet, but it sounds like it should accomplish exactly what I'm looking for, in a more automated fashion than the method I actually implemented.
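As a rough PHP sketch of the "raise" part of that pattern (all names are hypothetical, not from an actual library):

class DomainEvent
{
    public function __construct(
        public readonly string $name,
        public readonly array $properties = []
    ) {
    }
}

trait RaisesDomainEvents
{
    /** @var DomainEvent[] kept in memory until a Doctrine listener collects them */
    private array $events = [];

    protected function raise(string $name, array $properties = []): void
    {
        $this->events[] = new DomainEvent($name, $properties);
    }

    /** called by the lifecycle listener described in the steps above */
    public function popEvents(): array
    {
        $events = $this->events;
        $this->events = [];
        return $events;
    }
}

class Process
{
    use RaisesDomainEvents;

    public function redo(): self
    {
        $replacement = new self();
        // no EntityManager here: we only record the fact, so a listener
        // can persist the new entity during preFlush/postFlush
        $this->raise('newEntityAddedToTree', ['entity' => $replacement]);
        return $replacement;
    }
}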