Does ChannelReader<T>.ReadAllAsync return a static or dynamic enumeration? - .net-core

The documentation for this method simply says:
Creates an IAsyncEnumerable that enables reading all of the data
from the channel.
Does the enumerable returned represent a snapshot of the Channel at the time of calling, or is it a 'live' view of the Channel that will behave correctly if items are added/removed by other actors while I am enumerating it?

According to the source code it enumerates the live channel, not a snapshot.
Of course, a derived class could override this behavior, so you need to examine your specific instance.
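A quick way to see the live behaviour (a minimal sketch using an unbounded channel; the delays are arbitrary):

using System;
using System.Threading.Channels;
using System.Threading.Tasks;

class LiveReadAllAsyncDemo
{
    static async Task Main()
    {
        var channel = Channel.CreateUnbounded<int>();

        // The producer keeps writing after the consumer has started enumerating.
        var producer = Task.Run(async () =>
        {
            for (int i = 0; i < 5; i++)
            {
                await channel.Writer.WriteAsync(i);
                await Task.Delay(100);
            }
            channel.Writer.Complete();
        });

        // Prints 0..4 even though most items are written after enumeration begins,
        // and the loop only ends once the writer signals completion - i.e. a live
        // view of the channel, not a snapshot.
        await foreach (var item in channel.Reader.ReadAllAsync())
            Console.WriteLine(item);

        await producer;
    }
}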

Can I modify a Policy after it is built?

I'm creating an API method call which takes a Policy as an argument.
However, in my method I'd like to 'add onto' this policy by including my own retry Action(s) so that I can perform intermediate logging and telemetry of my own. Similar in concept to adding Click events to a Windows UI control.
Is there a way to modify a Policy after it's created?
Or, is there a hook mechanism where I can define my own callbacks in the Execute method perhaps?
A Polly Policy is immutable; it cannot be modified after it has been configured. However, there are several ways to attach extra behaviour to a policy, depending on what you want to achieve.
Note: All examples in this answer refer to synchronous policies / policy-hooks used when executing delegates synchronously, but all the same behaviour exists for the async forms of policies.
Option 1: All policy types offer delegate hooks such as onRetry, onBreak, onCacheHit and similar, and extra behaviour (for example logging) can be added in these. The delegates attached to these hooks must be defined at policy configuration time. There are many examples in the Polly readme and the Polly-Samples project, and the Polly wiki covers all such delegate hooks in depth.
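For example, a minimal sketch of Option 1 (the exception type, backoff and console logging are illustrative choices, not anything Polly requires):

using System;
using System.Net.Http;
using Polly;

// Extra behaviour (here: console logging) attached via the onRetry hook at configuration time.
var policy = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetry(
        retryCount: 3,
        sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)),
        onRetry: (exception, delay, attempt, context) =>
            Console.WriteLine($"Retry {attempt} after {delay}: {exception.Message}"));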
Option 2: If the fact that these delegates (onRetry etc.) must be defined at policy configuration time is a restriction, you can overcome it using Polly.Context. Most of the delegates such as onRetry exist in a form which takes a Context as an input parameter. That Context is execution-scoped, can carry arbitrary data, and a Context instance can be passed in to the call to .Execute(...).
So you could set Context["ExtraAction"] = /* some Action */ and pass that Context in to .Execute(...). The onRetry delegate can then extract the action (with some defensive checks) and invoke it. This lets you inject arbitrary behaviour into the onRetry delegate after the policy has been configured.
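A minimal sketch of Option 2 (the "ExtraAction" key and the console logging are my own illustrative choices):

using System;
using Polly;

class ContextInjectionDemo
{
    static void Main()
    {
        var policy = Policy
            .Handle<InvalidOperationException>()
            .Retry(3, onRetry: (exception, attempt, context) =>
            {
                // Pull the injected behaviour out of the execution-scoped Context, if present.
                if (context.TryGetValue("ExtraAction", out var value) && value is Action extraAction)
                {
                    extraAction();
                }
            });

        // Inject the behaviour at execution time, long after the policy was configured.
        var context = new Context();
        context["ExtraAction"] = (Action)(() => Console.WriteLine("Retrying - custom logging/telemetry here"));

        var attempts = 0;
        policy.Execute(ctx =>
        {
            if (++attempts < 3) throw new InvalidOperationException("transient failure");
            Console.WriteLine($"Succeeded on attempt {attempts}");
        }, context);
    }
}

Because the Context travels with each execution, different callers can inject different extra actions through the same shared policy instance.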
Option 3: Perform your extra logic in the delegate you execute. You could write your own Execute(...) wrapper method which takes a policy and a delegate to execute, and wraps the delegate in the extra behaviour:
public TResult MyExecute<TResult>(ISyncPolicy policy, Func<TResult> toExecute)
{
    return policy.Execute(() =>
    {
        /* do my extra stuff */
        return toExecute();
    });
}

Listing expired plone contents only in specific contexts (folders or collections)

In specific folders or collections, I need to list objects that have expired, including for anonymous users.
As you know, portal_catalog returns only brains that are not expired. That is useful behaviour, but not in this case...
To force the catalog to also return expired content, we have to pass a specific parameter: show_inactive.
Browsing the folder_listing (& family) code I noticed that it is possible to pass optional parameters (contentFilter) via the request to the query/getFolderContents. It's a nice feature for customizing the query while avoiding the creation of very similar listing templates.
I suppose it's necessary to create a marker interface to mark the contexts (folders or collections) where I also want to list expired content, e.g. IListExpired.
I can imagine two ways:
1) create a subscriber that intercepts before_traverse and, in the handler, test whether the context implements IListExpired. If it does, call
request.set('folderListing', {'show_inactive': True})
2) create a viewlet registered for IListExpired whose call sets
request.set('folderListing', {'show_inactive': True})
Which is the best way? I suppose the first one could add unnecessary overhead.
Vito
AFAIK, these are two separate things: folderListing uses a method available to all CMF-based folderish content types, while show_inactive is an option of the Plone catalog, so you're not going to make it work the way you're planning.
I think you should override these views and rewrite the listing using a catalog call.
You had better use a browser layer for your package to do so, or a marker interface as you're planning.

AutoMapper: Action on each element during collection processing?

Is it possible to invoke a method on each object that is being copied from a source to a destination collection using AutoMapper? The destination object has a method called
Decrypt() and I would like it to be called for each CustomerDTO element that is created. The only thing that I can figure out is to perform the mapping conversion and then loop again to invoke the Decrypt() method. I'd appreciate your help with this question.
Thanks,
Mike
IQueryable<CustomerDTO> dtos = AutoMapper.Mapper.Map<IQueryable<CustomerEntity>, IQueryable<CustomerDTO>>(BaseRepository.List);
foreach (var item in dtos)
{
    item.Decrypt(Seed);
}
It depends on whether you are decrypting just a property or the whole object; I wasn't sure from your question.
If you are just decrypting individual properties, then I suggest you look into AutoMapper's Custom Value Resolvers. They let you take control of resolving a destination property.
If you need to decrypt the whole object, then I suggest you look into AutoMapper's Custom Type Converters. That gives you complete control over the conversion, though it does sort of take the "auto" out of AutoMapper.
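For the whole-object case, a minimal sketch of a custom type converter (using the instance-based AutoMapper API; the member names and the string seed are assumptions based on your snippet):

using AutoMapper;

// Hypothetical converter: maps the entity by hand and decrypts each resulting DTO.
public class CustomerEntityToDtoConverter : ITypeConverter<CustomerEntity, CustomerDTO>
{
    private readonly string _seed;   // seed type assumed; pass in whatever Decrypt expects

    public CustomerEntityToDtoConverter(string seed) => _seed = seed;

    public CustomerDTO Convert(CustomerEntity source, CustomerDTO destination, ResolutionContext context)
    {
        var dto = new CustomerDTO
        {
            Id = source.Id,          // map the remaining members here
            Name = source.Name
        };
        dto.Decrypt(_seed);          // the extra per-element step you asked about
        return dto;
    }
}

public static class MappingSetup
{
    // Registration: every CustomerEntity -> CustomerDTO mapping, including elements
    // of a mapped collection, now goes through the converter above.
    public static IMapper CreateMapper(string seed) =>
        new MapperConfiguration(cfg =>
            cfg.CreateMap<CustomerEntity, CustomerDTO>()
               .ConvertUsing(new CustomerEntityToDtoConverter(seed)))
        .CreateMapper();
}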

hashtable keys() keySet() which is better

Just out of curiosity, which is the better method to use: Hashtable.keys() or Hashtable.keySet()? Either one would have been sufficient, so why have they provided two methods with different return types? Is there any performance drawback/benefit of one over the other?
keySet is there because
it returns a Set view of the keys contained in this Hashtable. The Set is backed by the Hashtable, so changes to the Hashtable are reflected in the Set, and vice-versa. The Set supports element removal (which removes the corresponding entry from the Hashtable), but not element addition.
And keys just returns an enumeration of the keys in this hashtable; changes made after obtaining the enumeration are not reflected in it.
Besides the functional difference mentioned by Rahul, Hashtable itself is an old artifact of earlier Java versions, retrofitted to implement the Map interface.
So keySet is a later construct required by the Map interface.
Additionally, if this is new code you are writing, you should read the API details for this data structure at http://docs.oracle.com/javase/7/docs/api/java/util/Hashtable.html and consider following its guidance to use HashMap or other newer collections instead.

What is the best way to tie a Flex Tree control to a tree stored in a database?

I have a local SQLite database that contains a tree (as Nested Sets). In an AIR application, I want to display that tree in a tree control and provide means to change the nodes' names and copy, move, add or delete nodes.
Now, I'm hiccupping a little on where to put which code. Obviously, I have a class which will perform operations like load / update / insert / delete against the database. This would load the whole tree into some storage variable and save changes made by the user back to the db.
Should this class be the dataProvider, the dataDescriptor or an extension of the Tree control itself? And when the user requests an operation like adding a node, should that update the dataProvider and let the database handler react on an event, or should it call the database handler's method and then update the dataProvider? I'd say that the latter is better, because it's easier to not update the Tree's data if something goes wrong with the db query.
There are methods to add and remove nodes in the DefaultDataDescriptor and in the Tree class (protected methods in the latter); should I use / extend those or ignore them?
The reason I'm confused about this is that, according to the docs, a Tree control uses the object stored in its 'dataDescriptor' property to parse and manipulate the actual data which is stored inside its 'dataProvider' property.
This seems to make sense, until you realize that unless you subclass it, it's never the Tree control that manipulates data (with the exception of drag&drop, if that's enabled), and it's not the dataDescriptor, either. Rather, in all examples, manipulating data happens directly via the dataProvider object and that triggers event handlers in the Tree control.
What is it I don't get here?
Take a look at mx.controls.treeClasses.HierarchicalCollectionView. It is not part of the public API, but its full source is available as part of Flex. The Tree control uses this class internally to handle various data sources.
