Determining type of CollectionBase via Reflection (or Microsoft.Cci) - reflection

Question:
Is there a static way to reliably determine the type contained by a type derived from CollectionBase, using Reflection or Microsoft.Cci?
Background:
I am working on a code generator that copies types, makes customized versions of those types, and converters between them. It walks the types in the source assembly via Microsoft.Cci. It prints out source code using textual templates. It does a lot of conversion and customization, and tosses out code that I don't care about.
In my resulting code, I intend to use List<T> everywhere that a CollectionBase, IEnumerable<T>, or T[] was previously used. I want to use List<T> because I am pretty sure I can serialize it without extra work, which is important for my application. T is concrete in every case. I am trying not to copy the CollectionBase classes because I'd have to copy over the custom implementation, and I'd like to avoid having to do that in my code generator.
The only part I'm having a problem with is determining T for List<T> when replacing a custom CollectionBase.
What I've done so far:
I have briefly looked at the MSDN docs and samples for CollectionBase, and they mention creating a custom Add method on your derived type. I don't think this is in any way enforced, so I'm not sure I can rely on that. An implementor could name it something else, or worse, have a collection that supports multiple types, with Object as their only common ancestor.
Alternatives I have considered:
Maybe the default serialization does some tricks that I can take advantage of. Is there a default serialization for CollectionBase collections, or do you generally have to implement it yourself? If you have to do it yourself, is there some reliable metadata I could look at in order to determine the types? If it supports default serialization, does it rely on the runtime types of the items in the collection?
I could make a mapping in my code generator of known CollectionBase types, mapped to their corresponding T for List<T>. If a given CollectionBase type that I encounter isn't in the list, throw an exception. This is probably what I'll go with if there isn't a reliable alternative.

I'm still not sure enough about what you want to do to give advice. Still, do your CollectionBase-derived classes all implement Add(T)? If so, you could look for an Add method with a single parameter of a type other than object, and use that type for T.
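For example, a rough sketch of that approach with plain reflection (the Microsoft.Cci equivalent would look different, and the strongly typed Add convention is only an assumption, not something the framework enforces):

using System;
using System.Collections;
using System.Linq;
using System.Reflection;

public static class CollectionBaseHelper
{
    // Guess T for List<T> from a CollectionBase-derived type's Add method.
    public static Type GuessElementType(Type collectionType)
    {
        if (!typeof(CollectionBase).IsAssignableFrom(collectionType))
            return null;

        // Look for public Add methods taking a single, non-object parameter.
        var candidates = collectionType
            .GetMethods(BindingFlags.Public | BindingFlags.Instance)
            .Where(m => m.Name == "Add" && m.GetParameters().Length == 1)
            .Select(m => m.GetParameters()[0].ParameterType)
            .Where(t => t != typeof(object))
            .Distinct()
            .ToList();

        // Exactly one candidate: use it as T. Zero or several: the convention
        // was not followed (or the collection holds mixed types), so fall back
        // to a manual mapping or throw.
        return candidates.Count == 1 ? candidates[0] : null;
    }
}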

Related

Using a traditional interface for Dexterity schema or XML?

Plone Dexterity supports the definition of the content-type schema either through an interface (using zope.schema for the definition) or through an XML file. What is the preferred/recommended way?
In addition: is there documentation of the XML dialect used for defining a schema (models/mytype.xml) ?
This presentation appears close but not complete.
I personally much prefer the zope.schema route; I could, if I really wanted to, vary the interface attributes dynamically with Python, while the XML definition is of course static.
Also, note that to register adapters and views against an XML-defined schema, you need to pull it into python code anyway:
from plone.dexterity import api
class IMyXMLDefinedType(api.Schema):
    api.model('my_xml_defined_type.xml')
The XML dialect is part of the plone.supermodel package; I was not able to locate any documentation beyond the source code.
I prefer an interface over an XML model. Partly that is because I prefer Python over XML. Partly it is because you cannot do some things with the XML. For example, if you want to register a field as searchable with collective.dexteritytextindexer, you (currently) cannot set this in the XML model, so you will have to use Python code and therefore an interface. But Martijn shows in his answer that you can use api.model in an interface to refer to an XML file, so maybe that would be a way around it if you really want to.
I'm going to contribute to the mess by saying there is no hard and fast answer.
With simpler content types, or early in the development of more complex ones, I'm often oriented towards the supermodel XML because of how closely it works with the dexterity TTW editor. It allows me to work with a client with very rapid feedback on what they want from their content type.
Sometimes I'll even move into file system development of some features while still having the fields defined in the FTI via supermodel.
However, with more complex content types, you're nearly certainly going to hit something you can't do via supermodel alone. At that point, I usually translate to schema — and that's typically pretty easy to do.
Ideally, if you're doing a lot of dexterity development, you should probably be able to shift pretty easily back and forth. They're just different ways of representing the same objects and attributes.

What's best practice in this situation?

I was just writing a small asp.net web page to display a collection of objects by binding to a repeater, when this came to mind.
Basically the class I've created, let's call it 'Test', has a price property that's an integer data type (ignore the limitations of using this type, I'm just using it as an example). However, I want to format this property so that it displays a currency symbol and the correct number of decimal places, etc.
Is it best practice to have a function within the class that returns the formatted string for the object, or would it be better to have a function in the back end of my web form that operates on the object and returns the formatted string?
I've heard before that a class should contain all its related functions, but I've also heard that presentation should be kept in the 'presentation layer' in my N-tier app.
What would be the best approach in my situation? (and apologies if I haven't explained this clearly enough!)
Thanks!
In my opinion, both options are valid from an OO point of view.
Since the value is a price (that just happens to have the wrong data type), it makes sense to put the formatting into the data class. It's not something that's specific to the web interface, and, if you develop a different kind of user interface, you are very likely to require this formatting again.
On the other hand, it's a presentation issue, so it also makes sense to put it into the presentation layer.
For general OOP stuff, the object should not be exposing implementation details. I choose to interpret this as "avoid setters and getters when possible".
In the context of your question, I suggest that you have a getPriceDisplay() method that returns a string containing the formatted price.
How the formatting is actually done stays an implementation detail. You could provide a generic function for formatting, use some backend call, or something else. Those details should make no difference to the consumer of the 'Test' object.
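A minimal sketch of that idea (the C#-style method name and the assumption that the price is stored in whole cents are mine, not from the question):

public class Test
{
    // Price stored as an integer number of cents (an assumption for this sketch).
    public int Price { get; set; }

    // Consumers get a display-ready string; how it is produced stays hidden.
    public string GetPriceDisplay()
    {
        return (Price / 100m).ToString("C");
    }
}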
Though it's not an OOP approach, in my opinion this is a good time for an extension method. Call it .ToCurrency(), which applies the currency format... this could be taken from the Web.config file if you wanted.
Edit
To elaborate, I would simply call .ToString("your-format") (of course this could be as simple as .ToString("C") for your specific question) in the extension method. This allows you to change the format throughout the UI in one place. I have found this to be very useful when dealing with DateTime formats in web applications.
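A minimal sketch of such an extension method (reading the format from Web.config is omitted here; the format string is hard-coded for brevity):

public static class CurrencyExtensions
{
    // "C" formats as currency using the current culture; the format string
    // could instead be read from Web.config.
    public static string ToCurrency(this int price)
    {
        return price.ToString("C");
    }
}

// Usage in the presentation layer:
// string display = myTest.Price.ToCurrency();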
Wouldn't .ToString("C"); do the job? This would be in the presentation layer I would imagine.

Is using setattr to set a simple attribute in a content-type in Plone a bad practice (I mean, is it going to haunt me in the future)?

I have two different contexts on a Plone instance.
The first context has some ATFolders. The second has ATFolders too, which have to be kept in sync with the first context by some subscribers.
In the second context, the ATFolders have to know that they are linked to certain folders in the first context.
I thought about using setattr on them (setattr(obj_context1, attr, obj_context2.UID())) instead of creating a new content type just to have a ReferenceField attribute (or using archetypes.schemaextender), since that would be overkill for just a single parameter in a specific context: the folders that will have this attribute will not be deleted from the ZODB, for example. They will have a placeful workflow with just one state. This attribute is completely hidden from the user, and the folders in the second context are created programmatically, with no user intervention.
This attribute should only exist in the second context, so creating an adapter or a new content-type, just to be used in this context seems to be too much.
I'm inclined to use setattr for the sake of pragmatism in this specific scenario, but I don't know if the setattr approach is going to haunt me in the future (performance, ZODB conflicts, etc.). I mean: when doing a catalog update or a workflow update, is this new attribute going to cause a problem?
Any thoughts? Anyone experienced with setattr in this situation? This attribute will not and should not be visible; it's only used for internal control.
I don't think it's bad practice at all, I do similar things for similar situations.
You could use an attribute annotation, which would help prevent conflicts with other attributes, but that's a style and performance choice more than anything. Attribute annotations are stored in their own ZODB persistent record, so the impact depends on how often this attribute changes compared to the other attributes on the folder.
Last but not least, I would probably encapsulate the behaviour in an adapter, to make the implementation flexible for future uses. You can either register the adapter to the ATFolder interface, or to IAttributeAnnotatable, depending on how much your implementation relies on what the adapted object needs to provide.
Other notes: we've also used plone.app.relations connections between objects in the past (maintained outside the object schema, like your attribute), but found five.intid (the underlying machinery plone.app.relations relies on) too fragile, and would use simple UID attributes with catalog searches in the future.
In reference to Ross's answer, if the information in question doesn't need to be end-user editable, a schemaextender attribute is overkill.
Maybe use archetypes.schemaextender? See also this doc. This way you can use an actual ReferenceField, get all sorts of stuff for free, and spend a lot less time re-implementing said free stuff.

ASP.NET. Is it better to pass an entire entity to a method or pass each property of this entity as parameters?

We're developing a business ASP.NET application. Is it better to pass an entire entity to a method or pass each property of this entity as parameters? What is the best practice?
Case 1. Pass Customer entity to a manager - InsertCustomer(Customer cust)
Case 2. Pass each property as a parameter - InsertCustomer(string name, string address...etc)
P.S. We're using Entity Framework as our data access layer
Pass the entire entity, not only for the reasons given in the other answers, but also because methods with long parameter lists are generally bad. They are prone to error and tough to work with from a development standpoint (just look at Interop with Office).
In general, if I see I am getting too many parameters (usually more than three), either I have a method trying to do too much, or I explore ways of encapsulating this data in a struct.
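As a rough sketch (the names are illustrative), passing the entity keeps the method signature stable as the entity grows:

public class Customer
{
    public string Name { get; set; }
    public string Address { get; set; }
    // Adding a new property later does not touch the InsertCustomer signature.
}

public class CustomerManager
{
    public void InsertCustomer(Customer cust)
    {
        // Persist via the data access layer (Entity Framework context, repository, etc.).
    }
}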
You should pass the entire entity, because when you change the entity, e.g. add or remove members, you do not have to update all your method calls in all your layers; you only need to change your data layer and the layer where you are consuming the entity. ASP.NET is object oriented, and therefore you should orient your code around your objects.
The whole concept of object orientation requires objects to be passed around. If it is all happening internally, I would go with that.
If this is being posted to a web service / across a network, etc., you would need to serialize, and you may find it better to pass each individual parameter, especially if the receiving framework is different.
Don't forget your Strings etc are all objects too.
I agree with another poster: passing a whole entity "encapsulates" everything so that it can be updated/modified, and you have less to worry about.

What is Reflection?

I am VERY new to ASP.NET. I come from a VB6 / ASP (classic) / SQL Server 2000 background. I am reading a lot about Visual Studio 2008 (I have installed it and am poking around). I have read about "reflection" and would like someone to explain, as best you can to a developer coming from the older technologies I've listed above, what exactly Reflection is and why I would use it. I am having trouble getting my head around it. Thanks!
Reflection is how you can explore the internals of different Types without normally having access to them (i.e. private, protected, etc. members).
It's also used to dynamically load DLLs and get access to types and methods defined in them without statically compiling them into your project.
In a nutshell: Reflection is your toolkit for peeking under the hood of a piece of code.
As to why you would use it, it's generally only used in complex situations, or code analysis. The other common use is for loading precompiled plugins into your project.
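For example, a minimal sketch of loading an assembly that was never referenced at compile time (the file path is hypothetical):

using System;
using System.Reflection;

Assembly plugin = Assembly.LoadFrom(@"C:\plugins\MyPlugin.dll");
foreach (Type t in plugin.GetTypes())
{
    // List every type the plugin assembly defines.
    Console.WriteLine(t.FullName);
}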
Reflection lets you programmatically load an assembly, get a list of all the types in an assembly, get a list of all the properties and methods in these types, etc.
As an example:
myobject.GetType().GetProperty("MyProperty").SetValue(myobject, "wicked!", null);
It allows the internals of an object to be reflected to the outside world (code that is using said objects).
A practical use in statically typed languages like C# (and Java) is to allow invocation of methods/members at runtime via a string (e.g. the name of the method - perhaps you don't know at compile time which method you will use).
In the context of dynamic languages I haven't heard the term as much (as generally you don't worry about the above), other than perhaps to iterate through a list of methods/members etc...
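A small sketch of that idea in C# (the class and method names are just placeholders):

using System;
using System.Reflection;

// Somewhere we only have the method name as a string:
object target = new MyService();
MethodInfo method = target.GetType().GetMethod("DoWork");
object result = method.Invoke(target, new object[] { 21 });
Console.WriteLine(result);  // 42

public class MyService
{
    public int DoWork(int x) { return x * 2; }
}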
Reflection is .NET's means to manipulate or extract information from an assembly, class or method at run time. For example, you can create a class at runtime, including its methods. As stated by monoxide, reflection is used to dynamically load assemblies as plugins, or, in advanced cases, to build compilers targeting .NET, like IronPython.
Updated: You may refer to the topic on metaprogramming and its related topics for more details.
When you build any assembly in .NET (ASP.NET, Windows Forms, command line, class library, etc.), a number of metadata "definition tables" are also created within the assembly, storing information about the types, fields and methods you wrote in your code.
The classes in the System.Reflection namespace in .NET allow you to enumerate and iterate over these tables, providing an "object model" for you to query and access the items in them.
One common use of Reflection is providing extensibility (plug-ins) to your application. For example, Reflection allows you to load an assembly dynamically from a file path, query its types for a specific useful type (such as an Interface your application can call) and then actually invoke a method on this external assembly.
Custom Attributes also go hand in hand with reflection. For example, the NUnit unit testing framework lets you mark a test class and its test methods by adding [TestFixture] and [Test] attributes to your own code.
The NUnit test runner must then use Reflection to load your assembly, search for all methods that carry the test attribute, and then actually call your tests.
This is simplifying it a lot, however it gives you a good practical example of where Reflection is essential.
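A much-simplified sketch of what such a runner does (this is not NUnit's actual code, and it assumes a reference to the nunit.framework assembly and parameterless test fixtures):

using System;
using System.Reflection;

Assembly testAssembly = Assembly.LoadFrom("MyTests.dll");
foreach (Type type in testAssembly.GetTypes())
{
    foreach (MethodInfo method in type.GetMethods())
    {
        // Pick out methods decorated with [Test] and invoke each one.
        if (method.GetCustomAttributes(typeof(NUnit.Framework.TestAttribute), false).Length > 0)
        {
            object fixture = Activator.CreateInstance(type);
            method.Invoke(fixture, null);
        }
    }
}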
Reflection certainly is powerful, however, be aware that it allows you to completely disregard the fundamental concept of access modifiers (encapsulation) in object oriented programming.
For example you can easily use it to retrieve a list of Private methods in a class and actually call them. For this reason you need to think carefully about how and where you use it to avoid bypassing encapsulation and very tightly coupling (bad) code.
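For example, this is all it takes to call a private method (the class here is made up for illustration), which is exactly why it deserves caution:

using System;
using System.Reflection;

// BindingFlags.NonPublic sidesteps the access modifier entirely.
var vault = new Vault();
MethodInfo secret = typeof(Vault).GetMethod("Secret", BindingFlags.Instance | BindingFlags.NonPublic);
string value = (string)secret.Invoke(vault, null);
Console.WriteLine(value);  // "hidden"

public class Vault
{
    private string Secret() { return "hidden"; }
}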
Reflection is the process of inspecting the metadata of an application. In other words, when reading attributes, you've already looked at some of the functionality that reflection offers. Reflection enables an application to collect information about itself and act on this information. Reflection is slower than normally executing static code. It can, however, give you a flexibility that static code can't provide.
