How to use intids? - plone

Looks like intid support has landed in Plone (with Dexterity?).
However, there is little information on how intids behave or how they should be used (when to use them, how to get an id, how to look an object up by id, when to set an id manually) and which packages are involved.
Are there any instructions, even short ones, regarding this?
https://github.com/plone/plone.app.intid

Intid support won't land in Plone any time soon; it is optional in Dexterity 1.1 and will certainly remain optional in Dexterity 2.0, which is the version PLIP'ed for Plone 4.3. Just use the plone.uuid support instead (the broken-out Archetypes UID functionality), which is a dependency of Dexterity 1.1 and up.
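For what it's worth, a minimal sketch of the plone.uuid route could look like the following (my_content_object and portal are placeholder names; IUUID, uuidToObject and the catalog's UID index come from plone.uuid / plone.app.uuid):

from plone.uuid.interfaces import IUUID
from plone.app.uuid.utils import uuidToObject

# ask for the UUID assigned to some content object (placeholder name)
uid = IUUID(my_content_object)

# resolve the UUID back to the object later
obj = uuidToObject(uid)

# or query the catalog's UID index directly
brains = portal.portal_catalog(UID=uid)

That covers the getting-an-id and look-up-by-id cases without intids at all; the UUID is assigned automatically when content is created, so you normally never set one by hand.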

Related

Xunit - Moq - Autofixture changes.. are the removed features coming back?

It seems that xUnit.net no longer supports extending TraitAttribute; they have sealed the class.
There are also some other issues with AutoFixture's plugin for AutoData(), which lets us inject randomly created data through an attribute. There are a few workarounds for this, but I am trying to evaluate this for a larger product overall. I liked the demos, since they could do small things like SQL, Excel, and custom attributes for categories.
It seems there was more functionality before the changes. I have looked at the site, and while it says some of the features are returning, there isn't much information.
Is there a new set of functionality coming out? Or possibly a change that will allow us to recreate the older functionality in a new way? It seems the SQL and Excel features have workarounds, but I can't find any information about when the latest version will be compatible with the "AutoFixture with xUnit.net data theories" NuGet package. I really like what I have seen, though I don't like breaking changes when I look at enterprise solutions. I cringe a little when I imagine having this in place in an enterprise, having made a lot of custom attributes, or having used Moq and AutoFixture to populate data, and now all my tests are broken. So I guess the other question is: does xUnit tend to make a lot of breaking changes? There is the other option of moving xUnit back a version, though at some point I would need to know whether these things will be fixed or were permanently removed, since I wouldn't want to spend time using functionality that is being removed.
Another issue is AutoFixtureMoqAutoDataAttribute, which doesn't load without a companion NuGet package, and those companion packages are not being updated.
I guess the end question may be: does anyone know of any plans to get these features working with the current version of xUnit, so that I can start implementing now and expect to do mass replaces later? Or are these permanent breaking changes, where we shouldn't implement anything that is currently missing?
Thank you in advance.
Short answer
If you want to use xUnit.net 1.x with AutoFixture, use AutoFixture.Xunit.
If you want to use xUnit.net 2.x with AutoFixture, use AutoFixture.Xunit2.
Explanation
xUnit.net 2.0 introduced breaking changes, compared to xUnit.net 1.x (e.g. 1.9.2). For AutoFixture, we wanted to make sure that AutoFixture supports both. There are people who want to upgrade to xUnit.net 2.x as soon as possible, but there are also people who, for various reasons, will need to stay with xUnit.net 1.x for a while longer.
For the people who wanted or needed to stay with xUnit.net 1.x for the time being, we wanted to make sure that they'd still get all the benefits of various bug fixes and new features for the AutoFixture core, so we're maintaining two parallel (but feature complete) Glue Libraries for AutoFixture and xUnit.net.
As an example, we've just released AutoFixture 3.30.3, which addresses a defect in AutoFixture itself. This bug fix thus becomes available for both xUnit.net 1.x and 2.x users.
Thus, when you need to migrate from xUnit.net 1.x to xUnit.net 2.x, you should uninstall AutoFixture.Xunit and instead install AutoFixture.Xunit2. As far as I know, there should be feature parity between the two.
Traits
AutoFixture.Xunit and AutoFixture.Xunit2 don't use the [Trait] attribute, so I don't know exactly what you have in mind regarding this.
AutoMoq
Again, when it comes to AutoFixture.AutoMoq, it doesn't depend on xUnit.net, so I don't understand the question here either. It sounds like a separate concern, so you may want to consider asking a separate question.

Using a traditional interface for Dexterity schema or XML?

Plone Dexterity supports the definition of the content-type schema either through an interface (using zope.schema for the definition) or through an XML file. What is the preferred/recommended way?
In addition: is there documentation of the XML dialect used for defining a schema (models/mytype.xml) ?
This presentation appears close but not complete.
I personally much prefer the zope.schema route; I can, if I really want to, vary the interface attributes dynamically with Python, while the XML definition is of course static.
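For illustration only, a small schema in that style could look something like this (the interface and field names are made up; form.Schema is the plone.directives.form base class used with Dexterity 1.x):

from plone.directives import form
from zope import schema

class IMyType(form.Schema):
    # ordinary zope.schema fields; the names here are purely illustrative
    title = schema.TextLine(title=u'Title')
    body = schema.Text(title=u'Body text', required=False)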
Also, note that to register adapters and views against an XML-defined schema, you need to pull it into Python code anyway:
from plone.directives import form

class IMyXMLDefinedType(form.Schema):
    # load the field definitions from the supermodel XML file
    form.model('my_xml_defined_type.xml')
The XML dialect is part of the plone.supermodel package; I was not able to locate any documentation beyond the source code.
I prefer an interface over an xml model. Partly that is because I prefer Python over XML. Partly it is because you cannot do some things with the XML. For example, if you want to register a field as searchable with collective.dexteritytextindexer, you (currently) cannot set this through the Plone web interface, so you will have to use Python code and therefore an interface. But Martijn shows in his answer that you can use form.model in an interface to refer to an xml file, so maybe that would be a way around it if you really want to.
I'm going to contribute to the mess by saying there is no hard and fast answer.
With simpler content types, or early in the development of more complex ones, I'm often oriented towards the supermodel XML because of how closely it works with the dexterity TTW editor. It allows me to work with a client with very rapid feedback on what they want from their content type.
Sometimes I'll even move into file system development of some features while still having the fields defined in the FTI via supermodel.
However, with more complex content types, you're nearly certainly going to hit something you can't do via supermodel alone. At that point, I usually translate to schema — and that's typically pretty easy to do.
Ideally, if you're doing a lot of dexterity development, you should probably be able to shift pretty easily back and forth. They're just different ways of representing the same objects and attributes.

Removing custom property sheet with uninstall profile

I'm storing information in a custom property sheet for one of my custom products (I'm then using that information in a javascript file). I want this product to uninstall cleanly, but I can't seem to figure out how to remove a custom property sheet on uninstall using genericsetup. I know that remove="True" doesn't work, but I'm not having much luck figuring out the correct way (or any way for that matter) for removing this. Any suggestions would be greatly appreciated.
This is confusing for at least two reasons:
We have both "old style" and "new style" technologies actively in use. Old style refers to Extensions/Install.py (Python code) and new style refers to profiles/default (GS XML + setuphandlers.py Python code).
Successfully installing and uninstalling add-ons in all possible cases still requires the use of both old and new style technologies.
If you don't care about uninstall, you never need to use Extensions/Install.py. If you do care about uninstall, create an Extensions/Install.py with install and uninstall methods.
Also create an "uninstall" profile (in addition to the "default" profile) e.g. profiles/uninstall. Configure the Extensions/Install.py:install() method to execute your "normal" profiles/default steps on installation. Now comes the "fun" part.
Because some technologies can be uninstalled "properly" via GS i.e. they respect the remove=True parameter, your Extensions/Install.py:uninstall() method should execute the "proper" GS profiles to do the uninstall. But if your add-on uses technologies that cannot be uninstalled "properly" via GS i.e. those that do not respect the remove=True parameter, then you will need to write Python code to perform the uninstall.
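As a rough sketch of what that Python code can look like (the profile ids and the property sheet name below are hypothetical; adjust them to whatever your add-on actually installs):

# Extensions/Install.py -- sketch only
from Products.CMFCore.utils import getToolByName

def install(portal):
    setup_tool = getToolByName(portal, 'portal_setup')
    setup_tool.runAllImportStepsFromProfile('profile-my.product:default')

def uninstall(portal, reinstall=False):
    if not reinstall:
        setup_tool = getToolByName(portal, 'portal_setup')
        # run the GS steps that do respect remove="True"
        setup_tool.runAllImportStepsFromProfile('profile-my.product:uninstall')
        # custom property sheets are not removed by GS, so delete the sheet by hand
        props = getToolByName(portal, 'portal_properties')
        if 'my_custom_properties' in props.objectIds():
            props.manage_delObjects(['my_custom_properties'])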
See:
http://plone.org/documentation/kb/genericsetup/creating-an-uninstall-profile
for more information.

Is Plone doing enough to keep up with other CMSes?

I do Drupal for a living and I like the system. However I've always been intrigued by Plone and wanted to learn it well to broad base my knowledge of CMSes in general. I've played around with Plone in the past and was both mesmerized and repulsed by it -- depending on the day.
But then again, here is what I saw as the advantages of Plone:
Python sweet Python
Built on battle hardened and uber mature Zope 2
Zope 3 style, which is now available in Zope 2 as well and therefore in Plone
Objects and not SQL
True separation of configuration and content (unlike Drupal, where configuration and content are totally mixed up in the database)
Very powerful system to make custom content types (unfortunately not via a UI)
However, it surprised me that I could find nothing equivalent to Views ( http://drupal.org/project/views ) and that taxonomy (i.e. classification) was not a first-class citizen. Every Plone product seemed to take its own approach to taxonomy. All in all, though I loved its extreme and idealistic approach, it always struck me that everything was so darn difficult to accomplish in it.
I've really been hoping for Plone to succeed, and every few months I explore its RSS feeds, only to come away dejected.
I thought I'd test out Plone 4. The new feature list in Plone 4 was totally underwhelming to me ( http://plone.org/products/plone/features ).
Drupal 7's new features ( http://drupalcode.org/viewvc/drupal/drupal/CHANGELOG.txt?revision=1.373&view=markup ) and WordPress 3's ( http://codex.wordpress.org/Version_3.0 ) seem to amount to tons more in their new major releases.
Moreover, the replacement of Archetypes by Dexterity ( http://plone.org/products/dexterity/documentation/faq/how-is-dexterity-related-to-archetypes/view ) is also a great step forward. So while Plone 4 itself may be an improvement over 3.x, is it enough to keep Plone in the reckoning amongst other CMSes?
Which brings me to my question:
Is Plone on a steady decline? What's the future of Plone? Am I wrong in my assessment that Plone is not adding functionality and features at the rate other top-tier CMSes are?
This http://www.google.com/trends?q=plone seems to confirm my fears.
Should I give Plone 4 a try and make it my "second" CMS?
Let me get the bias out of the way first: I'm one of the co-founders of Plone, so make of that what you will. ;)
Plone 4 is in many ways an "intermediary" release: the original plan was to make it a large release with a new UI approach (the new layout system, Deco), an improved type definition system (Dexterity), and an improved theming story (currently referred to as XDV; the name will probably change).
Along the way, we realized that we needed a smaller release before we did that, so the major improvements got pushed to a new Plone 5 milestone, and Plone 4 was turned into an infrastructure/cleanup type release.
With that goal in mind, the team delivered the fastest Plone yet (it trounces Drupal, Joomla and WordPress for speed) and improved a lot of very important infrastructure: files are now stored outside of the database, it uses much less memory than it used to, and it scales a lot better to large numbers of parallel requests.
The innovation is still ongoing, and now that Plone 4 is out, we're fully focused on delivering Plone 5, which should have a lot of the new features and improvements that were originally planned as Plone 4. In the meantime, we have an extremely solid and fast base to work from, and deploy customers on.
You can also make use of a lot of the Plone 5 tech in Plone 4 already; examples include the aforementioned Dexterity type definition system, the XDV theming system, and several other infrastructure improvements like the Chameleon template engine (which adds a ~50% speedup for most pages).
So, no, we're not adding features at any slower pace. If you look at the source code history and activity instead of Google Trends (which isn't a very useful metric for something as niche as a CMS), you'll see that there are more active developers and more code improvements than ever before.
Yep, Collections do most of what is described in that description of Drupal's Views. One thing that Collections don't do out of the box is the grouping/taxonomy; there are additional plugins that can help with that, such as collective.collection.yearview. Taxonomy options could be stronger, but in reality nested Collections work for most use cases.
As for the future of Plone? Plone's popularity has remained static for the last couple of years as it has gone through its massive internal restructuring. It has lost developers and gained developers. Compared to the rise of Drupal and CMSes in general, that may look like decline. The important thing now is that, thanks to that restructuring, Plone is now very developer friendly. Thanks to Diazo/XDV, which most Plone integrators are switching to, Plone is now very designer friendly. It's also now fast, and just as secure and flexible as it has always been. Expect Plone to start getting a lot more outside attention and growth from now on.
As Limi mentioned, the mantra has been 'Plone 4 is the evolutionary release, Plone 5 is the revolutionary release'. As DisplacedAussie said, look at 'Collections' in Plone: they are like saved searches and, combined with the Collections portlet, are pretty powerful.
Coming up in Plone 5 is the Deco/Tiles system for content editing, which is going to be pretty amazing; you can see an initial preview of it here: http://www.mefeedia.com/watch/32696814
Basically the entire page is made up of composite elements, each one is a first class item and addressable with its own URL. They can be dragged about the page on a grid as you see fit.
-Matt

Product management & assembly versions

We are approaching the initial release of a new product at our company, and I am trying to determine the best method of managing the versions of all of the different components and cross-referencing those components with the marketing department's version of our software. For various reasons, marketing has determined that the initial release of our product will be 10.1; however, all of the components will initially start out at 1.0.0. Through normal bug fixes, patching, and continued development work, the different components will no longer be at the same version number, so when the marketing department decides it's time for version 10.2, it might contain 1.1.54, 1.2.32, 1.8.2, etc. Obviously, I could use a simple spreadsheet, but that isn't exactly the most user-friendly method, and it makes it hard for our tech support people to cross-reference the component versions (the customer is really only aware of version 10.1, 10.2, etc.).
Is there a more "professional" method for this, or is a simple spreadsheet the best option?
The main principle I'd suggest is: Use the simplest scheme you can.
Consider making things easy for yourself, your marketing department, and your users.
When you do a release, increment the major/minor version number, and then stamp that across all your components. So in the 10.1 release, all your assemblies will have version numbers 10.1.xx.yy.
If you really want to complicate matters with different versions within a release (e.g. for minor patches/updates, different customer variants, or just for internal daily or CI builds), then use the xx.yy fields. (In many cases you can get the compiler to fill these two fields in automatically with the date/time of compilation, for example.)
This means you have a meaningful "marketing version" which is actually linked to your code versions (so you and marketing can talk about a particular release without any chance of confusion), and you can add extra information (e.g. build date) if (and only if) needed on the dev side.
edit: P.S. Even if a component doesn't change, rebuild it with the new version number. Trying to track a hundred out-of-sync version numbers is an avoidable nightmare.
At places where I've worked, we'd force the software version number to match the official, public (i.e. Marketing's) release number: if they wanted to ship "10.1" then that's what we'd set the software's version resource to, as part of the release build.
Why not leave all components at their "random" version numbers and create one super tag/label with the marketing version that encompasses all components? This allows you to keep updating the components in-between marketing builds and increment their build versions (without having to go to 10.1.001, 10.1.002 that may be visible to the customer) and also keep track of the marketing build. Also, what happens if you update some components for the next marketing build, but not others? Do you need to build those components just to update the version number?
Depending on your source control system, you should be able to easily create a release with a specified name/version that contains all of these components at different versions.
You should also just need to update one properties file with the marketing build number so it shows on all about screens, splash screens, tool bars, etc. If you don't have such a configuration in place, you may want to move to such a system. This allows for easy changes to the customer visible number while maintaining all component build numbers. Besides, what happens when marketing determines that the next version isn't going to be 10.2, but "Crimson?"
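The single-source-of-truth idea is language-agnostic; as a tiny illustrative sketch (in Python rather than .NET, with made-up names), every user-visible surface reads the marketing version from one place:

# version.py -- hypothetical single source of truth for the marketing version
MARKETING_VERSION = "10.1"

def about_text(component, component_version):
    # about screens, splash screens, etc. show only the marketing version,
    # while the per-component build number stays available for support staff
    return "MyProduct %s (component %s, build %s)" % (
        MARKETING_VERSION, component, component_version)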

Resources