Providing backward compatibility for a Zope component - Plone

I'm working on a new release of collective.imagetags in which all the functionality that was carried by a browser view (imagetags-manage) is moved to a new adapter (not committed yet) which provides almost the same interface as the browser view::
from zope.interface import Interface

class IManageTags(Interface):
    """
    imagetags-manage view interface
    Tag management browser view
    """

    def get_tag(id, create_on_fail=True):
        """ Gets / creates a specific tag """

    def get_tags():
        """ Gets all the tags for the object """

    def get_sorted_tags():
        """ Sorted list of tags """

    def save_tag(data):
        """ Saves a tag with the passed data """
I really don't know if anybody is using this product in a project; however, I think it would be sensible to provide some backward-compatibility mechanism, in case anyone is using the browser view methods outside the out-of-the-box functionality.
What should I do?
Keep the interface for the browser view with stub methods that delegate to the new adapter?
Any suggestions?

This kind of change is quite hard! It is not a question of API but of design.
Browser views are components where you mix the user/request with the context.
Adapters are components where you don't care about the user or the request.
Utilities are components where you don't care about the context.
So you should keep your browser view and make it use the adapter; that should be enough to keep compatibility. A sketch of such a shim follows.
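Here is a minimal sketch of what that backward-compatibility shim could look like. Only IManageTags and its method names come from the question; the view class name and the import location of IManageTags are illustrative and would need adjusting:

from Products.Five.browser import BrowserView

# Adjust this import to wherever IManageTags actually lives in your package.
from collective.imagetags.interfaces import IManageTags

class ManageTagsView(BrowserView):
    """Old imagetags-manage browser view, kept only for backward compatibility."""

    @property
    def _tags(self):
        # Look up the new adapter and let it do the real work.
        return IManageTags(self.context)

    def get_tag(self, id, create_on_fail=True):
        return self._tags.get_tag(id, create_on_fail)

    def get_tags(self):
        return self._tags.get_tags()

    def get_sorted_tags(self):
        return self._tags.get_sorted_tags()

    def save_tag(self, data):
        return self._tags.save_tag(data)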
Upgrades are used when you make changes to the default profile. First, the metadata.xml version must be an integer (1000 is often used as the first stable version). Next, every change to the profile should be accompanied by an increase of this version number, and you must add an upgrade step:
<gs:upgradeStep
    title="Upgrade collective.myaddon from 1000 to 1010"
    description=""
    source="1000"
    destination="1010"
    handler=".upgrades.upgrade_1000_1010"
    profile="collective.myaddon:default"/>
upgrades.py
from Products.CMFCore.utils import getToolByName

# The full profile id; assumed here to match the profile registered above.
default_profile = 'profile-collective.myaddon:default'

def upgrade_1000_1010(context):
    """Re-run the viewlets import step and re-cook JavaScript resources."""
    context.runImportStepFromProfile(default_profile, 'viewlets')
    portal_javascripts = getToolByName(context, 'portal_javascripts')
    portal_javascripts.cookResources()
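For completeness, the version number mentioned above lives in the profile's metadata.xml; a minimal example after the upgrade would be:

<?xml version="1.0"?>
<metadata>
  <version>1010</version>
</metadata>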

Related

Why does QObject::installEventFilter use d->eventFilters.prepend(obj) and not append(obj)?

I want to know why QObject::installEventFilter uses d->eventFilters.prepend(obj) rather than append(obj); why was it designed that way? I'm just curious about it.
void QObject::installEventFilter(QObject *obj)
{
    Q_D(QObject);
    if (!obj)
        return;
    if (d->threadData != obj->d_func()->threadData) {
        qWarning("QObject::installEventFilter(): Cannot filter events for objects in a different thread.");
        return;
    }
    // clean up unused items in the list
    d->eventFilters.removeAll((QObject*)0);
    d->eventFilters.removeAll(obj);
    d->eventFilters.prepend(obj);
}
It's done that way because the most recently installed event filter is to be processed first, i.e. it needs to be at the beginning of the filter list. The filters are invoked by traversing the list in sequential order from begin() to end().
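As a language-neutral illustration of that ordering (a toy model in Python, not Qt code), here is the prepend-then-scan behavior:

class FilterChain(object):
    """Toy model of QObject's event-filter list (illustrative only)."""

    def __init__(self):
        self._filters = []

    def install(self, filter_func):
        # Mirror installEventFilter: drop any duplicate, then prepend.
        if filter_func in self._filters:
            self._filters.remove(filter_func)
        self._filters.insert(0, filter_func)

    def dispatch(self, event):
        # Walk front to back; a filter returning True consumes the event.
        for flt in self._filters:
            if flt(event):
                return True
        return False  # nobody filtered it; normal handling would follow

chain = FilterChain()
chain.install(lambda e: e == 'seen-by-first-installed')
chain.install(lambda e: e == 'seen-by-last-installed')
# The filter installed last sits at the front, so it sees events first.
print(chain.dispatch('seen-by-last-installed'))  # True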
The most recently installed filter is to be processed first because the only two simple choices are to either process it first or last. And the second choice is not useful: when you filter events, you want to decide what happens before anyone else does. Well, but then some new user's filter will go before yours, so how can that be? As follows: event filters are used to amend functionality - functionality that already exists. If you added a filter somewhere inside the existing functionality, you'd effectively be interfacing to a partially defined system, with unknown behavior. After all, even Qt's implementation uses event filters. They provide the documented behavior. By inserting your event filter last, you couldn't be sure at all what events it would see - it would all depend on implementation details of every layer of functionality above your filter.
A system with some event filter installed is like a layer of skin on the onion - the user of that system only sees the skin, not what's inside, not the implementation. But they can add their own skin on top if they wish so, and implement new functionality that way. They can't dig into the onion, because they don't know what's in it. Of course that's a generalization: they don't know because it doesn't form an API, a contract between them and the implementation of the system. They are free to read the source code and/or reverse engineer the system, and then insert the event filter anywhere in the list they wish. After all, once you get access to QObjectPrivate, you can modify the event filter list as you wish. But then you're responsible for the behavior of not only what you added on top of the public API, but of many of the underlying layers too - and your responsibility broadens. Updating the toolkit becomes next to impossible, because you'd have to audit the code and/or verify test coverage to make sure that something somewhere in the internals didn't get broken.

What are the relationships between twisted.cred.portal.IRealm, Portal and avatar

I'm trying to use Twisted's HTTP basic authentication to control access to some protected resources.
According to some articles, it is necessary to use three important concepts: Realm, Portal, and avatar. Now I'm wondering whether the Realm and the avatar are in one-to-one correspondence.
Let's look at an example:
import sys

from zope.interface import implements
from twisted.python import log
from twisted.internet import reactor
from twisted.web import server, resource, guard
from twisted.cred.portal import IRealm, Portal
from twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse


class GuardedResource(resource.Resource):
    """
    A resource which is protected by guard
    and requires authentication in order
    to access.
    """
    def getChild(self, path, request):
        return self

    def render(self, request):
        return "Authorized!"


class SimpleRealm(object):
    """
    A realm which gives out L{GuardedResource} instances for authenticated
    users.
    """
    implements(IRealm)

    def requestAvatar(self, avatarId, mind, *interfaces):
        if resource.IResource in interfaces:
            return resource.IResource, GuardedResource(), lambda: None
        raise NotImplementedError()


def main():
    log.startLogging(sys.stdout)
    checkers = [InMemoryUsernamePasswordDatabaseDontUse(joe='blow')]
    wrapper = guard.HTTPAuthSessionWrapper(
        Portal(SimpleRealm(), checkers),
        [guard.DigestCredentialFactory('md5', 'example.com')])
    reactor.listenTCP(8889, server.Site(resource=wrapper))
    reactor.run()


if __name__ == '__main__':
    main()
Of course I know the SimpleRealm is used to return the corresponding resource, e.g. GuardedResource in the example above. However, I don't know what to do when there are lots of resources to be guarded. For example, I have GuardedResource1, GuardedResource2 and GuardedResource3; maybe they need the same or a different number of parameters when they are initialized. If so, is it necessary to implement SimpleRealm1, SimpleRealm2 and SimpleRealm3, respectively?
Someone asked this same question on the Twisted mailing list, with very similar code samples - http://twistedmatrix.com/pipermail/twisted-python/2015-December/030042.html - so I'll refer you to my answer there: http://twistedmatrix.com/pipermail/twisted-python/2015-December/030068.html
Rather than thinking of a resource as always existing and just needing to have a lock on it or not, consider the more flexible model (the one that cred actually implements) where a single Avatar object (in this case: the top IResource returned from SimpleRealm) is the top level of "everything the user has access to". In other words, GuardedResource should have a getChild method which determines whether the user it represents (really, at least the avatarId should be supplied to GuardedResource.__init__) has access to other resources, and returns them if so, and appropriate errors if not.
Even the resources available to a not-logged-in user (see twisted.cred.credentials.Anonymous) are just another avatar, the one served up to unauthenticated people.
So, if you have https://myapp.example.com/a/b/secure/c/d, then https://myapp.example.com/a/b/secure would be the guarded resource, and SecureResource.getChild("c", ...) would return "c", which would in turn return "d" if the logged-in user has access to it.
Hopefully this answer worked for you on the list :-).
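A rough sketch of that pattern, using twisted.web's public API; the class names and the access-check helper are illustrative, not from the original answer:

from twisted.web import resource

class UserRootResource(resource.Resource):
    """Top-level avatar: the root of everything this user may access."""

    def __init__(self, avatarId):
        resource.Resource.__init__(self)
        self.avatarId = avatarId  # identity of the logged-in user

    def getChild(self, path, request):
        # Hypothetical per-user access check; plug in your own policy.
        if self._may_access(path):
            return GuardedChildResource(path, self.avatarId)
        return resource.ForbiddenResource("You may not view this resource.")

    def _may_access(self, path):
        return True  # stub: consult your permission store here

class GuardedChildResource(resource.Resource):
    def __init__(self, name, avatarId):
        resource.Resource.__init__(self)
        self.name = name
        self.avatarId = avatarId

    def render(self, request):
        return "%s, served to %s" % (self.name, self.avatarId)

The realm's requestAvatar would then return resource.IResource, UserRootResource(avatarId), lambda: None - one realm, no matter how many guarded resources hang beneath it.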

session in class file

Hi, I have a list called Event Source to which I am adding a new item. Once the item is added, a background process takes the newly created ID and starts importing EventFields into a specific list. Each EventSource list item has an ECB menu item called Sync, and Sync must not be clicked until the import into the EventField list completes. My client did not accept adding a flag field to the Event Source list, so I want to maintain a flag in session until the import finishes. The first time I create an EventSource and click Sync, HttpContext.Current is null, but on subsequent clicks it is not; yet I need the flag on that very first click. The import code is written in a class library. How can I maintain this flag? If I use a static field, the value gets cleared or is shared across other instances.
You may be wording your question wrong, but a Windows service doesn't have an HttpContext; you simply can't do that.

Why did getSite() return a FormlibValidation object

I've installed collective.quickupload on a blank Plone 4.1 site, and noticed that when you add a quickupload portlet and KSS calls for field validation (plone.app.form.kss), the getSite function returns a FormlibValidation object, which causes the quickupload vocabularies to crash.
The traceback is here: http://pastebin.com/nvwChpZd
My questions are:
Is it a bug or intended behaviour that getSite returns a FormlibValidation object?
Is there a solution to fix or work around this and make collective.quickupload work?
getSite() returns the nearest component site (where local utilities can be stored), which really just means whatever was last set with setSite(), which usually happens on traversal.
Most of the time, the only traversal hook that calls setSite() is the one that's triggered when you traverse over the Plone site. But I think the old KSS inline form validation machinery used (uses?) a hack that creates a local component site on the fly (in a view) and sets that as the local site during the remainder of the request so that it can override certain things.
You can disable validation (e.g. disable the relevant KSS file in portal_kss) or fix c.quickupload to check whether the result of getSite() is an ISiteRoot. If it isn't, it should be acquisition-wrapped, so you can call aq_parent(site) (or maybe site.__parent__) in a loop until you find an ISiteRoot.
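A minimal sketch of that workaround (the import locations vary between Plone versions, so treat them as assumptions):

from Acquisition import aq_parent
from Products.CMFCore.interfaces import ISiteRoot
from zope.component.hooks import getSite  # zope.site.hooks on older Plone

def find_site_root():
    site = getSite()
    # Walk up the acquisition chain until we reach the real site root.
    while site is not None and not ISiteRoot.providedBy(site):
        site = aq_parent(site)
    return site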

Avoid deletion of an object (using IObjectWillBeRemovedEvent) and do a redirect to a custom view/template?

I would like to abort the deletion of an object (a custom content type) and redirect to a page (a view) that sets the workflow to a custom state named Unavailable and shows the message "You successfully deleted the object!" to the user. The object will still be in the ZODB, but some groups will simply not see it, as if it had really been deleted.
I can do a raise in a subscriber using IObjectWillBeRemovedEvent, but trying to use raise zExceptions.Redirect("url") doesn't work. The raise call prevents the deletion, but the message "The object could not be deleted" is shown instead of the redirect happening.
Does anyone have a solution for this scenario?
As you can see, Plone / Zope 2 object management is messy (yes, I am willing to burn karma just to say this). You need to override the delete action at the user-interface level, not at the object level.
Try to figure out how to customize the delete actions in the Plone user interface:
Make sure the default Delete action is no longer visible and available (e.g. require a higher permission for it, such as cmf.ManagePortal).
Create another Delete action which behaves according to your specialized workflow.
I believe Delete can be configured from portal_actions, but there might be separate cases for deleting one object (the Actions menu) and deleting multiple objects (folder_contents).
You need REQUEST.response.redirect("url"). I'm pretty sure that zExceptions.Redirect is the way Zope internally handles response.redirect() calls. Be sure you still raise another exception after calling redirect() so that the transaction is aborted (a sketch follows below).
That said, this sure seems like the wrong way to accomplish this. For one thing, you'll do at least double indexing, which is done before the transaction aborts. Catalog indexing is the most expensive part of processing a request that modifies content, so this creates wasteful load on your server.
Events are for doing additional stuff which is only tangentially related to the event. What you want is to fundamentally change what happens when someone deletes. Maybe you should patch/override the underlying deletion method on the container objects (folders?) to do your workflow transition.
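A hedged sketch of the redirect-then-raise idea from the answer above; the handler name, target view, and exception choice are illustrative:

def on_will_be_removed(obj, event):
    # REQUEST is reachable from the object via acquisition in Zope 2.
    request = obj.REQUEST
    request.response.redirect(obj.absolute_url() + '/deleted-view')
    # Still raise so the transaction aborts and the deletion never happens.
    raise ValueError('Deletion blocked; object retired via workflow instead.')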
You could raise an OFS.ObjectManager.BeforeDeleteException in the event handler to stop the deletion. If you raise a LinkIntegrityNotificationException, you get redirected to Plone's nice link-integrity page.
from OFS.interfaces import IObjectWillBeRemovedEvent
from plone.app.linkintegrity.exceptions import LinkIntegrityNotificationException
import grok

# ICSRDocument is the custom content-type interface from the question.
@grok.subscribe(ICSRDocument, IObjectWillBeRemovedEvent)
def document_willbemoved(doc, event):
    raise LinkIntegrityNotificationException(doc)
